Optica Publishing Group

Null screens to evaluate the shape of freeform surfaces: progressive addition lenses

Open Access

Abstract

We propose a method for measuring the shape of freeform surfaces such as Progressive Addition Lenses (PAL). It is based on optical deflectometry, using a non-uniform pattern of spots computed with the null-screen method. This pattern is displayed on a flat LCD monitor, reflected off the freeform surface under test, and its image is recorded by a CCD camera placed at a predefined off-axis position. We use one image to calibrate the experimental setup and another to measure the freeform surface. We develop an iterative algorithm to retrieve the surface under test and calculate the spherical and cylindrical dioptric powers of the front freeform surface of a commercial PAL.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Technological advances have made it possible to design and fabricate optical surfaces with shapes beyond the traditional surfaces of revolution, such as conics and other aspheres. These new shapes, called freeforms, reduce aberrations, dimensions, and weight in optical systems better than conventional surfaces do, because they have more degrees of freedom, i.e., they lack axial or translational symmetry; an adequate representation of these surfaces is therefore essential for designing and fabricating them [1–4]. Freeform surfaces may be described with a wide variety of mathematical representations, ranging from Taylor monomials [5], Fourier series [6], B-splines [7], and NURBS [8] to orthogonal polynomials such as Zernike, Chebyshev, Legendre, and Forbes polynomials, among others; a review of techniques for representing freeforms is given in [9]. A particular example of these surfaces is found in Progressive Addition Lenses (PAL). These lenses are a better solution for presbyopia than bifocal lenses, since a PAL allows the observation of objects over a wide range of distances [10,11]. With bifocal lenses, only far and near objects can be seen sharply, because they are composed of two lenses with different refractive powers; this causes some discomfort when trying to focus on objects at intermediate distances. Techniques for manufacturing PALs include glass molding [12] and diamond turning [13].

Since the front face of a PAL can be considered a freeform surface without symmetries, very different from the traditional surfaces used in spectacles (spherical and toric surfaces), its design, fabrication, and testing have been a challenge for opticians, who have proposed new methods for testing it. The first widely used proposals are mechanical methods, mainly based on a Coordinate Measuring Machine (CMM); optical methods, in turn, seek to avoid the damage caused to the surface by the stylus. The most used optical methods are based on either the Hartmann or the Shack-Hartmann technique [14–16]. These optical methods measure the power distribution of the PAL in a single shot, but not the individual freeform surface. As is well known, the Hartmann test traditionally uses a screen with a uniform distribution of holes [17], which produces a non-uniform distribution of bright spots at the detection plane, either by reflection or refraction; this non-uniformity can be very pronounced, complicating the subsequent analysis. Other methods use an object pattern consisting of equally spaced black and white fringes of the same width [18–21]. However, the use of fringes introduces an inherent problem known as the skew ray error, which consists in the difficulty of correlating an object point with its corresponding image point [22].

To overcome this problem, null tests have been developed for simple optical components having rotational symmetry [23]. Null tests are motivated by the fact that the interpretation of the surface shape is simplified, and the visual analysis is straightforward for both qualitative and quantitative tests. The null-screen method has been extensively applied to testing different kinds of reflecting surfaces, such as fast convex and concave conics, including off-axis surfaces [24,25]. The surface under test is reconstructed by integrating the normal components, as explained in [26]. In the present work we propose to design null screens on an LCD screen that produce a uniform distribution of spots at the image plane, in order to test a PAL quantitatively. The main advantage of using an LCD in the null-screen method is the ability to change the shape and position of the objects on the LCD dynamically, until a reflected image without overlaps is obtained. The goal of this paper is to provide a general methodology, as simple as possible, to evaluate the freeform surface of a PAL using just one image reflected off the freeform and captured by a CCD camera; this method can be readily modified for testing other freeforms.

In this work some important changes are introduced to the null-screen methodology. The first is that we evaluate a commercial PAL whose freeform design parameters are unknown, so that the null screen cannot be designed a priori, as is usually assumed; the process used to design the null screens is explained in Section 2. The second is the proposal of an off-axis optical setup to test the whole surface at once, without obscured zones. In addition, a careful calibration procedure was carried out to ensure that the position of every component of the setup is known using only one image; this is described in Section 3, including a calibration of the distortion of the camera lens. In Sections 4 and 5 we propose an iterative procedure for surface reconstruction and show its performance under ideal conditions using simulations. Finally, the experimental results and the conclusions are presented in Sections 6 and 7.

2. Preliminaries

The setup is arranged as shown in Fig. 1. The $\textbf{Z}$ axis is oriented along the external normal of the test surface at its center point. The $\textbf{Y}$ axis is oriented in such a way that the optical axis of the camera lies within the $\textbf{Y}\textbf{-}\textbf{Z}$ plane. The LCD screen is located symmetrically with respect to this plane. As an initial approximation to the unknown surface, we represent the freeform by the following conic surface

$$x^2+ y^2 - 2\, r\, z + \left( k_o+1 \right) z^2 =0,$$
where $r=1/c$ is the radius of curvature ($r>0$ for convex surfaces, $r<0$ for concave surfaces) and $k_o$ is the conic constant. We use inverse ray tracing to calculate the null screen on the LCD that yields an ordered array of points on the CCD sensor. We assume that $\hat{\mathcal{I}}$ is the direction of an incident ray and $\vec {P}=(x,\, y ,\, z)$ is the intersection point between the incident ray and the conic surface of Eq. (1), so that the ray can be written as $\vec{P}={\vec h}+\xi\,\hat{\mathcal{I}}$, or in components,
$$ x = h_x + \xi \, {\mathcal{I}}_x,\quad y= h_y + \xi \, {\mathcal{I}}_y,\quad z= h_z + \xi \, {\mathcal{I}}_z ,$$
where the parameter $\xi$ is a real number representing the distance from ${\vec h}$ to $\vec P$. To obtain the points on the approximated surface explicitly, we substitute Eq. (2) into Eq. (1); after some reduction we obtain a quadratic equation for $\xi$ given by
$$A\, \xi^2 + 2\, B \, \xi +C =0 ,$$
where we have defined
$$A=1+ k_o \,{\mathcal{I}}_z^2 , \quad B={\vec h}\cdot {\hat {\mathcal{I}}} + {\mathcal{I}}_z \left( h_z\, k_o -r \right), \quad C={\vec h}\cdot {\vec h}+ h_z(h_z \, k_o - 2 r).$$


Fig. 1. Graphical depiction of exact ray tracing to compute a Null-Screen on the LCD.


Thus, solving Eq. (3) for $\xi$ yields

$$\xi= \left({-}B\pm \sqrt{B^2 - A\, C}\right)/ A.$$

Substituting the value of $\xi$ into Eq. (2) (taking the negative square root), the coordinates of the incidence point are obtained. The unit normal vector $\hat {\mathcal {N}}$ to the approximated surface at $\vec P$ is given by the gradient of the function in Eq. (1), which gives

$$\hat{\mathcal{N}} = \left( x,\, y,\, (k_o+1)\, z- r \right) \left( x^2+y^2+\left[ (k_o+1)\, z- r\right] ^2\right)^{-\frac{1}{2}}.$$

Then, the unit reflected vector, obtained from the law of reflection, is

$$\hat{\mathcal{R}} = \hat{\mathcal{I}} - 2 \, \left(\hat{\mathcal{I}} \cdot \hat{\mathcal{N}} \right)\, \hat{\mathcal{N}} = \left( {\mathcal{R}}_x,\, {\mathcal{R}}_y,\, {\mathcal{R}}_z \right) \,.$$

Finally, the reflected ray impinging on the LCD can be represented by ${\vec Q}={\vec P}+\Lambda \, \hat {\mathcal {R}}$, or in components

$$Q_x = x+ \Lambda \, {\mathcal{R}}_x,\quad Q_y= y+ \Lambda \, {\mathcal{R}}_y,\quad Q_z= z + \Lambda \, {\mathcal{R}}_z \,,$$
where ${\vec Q}= (Q_x, Q_y, Q_z)$ is a point on the LCD plane and $\Lambda$ is a real number representing the distance between ${\vec P}$ and $\vec Q$. The LCD plane on which we place the null screen is defined by
$${\hat V_n} \cdot \left( {\vec Q}-{\vec Q_0} \right) = 0,$$
where the unit vector ${\hat V_n} = (V_{nx},\, V_{ny}, \, V_{nz})$ is normal to the LCD and ${\vec Q_0}= (Q_{0x},Q_{0y},Q_{0z})$ is the geometrical center of the LCD screen. Substituting Eq. (8) into Eq. (9) and solving for $\Lambda$, we obtain the coordinates of the point where the reflected ray impinges on the LCD; after reduction they can be written as
$${\vec Q}=\vec{P} + \left( {\vec V_n} \cdot \hat{\mathcal{R}} \right)^{{-}1} \left[ \vec V_n \cdot \left( {\vec Q_0} - \vec{P} \right) \right] \hat{\mathcal{R}},$$
where we assume that ${\vec V_n} \cdot \hat {\mathcal {R}} \neq 0$.
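As an illustration, the inverse ray trace of Eqs. (1)-(10) can be sketched in a few lines of NumPy. This is a minimal sketch written for this description; the function name and calling conventions are ours, not taken from the original implementation:

```python
import numpy as np

def trace_to_null_screen(h, I, r, k, Vn, Q0):
    """Trace one ray from the pinhole to the LCD, following Eqs. (1)-(10).

    h  : pinhole position (3-vector)
    I  : unit direction of the incident ray
    r  : paraxial radius of the conic (r < 0 for concave surfaces)
    k  : conic constant k_o
    Vn : unit normal of the LCD plane
    Q0 : central point of the LCD plane
    Returns Q, the point on the LCD where the reflected ray lands.
    """
    h, I, Vn, Q0 = (np.asarray(v, dtype=float) for v in (h, I, Vn, Q0))
    # Quadratic coefficients of Eq. (4) for the ray-conic intersection.
    A = 1.0 + k * I[2]**2
    B = h @ I + I[2] * (h[2] * k - r)
    C = h @ h + h[2] * (h[2] * k - 2.0 * r)
    # Negative root of Eq. (5) selects the physical intersection point.
    xi = (-B - np.sqrt(B**2 - A * C)) / A
    P = h + xi * I                                   # Eq. (2)
    # Unit normal of the conic at P, Eq. (6).
    N = np.array([P[0], P[1], (k + 1.0) * P[2] - r])
    N /= np.linalg.norm(N)
    # Reflection law, Eq. (7).
    R = I - 2.0 * (I @ N) * N
    # Intersection with the LCD plane, Eq. (10); assumes Vn @ R != 0.
    Lam = (Vn @ (Q0 - P)) / (Vn @ R)
    return P + Lam * R
```

For an on-axis ray aimed at the vertex of a concave conic, the reflected ray returns along the axis and lands at the center of the LCD plane, which provides a quick sanity check of the implementation.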

3. Calibrating the experimental setup

3.1 Calibration of distortion

For the calibration of the setup, we use a standard optical flat with a flatness accuracy of $\lambda /10$ and a diameter of $D=4 \textrm {in}$, employed as a first-surface mirror. The optical flat defines the X-Y plane of the global coordinate system, and its center coincides with the origin of coordinates ${\mathcal {O}}$ (see Fig. 2(a)). The CCD sensor and the LCD are placed in such a way that the virtual image of the picture displayed on the LCD, produced by reflection on the flat surface, fulfills the following conditions: a) the image must be entirely seen by the CCD camera; b) the inclination angles $\theta$ and $\phi$ of the camera are selected in such a way that the virtual image is orthogonal to the $\textbf{Z}'$-axis, which defines the optical axis of the camera lens. In addition, we assume that the origin $\mathcal {O}^\prime$ of the reference system $\textbf{X}'\textbf{Y}'\textbf{Z}'$ lies at the central point of the CCD sensor, as shown in Fig. 2(a), i.e., the optical axis passes through the sensor center.


Fig. 2. (a) Schematic setup for the camera distortion calibration process. (b) Rectangular array of circular spots uniformly spaced displayed on LCD. (c) Spot pattern recorded at CCD. (d) Binary image. (e) Plot of $\rho _d^\prime$ against $\rho _o^\prime$.


A square array of circular spots (2.11 mm in diameter), spaced 3.45 mm along the $x$ and $y$ directions, is displayed on the LCD, as shown in Fig. 2(b). The image of the spot array is seen through the stop defined by the rim of the circular optical flat; due to the inclined setup, however, the pupil becomes elliptical. The camera is oriented in such a way that the reflected image of the spot array is focused and symmetrical about the center of the image; one spot must be located there. The array of circular spots is approximately symmetric and uniformly distributed with respect to the central spot on the optical axis, as shown in Fig. 2(c). In this way the LCD image is orthogonal to the optical axis of the camera. In summary, the virtual image acts as an object for the camera lens. The image and object distances $a_i$ and $a_o$ shown in Fig. 2(a) are simply related by

$$a_i=f \left( 1+ M_T \right), \quad \quad a_o = f \left( 1+ 1/M_T \right),$$
where $f$ is the focal length of the camera lens coupled to the CCD sensor (we use the nominal value provided by Thorlabs) and $M_T$ is the transversal magnification. If we close the diaphragm of the camera lens sufficiently, we can, without loss of generality, consider only distortion effects when calibrating the experimental setup.

From Fig. 2(c) we define the ROI (Region of Interest) as the ellipse bounded by the upper surface of the flat; binarizing the image we obtain Fig. 2(d). We obtain the spot centroid coordinates $(p'_{x},\, p'_{y})$ from the product of images 2(c) and 2(d) using statistical averaging; this procedure is similar to the one employed in [24]. We translate the origin of the system ${\mathcal {O}}'$ to the center of the CCD sensor, as shown in Fig. 2; the spot centroid coordinates are then given by

$$\left( x'_{d},\, y'_{d} \right)= \sqrt{ \frac{ \mathcal{L}_M^2+{\mathcal{L}}_N^2 }{M^2 + N^2} } \left( p'_{x} - M/2\, , \, N/2 - p'_{y} \right),$$
where $(M, N)$ and $({\mathcal {L}}_M, {\mathcal {L}}_N)$ are the dimensions of the CCD sensor in pixels and in millimeters, respectively. For our purposes it is enough to describe the distortion aberration with an odd polynomial of third degree. Thus the distortion can be quantified with the following formula [27]
$$\rho_d^\prime = M_T\rho_o^{\prime}+E\rho_o^{\prime 3},$$
where $E$ is the distortion coefficient, and $\rho _{d}^\prime =\left ( x_{d}^{\prime 2}+ y_{d}^{\prime 2}\right ) ^{1/2}$ and $\rho _{o}^\prime =\left ( x_{o}^{\prime 2}+ y_{o}^{\prime 2}\right )^{1/2}$ are the image and object radial distances to the optical axis, respectively. The quantity $\rho _d^\prime$ is affected by the distortion aberration, while $\rho _o^\prime$ is obtained from the object displayed on the LCD. Fitting Eq. (13) to the data $\rho _o^\prime$ versus $\rho _d^\prime$ with the nonlinear Levenberg-Marquardt method (see Fig. 2(e)) yields $E={-}6.601\times 10^{{-}8}$ and $M_T=0.042$. Finally, we use the polynomial inversion method described in [28] and the paraxial relation $M_T=\rho _c^\prime /\rho _o^\prime$ to obtain the distortion-free image radial distance $\rho _{c}^\prime$ as follows
$$\rho_{c} ^\prime= \rho_{d}^\prime -E \left( \rho_{d}^\prime \big / M_T\right)^3.$$

The image coordinates without distortion, in the plane $\textbf{X}'{-}\textbf{Y}'$, are given by

$$x_c^\prime=\rho_{c} ^\prime\cos(\alpha);\quad y_c^\prime=\rho_{c} ^\prime\sin(\alpha),$$
where $\alpha$ is the polar angle of the point $\left (x_d^\prime , y_d^\prime \right )$. We apply Eq. (15) to compensate for the distortion effects in the images recorded by the CCD sensor. Also, using Eq. (11), we can compute the position of the virtual image with respect to the reference system $\textbf{X}'\textbf{Y}'\textbf{Z}'$.
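A minimal sketch of the distortion correction of Eqs. (14)-(15). We hard-code the fitted values of $E$ and $M_T$ quoted above; in practice they come from the Levenberg-Marquardt fit of Eq. (13), and the function name is our own:

```python
import numpy as np

# Fitted distortion model parameters quoted in the text (Eq. (13)).
E, M_T = -6.601e-8, 0.042

def undistort(xd, yd):
    """Map distorted image coordinates (mm) to distortion-free ones.

    Implements the approximate polynomial inversion of Eq. (14) and the
    polar-to-Cartesian step of Eq. (15).
    """
    rho_d = np.hypot(xd, yd)                 # distorted radial distance
    alpha = np.arctan2(yd, xd)               # polar angle of (x'_d, y'_d)
    rho_c = rho_d - E * (rho_d / M_T)**3     # Eq. (14)
    return rho_c * np.cos(alpha), rho_c * np.sin(alpha)
```

Since $E<0$ here, the corrected radius is slightly larger than the distorted one, as expected for this sign of the distortion coefficient.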

3.2 Extrinsic parameters of the camera

The coordinate system $\textbf{X}\textbf{Y}\textbf{Z}$ is related to the system $\textbf{X}'\textbf{Y}'\textbf{Z}'$ by two subsequent rotations around the defined axes, whose rotation angles $(\phi ,\, \theta )$ are called Euler angles [29,30]. They are defined as follows: The first rotation is performed around the $\textbf{Z}$-axis by the angle $\phi$, the second rotation is performed around the $\textbf{X}$-axis by the angle $\theta$; the rotations can be expressed by the product of the two rotation matrices as follows

$$\boldsymbol{M_R}= {\small \begin{pmatrix} \cos{\phi} & {-}\sin{\phi} & 0 \\ \sin{\phi} & \cos{\phi} & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos{\theta} & {-}\sin{\theta} \\ 0 & \sin{\theta} & \cos{\theta} \end{pmatrix}}$$

In order to obtain the values of $\phi$ and $\theta$, a white screen is displayed on the LCD (see Fig. 2(a)); the circular area given by the perimeter of the flat mirror under test produces an elliptical boundary registered at the CCD sensor, as shown in Figs. 3(a) and 3(b). We implement a procedure similar to that of [31] to obtain the geometrical parameters of the transformation. The optical axis, however, does not pass through the center of the surface.


Fig. 3. (a) Circular area given by the perimeter of the flat mirror under test. (b) Binary image as a result of the image thresholding. (c) Illustration of the geometry to compute the values of $\theta$ and $\phi$.


We define a third coordinate system $\textbf{X}^{\prime \prime }\textbf{Y}^{\prime \prime }\textbf{Z}^{\prime \prime }$ in such a way that its origin lies at the pinhole and its axes are parallel to $\textbf{X}\textbf{Y}\textbf{Z}$. The position of $\mathcal {O}^{\prime \prime }$ is $\left (0,\,0,\, {-}a_i \right )$ with respect to the coordinate system $\textbf{X}^{\prime }\textbf{Y}^{\prime }\textbf{Z}^{\prime }$. As explained in [32], any rotated ellipse in the $\textbf{X'}{-}\textbf{Y'}$ plane can be projected through an oblique cone of rays into a circle in the $\textbf{X}{-}\textbf{Y}$ plane, and vice versa. The orientation of $\textbf{X}^{\prime }\textbf{Y}^{\prime }\textbf{Z}^{\prime }$ and the position of $\textbf{X}\textbf{Y}\textbf{Z}$ with respect to the coordinate system $\textbf{X}^{\prime \prime }\textbf{Y}^{\prime \prime }\textbf{Z}^{\prime \prime }$ can be obtained by varying the values of $\phi$, $\theta$ and $C_z$ numerically until the oblique cone of rays intersects the plane $\textbf{X}{-}\textbf{Y}$ in a circle of diameter $D$ centered at $\left ( C_{x},\, C_{y} \right )$.

The geometry used to compute the values of $\phi$ and $\theta$ is illustrated in Fig. 3(c). The coordinates of the pinhole with respect to the coordinate system $\textbf{X}\textbf{Y}\textbf{Z}$ are $\vec h=\left ({-}C_x,\, {-}C_y,\, {-}C_z \right )$. The normal vectors $\hat V_c$ and $\hat V_n$ shown in Fig. 2(a) are given by

$$\hat V_c^T=\boldsymbol{M_R}\left(\phi, \theta \right) \mathbf{\hat Z} ^T, \quad \quad \hat V_n^T=\boldsymbol{M_R}\left( \phi, {-}\theta\right) \mathbf{\hat Z} ^T,$$
where $\boldsymbol {M_R}$ is the matrix appearing in Eq. (16), $\boldsymbol{\hat Z}$ is the unit vector along the $\textbf{ Z} \textbf{-}$axis direction, and the superscript $T$ denotes the transpose of a vector quantity. Additionally, the central point $\vec Q_v=\left (Q_{vx}, Q_{vy},Q_{vz} \right )$ of the virtual image and the central point $\vec Q_0$ of the LCD (see Fig. 2(a)) are given by
$$\vec Q_v=\vec h+a_o \hat V_c, \quad \vec Q_0=\left(Q_{vx},\, Q_{vy},\, {-}Q_{vz}\right).$$
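The geometry of Eqs. (16)-(18) can be sketched as follows, assuming $\phi$, $\theta$, the circle center $(C_x, C_y, C_z)$ and the object distance $a_o$ have already been found by the numerical projection search described above; function and variable names are our own:

```python
import numpy as np

def M_R(phi, theta):
    """Rotation matrix of Eq. (16): R_z(phi) @ R_x(theta), angles in radians."""
    Rz = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                   [np.sin(phi),  np.cos(phi), 0.0],
                   [0.0,          0.0,         1.0]])
    Rx = np.array([[1.0, 0.0,            0.0],
                   [0.0, np.cos(theta), -np.sin(theta)],
                   [0.0, np.sin(theta),  np.cos(theta)]])
    return Rz @ Rx

def setup_geometry(phi, theta, C, a_o):
    """Camera and LCD normals plus Q_v and Q_0 from Eqs. (17)-(18).

    C   : circle center (C_x, C_y, C_z) from the projection search
    a_o : object distance from Eq. (11)
    """
    Z = np.array([0.0, 0.0, 1.0])
    Vc = M_R(phi, theta) @ Z          # camera-axis normal, Eq. (17)
    Vn = M_R(phi, -theta) @ Z         # LCD normal, Eq. (17)
    h = -np.asarray(C, dtype=float)   # pinhole position in XYZ
    Qv = h + a_o * Vc                 # virtual-image center, Eq. (18)
    Q0 = np.array([Qv[0], Qv[1], -Qv[2]])  # LCD center, Eq. (18)
    return Vc, Vn, h, Qv, Q0
```

With zero angles both normals reduce to the $\textbf{Z}$ direction, and $\boldsymbol{M_R}$ is orthogonal for any angles, which gives two quick checks.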

4. Method of surface reconstruction

4.1 Design of the null-screen

In the image plane $\textbf{X}^{\prime} \textbf{Y}^\prime$ we design a square array of uniformly spaced circular spots, with a separation of $155\,\mu$m in the $x$ and $y$ directions. Each spot has a radius of $52\,\mu$m, as illustrated in Fig. 4(a). The coordinates of each point $\vec Q$ on the null screen are calculated with Eq. (10). For design purposes we propose an ideal surface given by Eq. (1) with paraxial radius $r= {-}100$ mm and conic constant $k_o=0$. The other parameters necessary for the construction of the null screen were obtained in Section 3: the angles are $\phi =1.077^{\circ }$ and $\theta =34.214^{\circ }$, the distances are $a_i=16.678$ mm and $a_o=393.144$ mm, and the coordinates of the pinhole are $\vec h=\left ({-}0.084\ \textrm{mm}, {-}172.981\ \textrm{mm}, 248.645\ \textrm{mm}\right )$. We perform an exact ray trace from the CCD to the surface, and finally from the surface to the LCD (see Fig. 1). Note how the spots in the $\textbf{X}^\prime \textbf{Y}^\prime$ plane are mapped onto the LCD screen: each geometric object on the LCD has an almost elliptical shape, but with a different orientation, as illustrated in Fig. 4(b). Once the camera calibration is done, we calculate the null screen and display it on the LCD. We then replace the optical flat with the surface under test in the measurement system, as illustrated in Fig. 4(c). When the surface under test departs from the proposed ideal surface, the experimental image pattern is not the equally spaced arrangement of bright spots we expected.


Fig. 4. (a) Ideal image composed of circular spots uniformly spaced in $\textbf{X}^\prime$ and $\textbf{Y}^\prime$ directions. (b) Null-Screen constructed and displayed in the LCD. (c) Experimental setup for measuring the frontal surface of PAL.


4.2 Reconstruction procedure

We implement an iterative algorithm that computes the slope data to reconstruct the surface under test. The slope values depend on the incidence points on the surface under test; however, these points are unknown. We can propose an auxiliary surface to obtain a set of approximate incidence points, as shown in Fig. 5. This implies that there is an infinite number of combinations of auxiliary surfaces (heights) and normal vectors (slopes) that reproduce the image on the CCD sensor; this is known as the height-slope ambiguity [19,21]. However, our algorithm avoids this ambiguity, as will be discussed below.


Fig. 5. Approximated calculation of normal vectors to surface under test.


The reconstruction procedure computes the direction of the incident ray with respect to $\textbf{X}^\prime \textbf{Y}^\prime \textbf{Z}^\prime$, calculated from Eqs. (11) and (15) as

$$\mathcal{\hat I}^\prime={-} \left( x^\prime_c, y^\prime_c, a_i\right)\left( x^{\prime 2}_c+y^{\prime 2}_c+a_i^2\right)^{-\frac{1}{2}}.$$

Using Eq. (19), the direction $\mathcal {\hat I}$ with respect to $\textbf{X}\textbf{Y}\textbf{Z}$ is given by

$$\mathcal{\hat I}^T=\boldsymbol{M_R }\mathcal{\hat I}^{\prime T}.$$

The incident ray passes through the center of the lens stop at $\vec h$ and intersects the ideal surface used for the design of the null screen (a conic or a plane) at a point $\vec P_a=\left (x_a, y_a, z_a \right )$, as shown in Fig. 5. Lacking the true incidence point $\vec P=\left (x, y, z \right )$, we approximate it by $\vec P_a$ and propose, as an initial step, that the direction $\mathcal {\hat R}$ of the reflected ray is approximated by

$$\mathcal{\hat R}_a=\left( \vec Q-\vec P_a\right) \left({\parallel} \vec Q-\vec P_a\parallel\right) ^{{-}1}.$$

The normal vector to the surface under test is then approximated by

$$\hat n=\left( \mathcal{\hat I}-\mathcal{\hat R}_a\right) \left({\parallel}\mathcal{\hat I}-\mathcal{\hat R}_a\parallel \right)^{{-}1}.$$

The sagitta of the surface under study is obtained through the following integral [26]

$$\displaystyle{z_{int}=t_k+\int_\mathcal{C}\left[ \left(- n_x/n_z\right) dx+ \left(- n_y/n_z\right) dy\right]} .$$

In Eq. (23) the two quantities in parentheses inside the square brackets are the surface slopes in the $x$ and $y$ directions, denoted $S_x={-}n_x/n_z$ and $S_y={-}n_y/n_z$, respectively; $n_x$, $n_y$, and $n_z$ are the components of the vector $\hat n$ that appears in Eq. (22). The integral in Eq. (23) is calculated along different paths $\mathcal {C}$ defined in the $\textbf{X}{-}\textbf{Y}$ plane; these paths were obtained using the Dijkstra algorithm [33]. The quantity $t_k$ is the $z$ coordinate of the initial point of the path and is the same for all paths.

As the numerical integration method we use the trapezoidal rule for non-equally spaced data, considering some arbitrary value of $t_k$. As a result we obtain a point cloud $\left \lbrace \left ( x_a, y_a, z_{int}\right )_k\right \rbrace$. It is worth noting that the closer the chosen auxiliary surface is to the real surface, the better the normals approximate the real ones. In the next section we describe an iterative method to obtain a better evaluation of the surface.
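The path integration of Eq. (23) with the trapezoidal rule for non-equally spaced data can be sketched as follows; the path ordering is assumed to come from the Dijkstra step, and the function name is our own:

```python
import numpy as np

def integrate_sag(path_xy, Sx, Sy, t_k=0.0):
    """Trapezoidal evaluation of Eq. (23) along one integration path.

    path_xy : (M, 2) array of (x, y) points ordered along the path C
    Sx, Sy  : slope samples -n_x/n_z and -n_y/n_z at those points
    t_k     : z coordinate of the first point of the path
    Returns the sagitta z_int at every point of the path.
    """
    x, y = path_xy[:, 0], path_xy[:, 1]
    dx, dy = np.diff(x), np.diff(y)
    # Trapezoidal rule for non-equally spaced data: average the slopes at
    # the ends of each segment and multiply by the segment step.
    steps = 0.5 * (Sx[:-1] + Sx[1:]) * dx + 0.5 * (Sy[:-1] + Sy[1:]) * dy
    return t_k + np.concatenate(([0.0], np.cumsum(steps)))
```

For constant slopes (a tilted plane) the rule is exact, so integrating $S_x=2$, $S_y=3$ from $(0,0)$ to $(1,1)$ must give a height change of exactly $5$ regardless of the intermediate path points.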

4.2.1 Iterative procedure considering the auxiliary surface positioned at $t_k$

As noted at the end of the last section, the evaluation of the surface improves as the auxiliary surface approaches the real surface. In this section, for a fixed value $t_k$, we propose an iterative procedure to improve the calculation of $\mathcal {\vec P}$, the direction $\mathcal {\hat R}_a$, and the approximated normals $\hat n$. The value $t_k$ is the position of the auxiliary surface along the $\textbf{ Z}$-axis, as shown in Fig. 5.

We use a polynomial fit to express the slopes of the retrieved surface when the auxiliary surface is positioned at $t_k$. Let us assume that the point cloud $\left \lbrace \left ( x_a, y_a, z_{int}\right )_k\right \rbrace$ can be represented by a linear combination of Taylor monomials as

$$\mathcal{G}\left(x,y \right)= \sum_{j=1}^{J} B_j \mathcal{F}_j(x,y),$$
where $\mathcal {F}_j$ is the $j$th Taylor monomial, $B_j$ is the corresponding coefficient that fits $\mathcal {G}$ to the cloud of points, and $J$ is the number of Taylor monomials. The partial derivatives are given by
$$\mathcal{G}_x=\dfrac{\partial}{\partial x} \sum_{j=2}^{J} B_j \mathcal{F}_j(x,y), \quad \mathcal{G}_y=\dfrac{\partial}{\partial y} \sum_{j=2}^{J} B_j \mathcal{F}_j(x,y),$$
where we set the first Taylor coefficient to $B_1=t_k$, assuming that $\mathcal{F}_1=1$. We use all the values of the measured slopes $S_x$ and $S_y$ and the partial derivatives $\mathcal {G}_x$ and $\mathcal {G}_y$ to build the following linear systems
$$\boldsymbol{S_x}=\boldsymbol{ \mathcal{G}_x} \boldsymbol{M_B}, \quad \boldsymbol{S_y}=\boldsymbol{ \mathcal{G}_y} \boldsymbol{M_B},$$

In Eq. (26), $\boldsymbol {S_{x}}$ and $\boldsymbol {S_{y}}$ are $\mathcal {M}\times 1$ column vectors; $\boldsymbol { \mathcal {G}_{x}}$ and $\boldsymbol { \mathcal {G}_{y}}$ are $\mathcal {M}\times (J{-}1)$ matrices, each row of which contains the numerical values of the partial derivatives of the Taylor monomials evaluated at the sampled points; and $\boldsymbol {M_B}$ is a $(J{-}1)\times 1$ column vector of the unknown coefficients $B_j$. $\mathcal {M}$ is the number of data points. Eqs. (26) can be written in the following matrix form

$${\small \begin{pmatrix} \boldsymbol{S_{x}}\\ \boldsymbol{S_{y}} \end{pmatrix}} = {\small \begin{pmatrix} \boldsymbol{ \mathcal{G}_{x}}\\ \boldsymbol{ \mathcal{G}_{y}} \end{pmatrix}} \, \boldsymbol{M_B}$$

The matrix equation shown in Eq. (27) can be solved numerically for the $B_j$ coefficients. We can then improve the reconstruction of the surface under test by using the surface in Eq. (24), with these coefficients, as a new auxiliary surface closer to the real one. We propose to perform the iterative procedure along the incident ray, because $\mathcal {\hat I}$ is the only known direction. We calculate the point cloud again using the following iteration expression

$$\mathcal{\vec P}^l=\mathcal{\vec P}^{l-1}+\sigma^{l-1} \, \mathcal{\hat I} \quad \quad l=1,2,\ldots , L,$$
where the superscript $l$ is the iteration number, the point $\mathcal {\vec P}^l=\left ( x^l, y^l, z^l\right )$ belongs to the reconstructed surface, and $\sigma ^{l-1}$ is a scalar. In the first surface reconstruction ($l=1$), the terms on the right-hand side of Eq. (28) are $\sigma ^{0}=\left ( \mathcal {G}(x_a, y_a ){-}z_{int}\right )/\mathcal {I}_z$ and $\mathcal {\vec P}^0=\left (x_{a}, y_{a}, z_{int}\right )$.

In the next iterations, $l\geq 2$, $\sigma ^{l-1}=\left ( \mathcal {G}(x^{l-1}, y^{l-1} ){-}z^{l-1}\right )/\mathcal {I}_z$. We substitute $\mathcal {\vec P}^l$ into Eq. (22) to recompute the normal vectors $\hat n$ and the polynomial fit of Eq. (24). We also compute the unit normal vector in analytical form, evaluating Eq. (25) at the point $\mathcal {\vec P}^{l}=\left ( x^{l}, y^{l}, z^{l}\right )$, as follows

$$\hat w=\left( - \mathcal{G}_x, - \mathcal{G}_y, 1 \right) \left( \mathcal{G}_x^2+ \mathcal{G}_y^2+ 1\right)^{-\frac{1}{2}}.$$

For each iteration, the algorithm computes the set of values $\left \lbrace \mu _i=\parallel \hat n_i- \hat w_i \parallel \right \rbrace _k$, the deviations between the normal vectors $\hat {n}$ and $\hat w$ at each $i$th measured point. A measure of how close the surface retrieved in the $l$th iteration is to that of the previous iteration is given by

$$\epsilon_l=\sqrt{ \sum_{i=1}^{\mathcal{M}} \; \left( \mu_i-\overline{\mu}\right)^ 2\left( \mathcal{M}-1\right)^{{-}1} },$$
where $\overline {\mu }$ is the mean value of the set $\left \lbrace \mu _i \right \rbrace _l$. In order to determine the value of $l$ for which the reconstructed surface best approximates the surface under study, we calculate the minimum value of the set $\left \lbrace \epsilon _l \right \rbrace$; this value is denoted by $\delta _k= \min\left \lbrace \epsilon _l \right \rbrace$, where the subscript $k$ refers to the $k$th position of the auxiliary surface.
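The slope fit of Eqs. (24)-(27) and the point update of Eq. (28) can be sketched as follows. The monomial ordering, the least-squares solution of the stacked system, and the function names are our own choices:

```python
import numpy as np

def fit_surface_to_slopes(xy, Sx, Sy, t_k, degree=4):
    """Solve the stacked linear system of Eq. (27) for the coefficients B_j.

    xy : (M, 2) sample points (x_a, y_a); Sx, Sy: measured slopes there.
    The constant monomial F_1 = 1 has zero gradient, so it cannot be
    recovered from slopes; it is pinned to B_1 = t_k as in the text.
    Returns a callable G(x, y) implementing Eq. (24).
    """
    x, y = xy[:, 0], xy[:, 1]
    # Taylor monomials x^p y^q of total degree 1..degree (j = 2..J).
    pq = [(p, d - p) for d in range(1, degree + 1) for p in range(d + 1)]
    Gx = np.column_stack([p * x**max(p - 1, 0) * y**q for p, q in pq])
    Gy = np.column_stack([q * x**p * y**max(q - 1, 0) for p, q in pq])
    A = np.vstack([Gx, Gy])                   # (2M, J-1) design matrix, Eq. (27)
    b = np.concatenate([Sx, Sy])              # stacked slope data
    B, *_ = np.linalg.lstsq(A, b, rcond=None)
    def G(xe, ye):
        return t_k + sum(Bj * xe**p * ye**q for Bj, (p, q) in zip(B, pq))
    return G

def iterate_points(P, I_hat, G):
    """One update of Eq. (28): move each point along its incident ray onto G.

    P (M, 3): current points; I_hat (M, 3): unit incident rays (I_z != 0).
    """
    sigma = (G(P[:, 0], P[:, 1]) - P[:, 2]) / I_hat[:, 2]
    return P + sigma[:, None] * I_hat
```

With noise-free slopes of a polynomial surface the fit is exact, which gives a direct consistency check of both functions.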

4.2.2 Iterative procedure for the auxiliary surface at different heights

In the last section we showed how to improve the surface retrieval for a given value $t_k$. The values of the normal vectors of Eq. (22) depend on the position $t_k$ of the auxiliary surface. This means that for each value of $t_k$ we can retrieve a surface; this is known as the slope-height ambiguity [19,21]. In order to guarantee that the retrieved surface is the best solution, we place auxiliary surfaces along the $\textbf{ Z}$-axis at each value of the set

$$\left\lbrace t_k\right\rbrace =\left\lbrace t_0, t_1, t_2, \ldots t_{K-1}, t_K \right\rbrace.$$

The set of values shown in Eq. (31) starts at $t_0$ and ends at $t_{K}$; it is divided into $K$ equispaced subintervals of width $\Delta t = (t_K-t_0)/K$, so that $t_{k}=t_{k-1}+\Delta t$ with $k=1,2,\ldots ,K$. We repeat the process described in Section 4.2.1 for each value $t_k$ of the set in Eq. (31); the evaluation of the normals and the numerical integration is thus done for the same auxiliary surface at different heights. As a result we obtain the set of values $\left \lbrace \delta _k \right \rbrace$. The minimum of the set $\left \lbrace \delta _k \right \rbrace$ gives the value $t_k$ for which the measured normal vectors $\hat n$ are closest to the analytical normal vectors $\hat w$, and the retrieved surface is very close to the surface under test. To improve the reconstruction further, we can replace the end values of the set in Eq. (31) by $t_{k-1}$ and $t_{k+1}$, partition the interval $\left (t_{k-1}, t_{k+1} \right )$ into $K$ subintervals, and apply the iterative procedure again on each element of this interval. The general ideas implemented in the iterative procedure are presented in the flow diagram of Fig. 6.
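The outer coarse-to-fine scan over the heights $t_k$ can be sketched schematically as follows. Here `delta` abstracts one full run of the Section 4.2.1 iteration at a given height (returning $\delta_k$); the wrapper and its parameters are our own illustration:

```python
import numpy as np

def scan_heights(delta, t0, tK, K=10, rounds=3):
    """Coarse-to-fine search for the auxiliary-surface height minimising delta.

    delta : callable returning the convergence figure delta_k for a height t_k
            (one full run of the Section 4.2.1 iterative procedure)
    Implements the repartitioning of Eq. (31): after each round the interval
    is narrowed to (t_{k-1}, t_{k+1}) around the current minimum.
    """
    lo, hi = t0, tK
    for _ in range(rounds):
        ts = np.linspace(lo, hi, K + 1)          # the set {t_k} of Eq. (31)
        deltas = np.array([delta(t) for t in ts])
        k = int(np.argmin(deltas))
        lo = ts[max(k - 1, 0)]                   # narrow to (t_{k-1}, t_{k+1})
        hi = ts[min(k + 1, K)]
    return 0.5 * (lo + hi)
```

Each round shrinks the search interval by a factor of $K/2$, so a few rounds locate the best height to a small fraction of the original interval.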


Fig. 6. Flow Diagram of the iterative procedure.


5. Simulations

To show that the reconstruction procedure is a feasible approach to retrieving the surface under test, we simulate the whole evaluation procedure. We start with a well-known analytical surface and then calculate the positions of the centers of the spots in the image through an exact ray-tracing procedure. We show the results of the simulations for three different surfaces.

5.1 Simulation of the spot patterns

The spot patterns in the $\textbf{X}^\prime {-}\textbf{Y}^\prime$ plane are computed for three different surfaces. Each surface for the test simulation is represented by a function $\Phi (x,y)$: a rotationally symmetric conic section with vertex at the point $(0, 0, 0)$, plus deformations described by a linear combination of Zernike polynomials [6]. This surface is referred to a coordinate system $\textbf{X}\textbf{Y}\textbf{Z}$ according to Fig. 1 and its analytical expression is given by

$${\displaystyle \Phi(x,y)=c \left(x^2+ y^2\right) \left( 1+\sqrt{1-(k_o+1)c^{2}\left( x^2+y^2\right) }\right)^{{-}1} + \sum_{i=0}C_i Z_i(x,y), }$$
where $r=1/c$ and $k_o$ are the radius of curvature and the conic constant, respectively. In Eq. (32), $Z_i$ and $C_i$ are the $i$th Zernike polynomial and its coefficient or weight. The normal vector $\mathcal {\hat N}$ to the surface at the point $\vec P=\left (x,y,\Phi \right )$ can be obtained in two ways
$$\mathcal{\hat N}=\left( \mathcal{ \hat R}-\mathcal{ \hat I}\right) \left({\parallel} \mathcal{ \hat R}-\mathcal{ \hat I} \parallel\right) ^{{-}1}, \quad \quad \mathcal{\hat N}=\left( -\Phi_x, -\Phi_y, 1\right) \left( \Phi_x^2+ \Phi_y^2+ 1\right)^{-\frac{1}{2}} ,$$
where $\Phi _x$ and $\Phi _y$ are the partial derivatives of $\Phi$ with respect to $x$ and $y$, respectively. Using the rays shown in Fig. 1, the directions $\mathcal {\hat I}$ and $\mathcal {\hat R}$ can be written explicitly as follows
$$\mathcal{\hat I}=\left( \vec P-\vec h\right) \left({\parallel} \vec P-\vec h \parallel\right) ^{{-}1}, \quad \quad \mathcal{\hat R}=\left( \vec Q-\vec P\right) \left({\parallel} \vec Q-\vec P \parallel\right) ^{{-}1}.$$

Assuming that the coordinates of the pinhole $\vec h$ and of each point $\vec Q$ that composes the null-screen are known, the only unknown is $\vec P=\left (x,y,\Phi \right )$. By combining the components of the expressions in Eq. (33) we obtain a nonlinear system of two equations

$$\begin{aligned} \left( \mathcal{R}_x-\mathcal{I}_x\right) +\left( \mathcal{R}_z-\mathcal{I}_z\right)\Phi_x=0, \end{aligned}$$
$$\begin{aligned} \left( \mathcal{R}_y-\mathcal{I}_y\right) +\left( \mathcal{R}_z-\mathcal{I}_z\right)\Phi_y=0.\end{aligned}$$

Since $\Phi$ is a function of $x$ and $y$, the unknowns are $x$ and $y$. We solve Eqs. (35) and (36) simultaneously in numerical form for $x$ and $y$. Once the point $\vec P=\left (x,y,\Phi \right )$ is known, we can construct $\mathcal { \hat I}$ using Eq. (34) and trace the incident ray from $\vec h$ towards the $\textbf{X}^\prime {-} \textbf{Y}^\prime$ plane. Following a procedure similar to the one employed in Section 2 to obtain Eq. (10), the point of intersection $\vec P_c=\left (x_c, y_c,z_c\right )$ between a reflected ray and the CCD plane is given by

$$\vec P_c=\vec{h} +a_i \left( {\hat V_c} \cdot \hat{\mathcal{I}}\right)^{{-}1} \hat{\mathcal{I}},$$
where we assume that ${\hat V_c} \cdot \hat {\mathcal {I}}\neq 0$. It is easy to show that each point $\vec P_c^{\prime }=\left (x_c^{\prime }, y_c^{\prime },0 \right )$ on the detection plane, with respect to $\textbf{X}^\prime \textbf{Y}^\prime \textbf{Z}^\prime$, is given by
$$\vec P_c ^{\prime T}=a_i \left[\textrm{inv}\left( M_R\right) \hat{\mathcal{I}}^{T} \left( {\hat V_c} \cdot \hat{\mathcal{I}} \right)^{{-}1}-\mathbf{\hat Z}^{\prime T} \right] ,$$
where $\textrm{inv}\left ( \boldsymbol {M_R}\right )$ is the inverse of the matrix $\boldsymbol {M_R}$, $\boldsymbol{\hat Z}^{\prime }$ denotes the unit vector along the $\textbf{Z}^{\prime } \textbf{-}$axis direction, and the superscript $T$ denotes the transpose of a vector quantity.
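As an illustrative sketch (not the authors' code) of evaluating the surface of Eq. (32) and the second form of the normal in Eq. (33), the snippet below replaces the Zernike sum with simple radially symmetric deformation terms $C_i(x^2+y^2)^i$ purely for brevity, and takes the partial derivatives $\Phi_x$, $\Phi_y$ by central differences:

```python
import numpy as np

def sag(x, y, c, k0, coeffs=()):
    """Surface height of Eq. (32).  The Zernike sum is replaced here by
    radially symmetric terms C_i*(x^2+y^2)^i, for illustration only; a
    real test surface would use Zernike polynomials."""
    r2 = x ** 2 + y ** 2
    base = c * r2 / (1.0 + np.sqrt(1.0 - (k0 + 1.0) * c ** 2 * r2))
    return base + sum(C * r2 ** i for i, C in enumerate(coeffs, start=1))

def surface_normal(x, y, c, k0, coeffs=(), h=1e-6):
    """Unit normal, second form of Eq. (33), with Phi_x and Phi_y
    approximated by central differences instead of analytic derivatives."""
    fx = (sag(x + h, y, c, k0, coeffs) - sag(x - h, y, c, k0, coeffs)) / (2 * h)
    fy = (sag(x, y + h, c, k0, coeffs) - sag(x, y - h, c, k0, coeffs)) / (2 * h)
    n = np.array([-fx, -fy, 1.0])
    return n / np.linalg.norm(n)
```

At the vertex the normal reduces to $(0,0,1)$, which gives a quick sanity check on the sign conventions.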

5.2 Simulation results

In this section we use three freeform surfaces to generate the corresponding spot patterns on the CCD plane. In each case the radius of curvature and conic constant of the first term on the right-hand side of Eq. (32) are $r={-}100mm$ and $k_o=-0.53$. The Zernike coefficients are listed in Table 1 (in units of $mm$); these coefficients were chosen simply to illustrate the algorithm’s performance. Figures 7(a)-(c) show a color map of the surface height $\Phi (x,y)$ of each freeform surface defined by Eq. (32). We compute the spot patterns on the $\textbf{X}^\prime {-}\textbf{Y}^{\prime}$ image plane using the procedure described in Section 5.1; these are shown in Figs. 7(d)-(f). The lack of symmetry of each freeform is evident.


Fig. 7. Height color maps of the freeform surfaces used to compute the synthetic images, considering the values listed in Table 1: (a) case 1, (b) case 2 and (c) case 3. Image patterns corresponding to the surfaces of (d) case 1, (e) case 2 and (f) case 3.



Table 1. Parameters of the surfaces to generate the spot patterns.

The simulated spot patterns are treated as if they had been obtained experimentally in order to retrieve each surface, which is defined by the set of parameters listed in Table 1. The surface retrieval is done using the iterative algorithm explained in Section 4, assuming that the only known information is these spot patterns on the CCD, the null-screen on the LCD, and the geometry of the experimental setup. In Eq. (24) the number of Taylor monomials used is $J=45$ and the highest degree of the monomials is 8. The quantities that define the interval in Eq. (31) are $t_0=-5mm$, $t_K=5mm$ and $K=100$. The maximum number of iterations used to compute Eq. (28) is $L=10$. When we apply the iterative algorithm, the result is a point cloud $\left \lbrace \left ( x^l, y^l, z^l\right ) \right \rbrace$ that approximates the test surface (see Eq. (28)). Figures 8(a)-(c) show in each case the discrepancies $\Delta \Phi$ between the retrieved surface and the true surface defined by Eq. (32); the corresponding parameters of the retrieved surfaces are listed in Table 1. Using the deviation values $\Delta \Phi$ we also compute the peak-to-valley ($pv$) and root-mean-square ($rms$) errors, which are of the order of a fraction of a nanometer; these values are shown at the top of Figs. 8(a)-(c). This implies that the proposed iterative algorithm retrieves a point cloud very close to the theoretical surface, Eq. (32), under ideal conditions in which all the geometric parameters of the experimental setup are well known and there is no misalignment at all. In Figs. 8(d)-(f), we show the computing time, the behavior of the error $\delta _k$ as a function of the position of the auxiliary surface, and the height values at which the best solution is obtained. In addition, Figs. 8(g)-(i) show the convergence of the reconstruction algorithm in each case.
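The $pv$ and $rms$ figures of merit quoted above are simple functionals of the deviation map $\Delta\Phi$; a minimal sketch (the deviation values below are hypothetical, chosen at the sub-nanometer scale reported in the simulations):

```python
import numpy as np

def pv_rms(dphi):
    """Peak-to-valley and root-mean-square of a map of height
    discrepancies between the retrieved and the true surface."""
    d = np.asarray(dphi, dtype=float)
    return d.max() - d.min(), float(np.sqrt(np.mean(d ** 2)))

# hypothetical sub-nanometer deviations, in mm
pv, rms = pv_rms([-2e-7, 0.0, 1e-7, 3e-7])
```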


Fig. 8. Discrepancies color maps between the surface reconstructed and true surfaces (a) Case 1, (b) Case 2 and (c) Case 3. Behavior of $\delta _k$ as function of auxiliary surface position (d) Case 1, (e) Case 2 and (f) Case 3. Convergence of the proposed method (g) Case 1, (h) Case 2 and (i) Case 3.


Another way to quantify the quality of the surface reconstruction is to fit Eq. (32) to the data points $\left \lbrace \left ( x^l, y^l, z^l\right ) \right \rbrace$ by the least-squares method. The fitted values are $r$, the Zernike coefficients $C_i$ (in units of $mm$) and $k_o$; when these are compared with the values in Table 1, the corresponding percentage errors, listed in Table 2, are better than $2\times 10^{-3}\%$. Hence, we conclude that the proposed method converges very close to the true surface, at least in the performed simulations.


Table 2. Fitting parameters and percentage errors.

6. Experimental results

For the experimental part, the surface under test is the front surface of a PAL for which the only known information is its diameter, the refractive index $n_l=1.497$, and the addition $ADD$=2 diopters. We use the null-screen shown in Fig. 4(b), which is displayed on the LCD as shown in the measurement system of Fig. 4(c). The LCD (HP Compaq LE1711) has $1280\times 1024$ pixels of $0.264 mm$ per side. The images were recorded with a CCD sensor (Thorlabs model DCU224C) whose lens has an effective focal length $f = 16$mm (Thorlabs model MVL16L). The dimensions of the CCD sensor are $M \times N = 1280 \, \times \, 1024\, \textrm {pixels}$ or ${\mathcal {L}}_M \times \, {\mathcal {L}}_N =5.95 \times 4.76\, \textrm {mm}^2$.

The rays reflected from the surface under test generate a set of bright spots that is recorded by the CCD sensor on the plane $\textbf{X}^\prime {-}\textbf{Y}^\prime$, as shown in Fig. 9(a). A binary mask, image 9(b), is generated by applying a threshold to image 9(a). We compute the centroid coordinates of each bright spot in the resulting image, obtained as the product of images 9(a) and 9(b). In Fig. 9(c) the centroids computed with Eqs. (12) and (15) are shown. In order to correlate the image spots with the objects on the LCD, we display an image containing only the object that generates the central bright spot of the ideal image (see Fig. 4(a)). Once the corresponding bright spot has been identified in the experimental image, we look for the experimental centroid (see Fig. 9(c)) closest to each ideal centroid. We use the measured points to find the direction $\mathcal { \hat I}$ of the incident rays, considering the calibration results of Section 3 and using Eqs. (20) and (21). To apply the iterative algorithm, we propose that the auxiliary surfaces be positioned at different heights along the $\textbf{Z}$-axis. The positions are contained in the interval $\left (t_0, t_K \right )=\left (-5mm,5 mm\right )$. We partition the interval into $K=100$ subintervals of equal width $\Delta t=(t_K-t_0)/K=0.1mm$. At each value $t_k$, we perform at most $L=10$ iterations to compute Eq. (28). Figure 9(d) shows the dependence between $\delta _k$ and $t_k$. The retrieved surface closest to the true surface is obtained when the auxiliary surface is positioned at $t_k={-}3.5mm$; for this value $\min \left \lbrace \delta _k\right \rbrace =1.0\times 10^{{-}5}$. In addition, Fig. 9(e) shows the convergence of the iterative procedure, which occurs at $l=9$. The computing time is 28.25 s for the whole reconstruction procedure.
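The correlation between ideal and experimental centroids reduces to a nearest-neighbor search; a minimal numpy sketch, with hypothetical coordinates (a few hundred spots are easily handled by this brute-force approach):

```python
import numpy as np

def match_centroids(ideal, measured):
    """For each ideal centroid, return the index of the nearest
    measured centroid (brute-force nearest neighbor)."""
    ideal = np.asarray(ideal, dtype=float)[:, None, :]
    measured = np.asarray(measured, dtype=float)[None, :, :]
    d2 = ((ideal - measured) ** 2).sum(axis=-1)   # pairwise squared distances
    return d2.argmin(axis=1)

# hypothetical coordinates: measured spots are noisy, shuffled copies of ideal ones
matched = match_centroids([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]],
                          [[0.99, 0.01], [0.02, -0.01], [-0.01, 1.02]])
```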


Fig. 9. (a) Experimental image of the spot pattern. (b) Binary image as a result of the image thresholding. (c) Centroids obtained of the experimental image. (d) Behavior of $\delta _k$ as function of auxiliary surface position. (e) Convergence of the proposed method.


After applying the reconstruction algorithm we obtain a point cloud $\left \lbrace \left ( x, y, z\right ) \right \rbrace$. Figure 10(a) shows a color map of the height of the best retrieved surface, obtained with the proposed iterative algorithm. We use the least-squares method to fit a conic surface, described by the first term on the right-hand side of Eq. (32), to the measured points. The best-fitting conic surface has a radius of curvature $r={-}90.00mm$ and a conic constant $k_o={-}0.50$. The differences in sagitta between the retrieved surface and the best-fitting conic surface are shown in Fig. 10(b).
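A conic fit of this kind can be done by direct linear least squares, since Eq. (1), $x^2+y^2-2rz+(k_o+1)z^2=0$, is linear in $r$ and $(k_o+1)$. The sketch below uses this formulation on a synthetic meridian, not the measured point cloud; it is one possible implementation, not necessarily the authors':

```python
import numpy as np

def fit_conic(points):
    """Least-squares fit of the conic x^2 + y^2 - 2 r z + (k0+1) z^2 = 0
    (Eq. (1)) to a point cloud.  The equation is linear in r and (k0+1),
    so a direct lstsq solve suffices.  Returns (r, k0)."""
    x, y, z = np.asarray(points, dtype=float).T
    A = np.column_stack([-2.0 * z, z ** 2])
    b = -(x ** 2 + y ** 2)
    (r, kp1), *_ = np.linalg.lstsq(A, b, rcond=None)
    return r, kp1 - 1.0

# synthetic meridian of the conic with r = -90 mm, k0 = -0.5
c, k0 = 1.0 / -90.0, -0.5
xs = np.linspace(-20.0, 20.0, 41)
zs = c * xs ** 2 / (1.0 + np.sqrt(1.0 - (k0 + 1.0) * c ** 2 * xs ** 2))
r_fit, k0_fit = fit_conic(np.column_stack([xs, np.zeros_like(xs), zs]))
```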


Fig. 10. (a) Retrieved surface. (b) Differences between the retrieved surface and the best-fitting conical surface. (c) Zernike coefficients of the retrieved surface.


The retrieved surface can also be expanded in terms of a set of Zernike polynomials; Fig. 10(c) shows the weighting coefficients of the first 19 Zernike polynomials. Using this representation we compute the spherical ($\textbf{F}_e$) and cylindrical ($\textbf{F}_c$) power distributions as follows [34]

$$\mathbf{F}_e=(n_l-1)\left( \kappa_{min}+ \kappa_{max} \right) \big /2, \quad \mathbf{F}_c= (n_l-1)\left( \kappa_{min}- \kappa_{max}\right),$$
where $\textbf{F}_e$ and $\textbf{F}_c$ are measured in diopters ($m^{-1}$), and $\kappa _{max}$ and $\kappa _{min}$ are the maximum and minimum curvatures measured along the $x$ and $y$ directions, respectively. In Ref. [35], the explicit calculations of $\kappa _{max}$ and $\kappa _{min}$ are shown. The distribution maps of the spherical and cylindrical power of the retrieved surface are shown in Figs. 11(a) and 11(b), respectively. In maps 11(a) and 11(b), two circular regions denoted $\boldsymbol A$ and $\boldsymbol B$ are drawn; these correspond to the long-distance and near-distance vision regions of the PAL. Looking at Fig. 11(b), we notice an elongated region, denoted the corridor, that joins the $\boldsymbol A$ and $\boldsymbol B$ circular regions. In Fig. 11(c), the values of the spherical and cylindrical power along the corridor are plotted. The dotted curve shows that the dioptric power increases smoothly with the coordinate $y$, while the dashed curve confirms that the astigmatism is small, contained within the interval from 0.13 to 0.57 diopters. We do not know the true geometrical shape or analytical expression of the PAL under test; however, we can measure the addition ($\boldsymbol { ADD}$) of the PAL, which is defined as the dioptric power difference between the zones $\boldsymbol A$ and $\boldsymbol B$. We measured the addition with a commercial lensometer (ESSILOR ALM 700); the result is $\boldsymbol { ADD}= 1.97$ diopters. We may also compute the addition using the values contained in the $\boldsymbol A$ and $\boldsymbol B$ circular regions of the spherical power map of Fig. 11(a); in this case, the computed value is $\boldsymbol { ADD}_{ex}= 1.90$ diopters. The percentage error with respect to the measurement provided by the lensometer is $\Delta \boldsymbol { ADD}=3.55\%$.
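The power expressions above can be evaluated directly; a minimal sketch, using as a hypothetical check a sphere of radius 90 mm, whose curvatures in $m^{-1}$ are $\kappa = 1/0.090$ and whose cylinder must vanish:

```python
def dioptric_powers(k_min, k_max, n_l=1.497):
    """Spherical (mean) and cylindrical (astigmatic) power, in diopters,
    from the principal curvatures (in 1/m)."""
    Fe = (n_l - 1.0) * (k_min + k_max) / 2.0
    Fc = (n_l - 1.0) * (k_min - k_max)
    return Fe, Fc

# hypothetical check: a sphere of radius 90 mm has kappa = 1/0.090 m^-1,
# hence Fe = (1.497 - 1)/0.090 diopters and zero cylinder
Fe, Fc = dioptric_powers(1.0 / 0.090, 1.0 / 0.090)
```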


Fig. 11. Distribution (a) spherical power and (b) cylindrical power on the reconstructed surface. (c) Spherical and cylindrical powers along the corridor.


7. Conclusions

We have proposed and implemented the null-screen method using an exact ray tracing that allows us to design a suitable spot pattern on the LCD for testing aspheric and freeform surfaces. We proposed an iterative algorithm to numerically retrieve the shape of the surfaces under study. In order to validate the algorithm, we proposed some analytical freeform surfaces and showed how to compute their synthetic image patterns, which are treated as if they had been obtained experimentally. When the algorithm is applied to the simulations, the retrieved surfaces are very close to the true surfaces: the percentage errors associated with the parameters describing the retrieved surfaces are smaller than $2\times 10^{-3}\%$, and the $pv$ and $rms$ errors are of the order of a fraction of a nanometer. For the experimental case, we propose a simple calibration method to compute the orientation and position of the CCD and LCD. We utilized the proposed algorithm to reconstruct the front surface of a PAL. We also obtained the distribution maps of the spherical and cylindrical power of the PAL, which present the typical behavior observed in the literature. From the map of spherical power we computed the addition with a percentage error of $3.55\%$ compared to the value measured with a commercial lensometer; this indicates that we have reconstructed a surface close to the true surface. The proposed evaluation method is a simple technique for measuring the shape of the convex freeform surface of a PAL; it is not difficult to extend the method to other similar surfaces, which will be shown in future work.

Funding

Dirección General de Asuntos del Personal Académico, Universidad Nacional Autónoma de México (IN116420, IT101218, IT102520); Consejo Nacional de Ciencia y Tecnología (293411, 299028, 710606, A1-S-44220).

Acknowledgments

The authors acknowledge the economic support received from PAPIIT-UNAM under projects No. IT101218, No. IT102520 and No. IN116420; from Laboratorio Nacional de Óptica de la Visión (LANOV-CONACyT) under projects No. 293411 and No. 299028; and from CONACyT under project No. A1-S-44220.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. K. P. Thompson and J. P. Rolland, “Freeform Optical Surfaces: A revolution in imaging optical design,” Opt. Photonics News 23(6), 30–35 (2012). [CrossRef]  

2. F. Z. Fang, X. D. Zhang, A. Weckenmann, G. X. Zhang, and C. Evans, “Manufacturing and measurement of freeform optics,” CIRP Ann. 62(2), 823–846 (2013). [CrossRef]  

3. S. Wills, “Freeform Optics: Notes from the revolution,” Opt. Photonics News 28(7), 34–41 (2017). [CrossRef]  

4. A. Bauer, E. M. Schiesser, and J. P. Rolland, “Starting geometry creation and design method for freeform optics,” Nat. Commun. 9(1), 1756 (2018). [CrossRef]  

5. F. Duerr, Y. Meuret, and H. Thienpont, “Potential benefits of free-form optics in on-axis imaging applications with high aspect ratio,” Opt. Express 21(25), 31072–31081 (2013). [CrossRef]  

6. G. M. Dai, Wavefront Optics for Vision Correction, (SPIE, 2008).

7. W. Y. Hsu, Y. L. Liu, Y. C. Cheng, C. H. Kuo, C. C. Chen, and G. D. Su, “Design, fabrication, and metrology of ultra-precision optical freeform surface for progressive addition lens with B-spline description,” Int. J. Adv. Manuf. Technol. 63(1-4), 225–233 (2012). [CrossRef]  

8. R. Wu, J. Sasián, and R. Liang, “Algorithm for designing free-form imaging optics with non rational B-spline surfaces,” Appl. Opt. 56(9), 2517–2522 (2017). [CrossRef]  

9. J. Ye, L. Chen, X. Li, Q. Yuan, and Z. Gao, “Review of optical freeform surface representation technique and its application,” Opt. Eng. 56(11), 1 (2017). [CrossRef]  

10. G. H. Guilino, “Design philosophy for progressive addition lenses,” Appl. Opt. 32(1), 111–117 (1993). [CrossRef]  

11. B. Bourdoncle, J. P. Chauveau, and J. L. Mercier, “Traps in displaying optical performances of a progressive-addition lens,” Appl. Opt. 31(19), 3586–3593 (1992). [CrossRef]  

12. D. Lochegnies, P. Moreau, F. Hanriot, and P. Hugonneaux, “3D modelling of thermal replication for designing progressive glass moulds,” New J. Glass Ceramics 03(01), 34–42 (2013). [CrossRef]  

13. H. Feng, R. Xia, Y. Li, J. Chen, Y. Yuan, D. Zhu, S. Chen, and H. Chen, “Fabrication of freeform progressive addition lenses using a self-developed long stroke fast tool servo,” Int. J. Adv. Manuf. Technol. 91(9-12), 3799–3806 (2017). [CrossRef]  

14. C. Zhou, W. Wang, K. Yang, X. Chai, and Q. Ren, “Measurement and comparison of the optical performance of an ophthalmic lens based on a Hartmann-Shack wavefront sensor in real viewing conditions,” Appl. Opt. 47(34), 6434–6441 (2008). [CrossRef]  

15. J. Yu, F. Fang, and Z. Qiu, “Aberrations measurement of freeform spectacle lenses based on Hartmann wavefront technology,” Appl. Opt. 54(5), 986–994 (2015). [CrossRef]  

16. Z. Jia, K. Xu, and F. Fang, “Measurement of spectacle lenses using wavefront aberration in real view condition,” Opt. Express 25(18), 22125–22139 (2017). [CrossRef]  

17. D. Malacara, Optical Shop Testing (Wiley, 2007), Chap. 10.

18. L. Huang, J. Xue, B. Gao, C. McPherson, J. Beverage, and M. Idir, “Modal phase measuring deflectometry,” Opt. Express 24(21), 24649–24664 (2016). [CrossRef]  

19. M. C. Knauer, J. Kaminski, and G. Häusler, “Phase measuring deflectometry: a new approach to measure specular free-form surfaces,” Proc. SPIE 5457, 366–376 (2004). [CrossRef]  

20. C. Faber, E. Olesch, R. Krobot, and G. Hausler, “Deflectometry challenges interferometry: the competition gets tougher!” Proc. SPIE 8493, 84930R (2012). [CrossRef]  

21. L. Huang, M. Idir, C. Zuo, and A. Asundi, “Review of phase measuring deflectometry,” Opt. Lasers Eng. 107, 247–257 (2018). [CrossRef]  

22. S. A. Klein, “Axial curvature and the skew ray error in corneal topography,” Optom. Vis. Sci. 74(11), 931–944 (1997). [CrossRef]  

23. R. Díaz-Uribe and M. Campos-García, “Null-screen testing of fast convex aspheric surfaces,” Appl. Opt. 39(16), 2670–2677 (2000). [CrossRef]  

24. M. Campos-García, C. Cossio-Guerrero, V. I. Moreno-Oliva, and O. Huerta-Carranza, “Surface shape evaluation with a corneal topographer based on a conical null-screen with a novel radial point distribution,” Appl. Opt. 54(17), 5411–5419 (2015). [CrossRef]  

25. M. Avendaño-Alejo, V. I. Moreno-Oliva, M. Campos-García, and R. Díaz-Uribe, “Quantitative evaluation of an off-axis parabolic mirror by using a tilted null screen,” Appl. Opt. 48(5), 1008–1015 (2009). [CrossRef]  

26. R. Díaz-Uribe, “Medium precision null screen testing of off-axis parabolic mirrors for segmented primary telescope optics; the case of the Large Millimetric Telescope,” Appl. Opt. 39(16), 2790–2804 (2000). [CrossRef]  

27. A. K. Ghatak and K. Tahyagarajan, Contemporary Optics (Plenum, 1978), Chap. 2.

28. M. González-Cardel and R. Díaz-Uribe, “An analysis on the inversion of polynomials,” Rev. Mex. Fís. E 52, 163–171 (2006).

29. W. Greiner, Classical Mechanics: Systems of Particles and Hamiltonian Dynamics (Springer, 2010), Chap. 13.

30. J. Rodríguez, M. T. Martín, J. Herráez, and P. Arias, “Three-dimensional image orientation through only one rotation applied to image processing in engineering,” Appl. Opt. 47(35), 6631–6637 (2008). [CrossRef]  

31. Z. Gong, Z. Liu, and G. Zhang, “Flexible global calibration of multiple cameras with nonoverlapping fields of view using circular targets,” Appl. Opt. 56(11), 3122–3131 (2017). [CrossRef]  

32. G. Salmon, A Treatise on Conic Sections, 6th ed. (Chelsea Publishing Company, 1960), Chap. XVIII.

33. T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms (MIT, 2009), Chap. 24.

34. G. W. Griffiths, L. Plociniczak, and W. E. Schiesser, “Analysis of cornea curvature using radial basis functions-Part I: Methodology,” Comput. Biol. Med. 77, 274–284 (2016). [CrossRef]  

35. O. Huerta-Carranza, R. Díaz-Uribe, and M. Avendaño-Alejo, “Exact equations to measure highly aberrated wavefronts with the Hartmann test,” Opt. Express 28(21), 30928–30942 (2020). [CrossRef]  




