Form determination of optical surfaces by measuring the spatial coherence function using shearing interferometry


Abstract

We present a new method for the form measurement of optical surfaces using the spatial coherence function, which enables a shearing interferometer in combination with an LED multispot illumination to function as a measurement device. A new evaluation approach connects the measured data with the surface form by inverse raytracing. We present the complete measurement procedure together with the evaluation and show first measurement results obtained with the inverse evaluation procedure. In addition, the convergence and stability of the implemented optimization task are investigated.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The requirements for the measurement of the form of optical elements increase with the complexity of optical imaging systems employed in industrial imaging, consumer photography or photolithography. The optical industry has taken a great step forward in manufacturing aspheres and freeform optics, which have become indispensable, e.g. for high quality objectives. The form of new optical elements can deviate from a sphere by up to a few millimetres. Additionally, high-end applications require form accuracies in the nanometre range. Typical non-contact measurement systems [1] applied for flatness or spherical metrology may provide nanometre uncertainties, but cannot handle the high dynamic range of the topography of aspheric or freeform surfaces. Therefore, special new measurement setups for the form characterization of complex optical surfaces are needed. Commonly used approaches to meeting these challenges include null tests with computer generated holograms (CGHs) [2,3] or deformable mirrors [4], techniques that stitch multiple subapertures of the surface under test [5], or the use of multiple light sources in combination with model-based evaluation algorithms [6]. The model-based evaluation of the tilted-wave interferometer (TWI) [7] calculates interferograms corresponding to a virtual specimen; the agreement between the measured and calculated interferograms is the quality criterion. In the case of the shearing interferometer presented here, the change of the k-vectors caused by a virtual specimen is fitted until the measured and calculated k-vectors match. These approaches have certain limitations: CGHs are inflexible and expensive, stitching technologies often use scanning stages, which introduce misalignments, and systems based on laser sources suffer from additional disturbing interferences.

To meet these challenges and to be very flexible with respect to the form of aspheres and freeform optics, we use the spatial coherence function as the main measurand instead of the typically used wavefront. The spatial coherence function represents the correlation between two points of the wave field [8]. It is capable of describing a number of different mutually incoherent wave fields, i.e. wave fields originating from several independent light sources, simultaneously. This can be exploited when multiple light sources are used for interferometric measurements. As an additional benefit, the spatial coherence function can be measured by means of a shearing interferometer, which is inherently insensitive to mechanical distortions due to its common path approach. The combination with a multispot LED illumination enables a measurement technique in which the illumination arrangement can be adapted to the form of the specimen [9,10]. This ensures resolvable interferograms over the whole measurement area even for more complex surface geometries. Hence even steep slopes and asymmetrical designs become measurable without a scanning process of any sort. A measurement with slopes up to 14° is demonstrated in Section 5. Finally, the employed measurement setup allows LED light to be used, which significantly reduces measurement uncertainties arising from the coherent amplification of spurious reflections within the imaging system.

The spatial coherence function is measured with the shearing interferometer by taking interferograms for different shears. These data contain the directions of the light crossing the measurement plane and can be used to determine the form of a refracting specimen. We present the complete process, from the acquisition of the interferograms and the extraction of ray direction information to the reconstruction of the refracting surface under test by inverse raytracing.

2. The setup of the shearing interferometer

The shearing interferometer setup we use is based on a 4f-configuration provided by two lenses [9,10]. A sketch of the setup is presented in Fig. 1. The surface under test is placed in the first focal plane of the first lens and sharply imaged to the second focal plane onto the image sensor. Between the lenses, at the Fourier plane [11], a spatial light modulator (Holoeye Pluto VIS-006-A) is placed, which represents the shearing element. We exploit the birefringent properties of this device for the beam splitting and shear generation. A polarizer ensures that the light reaches the spatial light modulator (SLM) with 45° polarization. In this case, half of the light passes the SLM without being influenced by it. The other half is diffracted into the first diffraction order of a blazed grating displayed on the SLM. The slight angular difference between the two beams realizes the shear. A major advantage of this particular realization of a shearing interferometer is its vibration stability: both interfering images are generated by the same element, which avoids relative movement between them and yields a stable fringe pattern. A second polarizer in front of the sensor leads to a shearing interferogram at the image sensor plane.
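For illustration only, the following sketch (our own, not the authors' software) generates the kind of wrapped blazed phase pattern the SLM could display, including the lateral grating shift later used for phase stepping. The grating period, the SLM resolution and the relation shear ≈ λ·f2/Λ for a grating of period Λ in the Fourier plane of an ideal 4f system are assumptions of this sketch.

    import numpy as np

    # Sketch: wrapped blazed (sawtooth) phase pattern for an SLM acting as shearing element.
    # In an ideal 4f geometry, a phase ramp of period Lambda in the Fourier plane displaces the
    # first-order image copy by roughly lambda * f2 / Lambda, which is the shear (assumption).
    def blazed_grating(shape, period_px, shift_px=0.0):
        """Phase in [0, 2*pi); shifting the grating by period_px/4 steps the fringe phase by pi/2."""
        ny, nx = shape
        x = np.arange(nx, dtype=float)
        phase_row = np.mod(2.0 * np.pi * (x - shift_px) / period_px, 2.0 * np.pi)
        return np.tile(phase_row, (ny, 1))

    # Four patterns for 4-step phase shifting (see below), obtained by quarter-period shifts.
    period = 20.0                                    # grating period in SLM pixels (hypothetical)
    patterns = [blazed_grating((1080, 1920), period, shift_px=k * period / 4.0) for k in range(4)]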


Fig. 1 Sketch of the measurement setup [9,10]. The surface under test (SUT) is illuminated by multiple LEDs and imaged to the image sensor plane by two lenses in a 4f-configuration. In the focal plane between the lenses, the spatial light modulator (SLM) is located which provides the shear. Two polarization filters are used to exploit the birefringent properties of the SLM.


An essential step towards a highly flexible measurement system is the LED multispot illumination. Due to the common path setup of the shearing interferometer, low coherence light as emitted by typical LEDs can be used. On the one hand, LED light has a sufficiently high spatial coherence to allow shear values of a few tens of micrometres, which is necessary to obtain high quality interferograms; on the other hand, it offers all the benefits of low coherence light sources, such as the avoidance of disturbing speckle interferences. Each LED light source images a different part of the surface under test onto the sensor. In overlapping areas, only the light intensities are superimposed; no parasitic interferences are possible because of the low coherence of the light. With a sufficiently large number of light sources, individually arranged in space, resolvable fringe patterns covering the whole measurement area are recorded simultaneously. Some examples are shown in Figs. 2 and 3. Thus, the measurement range for even complex curvatures, as in aspheric or freeform shaped surfaces, can be extended.


Fig. 2 Photo of the laboratory setup of the shearing interferometer with the multispot illumination (1). The light array consists of seven LED coupled fiber tips. In this example an aspheric lens with a focal length of 50 mm is placed in the measurement plane (2). Each light source yields an interferogram patch on the CCD sensor (5), which results in an overall resolvable interferogram shown on the screen. The interferometer is based on a spatial light modulator (4) as shearing element in the Fourier plane of a 4-f-configuration provided by two lenses (3).


Fig. 3 Example of a measured mutual coherence function across the central region of a spherical lens (f = 50 mm): Amplitude (a) and phase (b) measured by the shear interferometer with the shear set to x = 240 µm and y = 160 µm. Each light source creates one sub-aperture in the sensor domain. The yellow rectangles show examples of three overlapping sub-apertures. Depending on the exact position of the light sources, we obtain dark or bright areas across the overlapping regions, showing the destructive and constructive superposition of the contributions from the individual sub-apertures in the complex mutual coherence function.


For the measurement, we record interferograms for different shears in the x- and y-directions, i.e. for different shear magnitudes and shear directions. Phase shifting is realized by laterally shifting the blazed grating on the SLM in the direction of the shear. In our case, a 4-step phase shifting algorithm [12] is used. To extract the surface form, we perform two sets of measurements: one without the specimen to determine the positions of the light sources, and one with the specimen.
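To make the phase shifting step concrete, the following minimal sketch (an assumed, generic implementation, not the authors' "Fringe Processor" code) recovers the complex fringe term, i.e. the modulation amplitude and phase of the interference pattern, from four interferograms recorded with phase shifts of 0, π/2, π and 3π/2:

    import numpy as np

    # Assumed model per pixel: I_k = A + B*cos(phi + k*pi/2) for k = 0..3.
    # Then I0 - I2 = 2*B*cos(phi) and I3 - I1 = 2*B*sin(phi).
    def complex_fringe_term(i0, i1, i2, i3):
        """Return 0.5*((i0 - i2) + 1j*(i3 - i1)) = B*exp(1j*phi) for each pixel."""
        return 0.5 * ((i0 - i2) + 1j * (i3 - i1))

    # Usage with four recorded frames (equally shaped numpy arrays):
    # g = complex_fringe_term(I0, I90, I180, I270)
    # amplitude, phase = np.abs(g), np.angle(g)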

3. The spatial coherence function for independent light sources

As seen from Fig. 1, the specimen under test is illuminated by several independent light sources at the same time. In this situation, we cannot assign a consistent wavefront or wave field across the measurement plane. However, we can still find a time-independent description of the light by means of the spatial coherence function or mutual intensity, which is given by:

G(x_1, x_2) = \langle U^*(x_1, t)\, U(x_2, t) \rangle_t = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} U^*(x_1, t)\, U(x_2, t)\, \mathrm{d}t, \qquad (1)
where \langle \dots \rangle_t denotes a time average. Hence, across a plane, the spatial coherence function is a 4-dimensional statistical property. Denoting the wave fields corresponding to the independent light sources by U_n, we can now insert the sum of the independent wave fields
U(x, t) = \sum_n U_n(x, t) \qquad (2)
into Eq. (1) which yields [13]:
G(x_1, x_2) = \sum_n U_n^*(x_1)\, U_n(x_2). \qquad (3)
To arrive at Eq. (3), we have used the fact that any two of the wave fields are mutually independent, so that all cross terms involving light originating from different light sources average to zero. At this point, it is sufficient to conclude that the spatial coherence function is a time-independent complex property which is well defined even in cases in which several mutually incoherent wave fields are superposed. Later, in Section 4, we will see how Eq. (3) can be exploited to measure the shape of an optical element.
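The cancellation of the cross terms can be illustrated numerically. The following sketch is our own toy model, assuming that mutually incoherent sources can be represented by statistically independent random phases per time sample; it shows that the time-averaged mutual intensity of the superposed field approaches the sum of the individual contributions of Eq. (3):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 200_000                                   # number of time samples in the average

    phi1 = rng.uniform(0.0, 2.0 * np.pi, T)       # random phase of source 1
    phi2 = rng.uniform(0.0, 2.0 * np.pi, T)       # independent random phase of source 2

    # Fields of each source at the two points x1 and x2 keep a fixed internal phase relation.
    U1_x1, U1_x2 = np.exp(1j * phi1), np.exp(1j * (phi1 + 0.3))
    U2_x1, U2_x2 = np.exp(1j * phi2), np.exp(1j * (phi2 + 1.1))

    U_x1, U_x2 = U1_x1 + U2_x1, U1_x2 + U2_x2     # superposed field, Eq. (2)

    G_total = np.mean(np.conj(U_x1) * U_x2)                                     # Eq. (1)
    G_sum = np.mean(np.conj(U1_x1) * U1_x2) + np.mean(np.conj(U2_x1) * U2_x2)   # Eq. (3)

    print(abs(G_total - G_sum))                   # on the order of 1/sqrt(T): cross terms vanish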

For now, we will concentrate on the measurement process. A convenient way of measuring the spatial coherence function is a shearing interferometer. Denoting the shear by s, we can write the intensity in the image plane of a shearing interferometer as

I_S(x) = \left\langle \left| U(x, t) + U(x + s, t) \right|^2 \right\rangle_t \qquad (4)
Using Eqs. (1) and (2), it is straightforward to show that
I_S(x) = I(x) + I(x + s) + 2\, \mathrm{Re}\{ G(x, x + s) \}. \qquad (5)
From Eq. (5), we see that the shearing interferometer provides the spatial coherence function for any two positions x and x + s at any point across the measurement plane.
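For reference, the intermediate step between Eqs. (4) and (5) is obtained by expanding the squared modulus and applying the definition of Eq. (1):

\langle |U(x,t) + U(x+s,t)|^2 \rangle_t = I(x) + I(x+s) + \langle U^*(x,t)\, U(x+s,t) \rangle_t + \langle U(x,t)\, U^*(x+s,t) \rangle_t = I(x) + I(x+s) + G(x, x+s) + G^*(x, x+s),

and the sum of the last two terms equals 2\,\mathrm{Re}\{G(x, x+s)\}.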

4. Shape measurement based on the spatial coherence function

In the following we will describe how to derive the shape of an optical element, e.g. a lens, from the measured spatial coherence function. The procedure consists of two steps. In the first step we assign a set of rays to the measured spatial coherence function. In the second step, we solve an inverse ray tracing problem, which eventually yields the surface shape of the specimen.

For the first step, we measure a subspace of the coherence function in the close vicinity of each point in the measurement plane. In the experiments, we used an 11 × 11 matrix of shears in the x- and y-directions with zero shear in the centre, i.e. we performed 121 phase-shifted measurements. To this end, we have, for every point x_0 in the measurement plane, the coherence function [13]

G(x_0, x_0 + s) = \sum_n U_n^*(x_0)\, U_n(x_0 + s) = \psi(s; x_0). \qquad (6)
Please note that, for a given point x_0, Eq. (6) only depends on the shear s; therefore ψ(s; x_0) represents a summation of all wave fields superposed at position x_0, each of them multiplied by an unknown complex-valued constant U_n^*(x_0).

If the shears s are chosen to be sufficiently small, we can approximate the wave fields in Eq. (6) as plane waves. With this in mind, we may determine the corresponding k-vectors by means of a plane wave decomposition in frequency space using the Fourier transform

\Psi(k_x, k_y; x_0) = \iint \psi(s; x_0)\, e^{-i (k_x s_x + k_y s_y)}\, \mathrm{d}s_x\, \mathrm{d}s_y, \qquad (7)
where sx and sy are the components of the shear vector. This result can be interpreted within the model of geometrical optics. The direction of each k-vector defines the direction of a ray that pierces the measurement plane at the position x0, i.e. we get one ray for each of the wavefields superposed at x0 [13]. Please note that deviations from the plane wave assumption in Eq. (7) are tolerable as long as the superposing wave fronts can be assumed symmetric (e.g. parabolic). In this case the maximum that defines the direction of the corresponding k-vector will still be found at the same position in Fourier space.
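A minimal sketch of this evaluation step (our own illustration, not the authors' implementation): the sampled coherence function ψ(s; x0) on the 11 × 11 shear grid is Fourier transformed with zero padding, and the positions of the strongest local maxima yield the transverse k-vector components. The shear step size, the padding factor and the simple peak suppression are assumptions of this sketch.

    import numpy as np

    def k_vectors_from_psi(psi, shear_step, n_sources, pad=16):
        """Estimate transverse k-vectors (kx, ky) at one point x0.

        psi        : complex (11, 11) array, psi[iy, ix] = G(x0, x0 + s) on the shear grid
        shear_step : shear increment between neighbouring grid samples (metres)
        n_sources  : number of wave fields expected to superpose at x0
        """
        n = psi.shape[0] * pad                                        # zero padding refines peak positions
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(psi, s=(n, n))))
        k_axis = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=shear_step))

        k_list = []
        for _ in range(n_sources):                                    # one peak per superposed wave field
            iy, ix = np.unravel_index(np.argmax(spectrum), spectrum.shape)
            k_list.append((k_axis[ix], k_axis[iy]))
            # crude suppression of the found peak before searching for the next one
            spectrum[max(iy - pad, 0): iy + pad + 1, max(ix - pad, 0): ix + pad + 1] = 0.0
        return k_list

The axial component follows from |k| = 2π/λ, so each (kx, ky) pair defines the direction of one ray piercing the measurement plane at x0.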

After extracting all the rays from every position x in the measurement plane, we can, as the second step, proceed with the inverse raytracing procedure. At this point, we have two sets of k-vector information: one measured with the specimen (k_refracted) and one measured without (k_lightsource). By extrapolating rays in the negative direction of the k_lightsource-vectors back to their origin, which is the crossing point of all rays belonging to one light source, we can determine the positions of the corresponding light sources, as shown in Fig. 4(a). To obtain a k_incident-vector representing the incident ray for every k_refracted-vector at position x, see Fig. 4(b), we connect the light source with the positions of the k_refracted-vectors, resulting in Fig. 4(c). The incident rays, in combination with the refracted rays along the directions of the k_refracted-vectors, represent the path of light from the interferometer through the specimen to the light source.
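The light source position can be found from the back-propagated rays by a standard linear least-squares intersection; the sketch below is an assumed implementation (the authors do not specify their method) that determines the point with minimal summed squared distance to all rays of one light source:

    import numpy as np

    def least_squares_intersection(origins, directions):
        """Point closest, in the least-squares sense, to a bundle of rays.

        origins    : (N, 3) ray starting points in the measurement plane
        directions : (N, 3) direction vectors pointing from the plane back towards the source
        """
        d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, di in zip(origins, d):
            P = np.eye(3) - np.outer(di, di)   # projector onto the plane perpendicular to this ray
            A += P
            b += P @ o
        return np.linalg.solve(A, b)           # position of the (approximate) crossing point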


Fig. 4 a) Sketch of the determination of a light source position. Rays with the direction of the k_lightsource-vectors, which are measured without the specimen, are propagated from the measurement plane to their crossing points; b) A k_incident-vector is determined for every position of a k_refracted-vector due to the knowledge of the corresponding light source position; c) Sketch of the form determination of the surface under test (SUT) by inverse raytracing. The solid line behind the measurement plane represents the rays in the direction of the k_refracted-vectors, which are measured with the specimen, given by the spatial coherence function at its observation point. The dashed lines represent calculated rays refracted by the two shown exemplary surface forms of the assumed specimen. The form which provides the best match between measured and calculated rays corresponds to the surface under test.


The target of the evaluation is to find a surface form that corresponds to the refraction of the light observed in the measurement plane. The shape of the surface is described by a Zernike polynomial, which can be combined with a spherical basic form. The number of included orders can be adapted to the type of the surface under test. To obtain the surface form, the surface parameters are optimized in an iterative process. It is based on a Monte-Carlo algorithm, which picks random values for each surface parameter within a defined interval centred around the previous solution. In the course of the process, the interval size decreases. In each iteration step, the refraction of the incident rays at the assumed surface is calculated by raytracing to obtain the calculated k_refracted-vectors, as illustrated in Fig. 4(c). By minimizing the deviation between the directions of the measured and calculated k-vectors, the form of the surface under test is determined. For measurements in transmission mode, a homogeneous refractive index of the object under test is assumed and, in addition, the reverse surface form needs to be known in order to reconstruct the form of the surface under test.
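The forward raytracing step, i.e. calculating the refraction of a known incident ray at the currently assumed surface, can be expressed with the vector form of Snell's law. The sketch below is a generic textbook implementation (not taken from the authors' software); n1 and n2 are the refractive indices on the incident and exit sides, and the surface normal comes from the current surface model:

    import numpy as np

    def refract(incident, normal, n1, n2):
        """Vector form of Snell's law.

        incident : direction of the incoming ray
        normal   : surface normal, oriented against the incoming ray
        Returns the unit direction of the refracted ray, or None for total internal reflection.
        """
        i = incident / np.linalg.norm(incident)
        n = normal / np.linalg.norm(normal)
        eta = n1 / n2
        cos_i = -np.dot(n, i)
        sin2_t = eta**2 * (1.0 - cos_i**2)
        if sin2_t > 1.0:
            return None                         # total internal reflection, no refracted ray
        cos_t = np.sqrt(1.0 - sin2_t)
        return eta * i + (eta * cos_i - cos_t) * n

Applied at both surfaces of the specimen (the fitted front surface and the known reverse surface), this yields the calculated exit ray directions that are compared with the measured k_refracted-vectors.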

Figure 5 shows a flow chart illustrating the evaluation process. The maximum number of iteration steps can be chosen in advance with regard to the maximum calculation time. In addition, two other termination conditions are implemented to avoid unnecessary calculation. In the first case, the evaluation ends when the deviation between the measured and modelled rays falls below a required value. In the second case, it ends when the interval size from which the surface parameters are picked has become so small that the resulting parameter changes stay far below the expected measurement uncertainty.
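A compact sketch of such an optimization loop with a shrinking interval and the termination conditions described above is given below. Here, cost(params) stands for the deviation between measured and calculated k-vector directions, and all numerical values (shrink factor, thresholds, iteration limit) are illustrative assumptions rather than the values used by the authors:

    import numpy as np

    def monte_carlo_search(cost, start, interval, max_iter=50_000,
                           cost_tol=1e-9, interval_tol=1e-9, shrink=0.9995, seed=0):
        """Random search: pick parameters around the best solution found so far and shrink
        the picking interval; stop on iteration limit, cost threshold or interval size."""
        rng = np.random.default_rng(seed)
        best = np.asarray(start, dtype=float)
        best_cost = cost(best)
        interval = np.asarray(interval, dtype=float)

        for _ in range(max_iter):
            candidate = best + rng.uniform(-interval, interval)
            c = cost(candidate)
            if c < best_cost:
                best, best_cost = candidate, c      # keep the better matching parameter set
            interval = interval * shrink            # interval size decreases during the run
            if best_cost < cost_tol:                # termination case 1: rays match well enough
                break
            if np.all(interval < interval_tol):     # termination case 2: interval far below uncertainty
                break
        return best, best_cost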


Fig. 5 Flow chart of the form determination by means of an optimization method in step two. The estimated surface form is changed by picking a set of random parameters from a given interval.


5. Experimental example

To test the complete sequence of measurement and evaluation steps, from the first data acquisition to the determined surface form, we use a spherical plano-convex lens with a 50 mm focal length as a test object, as shown in Fig. 6. The basic form and radius of curvature of this lens were also measured on the radius measuring bench at PTB [14], which serves as a reference for comparison in the case of spherical specimens.


Fig. 6 Photo of the measured spherical plano-convex lens with 50 mm focal length and 25.4 mm diameter.


As described in Section 2, the measurement interferograms are taken with the shearing interferometer for different shears in different directions. The “Fringe Processor” software [15] provides an automatic capturing mode for this purpose. As described in Section 4, first a measurement without the specimen is performed. This is evaluated to obtain the light source positions, which are needed to calculate the incident ray directions: the ray directions determined at the measurement plane are traced back to their origins, which are the crossing points of multiple rays and thus the positions of the spot light sources. In a second step, the directions of the rays that have passed through the specimen are measured. By connecting the starting points of the refracted rays with the positions of their related light sources, the incident rays are calculated.

Figure 7(a) presents the calculated incident rays originating from the light source positions, which are determined by the first interferometric measurement. In Fig. 7(b), the set of rays (blue) from the measurement with the specimen is additionally shown. In the measurement plane at z = 0 mm, the refraction by the specimen is visible.


Fig. 7 a) Calculated incident rays (red) originating from the light source positions which are determined by the first measurement of the coherence function. The black circles indicate the light source positions; b) Combined ray information of the measurements with (blue) and without (red) the specimen. Refraction at the measurement plane at z = 0 becomes visible; c) Result of the evaluation process. The known incident rays (red) are refracted at the surface determined (blue surface, only periphery visible) and also at the known reverse surface (yellow surface). We aim for the best match between the calculated rays (blue) and the measured exiting rays (blue Fig. 7(a)).


After about fifty thousand iteration steps within the surface form reconstruction, the surface form is determined. In this case, the evaluation was stopped because the parameter picking intervals had become too small, with the resulting value changes falling far below the expected measurement uncertainty. Figure 7(c) shows the rays which are refracted at the determined surface (blue surface, only periphery visible), together with the already known incident rays, similar to Fig. 7(a) (red). In between the rays, the reconstructed specimen is visible. The curved side (blue surface) faces the light sources, comparable with the measurement situation. The reverse surface (yellow) is assumed to be known and set to a flat plane, whereby other geometries, e.g. spherical shapes, may also be considered.

We carried out multiple evaluations of the ray information for different starting parameters. In a first investigation, the radius of curvature of the best fit sphere and the tilts in the x (Z(1,-1)) and y (Z(1,1)) directions, which for a sphere correspond to a lateral shift, are calculated. Table 1 gives an overview of five evaluations of the same measurement data for different starting parameters. All result in the same parameter values, which confirms the independence of the result from the starting parameters. The radius of curvature was determined to be 25.982 mm in each case. A well-founded measurement uncertainty for the measurement technique is currently under investigation and reaches beyond the scope of this publication. The PTB radius measuring bench determined a radius of curvature of (26.04368 ± 0.00093) mm, which corresponds to a difference of 0.062 mm. The area of interest has a diameter of 25.4 mm. The maximum height difference between the compared spheres is about 8 µm at the outer regions and less in the middle. For a smaller area of interest with half of the diameter, the deviation is reduced to about 1.8 µm. Figure 8 shows the reconstructed surface form.
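As a plausibility check of these numbers (our own estimate, based on the paraxial sag approximation rather than the authors' exact comparison), the height difference between two spheres with radii R_1 = 25.982 mm and R_2 = 26.044 mm that touch at the vertex is approximately

\Delta z(h) \approx \frac{h^2}{2} \left( \frac{1}{R_1} - \frac{1}{R_2} \right),

which yields roughly 7 µm at the edge of the 25.4 mm aperture (h = 12.7 mm), of the same order as the stated 8 µm, and about 1.8 µm at half the diameter (h = 6.35 mm), matching the stated value.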


Table 1. Multiple evaluations of the same data set for different starting parameters, considering the radius of the best fit sphere and tilts. Every evaluation results in the same radius of curvature although the starting parameters are significantly changed. All values are in mm.


Fig. 8 Reconstructed surface form of the plano-convex lens with 50 mm focal length.


The convergence of the optimization procedure was also investigated. Figure 9 shows the progress of the radius of curvature over the iteration steps for evaluation 3 in Table 1. Every blue circle represents a new, better matching result. The red line represents the result of the PTB radius measuring bench. Apart from the small difference between the radius values determined by the two measurement systems, a clear convergence of the radius determination is observed.


Fig. 9 Progress of the radius with ongoing iteration steps (measurement 3 of Table 1). The red line represents the measurement result from the PTB radius measurement bench.


In a second investigation, a Zernike polynomial is used to describe the surface under test, see Table 2. For this purpose, eleven orders are used, which represent the surface more flexibly and yield a better agreement of the compared ray directions in the optimization procedure. This approach increased the evaluation time from 1.5 hours to 4.5 hours, which also depends on the termination conditions and the computer used (Intel i5-3470, 32 GB RAM). As expected for a spherical surface, we see a particularly large contribution of Z(2,0), which is the term closest to a spherical shape. Compared to the form description by a tilted sphere, the Zernike polynomial is more flexible, so that a better result in terms of a lower deviation between the measured and calculated rays can be achieved.
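To illustrate what a surface description combining a spherical basic form with a few Zernike terms might look like, the sketch below evaluates the surface sag over the aperture. The unnormalized Zernike convention and the selected terms are assumptions of this illustration; indexing and normalization conventions differ between implementations and are not necessarily those used by the authors:

    import numpy as np

    # A few unnormalized Zernike terms (one common convention); (n, m) -> function of (rho, theta).
    ZERNIKE = {
        (1, -1): lambda r, t: r * np.sin(t),                  # tilt (sine)
        (1,  1): lambda r, t: r * np.cos(t),                  # tilt (cosine)
        (2,  0): lambda r, t: 2.0 * r**2 - 1.0,               # defocus, the term closest to a sphere
        (2, -2): lambda r, t: r**2 * np.sin(2.0 * t),         # astigmatism (sine)
        (2,  2): lambda r, t: r**2 * np.cos(2.0 * t),         # astigmatism (cosine)
        (4,  0): lambda r, t: 6.0 * r**4 - 6.0 * r**2 + 1.0,  # primary spherical term
    }

    def surface_sag(x, y, radius, coeffs, aperture_radius):
        """Sag of a base sphere (radius of curvature 'radius') plus Zernike deviations.

        x, y, radius, aperture_radius in mm; coeffs maps (n, m) to a coefficient in mm.
        """
        r2 = x**2 + y**2
        sphere = radius - np.sqrt(radius**2 - r2)             # exact spherical sag
        rho = np.sqrt(r2) / aperture_radius                   # normalized radial coordinate
        theta = np.arctan2(y, x)
        deviation = sum(c * ZERNIKE[nm](rho, theta) for nm, c in coeffs.items())
        return sphere + deviation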


Table 2. Multiple evaluations of the same data set considering a Zernike polynomial to describe the surface form. All values are in mm. In all evaluations, the starting values of the parameters are set to 0.

6. Conclusion

We demonstrate the use of the spatial coherence function to determine the form of an optical surface. By exploiting its ability to describe multiple distinct wavefronts within one wave field, the measurement principle allows the specimen to be illuminated from different angles at the same time. In this way, typical challenges of asphere and freeform metrology, such as unresolvable fringe densities due to steep slopes or a large dynamic range in height, can be overcome. Even aperture clipping can be bypassed by using a larger number of light sources.

We also demonstrate the capabilities of the new evaluation procedure based on inverse raytracing to determine the surface form of the specimen by comparison with an established measurement system, and we show the convergence and consistency of multiple evaluations for different starting parameters. The shearing interferometer cannot distinguish between the radius and a defocus, i.e. a z-shift, which may be the reason for the constant radius difference of 62 µm with respect to the PTB radius measuring bench.

A current drawback is the long evaluation time of more than one hour for a larger number of parameters, e.g. due to the inclusion of a higher number of Zernike orders, which is the subject of further work. To increase the vertical and lateral resolution, we will increase the number of rays included in the evaluation. Extending the interferometer with multispot illumination to reflective measurements and the determination of the measurement uncertainty are future development steps. In addition, comparison measurements with asphere measurement systems are planned in the near future.

Funding

Financial support by the Deutsche Forschungsgemeinschaft (DFG, see http://gepris.dfg.de/gepris/projekt/258565427) under contract nos. BE1924/22-2 and EH400/5-2 is gratefully acknowledged.

Acknowledgments

We thank Julian Heine for his help with the experiments.

References

1. Z. Malacara and M. Servin, Interferogram Analysis for Optical Testing (CRC Press, 2016), Vol. 84.

2. P. Zhou and J. H. Burge, “Fabrication error analysis and experimental demonstration for computer-generated holograms,” Appl. Opt. 46(5), 657–663 (2007).

3. J. C. Wyant and V. P. Bennett, Using Computer Generated Holograms to Test Aspheric Wavefronts (Springer, 2014).

4. K. Fuerschbach, K. P. Thompson, and J. P. Rolland, “Interferometric measurement of a concave, φ-polynomial, Zernike mirror,” Opt. Lett. 39(1), 18–21 (2014).

5. P. Murphy, G. Forbes, J. Fleig, P. Dumas, and M. Tricard, “Stitching interferometry: A flexible solution for surface metrology,” Opt. Photonics News 14(5), 38–43 (2003).

6. E. Garbusi, C. Pruss, and W. Osten, “Interferometer for precise and flexible asphere testing,” Opt. Lett. 33(24), 2973–2975 (2008).

7. I. Fortmeier, M. Stavridis, A. Wiegmann, M. Schulz, W. Osten, and C. Elster, “Evaluation of absolute form measurements using a tilted-wave interferometer,” Opt. Express 24(4), 3393–3404 (2016).

8. U. Schnars, C. Falldorf, J. Watson, and W. Jüptner, Digital Holography and Wavefront Sensing (Springer, 2015).

9. C. Falldorf, A. Simic, G. Ehret, M. Schulz, C. von Kopylow, and R. B. Bergmann, “Precise optical metrology using Computational Shear Interferometry and an LCD monitor as light source,” in Proceedings of Fringe 2013 (Springer, 2014), pp. 729–734.

10. J.-H. Hagemann, G. Ehret, R. B. Bergmann, and C. Falldorf, “Realization of a shearing interferometer with LED multispot illumination for form characterization of optics,” in Proceedings of DGaO (2016).

11. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill Book Company, 1988).

12. D. Malacara, Optical Shop Testing, 3rd ed. (Wiley-VCH, 2007).

13. C. Falldorf, J.-H. Hagemann, G. Ehret, and R. B. Bergmann, “Sparse light fields in coherent optical metrology [Invited],” Appl. Opt. 56(13), F14–F19 (2017).

14. M. Schulz, I. Fortmeier, D. Sommer, and G. Blobel, “Concept of metrological reference surfaces for asphere and freeform metrology,” in Proceedings of the 17th International Conference of the European Society for Precision Engineering and Nanotechnology (2017), pp. 365–366.

15. “Fringe Processor,” http://www.bias.de/wp-content/themes/bias/assets/pdf/OMOS_Flyer_v4_WEB.pdf
