
Mach-Zehnder diffracted beam interferometer

Open Access

Abstract

We use a Mach-Zehnder interferometer with two afocal systems to generate the interference between an image and a diffracted copy of a test beam. The interference data are fed into an iterative phase-retrieval algorithm. We carry out a numerical study of the performance of this wavefront reconstruction technique with simulated data, and we also present experimental results obtained using several lenses as test objects.

©2007 Optical Society of America

1. Introduction

Although most optical applications need only the information extracted from the irradiance of an optical field, some tasks also require knowledge of its phase; examples are laser beam diagnosis, optical testing, aberration correction, and recovery of blurred images. So far, interferometric methods offer the greatest resolution in phase reconstruction [1]. In order to infer the phase of an unknown beam, many interferometers use a reference beam with known phase (typically, a plane wave). However, some interferometers are self-referenced, i.e., they use the unknown beam as its own reference. Examples of such interferometers are lateral shearing [2-6] and radial shearing interferometers [7-12].

In reference [13] we proposed a new self-referenced interferometric technique for wavefront measurement, which we called diffracted beam interferometry (henceforth DBI). Unlike shearing interferometers, which use a copy of the test beam displaced or otherwise modified in a plane transverse to the propagation direction, DBI employs a diffracted copy of the test beam, i.e., a copy displaced along the propagation direction. We may say that the DBI technique applies an axial or longitudinal shear to one of the interfering beams. In contrast to shearing interferometers, however, the sheared beam is not an image of the test beam, because diffraction affects both the amplitude and the phase of the beam.

In Fig. 1 we sketch the basis of this technique. The object plane is the plane where we want to estimate the wavefront. The beam from this plane (the object beam) is divided into two replicas. These two replicas are processed by different imaging systems and then recombined and superimposed. In some plane after the superposition (the observation plane in Fig. 1) one copy provides the image of the object beam and the other provides a diffracted beam. The total field in this plane corresponds to the superposition of these two beams. As we have shown in [13], we can infer the phase of the object beam by means of an iterative algorithm whose input is the information provided by the interference between these beams in the observation plane.

Fig. 1. Scheme of a diffracted beam interferometer.

In this paper we consider a simple experimental device for measuring the wavefront of an optical beam by the DBI technique. The device is a Mach-Zehnder interferometer with an afocal lens system in each arm. In Section 2 we highlight the most relevant characteristics of this sensor and present an iterative algorithm to estimate the phase. This algorithm is similar to the one presented in reference [13], but it is adapted to the Mach-Zehnder configuration. In Section 3 we describe the experimental set-up and show the first experimental results obtained with this new technique. Finally, in Section 4 we present the conclusions derived from this work.

2. The wavefront sensor and the phase retrieval algorithm

To apply the DBI method we propose the modified Mach-Zehnder configuration depicted in Fig. 2. Images of the object plane are formed by means of two afocal systems, L0L1 and L0L2. The first lens of these afocal systems, L0, is shared by the two imaging systems. The second lens, L1 or L2, is placed in one arm of the interferometer (arm 1 or arm 2, respectively). The image formed by arm 1 of the interferometer is located at plane P1, whereas the image formed by arm 2 is located at plane P2. The distance between these two planes is equal to twice the difference between the back focal distances of lenses L1 and L2, and the quotient between these focal lengths gives the magnification ratio between the two images. We use afocal systems for imaging because in this case the optical field in both image planes reproduces the amplitude and also the phase of the optical field in the object plane (with some magnification, of course); the use of single lenses would add a different parabolic phase in each image plane. Any plane after the interferometer can be chosen as the observation plane. However, choosing an image plane (that is, P1 or P2) as the observation plane simplifies the phase retrieval of the object beam. Hereafter we take plane P2 as the observation plane. In this plane, arm 2 of the interferometer provides a magnified image of the object beam while arm 1 provides a diffracted beam. These two beams are related by the Fresnel diffraction integral modified with a suitable scaling factor; this relation will be used in the numerical algorithm for phase retrieval.
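As a numerical illustration of this geometry, the short Python sketch below computes the two magnifications and the separation between the image planes. The focal-length values are those used later in the simulations of this section and are assumed here only for illustration.

```python
# Minimal sketch of the interferometer geometry described above.
# Focal lengths taken from the simulations later in this section (mm).
f0, f1, f2 = 100.0, 200.0, 250.0   # lenses L0, L1, L2

m1 = f1 / f0        # magnification of the afocal system L0-L1 (arm 1)
m2 = f2 / f0        # magnification of the afocal system L0-L2 (arm 2)
m21 = f2 / f1       # relative magnification between the two images

d = 2.0 * (f2 - f1) # separation between image planes P1 and P2 (mm):
                    # twice the difference between back focal distances

print(f"arm-1 magnification : {m1:.2f}")
print(f"arm-2 magnification : {m2:.2f}")
print(f"image scale ratio   : {m21:.2f}")
print(f"P1-P2 separation    : {d:.1f} mm")   # -> 100.0 mm, as quoted below
```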

Fig. 2. Mach-Zehnder configuration for a diffracted beam interferometer.

Let u0(x,y) be the complex amplitude of the object beam and let u1(x,y) and u2(x,y) be the complex amplitudes of the image beams at P1 and P2, respectively. They are related by a scaling factor:

u_i(x, y) = \frac{f_0}{f_i}\, u_0\!\left(\frac{f_0}{f_i}x, \frac{f_0}{f_i}y\right), \qquad i = 1, 2 \qquad (1)

Since knowing either image field is equivalent to knowing the object field, we can state our problem as the phase recovery of u2(x,y).

Let u1d(x,y) be the complex amplitude of the beam coming from arm 1 at plane P2. Taking into account the scaling factor, it is determined from u2(x,y) by

u_{1d}(x, y) = C \iint u_2\!\left(\frac{f_2}{f_1}x', \frac{f_2}{f_1}y'\right) \exp\!\left\{ \frac{ik\left[(x - x')^2 + (y - y')^2\right]}{2d} \right\} \mathrm{d}x'\,\mathrm{d}y' \qquad (2)

where k is the wave number, d is the defocusing distance (the distance between the image planes P1 and P2), and C is a complex constant. The irradiance distribution of an interferogram at plane P2 is given by

I(x, y) = I_2(x, y) + I_{1d}(x, y) + 2\sqrt{I_2(x, y)\, I_{1d}(x, y)}\, \cos(\Delta\phi) \qquad (3)

where I2 and I1d are the irradiances of the beams in the observation plane and Δϕ = ϕ2 − ϕ1d is the phase difference between them. In Fig. 3 we show some typical interferograms. These examples correspond to Gaussian or uniform object beams with simple phase aberrations.
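As an illustration, a minimal Python sketch of the forward model of Eq. (3) is given below; it returns the simulated interferogram irradiance from the two complex fields at the observation plane. The Gaussian field and the quadratic phase in the usage lines are arbitrary examples, not the cases shown in Fig. 3.

```python
import numpy as np

def interferogram(u2, u1d):
    """Interferogram irradiance of Eq. (3) from the two fields at the observation plane."""
    I2, I1d = np.abs(u2) ** 2, np.abs(u1d) ** 2
    dphi = np.angle(u2) - np.angle(u1d)
    return I2 + I1d + 2.0 * np.sqrt(I2 * I1d) * np.cos(dphi)

# Example with arbitrary fields: a Gaussian beam and a copy with a quadratic phase difference.
x = np.linspace(-1.0, 1.0, 256)
X, Y = np.meshgrid(x, x)
u2 = np.exp(-(X**2 + Y**2))
u1d = u2 * np.exp(1j * 8.0 * (X**2 + Y**2))   # illustrative phase difference only
I = interferogram(u2, u1d)
```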

Fig. 3. Typical interferograms obtained with the DBI sensor presented in the text. Left, Gaussian illumination; right, uniform illumination.

Let us now consider the iterative algorithm developed to retrieve the phase of the image beam u2. This algorithm takes as inputs I2, I1d, and the phase difference Δϕ (which is obtained from the measured interferograms), and uses the relation between amplitudes given by the Fresnel diffraction formula, i.e., Eq. (2). One iteration of the algorithm has the following structure (see also Fig. 4):

Step 1. In iteration k we make an initial estimate of the complex amplitude of the image beam u2(x,y) from its measured irradiance (that is, the irradiance recorded in the observation plane) and from the phase ϕ2 obtained in the previous iteration:

u_2^{(ie)} = \sqrt{I_2}\, \exp(i\phi_2) \qquad (4)

To obtain an initial guess in the first iteration we take the measured irradiance and an arbitrary phase.

Step 2. We estimate the complex amplitude of the diffracted beam u1d(x,y). To do this, we apply the Fresnel diffraction integral to u2^(ie), after taking into account the magnification factor introduced by the imaging systems. To implement the diffraction integral we use the angular spectrum formula [14]:

u_{1d}^{(e)}(x, y) = \mathrm{IFFT}\!\left\{ \mathrm{FFT}\!\left[ u_2^{(ie)}(x', y') \right] \exp\!\left[ i\pi\lambda d \left( \eta_x^2 + \eta_y^2 \right) \right] \right\} \qquad (5)

where x′ = −x f2/f1 and y′ = −y f2/f1 are the rescaled spatial coordinates, and ηx = x/(λd) and ηy = y/(λd) are the spatial frequencies. The direct (FFT) and inverse (IFFT) Fourier transforms are computed with a fast Fourier transform algorithm.
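A minimal sketch of this propagation step on a uniformly sampled square grid is given below. The pixel size dx and the assumption that the rescaling x′ = −x f2/f1, y′ = −y f2/f1 has already been applied to the input samples are choices made only for this sketch, and the sign of the quadratic phase factor depends on the adopted time convention.

```python
import numpy as np

def fresnel_angular_spectrum(u, wavelength, d, dx):
    """Fresnel propagation of the sampled field u over a distance d (cf. Eq. (5)).

    The input u is assumed to be already expressed in the rescaled coordinates
    x' = -x f2/f1, y' = -y f2/f1; dx is the sample spacing (same units as d and wavelength)."""
    n = u.shape[0]
    eta = np.fft.fftfreq(n, d=dx)                  # spatial-frequency samples
    eta_x, eta_y = np.meshgrid(eta, eta)
    # Quadratic phase kernel as written in Eq. (5); the sign depends on the phase convention.
    kernel = np.exp(1j * np.pi * wavelength * d * (eta_x**2 + eta_y**2))
    return np.fft.ifft2(np.fft.fft2(u) * kernel)
```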

Fig. 4. Flow chart of the DBI algorithm.

Step 3. Let us define If = √(I2 I1d) exp(iΔϕ) = u2 u1d*. This quantity is built from the experimental data. We can write the complex amplitude of beam 2 as u2 = u1d If/∣u1d∣². By inserting into this formula the current estimate of beam 1, that is u1d^(e), we obtain another approximate expression for the complex amplitude of beam 2:

u_2^{(ne)} = \frac{u_{1d}^{(e)}\, I_f}{\left| u_{1d}^{(e)} \right|^2 + \alpha} \qquad (6)

where α is a small parameter introduced to avoid division by zero.
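A sketch of this update step follows; If is built from the measured irradiances and phase difference, and the default value of alpha is only a placeholder.

```python
import numpy as np

def step3_update(u1d_e, I2, I1d, dphi, alpha=1e-6):
    """New estimate of the image field u2 from the diffracted-beam estimate (Eq. (6))."""
    I_f = np.sqrt(I2 * I1d) * np.exp(1j * dphi)          # I_f = u2 * conj(u1d), built from the data
    return u1d_e * I_f / (np.abs(u1d_e) ** 2 + alpha)    # alpha prevents division by zero
```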

Step 4. We make a new estimate of the amplitude of the image field from the previous estimates obtained in steps 1 and 3. This is the final estimate in iteration k:

u_2^{(fe)} = \beta\, u_2^{(ie)} + (1 - \beta)\, u_2^{(ne)} \qquad (7)

In this expression β is a parameter that can take values from 0 to 1; its value is set at the beginning of the algorithm. Lower values of β make the algorithm converge faster, while larger values of β make the convergence more robust.

Care must be taken with the relative phase of u2^(ie) and u2^(ne) in Eq. (7). In any experiment the two arms of the interferometer can hardly be adjusted to exactly the same length, so the phases of these two estimates can differ by a constant even when the algorithm tends to converge. In the unlucky situation where the two estimates have a phase difference close to π, they tend to cancel each other and convergence fails. To avoid this problem, before calculating u2^(fe) from Eq. (7), we put u2^(ie) and u2^(ne) 'in phase' by subtracting from each one a constant phase: the phase value taken at the center of the corresponding map.
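A sketch of step 4 together with this phase adjustment is given below; beta = 0.5 is only a placeholder value, and the 'center' of the map is taken here as the central pixel.

```python
import numpy as np

def step4_blend(u2_ie, u2_ne, beta=0.5):
    """Blend the two estimates as in Eq. (7), after putting them 'in phase' at the map center."""
    c = tuple(s // 2 for s in u2_ie.shape)              # central pixel of the maps
    u_ie = u2_ie * np.exp(-1j * np.angle(u2_ie[c]))     # remove the constant phase at the center
    u_ne = u2_ne * np.exp(-1j * np.angle(u2_ne[c]))
    return beta * u_ie + (1.0 - beta) * u_ne
```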

Step 5. To judge the fitness of the algorithm we compare the experimental data with the numerical ones. For this comparison we can use the irradiance of the image beam, the irradiance of the diffracted beam, the phase difference between them, or any combination of these. After performing numerous numerical simulations we have noted that, when the algorithm converges, the largest discrepancy corresponds to the experimental and numerical irradiances of the diffracted beam. Therefore, to quantify the convergence we compute the rms error between these two irradiances:

Q = \frac{1}{N} \sqrt{ \sum_{i,j=1}^{N} \left( I_d(x_i, y_j) - \left| u_d^{(e)}(x_i, y_j) \right|^2 \right)^2 } \qquad (8)

When the algorithm converges, the value of Q decreases until it stabilizes at a small value. Typically this occurs after a few tens of iterations.
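Reading Eq. (8) as a root-mean-square difference over the N×N grid, a minimal sketch of the merit function is:

```python
import numpy as np

def q_metric(I_d_measured, u_d_estimate):
    """rms difference of Eq. (8) between measured and estimated diffracted-beam irradiances."""
    n = I_d_measured.shape[0]                      # N x N data points assumed
    diff = I_d_measured - np.abs(u_d_estimate) ** 2
    return np.sqrt(np.sum(diff ** 2)) / n
```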

Fig. 5. Results obtained with the numerical examples explained in the text.

In Fig. 5 we show some numerical results obtained with this algorithm. In these simulations we assign the following values to the lens focal lengths: F0 = 100 mm, F1 = 200 mm and F2 = 250 mm. With these values, the distance between the image planes P1 and P2 is 100 mm. We also consider that the object plane is illuminated by a He-Ne laser. In all plots we represent an arbitrary radial direction in the observation plane. The first column corresponds to the phase of the image beam, ϕ2, the second column to its irradiance, I2, and the third column to the diffracted beam irradiance, I1d. In the first example [Figs. 5(a), 5(b), and 5(c)] we simulate the phase reconstruction of a defocused Gaussian beam with spherical aberration. As can be seen in the plots, the algorithm retrieves the real phase with great accuracy: an rms phase error of less than 5×10⁻⁵ waves is reached, and the retrieved and real irradiances are practically indistinguishable. The Q value achieved in this simple case is 2.6×10⁻⁴. The convergence of the algorithm is very fast: we obtain a Q value below 10⁻³ after 30 iterations.

In Figs. 5(d), 5(e), and 5(f) we show an example where the algorithm fails. This unsuccessful result is to be expected. With respect to the first example we have changed the sign of the defocus, so the diffracted beam in the observation plane occupies a smaller spatial area than the image beam. Thus, in step 3 the algorithm approximates a broad beam starting from a narrow one [see Figs. 5(e) and 5(f)]. In this case, divisions by very small numbers can produce erroneous values in Eq. (6) and the algorithm stagnates. To prevent stagnation, we can interchange the roles of the image and the diffracted beams in the algorithm. This 'inverted' algorithm starts with a first guess of the diffracted beam amplitude; this estimate is used to calculate the amplitude of the image beam by means of the Fresnel diffraction integral, which in turn serves to refine the amplitude of the diffracted field. With this simple change we avoid divisions by very small numbers in step 3. To avoid confusion, hereafter we refer to this algorithm as the inverse algorithm, whereas the algorithm presented above is called the direct algorithm. We have checked in many different numerical simulations that applying the inverse algorithm helps convergence to the real phase when the diffracted beam is the narrower one. In the last three plots [Figs. 5(g), 5(h), and 5(i)] we display the results obtained with the inverse algorithm for the same object beam as in the second example; now the phase is retrieved. Note that an alternative way to avoid stagnation in similar situations would be to move the observation plane to another one in which the diffracted beam overlaps the image beam.
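For completeness, the sketch below chains the helper functions given above into one possible implementation of the direct algorithm; the inverse algorithm would simply interchange the roles of the image and the diffracted beams. The parameter values are placeholders, and the coordinate rescaling is assumed to be handled inside the propagation step.

```python
import numpy as np

def dbi_direct(I2, I1d, dphi, wavelength, d, dx, beta=0.5, alpha=1e-6, n_iter=50):
    """One possible implementation of the direct DBI algorithm (steps 1-5 above)."""
    phi2 = np.zeros_like(I2)                                        # arbitrary starting phase
    Q = np.inf
    for _ in range(n_iter):
        u2_ie = np.sqrt(I2) * np.exp(1j * phi2)                     # step 1, Eq. (4)
        u1d_e = fresnel_angular_spectrum(u2_ie, wavelength, d, dx)  # step 2, Eq. (5)
        u2_ne = step3_update(u1d_e, I2, I1d, dphi, alpha)           # step 3, Eq. (6)
        u2_fe = step4_blend(u2_ie, u2_ne, beta)                     # step 4, Eq. (7)
        phi2 = np.angle(u2_fe)                                      # phase fed back to step 1
        Q = q_metric(I1d, u1d_e)                                    # step 5, Eq. (8)
    return phi2, Q
```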

Fig. 6. Same as Fig. 5 but with noisy signals.

In Fig. 6 we present further numerical simulations that illustrate the performance of the algorithm. In these examples (which correspond to the same inputs as in Fig. 5) we have added noise to the test interferograms. We consider that the maximum detected signal has a value of 16000 (arbitrary units) and we apply multiplicative Poisson noise to each interferogram. We have also added Gaussian noise with zero mean and standard deviation 40. Finally, we quantize the data to 1024 gray levels. The Q values achieved for these noisy images with 128×128 data points are about 0.009 in both examples. This Q value can be two orders of magnitude larger than the value attained in the corresponding ideal case; in spite of this, and since the numerical irradiance resembles the experimental one, we consider it a good value. At the same time, the retrieved phase also maintains a good similarity to the real phase: in both examples we obtain an rms phase error of about 0.006 waves. We note that in order to calculate the phase error we use a circular pupil to exclude the contribution of data points with small irradiance (at such points the phase is meaningless).
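A sketch of this noise model is given below; the mapping of the 16000-count peak signal onto 1024 gray levels is an assumption made only for the sketch.

```python
import numpy as np

def add_detector_noise(I, peak=16000.0, read_std=40.0, levels=1024, rng=None):
    """Apply shot noise, additive Gaussian noise and quantization to an ideal interferogram."""
    rng = np.random.default_rng() if rng is None else rng
    signal = I / I.max() * peak                      # scale so the maximum signal is 16000 counts
    noisy = rng.poisson(signal).astype(float)        # multiplicative (Poisson) noise
    noisy += rng.normal(0.0, read_std, I.shape)      # additive Gaussian noise, std = 40
    step = peak / levels                             # assumed quantization step for 1024 levels
    return np.clip(np.round(noisy / step) * step, 0.0, peak)
```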

3. Experimental set-up and results

In Fig. 7 we sketch the experimental set-up implemented to validate the DBI method. As stated in the previous section, it consists of a Mach-Zehnder interferometer with two afocal systems. The object plane corresponds to the object focal plane of the first lens L0 (an achromatic doublet with a back focal length of 100 mm). This lens is located before the interferometer, so it belongs to the path of the two beams generated in the interferometer. Each arm of the interferometer contains another lens (L1 in arm 1, with a back focal length of 200 mm, and L2 in arm 2, with a back focal length of 250 mm) which completes the corresponding afocal system. The object plane is directly illuminated with a linearly polarized He-Ne laser. A half-wave plate allows us to change the polarization direction of the light at the input of the interferometer; its rotation allows us to balance the energy between the two arms. To acquire the data we use a 12-bit CCD camera with 7.4 μm square pixels placed at the observation plane, P1 or P2.

Fig. 7. Experimental set-up. L0, L1, L2: achromatic doublets; PL1, PL2, PL3: linear polarizers; BS1, BS2: beam splitters; M1, M2: mirrors.

To retrieve the phase difference between the image and the diffracted beams we recorded a set of four interferograms with a phase shift of π/2 between them. This phase shift is produced by means of a rotating-polarizer technique [15,16]. To apply this technique we insert some polarizing elements in our set-up. We introduce a linear polarizer in each arm of the interferometer. The axes of these two polarizers are crossed, so at the output we have two linearly polarized beams with orthogonal polarizations. These beams are converted into right- and left-handed circularly polarized beams by means of a quarter-wave plate with its axis at 45° with respect to the polarization axes of the beams. A final polarizer in a rotation mount transforms the beams back into linearly polarized beams. Rotating this last polarizer by an angle θ adds a phase shift of 2θ to the phase difference induced in the interferometer, as can be shown by a simple polarimetric calculation [16]. Thus, successive rotations in equal steps of θ = π/4 give the desired phase shifts of π/2. The phase difference Δϕ is obtained after processing the four measured interferograms with a four-point phase-stepping algorithm, the Carré technique [17].
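As an illustration, one common form of the Carré formula for four frames with a nominally equal phase step is sketched below; sign conventions differ between references, so this should be taken only as a sketch of the processing, not as the exact expression used in [17].

```python
import numpy as np

def carre_phase(I1, I2, I3, I4):
    """Wrapped phase difference from four phase-stepped interferograms (Carré-type formula)."""
    num = np.sqrt(np.abs((3.0 * (I2 - I3) - (I1 - I4)) * ((I2 - I3) + (I1 - I4))))
    den = (I2 + I3) - (I1 + I4)
    return np.arctan2(np.sign(I2 - I3) * num, den)   # sign of sin(phase) taken from I2 - I3
```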

To demonstrate the potential of this sensor we placed at the object plane several spherical and astigmatic lenses illuminated directly by the laser beam. In Fig. 8 we show the results for a converging spherical lens with a nominal back focal length of 88.7 mm. The detection plane is P2, that is, the image plane for the beam that travels along arm 2 of the interferometer. In this case the image beam is broader than the diffracted beam, so we use the inverse algorithm to retrieve the phase. The algorithm converges with a Q value of 0.011 after 15 iterations. This value is similar to the one achieved in the previous examples with noisy signals. In Fig. 8(e) we plot the phase of the image beam after subtraction of the phase of the laser (which was also determined with our method). A simple quadratic fit allows us to determine the lens focal length. The value obtained was 86 ± 2 mm, close to the nominal value.
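A minimal sketch of such a quadratic fit is given below, assuming the retrieved phase is expressed in radians on a grid in millimetres, that tilt has already been removed, and that the thin-lens phase −k r²/(2f) applies.

```python
import numpy as np

def focal_length_from_phase(phase, x, y, wavelength=632.8e-6):
    """Estimate a thin-lens focal length (mm) from a retrieved quadratic phase map (radians)."""
    k = 2.0 * np.pi / wavelength                 # He-Ne wave number, wavelength in mm
    r2 = (x ** 2 + y ** 2).ravel()
    a = np.polyfit(r2, phase.ravel(), 1)[0]      # slope of phi versus r^2
    return -k / (2.0 * a)                        # thin-lens phase: phi = -k r^2 / (2 f)
```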

Fig. 8. Experimental results for a spherical lens.

We also performed measurements at plane P1, the image plane for the beam that propagates along arm 1 of the interferometer, and obtained results similar to those at plane P2. In Fig. 9 we compare both measurements. In Figs. 9(a1) and 9(a2) we plot the beam irradiance in the object plane determined from its images taken at planes P1 and P2, respectively. Figures 9(b1) and 9(b2) also show the beam irradiances in the object plane, but this time they are calculated from the beam irradiances obtained numerically by applying our algorithm in planes P1 and P2, respectively. The rms difference between the object irradiances determined directly from the experimental measurements is 0.0085. This value can be regarded as a lower limit both for the Q value returned by the DBI algorithm (0.011 in both planes) and for the rms difference between the numerical object irradiances plotted in Figs. 9(b1) and 9(b2) (this value is 0.012). Figures 9(c1) and 9(c2) correspond to the object phase retrieved from the measurements taken in planes P1 and P2, respectively. The rms difference between these phases is 0.045 waves.

Fig. 9. Comparison between measurements performed in planes P1 and P2 (see text).

Fig. 10. (a) Continuous line: experimental irradiance of beam 1 measured at its image plane (P1); dotted line: irradiance of beam 1 at its image plane calculated from its numerical irradiance at plane P2. (b) Continuous line: experimental irradiance of beam 2 measured at its image plane (P2); dotted line: irradiance of beam 2 at its image plane calculated from its numerical irradiance at plane P1. (c) Numerical phase of beam 1 at its image plane obtained from the measurement at this plane (continuous line) and from the measurement at plane P2 (dotted line). (d) Numerical phase of beam 2 at its image plane obtained from the measurement at this plane (continuous line) and from the measurement at plane P1 (dotted line).

Moreover, we have determined the complex amplitude of the image beams in plane P1 (u1) and in plane P2 (u2) starting, in the first case, from the complex amplitude of the diffracted beam in plane P2 (u1d) given by the DBI algorithm and, in the second case, from the complex amplitude of the diffracted beam in plane P1 (u2d) given by the DBI algorithm. In Figs. 10(a) and 10(b) we show with dotted lines the irradiances of the calculated image beams together with the measured ones (continuous lines). In Figs. 10(c) and 10(d) we plot with continuous lines the phases of the image beams at each plane provided directly by the numerical algorithm, and with dotted lines the phases calculated after propagating the diffracted beams from the corresponding plane of diffraction to its image plane.

In Fig. 11 we present results obtained with a negative cylindrical lens located at the object plane. The Q value is 0.0099. The retrieved phase is plotted along the axis of the cylinder and along the orthogonal direction. While the phase along one direction can be well fitted with a quadratic polynomial, the phase along the orthogonal direction is practically constant. We note that in this case the phase returned by the numerical algorithm exhibits a small slope, which can be attributed to some misalignment of the lens. This slope has been removed in the plots.

Fig. 11. Results for the cylindrical lens along the axis of the cylinder and along the orthogonal direction. Continuous line: measured irradiances; dotted line: numerical irradiances; dashed line: numerical phases.

Finally, we note that these experiments not only show the feasibility of the DBI technique but also illustrate the importance of imposing the phase constraint contained in Eq. (6). We tried to retrieve the phase without this constraint, using an iterative algorithm based only on the irradiances of the image and the diffracted beams, and we were not able to retrieve the phase in either of the two experimental examples.

4. Conclusion

We have implemented a new self-referenced interferometric technique for wavefront sensing using a Mach-Zehnder configuration with an afocal lens system in each arm of the interferometer. The main feature of this technique, compared with other interferometric techniques, is that each point of the observation plane provides information about the phase of the whole beam. This results from the fact that DBI measures the phase difference between an image of the test beam and a diffracted copy, and the diffracted beam carries information about the phase evolution of the beam. There are other wavefront sensing techniques that provide information about the beam evolution [18-21], but they are not interferometric, so they do not use phase-difference measurements as DBI does. As noted at the end of the previous section, the phase difference measurement is crucial in the phase retrieval process.

As with any self-referenced technique, DBI uses a numerical algorithm to infer the phase. We have proposed an algorithm adapted to the Mach-Zehnder configuration and tested it with numerical simulations. The simulations have shown that the algorithm can reconstruct the phase accurately even with noisy signals: an rms phase error of less than 0.01 waves is easily achieved. We have also identified situations where the algorithm fails and proposed solutions to overcome them.

Finally, we presented experimental results to validate the DBI technique. To estimate the quality of the results we used a merit function (the Q function) that measures the difference between the numerical and experimental beam irradiances. The Q values obtained in our experiments were as low as 0.01, similar to the Q values attained in the numerical simulations with noisy signals. We have also carried out several tests that indirectly confirm that the phase is retrieved with good accuracy.

In summary, DBI has been shown to be a promising technique for phase reconstruction. However, much work remains to be done to explore the performance of this new technique.

Acknowledgments

This work was partially supported by the Spanish Ministry of Education and Science and FEDER funds through contract AYA2004-07773-C02-02.

References and links

1. D. Malacara, M. Servin, and Z. Malacara, Interferogram Analysis for Optical Testing (Marcel Dekker, New York, 1998).

2. W. J. Bates, "A wavefront shearing interferometer," Proc. Phys. Soc. 59, 940–950 (1947).

3. J. C. Wyant, "Use of an ac heterodyne lateral shear interferometer with real-time wavefront correction systems," Appl. Opt. 14, 2622–2626 (1975).

4. J. C. Wyant and F. D. Smith, "Interferometer for measuring power distribution of ophthalmic lenses," Appl. Opt. 14, 1607–1612 (1975).

5. P. Liang, J. Ding, Z. Jin, C.-S. Guo, and H.-T. Wang, "Two-dimensional wave-front reconstruction from lateral shearing interferograms," Opt. Express 14, 625–634 (2006).

6. A. Dubra, C. Paterson, and C. Dainty, "Study of the tear topography dynamics using a lateral shearing interferometer," Opt. Express 12, 6278–6288 (2004).

7. P. Hariharan and D. Sen, "Radial shearing interferometer," J. Sci. Instrum. 11, 428–432 (1961).

8. D. S. Brown, "Radial shear interferometry," J. Sci. Instrum. 39, 71–72 (1962).

9. W. H. Steel, "A radial shear interferometer for testing microscope objectives," J. Sci. Instrum. 42, 102–104 (1965).

10. D. Li, H. Chen, and Z. Chen, "Simple algorithms of wavefront reconstruction for cyclic radial shearing interferometer," Opt. Eng. 41, 1893–1898 (2002).

11. M. Li, P. Wang, X. Li, H. Yang, and H. Chen, "Algorithm for near-field reconstruction based on radial-shearing interferometry," Opt. Lett. 30, 492–494 (2005).

12. C.-Y. Chung, K.-C. Cho, C.-C. Chang, C.-H. Lin, W.-C. Yen, and S.-J. Chen, "Adaptive-optics system with liquid-crystal phase-shift interferometer," Appl. Opt. 45, 3409–3414 (2006).

13. E. López-Lago and R. de la Fuente, "Wavefront sensing by diffracted beam interferometry," J. Opt. A: Pure Appl. Opt. 4, 299–302 (2002).

14. D. Mendlovic, Z. Zalevsky, and N. Konforti, "Computation considerations and fast algorithms for calculating the diffraction integral," J. Mod. Opt. 44, 407–414 (1997).

15. H. Z. Hu, "Polarization heterodyne interferometry using a simple rotating analyzer: 1. Theory and error analysis," Appl. Opt. 22, 2052–2056 (1983).

16. E. M. Frins, W. Dultz, and J. A. Ferrari, "Polarization shifting method for step interferometry," Pure Appl. Opt. 7, 53–60 (1998).

17. K. Creath, "Phase measurement interferometry techniques," in Progress in Optics XXVI, E. Wolf, ed. (Elsevier Science, 1988), pp. 349–393.

18. J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. 21, 2758–2769 (1982).

19. R. A. Gonsalves, "Phase retrieval and diversity in adaptive optics," Opt. Eng. 21, 829–832 (1982).

20. F. Roddier, C. Roddier, and N. Roddier, "Curvature sensing: a new wavefront sensing method," Proc. Soc. Photo-Opt. Instrum. Eng. 976, 203–209 (1988).

21. G. R. Brady and J. R. Fienup, "Nonlinear optimization algorithm for retrieving the full complex pupil function," Opt. Express 14, 474–486 (2006).
