
Laser beam complex amplitude measurement by phase diversity

Open Access

Abstract

The control of the optical quality of a laser beam requires a complex amplitude measurement able to deal with strong modulus variations and potentially highly perturbed wavefronts. The method proposed here is an extension of phase diversity to complex amplitude measurements that is effective for highly perturbed beams. Named camelot, for Complex Amplitude MEasurement by a Likelihood Optimization Tool, it relies on the acquisition and processing of a few images of the beam section taken along the optical path. The complex amplitude of the beam is retrieved from the images by the minimization of a Maximum a Posteriori error metric between the images and a model of the beam propagation. The analytical formalism of the method and its experimental validation are presented. The modulus of the beam is compared to a measurement of the beam profile, and the phase of the beam is compared to a conventional phase diversity estimate. The precision of the experimental measurements is investigated by numerical simulations.

© 2014 Optical Society of America

1. Introduction

The optical quality of the beam is a critical issue for intense lasers: a good optical quality is necessary to optimize stimulated amplification in optical amplifiers, to prevent optical surfaces from deteriorating due to hot spots, and to optimize the flux density at focus. For these reasons the optical quality of the beams of intense lasers must be monitored. To this end, wavefront analysis has to cope with the presence of high spatial frequencies in the beam profile due to speckle patterns. Commonly used wavefront sensors, whether Shack-Hartmann [1] (SH) or shearing interferometers [2], rely on the assumption of a continuous phase structure. In order to measure high spatial frequencies, the sampling of the wavefront has to be fine. This requires additional optical components and many measurement points, which are manageable assuming large focal plane sensors. Moreover, these concepts require the reconstruction of the wavefront from wavefront gradient measurements. The reconstruction process is only valid if the wavefront is continuous and can be measured on a domain that is a connected set in the topological sense.

To bypass the above limitations, far field wavefront sensing techniques such as Phase Diversity are an appealing alternative. Phase Diversity, i.e., the recovery of the field from a set of intensity distributions in planes transverse to propagation, is a well-established method. The vast majority of the work on this technique has concentrated on the estimation of the phase for applications related to imaging, assuming that the modulus is known, as is often reasonable in astronomy at least—see in particular [3, 4] for seminal contributions, [5] for a review on phase diversity, and [6] for an application of phase diversity to reach the diffraction limit with Strehl Ratios as high as 98.7%.

Yet, with intense lasers, the wave profile may present strong spatial variations [7]. For this reason, in this application of phase diversity one needs to estimate the complex amplitude. Early work on phase and modulus estimation was performed for the characterization of the Hubble Space Telescope (HST). Roddier [8] obtained non-binary pupil modulus images using an empirical procedure combining several Gerchberg-Saxton (GS) type algorithms [9], while Fienup [10] used a metric minimization approach [11] to estimate the binary pupil shape, parametrized through the shift of the camera obscuration.

The GS algorithm belongs to a larger class of methods based on successive mathematical projections, studied in [12]. Although there is a connection between projection-based algorithms and the minimization of a least-squares functional of the unknown wavefront [13], the use of an explicit metric to be optimized is often preferable to projection-based algorithms for several reasons. Firstly, it allows the introduction of more unknowns (differential tip-tilts between images, a possibly extended object, etc.); secondly, it allows the incorporation of prior knowledge about the statistics of the noise and/or of the sought wavefront; thirdly, projection-based algorithms are often prone to stagnation.

The first work on estimating a wave complex amplitude from phase diversity data with an explicit metric, along with experimental results, was published by Jefferies [14], with a view to incoherent image reconstruction in a strong perturbation regime. The authors encountered difficulties in the metric minimization, which may in part be due to the parametrization of the complex amplitude by its polar form rather than its rectangular one. Experimental results in a strong perturbation regime have also been obtained by Almoro et al. [15] using a wave propagation-based algorithm. This algorithm, which can be interpreted as successive mathematical projections [12], required many measurements recorded at axially displaced detector planes. The technique was later adapted to smooth test-object wavefronts using a phase diffuser and, for single-plane detection, a spatial light modulator [16]. More recently, Thurman and Fienup [17] studied, through numerical simulations, the influence of under-sampling on the reconstruction of a complex amplitude from phase diversity data with an explicit metric optimization, with a view to estimating the pupil modulus and aberrations of a segmented telescope, but this metric was not derived from the data likelihood.

This paper presents a likelihood-based complex amplitude retrieval method relying on phase diversity and requiring few images, together with an experimental validation. As in [17], the complex amplitude is described by its rectangular form rather than the polar one to make the optimization more efficient. The method, named camelot for Complex Amplitude Measurement by a Likelihood Optimization Tool, is described in Section 2. In Section 3 we validate it experimentally and assess its performance on a laboratory set-up designed to shape the complex amplitude of a laser beam and record images of the focal spot at several longitudinal positions. In particular, a cross-validation with conventional phase diversity is presented. Finally, in Section 4 the experimental results are compared with carefully designed simulations that take many error sources into account. In particular, the impact of photon, detector and quantization noises on the estimation precision is studied.

2. camelot

2.1. Problem statement

The schematic diagram for phase diversity measurement is presented in Fig. 1. An imaging sensor is used to record the intensity distributions. As with any wavefront sensor, the laser beam is focused by an optical system in order to match the size of the beam with that of the sensor. With phase diversity, the sensor is placed at the image focal plane of a lens. Note that it is advisable to install a clear aperture at the front focal plane of the optics in order to ensure a correct sampling of the intensity distributions. The focal length of the optics and the diameter of the aperture are chosen so as to satisfy the Shannon criterion with respect to the pixel spacing.

Fig. 1 Schematic diagram for phase diversity measurement.

In order to retrieve the complex amplitude of an electromagnetic field, the relationship between the unknowns and the measurements must be described mathematically. This description is called the image formation model, or direct model.

Let Ψk denote the complex amplitude in plane Pk. Ψk is decomposed onto a finite orthonormal spatial basis with basis vectors $\{b_{j,k}(x,y)\}_{j=1,\dots,N_k}$:

$$\Psi_k(x,y)=\sum_{j=1}^{N_k}\psi_{j,k}\,b_{j,k}(x,y). \qquad (1)$$

The coefficients of this decomposition are stacked into a single column vector, denoted by $\psi_k=[\psi_{j,k}]_{j=1,\dots,N_k}\in\mathbb{C}^{N_k}$. In the following, a pixel basis is used without loss of generality.

The field complex amplitude in the plane of the above-mentioned clear aperture, P0, is supposed to be the unknown. P0 is called hereafter the estimation plane. We assume that the phase diversity is performed by measuring intensity distributions in NP planes, perpendicular to the propagation axis. Pk (1 ≤ k ≤ NP) refer to these planes. The transverse intensity distributions of the field are measured by translating the image sensor along the optical axis around the focal plane. The measured signal in plane Pk is a two-dimensional discrete distribution concatenated formally into a single vector of size Nk denoted by ik. As the detection of the images is affected by several noise sources, denoting nk the noise vector, the direct model reads:

$$i_k=|\psi_k|^2+n_k, \qquad (2)$$

where $|X|^2 = X\odot X^*$ and ⊙ represents a component-wise product. The component-wise product of two complex column vectors of size N, denoted $X=[X_j]_{j=1,\dots,N}^{T}$ and $Y=[Y_j]_{j=1,\dots,N}^{T}$, is defined as the term-by-term product of their coordinates:

$$X\odot Y=[X_j\,Y_j]_{j=1,\dots,N}^{T}. \qquad (3)$$
In Eq. (2), the spatial integration of the intensity distribution by the image sensor is not taken into account. In practice, this assumption will remain justified as long as the spatial sampling rate exceeds the Shannon criterion.

Each ψk can be expressed as a linear transformation (a transfer) of ψ0 and can therefore be described as the product of a propagation matrix $M_k\in\mathbb{C}^{N_k\times N_0}$ with ψ0.

Unfortunately, the transverse registration of the different measurements is experimentally difficult to obtain with accuracy. In order to take these misalignments into account, differential shifts between planes are introduced in the direct model via the component-wise product of ψ0 with a differential shift phasor sk:

$$\psi_k=M_k\,[\psi_0\odot s_k]. \qquad (4)$$

The k-th differential shift phasor is decomposed on the Zernike tip and tilt polynomials Z2 and Z3 expressed in the pixel basis [18], a2,k and a3,k being their respective coefficients:

$$s_k=e^{\,i\,(a_{2,k}Z_2+a_{3,k}Z_3)}. \qquad (5)$$

The misalignment vector a is defined as: a = {ai,k}. Without loss of generality, the first measurement plane (k=1) is chosen as the reference plane, so that a2,1 = a3,1 = 0.

Finally, the image formation model is:

$$i_k=|M_k[\psi_0\odot s_k]|^2+n_k. \qquad (6)$$

The propagation from the estimation plane down to the reference plane is simulated numerically using a discrete Fourier transform (DFT), Ψ1 being the far field of Ψ0 ⊙ s1:

$$\psi_1=\mathrm{DFT}\,[\psi_0\odot s_1]. \qquad (7)$$

The propagation between the reference plane (k = 1) and plane Pk>1 is simulated by a Fresnel propagation performed in Fourier space:

$$\psi_{k>1}=\frac{\exp\!\left(i\,\frac{2\pi}{\lambda}\,d_{1k}\right)}{i\,\lambda\, d_{1k}}\;\mathrm{IDFT}\!\left[\mathrm{DFT}[\psi_1]\,\exp\!\left(i\pi\lambda d_{1k}\nu^2\right)\right], \qquad (8)$$

where d1k is the distance between plane 1 and plane k, ν is the norm of the spatial frequency vector in discrete Fourier space, and IDFT is the inverse of the DFT operator.
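As an illustration of this direct model, a minimal sketch of Eqs. (6)–(8) is given below in Python/NumPy rather than in the authors' implementation; the zero-padding factor, the flux normalization and the sample-spacing parameter are assumptions of the sketch, not quantities specified in the paper.

```python
import numpy as np

def forward_model(psi0, s_list, d_list, lam, pitch, pad=4):
    """Illustrative direct model: DFT to the focal plane (Eq. (7)), Fresnel
    propagation in Fourier space to the defocused planes (Eq. (8)), and
    noiseless detected intensities |psi_k|^2 (Eq. (6)).
    psi0   : complex field in the estimation plane (2-D array)
    s_list : differential shift phasors s_k (same shape as psi0)
    d_list : distances d_1k to the reference plane (0 for the reference plane)
    pitch  : sample spacing in the detector plane [m] (assumed value)
    pad    : zero-padding factor ensuring Shannon sampling of the focal spot."""
    n = psi0.shape[0]
    imgs = []
    for s_k, d1k in zip(s_list, d_list):
        u0 = np.zeros((pad * n, pad * n), dtype=complex)
        u0[:n, :n] = psi0 * s_k                     # shift phasor applied, Eqs. (4)-(5)
        psi_k = np.fft.fft2(u0)                     # Eq. (7): reference (focal) plane
        if d1k != 0.0:
            nu = np.fft.fftfreq(pad * n, d=pitch)   # spatial frequencies [1/m]
            nu2 = nu[:, None]**2 + nu[None, :]**2
            # Eq. (8): quadratic phase of the Fresnel transfer function; the constant
            # prefactor exp(i 2 pi d_1k / lam)/(i lam d_1k) only rescales the field
            # and is replaced here by a global flux normalization.
            H = np.exp(1j * np.pi * lam * d1k * nu2)
            psi_k = np.fft.ifft2(np.fft.fft2(psi_k) * H)
        im = np.fft.fftshift(np.abs(psi_k)**2)      # Eq. (6), noiseless, detector-centred
        imgs.append(im / im.sum())                  # unit flux (normalization assumption)
    return imgs
```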

2.2. Inverse problem approach

In order to retrieve an estimate of ψ0 from the set of measurements {ik}, the basic idea is to invert the image formation model, i.e., the direct model. To do so, we adopt the following maximum a posteriori (MAP) framework [19]: the estimated field ψ̂0 and misalignment coefficients â are the ones that maximize the conditional probability of the field and misalignment coefficients given the measurements, that is, the posterior likelihood P(ψ0, a|{ik}). According to Bayes’ rule:

$$P(\psi_0,a\,|\,\{i_k\})\;\propto\;P(\{i_k\}\,|\,\psi_0,a)\,P(\psi_0)\,P(a), \qquad (9)$$

where P(ψ0) and P(a) embody our prior knowledge of ψ0 and a, respectively.

The MAP estimate of the complex field corresponds to the minimum of the negative logarithm of the posterior likelihood:

$$(\hat{\psi}_0,\hat{a})=\arg\min_{\psi_0,a}\;J(\psi_0,a), \qquad (10)$$

which, under the assumption of Gaussian noise, takes the following form:

$$J(\psi_0,a)=\sum_{k=1}^{N_P}\frac{1}{2}\left(i_k-|M_k[\psi_0\odot s_k]|^2\right)^{T}C_k^{-1}\left(i_k-|M_k[\psi_0\odot s_k]|^2\right)-\ln P(\psi_0)-\ln P(a), \qquad (11)$$

where $C_k=\langle n_k\, n_k^{T}\rangle$ is the covariance matrix of the noise on the pixels recorded in plane k (diagonal if the noise is white), and −ln P(ψ0) and −ln P(a) are regularization terms that embody our prior knowledge of their arguments.

In the experiment and in the simulations presented hereafter, we have not witnessed the need for regularization, so in the following we take P(ψ0) = P(a) = constant, and the MAP metric of Eq. (11) reduces to a Maximum-Likelihood (ML) metric.
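As a minimal illustration, the following sketch (ours, in Python/NumPy; the function and argument names are not from the paper) evaluates the metric of Eq. (11) in its ML form, assuming a diagonal noise covariance:

```python
import numpy as np

def ml_criterion(images, model_images, noise_var):
    """Maximum-likelihood metric of Eq. (11) with P(psi_0) = P(a) = const and a
    diagonal noise covariance C_k (white, non-stationary noise). Illustrative sketch.
    images, model_images, noise_var : lists of 2-D arrays, one per plane k."""
    J = 0.0
    for i_k, m_k, v_k in zip(images, model_images, noise_var):
        r = i_k - m_k                   # residual between data and direct model
        J += 0.5 * np.sum(r**2 / v_k)   # weighted by the inverse noise variance C_k^{-1}
        # bad (dead or saturated) pixels can simply be given an infinite variance
    return J
```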

The resulting ML metric is similar to, but different from, the intensity criterion suggested by Fienup in [11]. Indeed, in our likelihood-based approach, $C_k^{-1}$ enables us to take into account not only bad pixels (either saturated or dead) but also the noise statistics.

2.3. Minimization

In Eq. (10), J is a non-linear real-valued function of ψ0 and a. In order to perform its minimization, a quasi-Newton method called variable metric with limited memory and bounds (VMLM-B) [20] is used. The VMLM-B method requires the analytical expression of the gradient of the criterion. The complex gradient of J with respect to ψ0 [21, 22], denoted ∇J(ψ0), is defined in Eq. (12) as the complex vector having the partial derivative of J with respect to ℜ(ψ0) (respectively ℑ(ψ0)) as its real (respectively imaginary) part.

$$\nabla J(\psi_0)=\frac{\partial J}{\partial\Re(\psi_0)}+j\,\frac{\partial J}{\partial\Im(\psi_0)}=\sum_{k=1}^{N_P}\;\underbrace{-2\,s_k^{*}\odot M_k^{*T}}_{4}\Big(\underbrace{C_k^{-1}}_{3}\big[\underbrace{(M_k[\psi_0\odot s_k])}_{2}\odot\underbrace{(i_k-|M_k[\psi_0\odot s_k]|^2)}_{1}\big]\Big). \qquad (12)$$

Four factors can be identified in Eq. (12), which allows a physical interpretation of this somewhat complex expression:

  1. the computation of the difference between measurements and the direct model;
  2. weighting this difference by the result of the direct model;
  3. whitening the noise to take into account noise statistics;
  4. a reverse propagation that enables the projection of the gradient into the space of the unknowns and takes into account differential tip/tilts.

The minimization of J must also be performed with respect to the misalignment coefficients. The following analytical expression has been obtained and implemented:

$$\frac{\partial J}{\partial a_{i,k}}=\Im\!\left[\left(\psi_0\odot s_k\odot Z_i\right)^{T}\left(2\,M_k^{*T}C_k^{-1}\big[(M_k[\psi_0\odot s_k])\odot(i_k-|M_k[\psi_0\odot s_k]|^2)\big]\right)^{*}\right]. \qquad (13)$$
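For illustration, a possible implementation of the gradient of Eq. (12) is sketched below (Python/NumPy, diagonal noise covariance); the callables `propagate` and `propagate_adjoint`, standing for Mk and its adjoint Mk*T, are hypothetical placeholders for the DFT/Fresnel operators of Section 2.1, not functions defined in the paper.

```python
import numpy as np

def ml_gradient(psi0, s_list, images, noise_var, propagate, propagate_adjoint):
    """Complex gradient of the ML metric with respect to psi_0, following Eq. (12).
    propagate(u, k)         : applies M_k to the field u          (assumed callable)
    propagate_adjoint(u, k) : applies the adjoint M_k^{*T}        (assumed callable)"""
    grad = np.zeros_like(psi0)
    for k, (s_k, i_k, v_k) in enumerate(zip(s_list, images, noise_var)):
        phi_k = propagate(psi0 * s_k, k)      # field in plane k (factor 2 of Eq. (12))
        r_k = i_k - np.abs(phi_k)**2          # data minus direct model (factor 1)
        w_k = r_k / v_k                       # noise whitening by C_k^{-1} (factor 3)
        # reverse propagation and removal of the shift phasor (factor 4)
        grad += -2.0 * np.conj(s_k) * propagate_adjoint(phi_k * w_k, k)
    return grad
```

In practice, the real and imaginary parts of ψ0 and the misalignment coefficients can be stacked into a single real vector and handed, together with J and its gradient, to any bound-constrained quasi-Newton routine; VMLM-B, used by the authors, plays the same role as, for instance, the L-BFGS-B option of scipy.optimize.minimize.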

3. Experiment

3.1. Principle of the experiment

The objective of the experiment is to validate camelot experimentally on a perturbed laser beam. The experiment is performed using a low-power continuous fibered laser whose complex amplitude is modulated to create spatial perturbations. The experimental setup is illustrated in Fig. 2. The spatial modulation of the laser beam (phase and modulus) is controlled by a field control module conjugated with the clear aperture plane.

Fig. 2 Experimental setup for camelot validation.

The beam going through the clear aperture crosses a beam splitter and is focused onto the camelot camera of Fig. 1. The camera is used to record three near-focal-plane intensity distributions, which are given as inputs to camelot. The field estimated by the latter will be denoted $\psi_C = A_C\,e^{i\varphi_C}$ in the following. The beam reflected by the beam splitter is used to record the intensity distribution IM with an image sensor conjugated with the aperture plane.

camelot’s estimation of the complex field is cross-validated in the two following ways. Regarding the field modulus, AC is compared with AM, hereafter called the measured modulus of the field, which is computed as the square root of the image IM: $A_M=\sqrt{I_M}$.

Regarding the phase, camelot’s estimate is compared to the result of a straightforward adaptation of a classical phase diversity algorithm [5] that will be referred to as conventional phase diversity. Phase diversity is now a well established technique: it has been used successfully for a number of applications [5], including very demanding ones such as extreme adaptive optics [6, 23], and its performance has been well characterized as a function of numerous factors, such as system miscalibration, image noise and artefacts, and algorithmic limitations [24]. In classical phase diversity, only the pupil phase is sought, while the pupil transmittance is supposed to be known perfectly. This is probably the main limitation of this method, regardless of how the transmittance is known, whether by design or by measurement. In the case of a transmittance measurement, a dedicated imaging subsystem must be used, hence increasing the complexity of the setup. Misalignment issues (scaling, rotation of the pupil) have to be addressed due to the potential mismatch between the true pupil (which gives rise to the recorded image) and the pupil that is assumed in the image formation model. In this paper, for conventional phase diversity, the pupil transmittance is set to the measured modulus AM and the data used are two of the three near-focal-plane intensity distributions: the focal plane image and the first defocused image.

3.2. Control of the field

In this section, we present the modulation method used to control the field and how the result of the modulation is measured in the aperture plane.

The modulation of the complex amplitude is obtained using the field control method suggested by Bagnoud [25]: a phase modulator is followed by a focal plane filtering element.

The phase modulation is performed with a phase-only SLM (Hamamatsu LCOS SLM x10-468-01) with 800×600 pixels of 20 μm. The laser source is a 20 mW continuous laser diode at λ = 650 nm injected into a 4.6 μm core single-mode fiber. At the exit of the fiber, the beam is collimated, linearly polarized and then reflected on the SLM. The clear aperture diameter is D = 3 mm on the surface of the SLM. It is conjugated with unit magnification onto the clear aperture plane. The spatial modulation and filtering are designed to control 15×15 resolution elements in the clear aperture, i.e., 15 cycles per aperture. The effective result of the control in the clear aperture plane is called the true field. Its modulus is denoted AT.

The spatial modulation and filtering have been modeled by means of an end-to-end simulation. The result of the simulation is denoted by ψS. Its modulus, denoted AS, is presented on the left of Fig. 3. It shows smooth variations as can be typically observed on an intense pulsed laser [7]. It has been truncated in the upper right corner to simulate a strong vignetting effect. The phase of the field is presented on Fig. 4. It is dominated by a Zernike vertical coma (Z7) of 11 rad Peak-to-Valley (PV), typical of a strong misalignment of a parabola for instance.

Fig. 3 Left: modulus of the simulated field (AS), center: measured modulus (AM), right: 2.5 × |AM − AS|.

Fig. 4 Phase of the simulated field.

The modulus of the true field is measured with the aperture plane imaging camera (see Fig. 2). The image is recorded with a high spatial resolution (528 pixels across the diameter) and with a high SNR per pixel (60 on average). The result of the measurement, AM, is presented at the center of Fig. 3.

In order to quantify the proximity of the simulated modulus AS to the measured one AM, the following distance metric is defined:

$$\varepsilon_{MS}^2=\frac{\sum_{j=1}^{N_0}|A_{M,j}-A_{S,j}|^2}{\sum_{j=1}^{N_0}|A_{S,j}|^2}. \qquad (14)$$

To enable their comparison, AS and AM have been normalized in flux: $\sum_{j=1}^{N_0}|A_{S,j}|^2=\sum_{j=1}^{N_0}|A_{M,j}|^2=1$.
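For reference, a minimal sketch of this distance computation (our code, not the authors'), including the flux normalization described above:

```python
import numpy as np

def modulus_distance(a_m, a_s):
    """Normalized quadratic distance of Eq. (14) between two modulus maps,
    after normalizing each to unit flux (sum of squared moduli equal to 1)."""
    a_m = a_m / np.sqrt(np.sum(np.abs(a_m)**2))
    a_s = a_s / np.sqrt(np.sum(np.abs(a_s)**2))
    # denominator of Eq. (14) equals 1 after the normalization
    return np.sum(np.abs(a_m - a_s)**2)
```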

The spatial distribution of the modulus of the difference between AS and AM is presented on the right of Fig. 3. It has been multiplied by a factor of 2.5 to present a dynamic range comparable to that of AM.

The corresponding distance is found to be εMS^2 = 0.044, whereas simulations show that a photon-noise-limited measurement should rather yield εMS^2 = 5 × 10^−5.

A preliminary Fourier-based analysis of the difference between AS and AM makes it possible to identify a clear-cut separation between spatial frequencies below the cut-off frequency of the spatial modulation (15 cycles per aperture) and higher spatial frequencies. Due to the spatial filtering by the control module, the low-spatial-frequency part of the difference between AS and AM can be attributed mostly to model errors between the simple simulation performed to compute AS and the effective experimental setup of the control module. For instance, in the simulation the spatial filter is assumed to be centered on the optical axis, the SLM illumination is assumed to be perfectly homogeneous, and the lenses and optical conjugations are supposed to be perfect. This part of εMS^2 is evaluated at 0.04, that is, more than 80% of the total. The high-frequency part of εMS^2 comes from optical defects on the relay optics and from the influence of noise on the AM measurement. It is evaluated at 4 × 10^−3. This gives insight into the measurement precision of AM, which should thus be of the order of 4 × 10^−3.

3.3. camelot estimation

3.3.1. Practical implementation of phase diversity measurement

The intensity distributions in the different planes are recorded by translating the sensor along the optical axis around the focal plane. The amount of defocus must be large enough to provide significantly different intensity distributions so as to facilitate the estimation process. However, small translation distances are preferred for the sake of experimental simplicity. For conventional phase diversity, a defocus of λ PV is often chosen, as it maximizes the difference between the focal plane intensity distribution and the defocused image. The defocus amplitude between successive planes is therefore fixed to λ PV. With such a defocus, no fewer than three measurement planes are required with camelot. The first image is located in the focal plane, the second one at a distance corresponding to a defocus of λ PV, and the third one at a distance corresponding to 2λ PV from the focal plane. The relation between the PV optical path difference δOPD in the aperture plane and the corresponding translation distance between two successive planes, dk k+1, is:

$$d_{k\,k+1}=8\,(f/D)^2\,\delta_{OPD}. \qquad (15)$$
For the experiment, the focusing optics focal length is f = 100 mm and the aperture diameter is D = 3 mm. Consequently, the translation distance between successive recording planes is d12 = d23 = 5.78 mm.

The intensity distributions are recorded with a Hamamatsu CMOS camera (ORCA R2). Its characteristics are the following: a pixel size and spacing spixel = 6.452 μm, a readout noise standard deviation σron = 6 e− rms, a full well capacity of 18 000 e−, and a 12-bit digitizer.

As explained in Section 2.1, the sampling must fulfill the Shannon criterion. The focusing optics focal length and the aperture diameter, combined with the pixel size, lead to a theoretical Shannon oversampling factor λf/(2 spixel D) = 1.68 for λ = 650 nm.
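Plugging the quoted experimental values into Eq. (15) and into this sampling factor reproduces the figures given above; a quick numerical check (a sketch using only values quoted in the text):

```python
lam, f, D, s_pixel = 650e-9, 100e-3, 3e-3, 6.452e-6   # values quoted in the text [m]

d_step = 8 * (f / D)**2 * lam                 # Eq. (15) with delta_OPD = lambda PV
oversampling = lam * f / (2 * s_pixel * D)    # Shannon oversampling factor

print(d_step * 1e3)      # ~5.78 (mm): translation between successive planes
print(oversampling)      # ~1.68, > 1, so the Shannon criterion is satisfied
```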

The total number of photo-electrons must be large enough to prevent the estimation from being noise-limited. Simulations of the system, detailed in Section 4, demonstrate that Nphe = 5 × 10^7 total photo-electrons are sufficient. Due to the limited full well capacity of the sensor, p = 10 short-exposure images are summed to reach this number. For each image, the background has been removed by subtracting an offset computed from the average of pixels located on the side of the images (hence not illuminated).

The noise covariance matrix Ck is approximated by a diagonal matrix whose diagonal terms, [Ck]jj, correspond to the sum of the variances of the photon noise, readout noise and quantization noise:

$$[C_k]_{jj}=[\sigma_{ph,k}^2]_j+p\,(\sigma_{ron}^2+q^2/12), \qquad (16)$$

where q is the quantization step. In practice, the readout noise variance map $\sigma_{ron}^2$ is calibrated beforehand and the photon noise variance $\sigma_{ph,k}^2$ is approximated from the image by $[\hat{\sigma}_{ph,k}^2]_j=\max([i_k]_j,0)$ on pixel j [26].
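A possible implementation of these diagonal weights is sketched below (our code; the function and argument names are ours, not the authors'):

```python
import numpy as np

def noise_variance_map(i_k, sigma_ron, q, p):
    """Diagonal of C_k following Eq. (16): photon noise estimated from the image
    itself, plus p times the readout and quantization variances.
    i_k       : summed image in plane k (photo-electrons)
    sigma_ron : readout noise rms (e-), possibly a per-pixel calibration map
    q         : quantization step (e- per digital unit)
    p         : number of summed short exposures."""
    var_photon = np.maximum(i_k, 0.0)          # [sigma_ph,k^2]_j = max([i_k]_j, 0)
    return var_photon + p * (sigma_ron**2 + q**2 / 12.0)
```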

The three recorded near-focal intensity distributions are presented at the top of Fig. 5. The full size of one image is Npix × Npix = 214 × 214 pixels. For the figure, a region of interest of 140×140 pixels, centered on the optical axis, is selected. From left to right, the focal plane image, the first defocused image and the second one are displayed. The color scale is logarithmic.

Fig. 5 Left: focal plane image (k = 1); center: first defocused image (k = 2); right: second defocused image (k = 3). Top: experimental images; middle: direct model result |Mk ψC|^2; bottom: residuals, i.e., |ik − |Mk ψC|^2|. Logarithmic scale, 140 × 140 pixel ROI centered on the optical axis.

3.3.2. Results

The three near-focal-plane images are now used in the camelot estimation. The minimization process is initialized with a homogeneous modulus field, a phase set to zero and differential tip/tilts also set to zero. The number of estimated points in the estimation plane is N0 = 64 × 64. The current implementation of the algorithm is written in the IDL language.

A relevant measure of the success of our inversion method is the quality of the match between the data ik and the model computed from the estimated field through Eq. (6). The latter is presented in the middle row of Fig. 5. The moduli of the differences between the measurements and the direct model, that is to say the estimation residuals, are displayed in the bottom row of the same figure. These residuals are below 1% of the maximum of the measurements in all three planes.

The estimated shifts are (−0.73, −0.56) and (0.97, 0.61) pixels in the x and y directions for P2 and P3, respectively.

The modulus of camelot’s estimated field, AC, is presented on the left of Fig. 6. It has been normalized in flux to enable the comparison with the measured modulus AM, shown in the center of the figure. For the comparison, AM has been sub-sampled and resized to 64 pixels. The modulus of the difference between AC and AM is represented on the right. The main spatial structures of AM are well estimated by the method. The estimation residuals are below 20% of the maximum of the measured modulus, even in the zones where the flux is low (top right corner). The distance between the two moduli is only εA^2 = 0.01. This must be compared to the error between AM and AS reported in Section 3.2, which was found to be 0.044. camelot thus delivers a modulus estimate that is several times closer to AM than AS is.

Fig. 6 From left to right: AC, AM and |AC − AM|.

We now compare the phase estimated by camelot to the result of the conventional phase diversity method described in Section 3.1. The comparison is presented in Fig. 7. The phase of ψC, φC, is shown on the left of the figure, the phase estimated by conventional phase diversity, φPD, in the center, and the modulus of their difference on the right. For the comparison of these phase maps, their differential piston has been set to 0. In the zones where the modulus is greater than 10% of the maximum modulus of ψM, the maximum of the phase residuals is below 2π/10 rad.

Fig. 7 From left to right: φC, φPD and |φC − φPD|.

Additionally, concerning conventional phase diversity, setting AC as the pupil transmittance instead of AM enables a better fit to the data, with a 5% smaller criterion at convergence of the minimization, which is yet another indicator of the quality of the modulus AC estimated by camelot.

camelot and conventional phase diversity algorithms have about the same complexity. The former converges in fewer than 300 iterations and requires less than 10 min of computation, while the latter takes slightly more iterations and time. The main difference between the two methods lies in the modeling of the defocus between the measurement planes: in camelot it is performed by a Fresnel propagation, hence requiring two additional DFTs for each defocused plane. However, camelot appears here to be as fast as conventional phase diversity for comparable results. The convergence time demonstrated here makes camelot suitable for the measurement and control of slowly varying aberrations, such as those induced by thermal expansion of mechanical mounts for instance. The use of an appropriate regularization metric should contribute to speeding up the computation. Recent work on real-time conventional phase diversity demonstrated that a few tens of hertz are achievable with a dedicated (but commercial off-the-shelf) computing architecture [27]. According to the author of the latter reference, several hundred hertz could even be achieved in the very near future, making phase diversity compatible with the requirements of the measurement and control of atmospheric turbulence effects. We believe that these conclusions can be generalized to camelot, considering the convergence demonstrated here, similar to that of conventional phase diversity, and the possibilities brought by Graphical Processing Units to speed up Fresnel propagation computations [28].

4. Performance analysis by simulations

We now analyze the way these results compare with simulations. We simulate focal plane images from ψC while taking into account the main disturbances that affect image formation: misalignments, photon and detector noises, limited full well capacity, quantization and miscalibration.

For the numerical simulation of the out-of-focus images, a random tip/tilt phase is added to the field before propagation to the imaging plane in order to simulate the effect of misalignment. The standard deviation of the latter corresponds to half a pixel in the focal plane.

Then two cases are considered. The first case, hereafter called the perfect detector, assumes a detector with noise but an infinite well capacity and no quantization. Each image is first normalized to its mean number of photo-electrons Nphe,k; then, for each pixel, a Poisson occurrence is computed, and a Gaussian white noise occurrence with variance σron^2 is added to take into account the detector readout noise. The second case corresponds to a more realistic detector: for a given value of the desired number of photo-electrons Nphe,k, the corresponding image is computed as the sum of as many “short exposures” as needed to take into account the finite well capacity, and each of these short exposures is corrupted with photon noise, readout noise and a 12-bit quantization noise. The same number of photo-electrons is attributed to each image: Nphe,k = Nphe/NP, where Nphe designates the total number of photo-electrons.
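A minimal sketch of this "realistic detector" case is given below (our Python code, not the authors'); the readout noise and well depth are the camera values quoted in Section 3.3.1, while the conversion gain derived from the full well capacity and the 12-bit digitizer is an assumption of the sketch.

```python
import numpy as np

def realistic_detector(ideal_image, n_phe_k, sigma_ron=6.0, full_well=18000.0,
                       n_bits=12, rng=None):
    """Simulate a long exposure as a sum of short exposures limited by the full well
    capacity; each short exposure gets photon noise, readout noise and quantization.
    ideal_image : noiseless normalized intensity distribution (sums to 1)
    n_phe_k     : total number of photo-electrons attributed to this plane."""
    rng = np.random.default_rng() if rng is None else rng
    q = full_well / (2**n_bits - 1)                 # e- per digital unit (assumption)
    # number of short exposures needed so that the brightest pixel stays below full well
    n_short = max(1, int(np.ceil(n_phe_k * ideal_image.max() / full_well)))
    acc = np.zeros_like(ideal_image)
    for _ in range(n_short):
        expected = ideal_image * (n_phe_k / n_short)        # mean photo-electrons / pixel
        frame = rng.poisson(expected).astype(float)         # photon noise
        frame += rng.normal(0.0, sigma_ron, frame.shape)    # readout noise
        frame = np.round(frame / q) * q                     # 12-bit quantization
        acc += frame
    return acc
```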

The simulated long-exposure focal plane image is presented on the left of Fig. 8. This image is obtained for Nphe,1 = 1.6 × 10^7 photo-electrons by adding 10 short exposures. It can be visually compared with the experimental focal plane image recorded in comparable conditions (right): the similarity between the two images illustrates the relevance of the image simulation.

Fig. 8 Focal plane images. Left: simulation (Nphe,1 = 1.6 × 10^7, 10 short exposures); right: experiment.

The modulus estimation error εA^2 is plotted in Fig. 9 as a function of Nphe for the two detector cases. Ten uncorrelated occurrences are averaged to compute εA^2. The result of the comparison of AC with AM obtained from the experiment is also reported on the figure (abscissa Nphe = 5 × 10^7 photo-electrons).

Fig. 9 εA^2 as a function of the number of photo-electrons Nphe. Black diamonds (⋄): perfect detector; green triangles (▵): finite well capacity and quantization; red asterisk (*): experimental result; dotted line: Nphe^−2 power law; dashed line: Nphe^−1 power law.

For the perfect detector case and Nphe small compared to 5 × 10^8, εA^2 follows a Nphe^−2 power law. For greater values, the power law turns to Nphe^−1. This can be explained by the relative weights of the readout and photon noise. For Nphe ≤ 5 × 10^8, the average flux on illuminated pixels, which can be approximated by Nphe/(NP ni), where ni is the average number of illuminated pixels per image plane (ni ≈ 100), is smaller than the total readout noise contribution for one image plane, that is Npix^2 σron^2. Thus, readout noise dominates. For Nphe ≥ 5 × 10^8, the photon noise contribution becomes predominant and the noise propagation of εA^2 follows a Nphe^−1 power law. It is confirmed here, as stated in Section 3.3, that εA^2 for the experiment is not limited by photon noise.

The analysis of εA^2 for the case with finite well capacity and quantization shows that it follows the perfect detector case up to about Nphe = 5 × 10^6 photo-electrons, then starts to follow a Nphe^−1 power law. Nphe = 5 × 10^6 corresponds to the flux necessary to saturate the well capacity of the sensor. Above this limit, images are added to emulate the summation of “short-exposure” images. Hence, the noise level in the measurements starts to depend on the number of summed images, that is to say on Nphe, and εA^2 follows a Nphe^−1 power law.

The error level obtained from the comparison between the modulus measurement and the camelot estimate (εMC^2 = 0.013) is comparable to the error level between the camelot estimate and the true modulus in the case of the more realistic detector (εA^2 = 5 × 10^−3): they differ by approximately a factor of two. A significant part of this difference comes from the fact that the measured modulus AM is itself imperfect, i.e., it is only an estimate of the true modulus AT, due to experimental artefacts, notably differential optical defects that affect image formation on the aperture plane imaging setup and the influence of noise. This claim is also supported by the fact that the phase estimated by conventional phase diversity fits the measurements better when the pupil transmittance is set to the modulus estimated by camelot, AC, instead of the measured modulus AM. The Fourier-based analysis mentioned at the end of Section 3.2 delivers an estimate of the measurement precision of AM, evaluated at 4 × 10^−3.

As a final remark, one can note that Fig. 9 can also be used to evaluate not only the estimation error on the field modulus but also the total error on the complex field itself. Indeed, we have calculated that this total error is on average simply twice the error on the modulus, i.e., 2εA^2. This can be particularly useful for designing a complex field measurement system based on camelot.

5. Conclusion

In this paper, we have demonstrated the applicability of phase diversity to the measurement of the phase and amplitude of the field with a view to laser beam control. The resolution of the inverse problem at hand has been tackled through a MAP/ML approach. An experimental setup has been designed and implemented to test the ability of the method to measure strongly perturbed fields representative of misaligned power lasers. The estimated field has been compared with the modulus measured by pupil plane imaging and with the phase estimated by classical two-plane phase diversity (using the measured aperture plane modulus). It has been shown that the estimation accuracy is consistent with carefully designed numerical simulations of the experiment, which take into account several error sources such as noise and the influence of quantization. Noise propagation onto the field estimate has been studied to underline the capabilities and limitations of the method in terms of photometry.

Several improvements of the method are currently considered. They include the estimation of a flux and of an offset per image, the addition of a regularization metric in the criterion to be optimized for the reconstruction of the complex field on a finer grid, and speeding up the computations.

The work presented in this paper focuses on the case of a monochromatic beam. The application of camelot to the control of intense lasers with a wider spectrum, as is the case for femtosecond lasers, must be investigated. Another limitation of the method lies in the computation time. For intense lasers, the correction frequency is small (typically a fraction of a hertz); camelot could therefore be used assuming a reasonable increase of computation speed. For real-time compensation of atmospheric turbulence in an Adaptive Optics loop, a significant effort is required to reach control frequencies above several hundred hertz. Application of the method to imaging systems with complicated pupil transmittance is also of interest.

Acknowledgments

The authors thank Baptiste Paul for fruitful discussions on classical phase diversity. This work has been performed in the framework of the Carnot project SCALPEL.

References and links

1. R. V. Shack and B. C. Platt, “Production and use of a lenticular Hartmann screen (abstract),” J. Opt. Soc. Am. 61, 656 (1971).

2. J. Primot, “Three-wave lateral shearing interferometer,” Appl. Opt. 32, 6242–6249 (1993).

3. D. L. Misell, “An examination of an iterative method for the solution of the phase problem in optics and electron optics: I. Test calculations,” J. Phys. D: Appl. Phys. 6, 2200–2216 (1973).

4. R. A. Gonsalves, “Phase retrieval and diversity in adaptive optics,” Opt. Eng. 21, 829–832 (1982).

5. L. M. Mugnier, A. Blanc, and J. Idier, “Phase diversity: a technique for wave-front sensing and for diffraction-limited imaging,” in Advances in Imaging and Electron Physics, P. Hawkes, ed. (Elsevier, 2006), Vol. 141, Chap. 1, pp. 1–76.

6. J.-F. Sauvage, T. Fusco, G. Rousset, and C. Petit, “Calibration and pre-compensation of non-common path aberrations for extreme adaptive optics,” J. Opt. Soc. Am. A 24, 2334–2346 (2007).

7. S. Fourmaux, S. Payeur, A. Alexandrov, C. Serbanescu, F. Martin, T. Ozaki, A. Kudryashov, and J. C. Kieffer, “Laser beam wavefront correction for ultra high intensities with the 200 TW laser system at the Advanced Laser Light Source,” Opt. Express 16, 11987–11994 (2008).

8. C. Roddier and F. Roddier, “Combined approach to the Hubble Space Telescope wave-front distortion analysis,” Appl. Opt. 32, 2992–3008 (1993).

9. R. W. Gerchberg, “Super-resolution through error energy reduction,” Opt. Acta 21, 709–720 (1974).

10. J. R. Fienup, J. C. Marron, T. J. Schulz, and J. H. Seldin, “Hubble Space Telescope characterized by using phase-retrieval algorithms,” Appl. Opt. 32, 1747–1767 (1993).

11. J. R. Fienup, “Phase-retrieval algorithms for a complicated optical system,” Appl. Opt. 32, 1737–1746 (1993).

12. H. Stark, ed., Image Recovery: Theory and Application (Academic, 1987).

13. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).

14. S. M. Jefferies, M. Lloyd-Hart, E. K. Hege, and J. Georges, “Sensing wave-front amplitude and phase with phase diversity,” Appl. Opt. 41, 2095–2102 (2002).

15. P. Almoro, G. Pedrini, and W. Osten, “Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field,” Appl. Opt. 45, 8596–8605 (2006).

16. M. Agour, P. Almoro, and C. Falldorf, “Investigation of smooth wave fronts using SLM-based phase retrieval and a phase diffuser,” J. Eur. Opt. Soc. Rapid Publ. 7, 12046 (2012).

17. S. T. Thurman and J. R. Fienup, “Complex pupil retrieval with undersampled data,” J. Opt. Soc. Am. A 26, 2640–2647 (2009).

18. R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. 66, 207–211 (1976).

19. J. Idier, ed., Bayesian Approach to Inverse Problems, Digital Signal and Image Processing Series (ISTE, 2008).

20. E. Thiébaut, “Optimization issues in blind deconvolution algorithms,” Proc. SPIE 4847, 174–183 (2002).

21. K. B. Petersen and M. S. Pedersen, “The Matrix Cookbook” (2008), Version 20081110.

22. K. Kreutz-Delgado, “The complex gradient operator and the CR-calculus,” ArXiv e-prints (2009).

23. B. Paul, J.-F. Sauvage, and L. M. Mugnier, “Coronagraphic phase diversity: performance study and laboratory demonstration,” Astron. Astrophys. 552, 1–11 (2013).

24. A. Blanc, L. M. Mugnier, and J. Idier, “Marginal estimation of aberrations and image restoration by use of phase diversity,” J. Opt. Soc. Am. A 20, 1035–1045 (2003).

25. V. Bagnoud and J. D. Zuegel, “Independent phase and amplitude control of a laser beam by use of a single-phase-only spatial light modulator,” Opt. Lett. 29, 295–297 (2004).

26. L. M. Mugnier, T. Fusco, and J.-M. Conan, “MISTRAL: a myopic edge-preserving image restoration method, with application to astronomical adaptive-optics-corrected long-exposure images,” J. Opt. Soc. Am. A 21, 1841–1854 (2004).

27. J. J. Dolne, P. Menicucci, D. Miccolis, K. Widen, H. Seiden, F. Vachss, and H. Schall, “Advanced image processing and wavefront sensing with real-time phase diversity,” Appl. Opt. 48, A30–A34 (2009).

28. T. Nishitsuji, T. Shimobaba, T. Sakurai, N. Takada, N. Masuda, and T. Ito, “Fast calculation of Fresnel diffraction calculation using AMD GPU and OpenCL,” in Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2011), paper DWC20.
