The fundamental issue of residual phase variance minimization in adaptive optics (AO) loops is addressed here from a control engineering perspective. This problem, when suitably modeled using a state-space approach, can be broken down into an optimal deterministic control problem and an optimal estimation problem, the solutions of which are a linear quadratic (LQ) control and a Kalman filter. This approach provides a convenient framework for analyzing existing AO controllers, which are shown to contain an implicit turbulent phase model. In particular, standard integrator-based AO controllers assume a constant turbulent phase, which renders them prone to the notorious wind-up effect.
©2006 Optical Society of America
Adaptive Optics (AO) systems are used to compensate for time-varying wavefront distortions using noisy and delayed measurements. In astronomy, these distortions are due to atmospheric turbulence and lead to a loss of resolution and detectability. The correction then aims at improving the overall image quality by minimizing the residual phase variance. Classic AO control loops do not address this problem directly, leading to suboptimal correction, but still giving satisfactory performance for current AO systems on very large telescopes. However, new AO concepts are under development to meet new astronomical objectives: XAO for exo-planet detection, multi-conjugate AO/multi-object AO (MCAO/MOAO) for large field of view correction (see [3, 4] and references therein), required for instance for the study of galaxy formation. It is thus important to revisit the control issues to meet the high performance specifications imposed by these new applications.
Starting from standard considerations on AO systems (linear regime of all components), we explore some aspects of closed-loop control that bring to the fore fundamental limitations and priors for control optimization. Minimum variance optimal control can then be derived without major difficulty when describing the system by an equivalent state model. This approach provides a general framework which has the merit of describing explicitly all the elements of the system, including the turbulent phase dynamics. When classical controllers are cast into this framework, the hypotheses on turbulent phase dynamics that are implicitly embedded in their structure can be made explicit and analyzed.
For the sake of brevity and clarity, this paper presents the state-space approach for a standard AO configuration, assuming that all subsystems in the loop are linear. It is also assumed that the time response of the deformable mirror (DM) is fast compared with the sampling period of the AO loop, and thus can be neglected altogether. These simplifying assumptions turn out to be quite acceptable for many existing AO systems. However, it should be stressed that the methods and results presented here can be extended to cope with more complex DM dynamics, including saturations and other relevant classes of nonlinearities.
This paper is organized as follows. It addresses the residual phase variance minimization in adaptive optics loops by examining first, in Sec. 2, whether it can be tackled in discrete time without loss of optimality. Based on frequency domain considerations, Sec. 3 examines the rejection transfer function and its fundamental limitations, and shows that priors on turbulent phase and measurement noise are unavoidable. Then, relying on priors and the separation theorem, it is shown in Sec. 4 that, under realistic assumptions, this problem can be broken down into an optimal deterministic control problem and an optimal estimation problem, the solutions of which are a linear quadratic (LQ) control and a Kalman filter. This combination provides what is called linear quadratic Gaussian (LQG) control, where the Gaussian term refers to the Gaussian stochastic processes involved. An illustration of LQG control is proposed in Sec. 5 on an end-to-end AO bench simulator, and performance is compared with the Optimized Modal Gain Integrator (OMGI), used for example in NAOS. The approach presented here, which is based on a state-space representation, gives a convenient framework for analyzing standard AO controllers. The particular case of integrator-based control is studied in Sec. 6, which brings out its implicit and unstable turbulent phase model. Finally, extensions and conclusions are presented in Sec. 7.
2. AO closed-loop: continuous or discrete time?
Consider the classical AO block-diagram in Fig. 1, where ϕtur, ϕres and ϕcor represent respectively the turbulent, residual and correction phases, w the measurement noise, y the measurements and u the control voltages. In this setup, one has to define an optimality criterion, to be minimized by the controller. In classical AO, one usually aims at minimizing the residual phase variance, that is, a quadratic criterion on the phase. For the sake of simplicity the formalism presented in this paper is restricted to this case. Nevertheless, it can be easily generalized to MCAO, where the relevant criterion is usually the minimization of the residual phase variance in a given field of view of interest. Other criteria could be considered for specific applications. Non-quadratic ones are beyond the scope of this paper.
The empirical variance of the residual phase ϕres = ϕtur − ϕcor, averaged over a sufficiently large exposure time, has thus to be minimized by the controller, which is realized by minimizing criterion Jc(u) with respect to the control u,

Jc(u) ≜ lim_{τ→∞} (1/τ) ∫_0^τ ∥ϕres(t)∥² dt,    (1)
where ϕres(t) actually depends on u (omitted for the sake of notation simplicity), and ∥ ∙ ∥2 is the Euclidean norm, assuming that all phases are expanded on a suitable basis. This minimization is done by adjusting the mirror voltages u according to noisy measurements y provided by the wave-front sensor (WFS) from integrated and delayed ϕres. Integration is assumed to be performed during a time interval of length ΔT. Moreover, because AO devices are computer-controlled, the control u remains constant over time intervals of length ΔT′ ≤ ΔT. Time intervals ΔT and ΔT′ are usually chosen equal, and we shall not depart from the rule. (Note however that the case ΔT ≠ ΔT′ could be considered as well, provided that there exist integers ℓ1, ℓ2 > 0 and a time period ΔT″ such that ΔT = ℓ1ΔT″ and ΔT′ = ℓ2ΔT″, which means that ΔT and ΔT′ are commensurable.) The fact that u is piecewise constant induces a loss of optimality, as the turbulent phase evolves over ΔT. What we prove in this section is that there is no additional loss of optimality in considering a complete AO system in discrete time, with discrete variables corresponding to their temporal average over a time interval of length ΔT.
Let us consider first the measurement equation. We assume that the WFS provides a linear relationship between phase and measurements. The usual hypotheses are that the wave-front sensor integrates the residual phase ϕres during a time interval ΔT, and that WFS measurements are deduced linearly from this with some additional delay, leading to a total discrete measurement delay dm ≥ 1 (in time unit ΔT). This corresponds for example to a Shack-Hartmann WFS in linear regime. The overall operation produces noisy measurements, where the measurement noise w is supposed to be additive. Furthermore, using the following notation for the averaged value of any continuous variable, e.g. for the residual phase

ϕ̄res_k ≜ (1/ΔT) ∫_{(k−1)ΔT}^{kΔT} ϕres(t) dt,    (2)
the measurement equation averaged over one time interval can be written as

y_k = D ϕ̄res_{k−dm} + w_k,    (3)
where D is the WFS matrix and wk a discrete zero-mean white noise.
Correction phase ϕcor is assumed to be a linear function of the control input u with a delay dc ≥ 1, i.e.

ϕcor_k = N u_{k−dc},    (4)
where N stands for the influence matrix (the interaction matrix is thus DN), and uk is the control computed from y_{k−dc+1} and applied over the time interval [kΔT, (k+1)ΔT] (a chronogram for dm = 1 and dc = 1 is given in Fig. 2). We consider here that the mirror's response time is negligible (which is justified as soon as its response time is small compared with ΔT), so there are no mirror dynamics.
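To fix ideas, the discrete measurement and correction equations can be sketched numerically. The dimensions and the matrices D and N below are arbitrary stand-ins, not values from any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 5 phase modes, 6 WFS slopes, 4 actuators.
n_phi, n_y, n_u = 5, 6, 4
D = rng.standard_normal((n_y, n_phi))   # WFS matrix, Eq. (3)
N = rng.standard_normal((n_phi, n_u))   # influence matrix; D @ N is the interaction matrix

def measurement(phi_tur_prev, u_prev2, w):
    """Eq. (3) with dm = 1: y_k = D (phi_tur_{k-1} - N u_{k-2}) + w_k."""
    return D @ (phi_tur_prev - N @ u_prev2) + w

def correction(u_prev):
    """Eq. (4) with dc = 1: phi_cor_k = N u_{k-1}."""
    return N @ u_prev
```

Note how the two delays dm and dc show up only as index shifts on the stored variables, which is what motivates keeping past values of u in the state vector later on.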
Let us now take a look at performance criterion Jc(u), Eq. (1). It can be equivalently written by replacing τ with nΔT:

Jc(u) = lim_{n→∞} (1/(nΔT)) ∫_0^{nΔT} ∥ϕres(t)∥² dt.    (5)
Writing that ϕres(t) = ϕtur(t) − ϕ̄tur_k + ϕ̄tur_k − ϕcor_k for t ∊ [(k−1)ΔT, kΔT], and using Eq. (4) and the fact that ϕtur(t) − ϕ̄tur_k has an average value of zero over this interval, a simple calculation shows that

Jc(u) = lim_{n→∞} (1/(nΔT)) Σ_{k=1}^{n} ∫_{(k−1)ΔT}^{kΔT} ∥ϕtur(t) − ϕ̄tur_k∥² dt + lim_{n→∞} (1/n) Σ_{k=1}^{n} ∥ϕ̄tur_k − ϕcor_k∥².    (6)
As the right-hand side of Eq. (6) is split into two parts, one that does not depend on u and the other that depends only on discrete variables, minimizing Jc(u) leads exactly to the same control value as minimizing

J(u) ≜ lim_{n→∞} (1/n) Σ_{k=1}^{n} ∥ϕ̄tur_k − ϕcor_k∥²,    (7)
which depends only on discrete variables averaged over the sampling period ΔT. In the sequel, we shall thus describe in discrete time all the constitutive elements and processes that make up the AO closed loop.
3. Limitations and mandatory prior information
We focus here on control limitations and prior information that appear before any criterion minimization, when considering the subsystems in Fig. 1. If they are all linear, z-transforms can be used to compute the transfer functions (TF) that appear when closing the loop. In the sequel, we shall denote by x̃ the z-transform of a discrete temporal process x. Therefore, ϕ̃res(z) can be written in closed loop as a function of ϕ̃tur(z), w̃(z), and of the controller's z-transform C(z):

ϕ̃res(z) = (Id + NC(z)Dz^{−d})^{−1} ( ϕ̃tur(z) − NC(z)z^{−dc} w̃(z) ),    (8)
where Id denotes the identity matrix and d ≜ dm + dc is the total delay in the loop. Leaving aside the influence of the measurement noise w, the closed-loop performance of this discrete-time system can be analyzed through the closed-loop matrix transfer function from ϕtur to ϕres, i.e. the rejection TF H(z) ≜ (Id + L(z))^{−1}, where L(z) ≜ NC(z)Dz^{−d} is the open-loop TF between ϕres and ϕcor, including delays.
Consider the simple case where L, and hence H, are diagonal, where the diagonal of H is formed with scalar TFs (as in modal approaches) denoted by H ℓ. In order to minimize the residual phase variance, the controller's TF should be selected so as to render each scalar TF H ℓ as small as possible at all frequencies, while stabilizing the feedback loop. At this critical juncture, Bode's integral theorem enters the stage; it states that for any choice of stabilizing controller, for d = dm + dc > 1, the integral of the logarithm of the modulus of every H ℓ(e^{jω}) over the normalized frequency range ω ∊ [0, π] is zero:

∫_0^π log |H ℓ(e^{jω})| dω = 0.    (9)
(Note that in discrete time, the frequency domain is explored by setting z = e^{jω} for normalized frequency ω ∊ [0, π].)
In practice, this means that the controller cannot make ϕres smaller than ϕtur at all frequencies, and that better attenuation at some frequency will have to be repaid in kind with disturbance amplification in another part of the spectrum. This so-called "water bed effect" is inherent to the feedback loop, whatever the stabilizing controller. In the sequel, we shall not select a particular controller structure, as the goal is to find among all stabilizing controllers the optimal one with respect to performance criterion (7) and hence to (1). This optimal controller will not escape the water bed effect, but will provide an optimal compromise between disturbance rejection and amplification.
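Bode's constraint (the zero integral of log|H ℓ|) is easy to check numerically. The sketch below assumes a hypothetical scalar integrator loop with gain g = 0.3 and total delay d = 2, and evaluates the area of log|H| over [0, π] with a midpoint rule:

```python
import numpy as np

# Open loop L(z) = g z^-2 / (1 - z^-1): integrator controller, total delay d = 2.
# Rejection TF H = 1/(1 + L). With g = 0.3 the closed-loop poles solve
# z^2 - z + g = 0 and lie inside the unit circle, so the loop is stable.
g = 0.3
M = 200_000
w = (np.arange(M) + 0.5) * np.pi / M      # midpoint grid on (0, pi), avoids w = 0
z = np.exp(1j * w)
L = g * z**-2 / (1 - z**-1)
H = 1.0 / (1.0 + L)
integral = np.sum(np.log(np.abs(H))) * np.pi / M   # area of log|H| over [0, pi]
# The area vanishes: attenuation at low frequency (log|H| < 0) is exactly
# balanced by amplification elsewhere (log|H| > 0) - the water bed effect.
```

The computed `integral` comes out numerically close to zero, as the theorem predicts for this stable loop.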
Let us put this aside and focus now on criterion J in Eq. (7). Parseval's theorem states that energy is conserved between time and frequency domains, so that J can be equivalently written in the frequency domain as

J(u) = (1/2π) ∫_{−π}^{π} trace( Sϕ̄res(e^{jω}) ) dω,    (10)
where for any process x, Sx stands for the power spectral density (PSD) of x (note that in the case of a vector process of finite dimension, Sx is a matrix-valued function).
At this point, one needs to throw in additional information on ϕtur and w. Assume that ϕtur and w are mutually independent zero-mean stationary ergodic processes of finite energy, with PSDs Sϕtur and Sw. A standard result from stochastic filtering theory (Birkhoff's theorem) is that ϕres is also stationary, with variance almost surely equal to J(u), i.e.

J(u) = trace( Var(ϕ̄res_k) ).    (11)
Using the fact that w and ϕtur are not correlated, one can write the PSD of ϕres as

Sϕres(z) = H(z) Sϕtur(z) H*(z) + Hw(z) Sw(z) Hw*(z),    (12)
where * denotes the conjugate transpose and Hw(z) ≜ H(z)NC(z)z^{−1} is the closed-loop TF from w to ϕres (which is obtained from Eq. (8)). The frequency-domain identity (10), when replacing Sϕres by its expression in (12), leads directly to

J(u) = (1/2π) ∫_{−π}^{π} trace( H Sϕtur H* + Hw Sw Hw* )(e^{jω}) dω.    (13)
Therefore, minimizing J(u) requires in one way or another the knowledge of both S ϕtur and Sw .
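As a minimal scalar sanity check of the time/frequency identity (10)–(11), the sketch below uses a first-order auto-regressive disturbance with hypothetical parameters, whose PSD and stationary variance are both known in closed form:

```python
import numpy as np

# For a scalar AR(1) process x_{k+1} = a x_k + v_k (white noise of variance s2),
# the PSD is S_x(w) = s2 / |1 - a e^{-jw}|^2 and the stationary variance is
# s2 / (1 - a^2).  The average of the PSD over [-pi, pi] must equal the variance.
a, s2 = 0.9, 1.0
M = 200_000
w = (np.arange(M) + 0.5) * 2 * np.pi / M - np.pi   # midpoint grid on (-pi, pi)
S = s2 / np.abs(1 - a * np.exp(-1j * w))**2
var_from_psd = S.mean()                            # (1/2pi) * integral of S over 2pi
assert abs(var_from_psd - s2 / (1 - a**2)) < 1e-3
```

This is the scalar version of the trace identity: the variance to be minimized is entirely determined by the PSDs, which is why Sϕtur and Sw must be known.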
We are now faced with the following problem: given (a) the wavefront sensor's and deformable mirror's TFs, and (b) the turbulent phase's and measurement noise's PSDs, minimize trace(Var(ϕres)) over the set of all controllers that stabilize the AO loop.
4. Optimal solution is obtained by separating estimation and control
Because the order of these controllers (that is, the order of the associated difference equations) is not a priori bounded, this optimization problem may appear at first sight intractable. Quite understandably, suboptimal approaches have been pursued to select the controller's TF C(z). A popular one is to use a static decoupling gain multiplied by a diagonal matrix of scalar dynamic compensators C ℓ with fixed structure, and to separately tune the resulting series of hopefully independent feedback loops (this corresponds to the modal approaches evoked in the previous section, with diagonal H(z)). Thus, the C ℓ may be pure integrators, as in [11, 12], or higher-order filters as in [13, 14]. The extension to parameters tuned in real time leads to a nonlinear control loop, which is beyond the scope of this presentation. Yet, the general multi-variable optimization problem (that is, without imposing any decoupling or particular structure) does have a solution, which can be explicitly computed using standard results from modern control theory, and is properly described as an LQG control, that is, a state feedback combined with a Kalman filter.
To grasp this, consider the simple situation where the delay of the deformable mirror can confidently be reduced to one sampling period (dc = 1, which means that the computation of yk and uk can be performed within [(k − 1)ΔT, kΔT], as illustrated in Fig. 2), so that the DM equation including control delay is

ϕcor_{k+1} = N u_k,    (14)
leading to the TF ϕ̃cor(z) = Nz^{−1} ũ(z). Let us now make the obviously unrealistic assumption that future values of ϕtur can be predicted with perfect accuracy. Under this fantasy-world "full information" hypothesis, a perfect solution would be to make ϕcor equal to ϕtur by solving Nu_k = ϕ̄tur_{k+1}. However, N being generally non-invertible, the optimal control corresponds to the solution of the least-squares minimization of ∥ϕ̄tur_{k+1} − Nu_k∥², i.e.

u_k = (N^t N)^{−1} N^t ϕ̄tur_{k+1},    (15)
where ∙^t stands for transposition. This optimal control u_k yields a correction Nu_k equal to the orthogonal projection of ϕ̄tur_{k+1} onto the mirror's space, through the matrix P defined as

P ≜ (N^t N)^{−1} N^t.    (16)
To obtain an implementable control under the "incomplete information" hypothesis, one may simply replace ϕ̄tur_{k+1} in (15) by some estimated value. Let us now assume that we are able to compute ϕ̂tur_{k+1|k}, the minimum-variance estimator of ϕ̄tur_{k+1} based on ℐ_k, the set of all prior information and measurements available until time k. This optimal estimate is the conditional expectation given ℐ_k:

ϕ̂tur_{k+1|k} ≜ E[ ϕ̄tur_{k+1} | ℐ_k ],    (17)
where E[∙] denotes the mathematical expectation. In this case, it is immediately checked that the control

u_k = P ϕ̂tur_{k+1|k}    (18)
does indeed minimize trace Var(ϕ̄res_{k+1} | ℐ_k), conditionally to all information available at time k. This result is known in the control literature as the stochastic separation theorem [17, 18, 19], because it demonstrates that the optimal control u can be constructed by separately solving a deterministic optimal control problem (the full information case) and a stochastic minimum-variance estimation problem (the incomplete information case). Furthermore, the deterministic optimal control subproblem turns out to be quite simple, as seen in Eq. (18).
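The full-information subproblem (15)–(18) amounts to a linear least-squares projection onto the mirror's space, which can be checked numerically. N and the phase vector below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
N = rng.standard_normal((8, 3))            # hypothetical tall influence matrix
P = np.linalg.inv(N.T @ N) @ N.T           # Eq. (16)
phi = rng.standard_normal(8)               # stand-in for the predicted phase
u = P @ phi                                # Eq. (18)

# N @ P is the orthogonal projector onto the mirror's space:
Pi = N @ P
assert np.allclose(Pi @ Pi, Pi)            # idempotent
assert np.allclose(Pi, Pi.T)               # symmetric

# and u solves the least-squares problem (15):
u_ls, *_ = np.linalg.lstsq(N, phi, rcond=None)
assert np.allclose(u, u_ls)
```

Only the estimation of ϕ̄tur_{k+1} remains nontrivial, which is precisely what the separation theorem exploits.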
Consequently, the original residual phase variance minimization problem is for all practical purposes reduced to a standard recursive minimum-variance estimation/prediction problem.
4.1. Equivalent state-space modelization
To solve this estimation/prediction problem, however, one needs to construct a model of the AO loop incorporating the deterministic dynamics of its various components, the variance of the measurement noise, and the spatial and temporal correlation structure of the turbulent phase, since Sw and Sϕtur have been shown to be necessarily known when dealing with criterion J. This model, called a state-space model, is based on a full description of the system's state, and should thus include an explicit description of the turbulent phase dynamics that matches the priors.
The state vector of a system at time k, denoted by xk, is generally defined as follows. It represents all the knowledge needed at time k to compute the next state x_{k+1} and the output (WFS measurement) yk, when inputs are known and if noises are neglected. The state vector dynamics correspond thus to an input-output description of the system, that is, a set of equations which gives x_{k+1} and yk as a function of xk, uk, and of the noises. Such a state-space model is usually described in the linear time-invariant case in the form

x_{k+1} = A x_k + B u_k + v_k,    (19)
y_k = C x_k + w_k,    (20)
where A, B and C are matrices of appropriate dimensions, v and w are decorrelated zero-mean white Gaussian noises with covariance matrices ∑v and ∑w. They are also decorrelated from the initial state x 0. These processes are assumed to be independent because they account for two different sources of unpredictability: w is a measurement noise, while v drives the stochastic drift of the turbulent phase. We propose here to construct a state-space model in the form (19–20) using the material introduced in Sections 2 and 3. At the end of this Section, the constructed state model will be completely equivalent to a set of equations describing the AO system.
The choice for the state vector is not unique, and different state vectors with different dimensions may be used to describe the same input-output behavior. What variables should then enter the state vector xk ? It should fulfill two requirements: firstly, the state must summarize the entire knowledge on the system including turbulence, and secondly, the optimal control law must be a function of the state only. In view of performance criterion (7), the residual phase ϕres should then be part of the state vector, or equivalently ϕtur and u.
At time k, considering one period of latency for read-out and computation time in the WFS (dm = 1), the measurement yk is obtained following Eq. (3) by

y_k = D( ϕ̄tur_{k−1} − ϕ̄cor_{k−1} ) + w_k    (21)
    = D ϕ̄tur_{k−1} − DN u_{k−2} + w_k.    (22)
It is clear that at least ϕ̄tur_{k−1} and u_{k−2} must enter the state vector, hence also ϕ̄tur_k and u_{k−1} for memory storage. As said before, this choice of state vector is not the only possibility, but it is motivated by the possible and direct extension to MCAO. A convenient choice for the state is then

x_k ≜ ( ϕ̄tur_k, ϕ̄tur_{k−1}, u_{k−1}, u_{k−2} )^t.    (23)
When considering Eq. (19), all quantities in x_{k+1} must be expressed from xk and uk through A and B (the values of which will be given below).
It is now time to look into turbulent phase modeling. In a linear context, any model which gives a good description of spatial and temporal correlations could be considered. This means that any given turbulent phase spatial covariance matrix ∑ϕ, defined as

∑ϕ ≜ E[ ϕ̄tur_k (ϕ̄tur_k)^t ],    (24)

can be obtained through a white noise filtered by a rational filter. The dynamics of the turbulent phase (which usually follow Taylor's hypothesis) can be approximated in such a way that the temporal correlations of the chosen filtered process match them correctly. A first-order auto-regressive model has been shown to be a good approximation of the turbulent phase dynamics [7, 20, 21],

ϕ̄tur_{k+1} = Atur ϕ̄tur_k + v_k,    (25)

where vk is a zero-mean white Gaussian noise with covariance matrix ∑v, and Atur is the matrix defining the dynamical characteristics of the turbulent phase (temporal correlations that depend on turbulence speed, see Sect. 5). For any given ∑ϕ, the model described in Eq. (25) leads in steady state to ∑ϕ = Atur ∑ϕ Atur^t + ∑v (using definition (24)). So, by stationarity of the variance, the turbulent phase covariance matrix will indeed be equal to ∑ϕ if we take ∑v = ∑ϕ − Atur ∑ϕ Atur^t. While simple, this model can thus simultaneously match the spatial correlation structure of the turbulent phase through ∑ϕ and its short-term temporal correlation through Atur.
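The covariance matching just described can be verified numerically. In this sketch the turbulence model matrix is written A_tur (a hypothetical diagonal choice) to distinguish it from the full state matrix A, and the target spatial covariance is an arbitrary SPD stand-in:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A_tur = np.diag([0.99, 0.97, 0.95, 0.90])      # hypothetical per-mode correlations
M = rng.standard_normal((n, n))
Sigma_phi = M @ M.T + n * np.eye(n)            # target spatial covariance (SPD)
Sigma_v = Sigma_phi - A_tur @ Sigma_phi @ A_tur.T

# Propagating Var(phi) through phi_{k+1} = A_tur phi_k + v_k, starting from
# zero, converges to the prescribed stationary covariance Sigma_phi:
S = np.zeros((n, n))
for _ in range(2000):
    S = A_tur @ S @ A_tur.T + Sigma_v
assert np.allclose(S, Sigma_phi, atol=1e-8)
```

The choice of ∑v thus guarantees that the model reproduces the prescribed spatial statistics exactly, whatever the (stable) temporal correlations encoded in the diagonal of A_tur.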
Knowing ∑ϕ and ∑v, the stochastic state-space model is now completely defined, using (22–25), in the form (19–20) with

A = [ Atur 0 0 0 ; Id 0 0 0 ; 0 0 0 0 ; 0 0 Id 0 ],   B = [ 0 ; 0 ; Id ; 0 ],   C = ( 0  D  0  −DN ),    (26)

where the rows of the block matrices are separated by semicolons, and where the state noise in (19) is (v_k^t, 0, 0, 0)^t.
The optimal control u (see Eq. (18)) then takes the general state feedback form

u_k = K x̂_{k+1|k},    (27)
where K ≜ (P, 0, 0, 0) and P is defined in (16).
Note also that the model under consideration is stationary, but non-stationary models (with A, B, C depending on time) could be considered as well, leading to the same conclusions.
4.2. Kalman’s optimal filter
The minimum-variance estimate of x_{k+1} using all the measurements until time k is obtained in the Gaussian case as the output of a Kalman filter [22, 23] (if the Gaussian assumption is relaxed, the Kalman filter gives the best linear unbiased estimator). This filter is an observer, that is, it has the general structure

x̂_{k+1|k} = A x̂_{k|k−1} + B u_k + L_k ( y_k − ŷ_{k|k−1} ),    (28)
where ŷ_{k|k−1} is the best estimate of the model output given ℐ_{k−1}, obtained as

ŷ_{k|k−1} = C x̂_{k|k−1}.    (29)
The Kalman optimal observer corresponds to a particular value of the gain Lk, given by

L_k = A ∑_{k|k−1} C^t ( C ∑_{k|k−1} C^t + ∑w )^{−1},    (30)
where ∑_{k|k−1} is the covariance matrix of the state prediction error, obtained by solving the following Riccati matrix equation:

∑_{k+1|k} = A ∑_{k|k−1} A^t + ∑v − A ∑_{k|k−1} C^t ( C ∑_{k|k−1} C^t + ∑w )^{−1} C ∑_{k|k−1} A^t.    (31)
At first sight, this equation may plunge us into despair, as it seems incompatible with real-time constraints. But the careful reader will have noticed that Eq. (31) does not depend on the measurements. It can thus be computed off-line, and even replaced by its constant asymptotic solution L (by letting ∑_{k+1|k} in (31) converge to its asymptotic value) with insignificant loss of optimality. Furthermore, as all control values are known until time k, the coordinates of the state x corresponding to delayed values of u need not be estimated; the optimal gain L thus has the corresponding coordinates equal to zero, so that it is computed using a reduced-order Riccati equation.
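The off-line computation of the asymptotic gain can be sketched as a plain iteration of the Riccati recursion (31) followed by the gain formula (30), here on a small hypothetical model (all matrices are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 3
A = np.diag([0.99, 0.97, 0.95, 0.90])       # hypothetical stable state dynamics
C = rng.standard_normal((m, n))             # observation matrix
Sigma_v = 0.1 * np.eye(n)                   # state noise covariance
Sigma_w = 0.01 * np.eye(m)                  # measurement noise covariance

S = np.eye(n)                               # Sigma_{k|k-1}, any PSD initialization
for _ in range(2000):
    K = S @ C.T @ np.linalg.inv(C @ S @ C.T + Sigma_w)
    S = A @ (S - K @ C @ S) @ A.T + Sigma_v     # Riccati recursion, Eq. (31)
L = A @ S @ C.T @ np.linalg.inv(C @ S @ C.T + Sigma_w)   # asymptotic gain, Eq. (30)

# At convergence, S is a fixed point of (31):
K = S @ C.T @ np.linalg.inv(C @ S @ C.T + Sigma_w)
S_next = A @ (S - K @ C @ S) @ A.T + Sigma_v
assert np.allclose(S, S_next, atol=1e-8)
```

Since nothing in the iteration involves measurements, the loop runs entirely off-line; the real-time controller only applies the fixed gain L.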
5. Illustration of LQG control based on an end-to-end simulator
The asymptotic LQG control has been implemented to control a classical AO system on an end-to-end simulator. This experiment is intended mainly as a feasibility study, especially with respect to constraints and limitations of real-time implementation. For such a simple experimental setup, LQG control provides better overall closed-loop stability but only limited performance improvement over a well-tuned integrator control. However, extensive simulations and ongoing experiments show that dramatic performance enhancement is attained in more complex cases such as off-axis AO, MCAO or vibration filtering [7, 26, 27].
The results presented here correspond to a D = 8 m telescope observing in the near infrared (2.2 μm), equipped with an 8×8 subaperture Shack-Hartmann WFS and a 9×9 actuator stacked-actuator mirror. The turbulence strength is set to D/r0 = 8 at the imaging wavelength (0.61 arcsec seeing at 0.5 μm). The wind speed is V = 9 m s⁻¹. A 250 Hz correction (which corresponds to ΔT = 4×10⁻³ s) with a total delay d = 2 (dΔT = 8×10⁻³ s, with dc = dm = 1) is then simulated.
The turbulence is simulated by translating a large Kolmogorov phase screen (Taylor hypothesis). The Shack-Hartmann measurements are computed by averaging the phase derivative over each subaperture (geometric approximation). A white Gaussian noise is added to the slope measurements. The DM influence functions are modeled as Gaussian functions, with a coupling factor of 25%.
For LQG control, the phase is estimated on a Zernike basis restricted to the first 160 modes. The turbulence model is the first-order auto-regressive model of Eq. (25). Following Le Roux et al., the turbulence model matrix is assumed to be diagonal. Its diagonal elements ai are related to the correlation time of each individual Zernike coefficient. We choose here the following law:
where n is the radial order of Zernike number i. This law accounts for the decrease of the correlation time with radial order.
The WFS matrix is obtained by applying the same geometric approximation to the Zernike modes. The correction applied by the DM is computed by projecting the estimated phase onto the mirror subspace using the DM influence functions. Finally, a classic OMGI is also implemented for comparison. Gains are optimized according to the procedure proposed by Dessenne.
Figure 3 shows the point spread functions obtained with both control laws. The Strehl ratios are respectively 69% and 71% for the OMGI and the LQG control. This result demonstrates the good behavior of the LQG control and the potential gain it brings.
6. Integrators: observer form and hidden turbulent phase model
Model errors exist in the definition of the state-space model: a first-order auto-regressive model instead of Taylor turbulence, estimation on a finite number of modes. However, such approximations are unavoidable in a realistic case. Still, a noticeable gain is observed. Further studies have shown that, in the present case, this gain is mainly due to a better handling of aliasing effects. This is permitted by the expansion of the phase on a finite but extended basis, beyond the DM subspace, which allows both a good fit to the turbulence statistics and a good representation of the WFS measurement.
We now turn to linear controllers in general, and show that, in fact, essentially any existing linear AO controller is equivalent to an observer-based control. If a controller of given structure is used, the equivalent state model will depend on this structure, and the controller's output turns out to be a sub-optimal solution with respect to the implicit stochastic model of the turbulent phase. To illustrate this, we consider for example a simple integrator control with recurrence equation

u_k = u_{k−1} + G y_k.    (33)
The purpose of this section is to show that there exists a state model for which the optimal solution in the minimum-variance sense is the control law (33), for a particular value of G. We therefore begin by building the adequate state model, and then show that the optimal solution leads to a simple integrator. As a state model describes all components of a system, it then becomes possible to analyze the implicit physical hypotheses hidden behind (33).
To start with, consider the turbulent phase dynamics defined by

ϕ̄tur_{k+1} = ϕ̄tur_k + v_k,    (34)

where vk is a zero-mean white noise, and consider a state model of the form (19–20),

x_{k+1} = [ Id 0 ; 0 0 ] x_k + [ 0 ; Id ] u_k + [ v_k ; 0 ],   y_k = ( D  −DN ) x_k + w_k,    (35)

with state vector

x_k ≜ ( ϕ̄tur_k, u_{k−1} )^t.    (36)
The optimal control (18) computed on the basis of (35–36) is equivalent to the integrator control (33) for a particular value of G. This new state model differs from (26) in one important respect: the measurement delay is not taken into account, i.e. dm = 0.
For this equivalent state model (35), the asymptotic optimal observer of the form (28) for turbulent phase estimation turns out to be simply

ϕ̂tur_{k+1|k} = ϕ̂tur_{k|k−1} + L y_k,    (37)
with a value of L computed using the new state model defined through (35–36). Simply because dm = 0, the measurement equation is y_k = D( ϕ̄tur_k − ϕ̄cor_k ) + w_k, leading to ŷ_{k|k−1} = D( ϕ̂tur_{k|k−1} − ϕ̄cor_k ) = 0, since the correction ϕ̄cor_k = NPϕ̂tur_{k|k−1} coincides with the estimated phase in the mirror's space. This explains why ŷ_{k|k−1} does not appear in (37) when compared with (28).
Now, multiplying both sides of (37) by P = (N^t N)^{−1} N^t (remember that u_k = P ϕ̂tur_{k+1|k}) leads exactly to (33), with G satisfying

G = P L.    (38)
Let us now unfold this backwards: for any choice of integrator gain G, there always exists an observer of the form (37), with corresponding turbulent phase stochastic model (34). The observer (and consequently the integral controller) will not be optimal in the minimum-variance sense unless G is defined from (38) with the optimal L given by (30).
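The equivalence between the observer form (37) and the integrator recurrence (33) with G = P L can be checked numerically on arbitrary stand-in matrices (it holds for any gain L, optimal or not):

```python
import numpy as np

rng = np.random.default_rng(4)
n_phi, n_y, n_u = 5, 6, 3
N = rng.standard_normal((n_phi, n_u))       # hypothetical influence matrix
L_gain = rng.standard_normal((n_phi, n_y))  # any observer gain (Kalman or not)
P = np.linalg.inv(N.T @ N) @ N.T            # Eq. (16)
G = P @ L_gain                              # Eq. (38)

phi_hat = np.zeros(n_phi)                   # observer state, Eq. (37)
u_obs = np.zeros(n_u)                       # control from the observer form
u_int = np.zeros(n_u)                       # control from the integrator, Eq. (33)
for _ in range(50):
    yk = rng.standard_normal(n_y)           # arbitrary measurement sequence
    phi_hat = phi_hat + L_gain @ yk         # (37): phase estimate update
    u_obs = P @ phi_hat                     # u_k = P phi_hat_{k+1|k}
    u_int = u_int + G @ yk                  # (33): integrator recurrence
    assert np.allclose(u_obs, u_int)
```

Both recursions produce identical control sequences, which is exactly the sense in which the integrator hides an observer and a turbulent phase model.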
The implicit stochastic model (34), which appears when writing down the equivalent state model leading to the integrator recurrence (33), produces a turbulent phase with unbounded energy, as trace(Var(ϕ̄tur_k)) → ∞ when k → ∞. A nefarious consequence is that an observer built from this model is not stable and may well lead to unbounded trajectories of ϕ̂tur, and therefore of the control u: the notorious wind-up effect for controllers with integral action.
In a similar way, it can be shown that any linear AO controller can be traced back to an observer, and thus to an implicit turbulent phase model (whose order depends on that of the controller, more precisely on the order of its denominator) and to an equivalent state-space model. This makes it possible to compute the optimal gain thanks to (30–31). Using anything other than the Kalman gain therefore inevitably leads to a sub-optimal control with respect to the minimum-variance criterion.
7. Discussion and conclusions
The approach proposed here is based on a linear description of the constitutive elements of an AO loop. Provided that the mirror's voltages are computer-controlled, the control optimization can be performed using a discrete-time model of the AO system. We have pointed out some fundamental limitations of the AO closed loop, and the unavoidable priors that must be defined for control optimization: the PSDs of turbulent phase and noise. The optimal solution is then derived thanks to the separation theorem, based on a state-space model. A simulation example comparing OMGI and LQG control on an end-to-end simulator shows that, even with some model errors on the turbulent phase, performance is better than with an optimized integrator.
Finally, the approach is used to analyze some aspects of pure integrator-type controllers, and in particular the fact that their structure embeds implicit prior hypotheses that are made explicit by an equivalent state model. It is shown that they contain an unstable turbulent phase model, which renders them prone to the notorious wind-up effect.
The choice of the state vector, as said before, is not unique. For example, ϕcor could be part of the state vector instead of u, and a state model of smaller dimension could be used. Our choice is motivated by several considerations. Firstly, it limits the influence of matrices obtained through the calibration process (and thus containing model errors) mainly to the observation equation (20), so that there is no direct error propagation through the state equation (19). Secondly, it keeps a clearly visible physical structure, so that extensions of the model to various situations (presence of vibrations, off-axis AO, MCAO) are easy.
A few comments on turbulent phase modeling: more complex temporal correlation structures could easily be embedded in the model. Indeed, any stationary process with rational PSD can be constructed as the output of a linear "shaping filter" whose input is a white noise and which can be represented in state-space form. However, a first-order model has been shown to be a good approximation, as only short-term correlations matter for control performance.
We have presented here a classical on-axis AO setup, but various extensions may be conducted with state models. For instance, the filtering of parasitic vibrations can be considered. The tool is also well adapted to optimal control in MCAO [7, 27], where anisoplanatism is taken into account.
Linear mirror dynamics could also be considered, with a minimum-variance criterion. In this case, the state equation must also describe these new dynamics. More generally, the tools presented here readily apply to any AO system, provided that it is linear or well approximated by linear behavior, and that the control criterion of interest is minimum variance. Of course, the state vector and the matrices A, B and C must be modified according to the description of each element of the system. Also, several types of DM nonlinearities, such as saturations or nonlinear influence functions, can be explicitly accounted for in both the full-information optimal control computation and the turbulent phase estimation.
The Kalman-based control has been successfully tested on the BOA bench at ONERA in various configurations: classic AO with and without vibration filtering (not yet published) and simplified MCAO.
To conclude, it would seem that a Kalman filter is the perfect tool for optimizing the overall performance of an AO loop. However, this attractive optimality is predicated on the assumption that one can tune the parameters of the linear stochastic model so that it fits the real system with adequate precision. This important caveat should be kept in mind at all times, in order to achieve a sensible balance between the model's accuracy and its complexity.
This work has been partly supported by the DGA, Ministère de la Défense, under contract no. 0534028.
References and links
1. F. Roddier (Ed.), Adaptive Optics in Astronomy, (Cambridge University Press, 1999). [CrossRef]
2. T. Fusco, G. Rousset, J.-L. Beuzit, D. Mouillet, K. Dohlen, R. Conan, C. Petit, and G. Montagnier, “Conceptual design of an extreme AO dedicated to extra-solar planet detection by the VLT-planet finder instrument,” Proc. SPIE, 5903, (2005). [CrossRef]
3. R. H. Dicke, “Phase-contrast detection of telescope seeing errors and their correction,” Astrophys. J. 198, 605–615 (1975).
4. D.C. Johnson and B.M. Welsh, “Analysis of multiconjugate adaptive optics,” J. Opt. Soc. Am. A 11, 394–408 (1994). [CrossRef]
5. Adaptive Optics in Astronomy, special issue of Comptes Rendus Physique (Elsevier, France, 2005), Vol. 6, No. 10.
6. G. Rousset, F. Lacombe, et al., “NAOS, the first AO system of the VLT: on-sky performance,” Proc. SPIE 4839, 140–149 (2002). [CrossRef]
7. B. Le Roux, J.-M. Conan, C. Kulcsár, H. F. Raynaud, L. M. Mugnier, and T. Fusco, “Optimal control law for classical and multiconjugate adaptive optics,” J. Opt. Soc. Am. A 21, 1261–1276 (2004). [CrossRef]
8. C. Mohtadi, “Bode’s integral theorem for discrete-time systems,” Proc. IEE 137, 57–66 (1990).
9. J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, 3rd ed. (Prentice Hall, Upper Saddle River, N.J., 1996).
10. I. I. Gikhman and A. V. Skorokhod, The Theory of Stochastic Processes (Springer-Verlag, Berlin, 1979).
11. E. Gendron and P. Léna, “Astronomical adaptive optics. I. Modal control optimization,” Astron. Astrophys. 291, 337–347 (1994).
12. D. M. Wiberg, C. E. Max, and D. T. Gavel, “A special non-dynamic LQG controller: part I, application to adaptive optics,” Proceedings of the 43rd IEEE Conference on Decision and Control 3, 3333–3338 (2004).
13. C. Dessenne, P.-Y. Madec, and G. Rousset, “Optimization of a predictive controller for the closed loop adaptive optics,” Appl. Opt. 37, 4623 (1998). [CrossRef]
14. A. Wirth, J. Navetta, D. Looze, S. Hippler, A. Glindemann, and D. Hamilton, “Real-time modal control implementation for adaptive optics,” Appl. Opt. 37, 4586–4597 (1998). [CrossRef]
15. J.S. Gibson and B.L. Ellerbroek, “Adaptive optics wave-front correction by use of adaptive filtering and control,” Appl. Opt. 39, 2525–2538 (2000). [CrossRef]
16. H. W. Sorenson, “Least-square estimation: from Gauss to Kalman,” IEEE Spectrum 7, 63–68 (1970). [CrossRef]
17. P. D. Joseph and J. T. Tou, “On linear control theory,” AIEE Trans. Applications in Industry, pp. 193–196 (1961).
18. R. N. Patchell and O. L. R. Jacobs, “Separability, neutrality and certainty equivalence,” Int. J. Control 13, (1971). [CrossRef]
19. Y. Bar-Shalom and E. Tse, “Dual effect, certainty equivalence and separation in stochastic control,” IEEE Trans. Automat. Contr. 19, 494–500 (1974). [CrossRef]
20. R.N. Paschall and D.J. Anderson, “Linear Quadratic Gaussian control of a deformable mirror adaptive optics system with time-delayed measurements,” Appl. Opt. 32, 6347–6358 (1993). [CrossRef] [PubMed]
21. M. W. Oppenheimer and M. Pachter, “Adaptive optics for airborne platforms—part 2: controller design,” Opt. Laser Technol. 34, 159–176 (2002). [CrossRef]
22. A.H. Jazwinski, Stochastic Processes and Filtering Theory, (Academic Press, 1970).
23. B. D. O. Anderson and J. B. Moore, Optimal Control: Linear Quadratic Methods (Prentice Hall, London, 1990).
24. B. Le Roux, J.-M. Conan, C. Kulcsár, H. F. Raynaud, L. M. Mugnier, and T. Fusco, “Optimal control law for multiconjugate adaptive optics,” Proc. SPIE, 4839, (2003). [CrossRef]
25. J.-M. Conan, G. Rousset, and P.-Y. Madec, “Wave-front temporal spectra in high-resolution imaging through turbulence,” J. Opt. Soc. Am. A 12, 1559–1570 (1995). [CrossRef]
26. C. Petit, F. Quiros-Pacheco, J.-M. Conan, C. Kulcsár, H.-F. Raynaud, T. Fusco, and G. Rousset, “Kalman Filter based control loop for Adaptive Optics,” Proc. SPIE, 5490, (2004). [CrossRef]
27. C. Petit, J.-M. Conan, C. Kulcsár, H. F. Raynaud, T. Fusco, J. Montri, and D. Rabaud. “Optimal control for Multi-Conjugate Adaptive Optics,” Elsevier, Comptes Rendus Physique 6, 1059–1069 (2005). [CrossRef]