
Multiframe blind deconvolution with real data: imagery of the Hubble Space Telescope

Open Access

Abstract

Multiframe blind deconvolution, the process of restoring resolution to blurred imagery when the precise form of the blurs is unknown, is discussed as an estimation-theoretic method for improving the resolving power of ground-based telescopes used for space surveillance. The imaging problem is posed in an estimation-theoretic framework whereby the object's incoherent scattering function is estimated through the simultaneous identification and correction of the distorting effects of atmospheric turbulence. An iterative method derived via the expectation-maximization (EM) procedure is reviewed, and results obtained from telescope imagery of the Hubble Space Telescope are presented.

©1997 Optical Society of America

1. Introduction

The distorting effects of atmospheric turbulence on ground-based imagery of astronomical and space objects are well documented, and numerous methods have been proposed, implemented, and tested for their correction [1]. These methods can be broadly classified into two categories: post-processing through signal processing, whereby the image distortions are identified and compensated by computer processing of the recorded imagery; and pre-processing through adaptive optics, whereby the distortions are identified and compensated with wavefront sensors and deformable mirrors so that undistorted imagery is, in principle, recorded directly. With current technology the cost of adaptive compensation can be prohibitive, particularly if full compensation is desired. Because of this, many systems in use today benefit only from partial compensation or none at all, and post-processing methods such as the one discussed in this paper provide an economical means for restoring a telescope's lost resolving power.

It is widely accepted that only modest improvements in image quality can be attained through the measurement and processing of long-exposure (on the order of minutes to hours) telescope imagery. This difficulty is often quantified through Fried's seeing parameter r_0, which provides a measure of the largest spatial frequency present in long-exposure imagery. Whereas a diffraction-limited telescope records spatial frequencies as high as the ratio D/(λd_o), where D is the telescope diameter, λ is the nominal wavelength being sensed, and d_o is the distance from the telescope to the object, long-exposure imagery acquired through atmospheric turbulence contains frequencies only up to the turbulence-limited cutoff r_0/(λd_o), where r_0 can take values under 10 cm. For a 2 m telescope with r_0 = 10 cm, then, recovery of diffraction-limited resolution requires one to super-resolve the long-exposure imagery by a factor of 20. Such improvements are rarely, if ever, accomplished with real data.
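To make the arithmetic explicit, the required super-resolution factor is the ratio of the diffraction-limited cutoff to the seeing-limited cutoff:

$$\frac{D/(\lambda d_o)}{r_0/(\lambda d_o)}=\frac{D}{r_0}=\frac{2\ \mathrm{m}}{0.1\ \mathrm{m}}=20.$$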

Exposure times on the order of a few milliseconds, however, provide imagery with spatial-frequency information out to the diffraction limit. This fact was first exploited by Labeyrie [2] and has since spawned a number of signal-processing methods broadly classified as speckle imaging techniques; a review is found in Ref. 1 and the many references therein. Many of these, such as Labeyrie's speckle interferometry, the Knox-Thompson method [3], and the triple-correlation method [4], rely on the existence of a data transformation that is invariant to the effects of atmospheric turbulence. After performing this transformation along with any required calibration, a non-trivial image-recovery problem must be solved to obtain an estimate of the underlying object. As an alternative to these methods, the data can be processed to directly estimate and correct for the effects of atmospheric turbulence. Methods based on this approach are broadly classified as multiframe blind deconvolution techniques [5]. The application of a multiframe blind deconvolution method to real telescope data is the focus of this paper.

2. Mathematical model

For vertical-path, ground-to-space imaging, the turbulence-induced distortions are commonly modeled as a time-varying phase screen that induces a point-spread function given by:

$$h(\mathbf{y};\theta_t)=\left|\int A(\mathbf{u})\,\exp\!\left\{j\!\left[\theta_t(\mathbf{u})-\frac{2\pi}{\lambda d_o}\,\mathbf{u}\cdot\mathbf{y}\right]\right\}d\mathbf{u}\right|^2,\tag{1}$$

where u and y are two-dimensional spatial variables in the telescope-pupil and image planes, respectively; A(·) is the telescope pupil function; θ_t(·) is the time-varying phase distortion caused by turbulence-induced changes in the refractive index of the atmosphere; λ is the nominal wavelength; and d_o is the distance from the object to the telescope. This is the isoplanatic approximation, and it results in an imaging model that is linear and spatially invariant. In the absence of other effects, then, the time-varying intensity recorded by the telescope's camera is:

$$i(\mathbf{y};\theta_t,o)=\int h(\mathbf{y}-\mathbf{x};\theta_t)\,o(\mathbf{x})\,d\mathbf{x}=h(\mathbf{y};\theta_t)*o(\mathbf{y}),\tag{2}$$

where o(·) is the object's incoherent scattering function, after magnification and inversion [6]. The notation i(·; θ_t, o) shows the image intensity's explicit dependence on the unknown object o(·) and turbulence-induced phase aberrations θ_t(·).
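As a concrete illustration of the spatially invariant model, a minimal numerical sketch (in Python with NumPy; not part of the original paper, and the function name is ours) of Eq. (2) evaluated with FFTs is:

```python
import numpy as np

def noiseless_image(o, h):
    """Spatially invariant imaging model i = h * o of Eq. (2), evaluated
    with FFTs. Assumes periodic boundary conditions and a PSF array whose
    origin is stored at index (0, 0), as produced by an unshifted FFT."""
    return np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(o)).real
```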

Many telescope imaging systems in use today employ charge-coupled-device (CCD) cameras to record the image intensity. As discussed by Snyder et al. [7], however, the data acquired by these cameras include other effects such as internal and external background counts, nonuniform camera response, electronic read-out noise, and other sources of camera bias. Accordingly, a reasonable model for the data acquired by the nth detector element in a CCD camera array during the kth exposure is:

$$d_k[n]=N_k[n]+M_k[n]+g_k[n]+b[n],\tag{3}$$

where N_k[n] is a Poisson-distributed random variable that models the number of object-induced photocounts recorded by the detector, M_k[n] is a Poisson-distributed random variable that models the number of background-induced (both internal and external) photocounts recorded by the detector, g_k[n] is a Gaussian-distributed random variable that models the CCD read-out noise, and b[n] is a deterministic bias induced by the electronic circuitry used to acquire and record the data. All of these variables are modeled as statistically independent of one another and across detector elements.
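A hedged sketch of this measurement model, useful for generating synthetic frames to exercise a restoration algorithm (the names and the NumPy realization are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_frame(I_k, I_b, b, sigma):
    """Draw one CCD frame d_k[n] per Eq. (3): Poisson object and background
    photocounts, zero-mean Gaussian read-out noise, and a deterministic
    bias. All arguments are per-pixel arrays (or scalars) of a common shape."""
    N = rng.poisson(I_k)           # object-induced photocounts N_k[n]
    M = rng.poisson(I_b)           # background photocounts M_k[n]
    g = rng.normal(0.0, sigma)     # read-out noise g_k[n], std sigma[n]
    return N + M + g + b           # recorded data d_k[n]
```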

The expected number of object-induced photocounts is modeled as:

$$E\{N_k[n]\}=I_k[n]=\gamma[n]\int_{y_n}\!\int_{t_k}^{t_k+T}i(\mathbf{y};\theta_t,o)\,d\mathbf{y}\,dt,\tag{4}$$

where t_k is the start time and T the duration of the kth exposure, y_n is the spatial region over which the nth detector element integrates the image intensity, and γ[n] is a spatially varying scale factor that accounts for detector efficiency. Pixels that record meaningless data are accommodated in this model by setting γ[n] = 0. The phase aberrations are usually constant over a short exposure, and the detector regions are often small relative to the fluctuations in the image intensity, so the integrating operation can be modeled by the sampling operation:

$$I_k[n]\simeq\gamma[n]\,T\,|y|\;i_k(\mathbf{y}_n;\theta_{t_k},o),\tag{5}$$

where y_n is the location of the nth detector element and |y| is its integration area. Furthermore, the object and point-spread functions are often approximated on a discrete grid, so that:

$$i_k(\mathbf{y}_n;\theta_{t_k},o)\simeq\sum_m h(\mathbf{y}_n-\mathbf{x}_m;\theta_{t_k})\,o(\mathbf{x}_m)\,\Delta x^2,\tag{6}$$

and

$$h(\mathbf{y}_n;\theta_{t_k})\simeq\left|\sum_l A(\mathbf{u}_l)\,\exp\!\left\{j\!\left[\theta_{t_k}(\mathbf{u}_l)-\frac{2\pi\,\Delta u\,\Delta y}{\lambda d_o}\,\mathbf{u}_l\cdot\mathbf{y}_n\right]\right\}\Delta u^2\right|^2,\tag{7}$$

where Δy, Δx, and Δu are the image-, object-, and pupil-plane grid sizes, respectively. When the pupil-plane array size is N × N and ΔuΔy/(λd_o) = 1/N, the computations required to create the point-spread function can be accomplished efficiently with a fast Fourier transform (FFT) algorithm.
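Under the sampling condition just stated, Eq. (7) is the squared modulus of a two-dimensional DFT of the complex pupil field, so the PSF can be formed with a single FFT. A minimal sketch under that assumption (our function name; not code from the paper):

```python
import numpy as np

def psf_from_phase(A, theta):
    """Discrete PSF of Eq. (7): squared modulus of the 2-D DFT of the
    complex pupil field A(u_l) exp(j*theta(u_l)), valid on an N x N grid
    when du*dy/(lambda*d_o) = 1/N. A is the pupil amplitude, theta the
    turbulence phase screen; the output is normalized to unit volume."""
    field = A * np.exp(1j * theta)         # complex field in the pupil plane
    h = np.abs(np.fft.fft2(field)) ** 2    # intensity point-spread function
    return h / h.sum()
```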

The expected number of background-induced photocounts, E{M_k[n]}, is denoted by I_b[n] and assumed to be the same for each exposure. The CCD read-out noise values g_k[n] are typically modeled as zero-mean, independent random variables with variance σ²[n]. Appropriate values for the gain function γ[·], background mean I_b[·], read-noise variance σ²[·], and bias b[·] are usually determined through a carefully controlled study of the data-acquisition system. This typically involves the analysis of data acquired both without illumination (dark frames) and with uniform illumination (flat fields).
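As an illustration only (the paper does not give a calibration recipe), a simplified estimate of these quantities from stacks of dark and flat frames might look like the following; it attributes all dark-frame variance to read-out noise and treats the averaged flat as noise-free:

```python
import numpy as np

def calibrate(darks, flats):
    """Rough per-pixel calibration from a stack of dark frames (no
    illumination) and flat fields (uniform illumination), stacked along
    axis 0. Simplified: dark-current shot noise is ignored, so the
    dark-frame variance is attributed entirely to read-out noise."""
    b = darks.mean(axis=0)           # bias b[n]
    read_var = darks.var(axis=0)     # read-noise variance sigma^2[n]
    flat = flats.mean(axis=0) - b    # bias-corrected flat field
    gamma = flat / flat.mean()       # relative gain gamma[n]
    gamma[gamma <= 0] = 0.0          # flag dead pixels with gamma[n] = 0
    return gamma, b, read_var
```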

3. Maximum-likelihood estimation

Consistent with the noise models developed in the previous section, the data recorded by each detector element in a CCD camera are a sum of independent Poisson and Gaussian random variables (plus a deterministic bias). Accordingly, the probability of receiving N object- and background-induced photocounts in the nth detector element during the kth exposure is:

$$\Pr\{N_k[n]+M_k[n]=N;\theta_{t_k},o\}=\frac{\exp\{-(I_k[n]+I_b[n])\}\,(I_k[n]+I_b[n])^N}{N!},\tag{8}$$

where

$$I_k[n]=\gamma[n]\,T\,|y|\sum_m h(\mathbf{y}_n-\mathbf{x}_m;\theta_{t_k})\,o(\mathbf{x}_m)\,\Delta x^2=\gamma[n]\sum_m h(\mathbf{y}_n-\mathbf{x}_m;\theta_{t_k})\,o[m].\tag{9}$$

Here, o[m] = T|y|Δx² o(x_m) contains the dependence on the unknown object, and h(·; θ_{t_k}) contains the dependence on the unknown phase aberrations. Furthermore, the probability density for the read-out noise is

$$p_{g_k[n]}(g)=(2\pi\sigma^2[n])^{-1/2}\exp\!\left[-\frac{g^2}{2\sigma^2[n]}\right],\tag{10}$$

and the density for the measured data at each detector element is:

$$p_{d_k[n]}(d;\theta_{t_k},o)=\sum_{N=0}^{\infty}p_{g_k[n]}(d-b[n]-N)\,\Pr\{N_k[n]+M_k[n]=N;\theta_{t_k},o\}.\tag{11}$$

For a given set of data {dk [·]}, the maximum-likelihood estimate of the unknown object and phase aberrations is selected to maximize the likelihood:

$$l(o,\theta)=\prod_{k=1}^{K}\prod_n p_{d_k[n]}(d_k[n];\theta_{t_k},o),\tag{12}$$

or, as is commonly done, its logarithm (the log-likelihood):

$$L(o,\theta)=\ln l(o,\theta)=\sum_{k=1}^{K}\sum_n\ln p_{d_k[n]}(d_k[n];\theta_{t_k},o).\tag{13}$$

The complicated form of the measurement density (Eq. 11) makes the direct maximization of L(o, θ) unwieldy. When the read-out noise variance is large (greater than 50 or so), however, the data can be modified according to:

$$\tilde{d}_k[n]=d_k[n]-b[n]+\sigma^2[n]=N_k[n]+M_k[n]+g_k[n]+\sigma^2[n],\tag{14}$$

and these modified data are then similar in distribution to the sum of three Poisson-distributed random variables [8]. Accordingly, the modified data are approximately Poisson distributed with mean value I_k[n] + I_b[n] + σ²[n], and their probability mass function is modeled as:

$$\Pr\{\tilde{d}_k[n]=D;\theta_{t_k},o\}=\frac{\exp\{-(I_k[n]+I_b[n]+\sigma^2[n])\}\,(I_k[n]+I_b[n]+\sigma^2[n])^D}{D!},\tag{15}$$

so that the log-likelihood is:

$$L(o,\theta)=\sum_{k=1}^{K}\sum_n\left\{-(I_k[n]+I_b[n]+\sigma^2[n])+\tilde{d}_k[n]\ln(I_k[n]+I_b[n]+\sigma^2[n])\right\},\tag{16}$$

where terms not dependent on the object or phase aberrations have been omitted. The maximum-likelihood phase-aberration and object estimates are then those that maximize L(o, θ) subject to any physical constraints imposed on these parameters. For all problems we have considered, the object intensity o[·] is constrained to be nonnegative. Known object support is another commonly used constraint; however, implementing it in an optimization procedure usually amounts to a trivial reduction in the number of unknown parameters.
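In code, the data modification of Eq. (14) and the log-likelihood of Eq. (16) are only a few lines; the sketch below (our names, single frame) omits the constant terms exactly as the equation does:

```python
import numpy as np

def log_likelihood(d, I_k, I_b, b, read_var):
    """Log-likelihood of Eq. (16) for a single frame, evaluated on the
    modified data d_tilde = d - b + sigma^2 of Eq. (14). Terms that do not
    depend on the object or phase parameters are dropped, as in the text."""
    d_tilde = d - b + read_var       # approximately Poisson distributed
    mean = I_k + I_b + read_var      # approximate Poisson mean
    return np.sum(-mean + d_tilde * np.log(mean))
```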

3.1 Comments

In recent years, many methods for the blind deconvolution of astronomical and space-object imagery have been proposed. These can be broadly classified into two categories: i) the Ayers-Dainty [9] class of iterative algorithms; and ii) constrained numerical-optimization algorithms. The methodology described here is one of constrained numerical optimization, and is a straightforward extension of the method proposed by Schulz [5], with important modifications to accommodate real sensor artifacts such as read noise, internal and external background, and non-uniform gain. Other methods of constrained optimization have been proposed [10-12], some relying on essentially the same methodology as Ref. 5, with the differences being the method used for numerical optimization [12] and the utilization (or lack thereof) of realistic sensor models. Key points regarding the methodology presented in this paper are:

  1. A practical and faithful model is used for the data-collection process. Through the maximum-likelihood method for estimation, this model in turn induces the cost function for a constrained optimization problem.
  2. The importance of the point-spread function model (Eq. 7) cannot be overstated. This point was emphasized by Cornwell in his summary of the 1994 workshop on the Restoration of HST Images and Spectra II as he commented on the subject of blind deconvolution [13]:

    “One always gains by adding more information in the form of known imaging physics. For example, one can ask whether the closure relations are enforced? Alternatively, is the PSF forced to be the Fourier transform of a cross-correlation of a pupil plane with phase errors? If not, then it should be.”

3.2 Numerical optimization

Many methods are available for performing the optimization required to solve for the maximum-likelihood aberration and object parameters. Gradient-based methods and the expectation-maximization (EM) method proposed in Ref. 5 are two popular choices. However, one should not confuse the method used for solving the optimization problem with the methodology used to pose the problem. Provided the problem is successfully solved, the optimization method will influence only algorithm performance (computation time). The methodology used to define the problem, which depends greatly upon the use of sound mathematical and physical models, will determine estimator performance (bias and variance).

With straightforward modifications to accommodate sensor effects such as non-uniform gain and backgrounds, the EM method of Ref. 5 can be applied to this problem. Additionally, the derivatives of the log-likelihood with respect to the object and phase-aberration parameters are easily derived and can be used in a gradient-based optimization method. Both approaches have performed well for this problem, and the determination of an optimal optimization method remains an open problem.
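For intuition, the object half of one EM iteration for this Poisson model reduces to a multiframe Richardson-Lucy-style multiplicative update. The sketch below holds the phase parameters fixed, assumes unit gain (γ[n] = 1) and unit-sum PSFs, and is not the complete algorithm of Ref. 5, which alternates updates of the object and phase parameters:

```python
import numpy as np

def object_update(o, psfs, frames, I_b, read_var):
    """One multiplicative object update for the multiframe Poisson model:
    o <- (o / K) * sum_k H_k^T( d_tilde_k / (H_k o + I_b + sigma^2) ),
    where H_k denotes convolution with the k-th PSF. Assumes unit gain,
    unit-sum PSFs with origin at index (0, 0), and frames holding the
    modified data d_tilde of Eq. (14); phases are held fixed throughout."""
    ratio = np.zeros_like(o, dtype=float)
    for h, d in zip(psfs, frames):
        H = np.fft.fft2(h)
        mean = np.fft.ifft2(H * np.fft.fft2(o)).real + I_b + read_var
        ratio += np.fft.ifft2(np.conj(H) * np.fft.fft2(d / mean)).real
    return np.maximum(o * ratio / len(psfs), 0.0)   # keep the estimate nonnegative
```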

4. Results with real data

Telescope data were acquired at the Air Force Maui Optical Station (AMOS) on March 9, 1995, and subsequently processed to determine maximum-likelihood object estimates. The telescope and imaging-system parameters are listed in Table 1. The object under observation was the Hubble Space Telescope in its 600 km orbit; 656 short-exposure images were acquired, each with an 8 ms exposure time. Four of these images are shown in Fig. 1. The 656 images were partitioned into 41 sets of 16, and each set was used to obtain one object restoration. The optimization method used was a straightforward extension of the expectation-maximization method described in Ref. 5. Four of these restorations are shown in Fig. 2. Each restoration required approximately 15 minutes of processing time on 9 nodes of an IBM SP2 at the Maui High Performance Computing Center. Whereas very little detail is visible in the raw data, the restored images allow one to clearly identify the satellite's solar panels along with the cover (lower left) for the space telescope's 2.4 m primary mirror. An MPEG video showing all data and restorations is available at the URL listed as Ref. 14.

5. Summary

For ground-based surveillance of astronomical and space objects, post-detection processing through multiframe blind deconvolution is a viable method for restoring image resolution. To support this claim, a maximum-likelihood estimation method has been implemented and tested on real telescope data, and preliminary results of this study are presented in this paper. Much work remains, however, and important topics for consideration include: the use of prior models for the object and phase-aberration parameters; the optimal use of measurement diversity as supplied by a wavefront sensor or phase-diversity system [15]; clever methods for generating initial parameter estimates; and the development and implementation of improved methods of optimization.


Table 1: Telescope and imaging-system parameters for the AMOS imagery of the Hubble Space Telescope.


Figure 1: Four short-exposure images of the Hubble Space Telescope as acquired by the 1.6 m AMOS telescope.



Figure 2: Four restored images of the Hubble Space Telescope.


6. Acknowledgments

This work was supported by the Air Force Maui Optical Station, the Maui High Performance Computing Center, and in part by the Air Force Office of Scientific Research through grant F49620-97-1-0053.

References and links

1. M. C. Roggemann and B. Welsh, Imaging Through Turbulence (CRC Press, 1996).

2. A. Labeyrie, "Attainment of diffraction limited resolution in large telescopes by Fourier analyzing speckle patterns in star images," Astron. Astrophys. 6, 85 (1970).

3. K. T. Knox and B. J. Thompson, "Recovery of images from atmospherically degraded short exposure images," Astrophys. J. 193, L45-L48 (1974).

4. A. W. Lohmann, G. Weigelt, and B. Wirnitzer, "Speckle masking in astronomy: triple correlation theory and applications," Appl. Opt. 22, 4028-4037 (1983).

5. T. J. Schulz, "Multi-frame blind deconvolution of astronomical images," J. Opt. Soc. Am. A 10, 1064-1073 (1993).

6. Strictly speaking, because of image inversion and magnification an imaging system is never spatially invariant; however, if one views the input signal as the inverted and magnified object, many imaging systems are then well modeled as spatially invariant.

7. D. L. Snyder, A. M. Hammoud, and R. L. White, "Image recovery from data acquired with a charge-coupled-device camera," J. Opt. Soc. Am. A 10, 1014-1023 (1993).

8. D. L. Snyder, C. W. Helstrom, A. D. Lanterman, M. Faisal, and R. L. White, "Compensation for readout noise in CCD images," J. Opt. Soc. Am. A 12, 272-283 (1995).

9. G. R. Ayers and J. C. Dainty, "Iterative blind deconvolution method and its application," Opt. Lett. 13, 547-549 (1988).

10. R. G. Lane, "Blind deconvolution of speckle images," J. Opt. Soc. Am. A 9, 1508-1514 (1992).

11. S. M. Jefferies and J. C. Christou, "Restoration of astronomical images by iterative blind deconvolution," Astrophys. J. 415, 862-874 (1993).

12. E. Thiebaut and J.-M. Conan, "Strict a priori constraints for maximum-likelihood blind deconvolution," J. Opt. Soc. Am. A 12, 485-492 (1995).

13. T. J. Cornwell, "Where have we been, where are we now, where are we going?", in The Restoration of HST Images and Spectra II, R. J. Hanisch and R. L. White, eds. (Space Telescope Science Institute, Baltimore, MD, 1993), pp. 369-372.

14. T. J. Schulz, "Movie of processed Hubble Space Telescope imagery," http://www.ee.mtu.edu/faculty/schulz/mpeg/hst.mpeg

15. R. G. Paxman, T. J. Schulz, and J. R. Fienup, "Joint estimation of object and aberrations using phase diversity," J. Opt. Soc. Am. A 9, 1072-1085 (1992).
