
Blind deconvolution in optical diffusion tomography


Abstract

We use blind deconvolution methods in optical diffusion tomography to reconstruct images of objects imbedded in or located behind turbid media from continuous-wave measurements of the scattered light transmitted through the media. In particular, we use a blind deconvolution imaging algorithm to determine both a deblurred image of the object and the depth of the object inside the turbid medium. Preliminary results indicate that blind deconvolution produces better reconstructions than can be obtained using backpropagation techniques. Moreover, it does so without requiring prior knowledge of the characteristics of the turbid medium or of what the blur-free target should look like, both important advances over backpropagation.

©2002 Optical Society of America

1. Introduction

Optical diffusion tomography is an exciting new technology that can be used to image an object imbedded in a turbid medium. Typically, the turbid medium and object are illuminated with visible or near-infrared light that is then scattered by the turbid medium. If the object is imbedded in the medium and has scattering and/or absorption properties different from those of the background medium, the scattered light in the medium is also perturbed by the object. It is this additional perturbation that is used to reconstruct an image of the object. A schematic of the system geometry for the results in this paper is shown in Fig. 1, where the illuminating light comes from the left, the object is imbedded in an infinite turbid medium, and the detector is a two-dimensional detector located in the $z=z_o$ plane. Other geometries could be considered, such as a semi-infinite medium or an infinite slab. The experimental results in this paper are generated using an essentially infinite slab, but the theory is presented for an infinite geometry to simplify the mathematics. The spatial distribution of the light in the turbid medium, $\phi(x,y,z)$, is given by

$$\phi(x,y,z) = \phi_{\mathrm{hom}}(x,y,z) + \phi_{\mathrm{pert}}(x,y,z) \tag{1}$$

where, for any point $(x,y,z)$ in the turbid medium, $\phi_{\mathrm{hom}}(x,y,z)$ is the spatial distribution of the light in the absence of the imbedded object (the homogeneous or background light) and $\phi_{\mathrm{pert}}(x,y,z)$ represents the perturbation of the light due to the presence of the object. In the case of an absorptive object, under the Born approximation [1], the light perturbed by the object can be written as [2]

$$\phi_{\mathrm{pert}}(x,y,z) = \iiint \phi_{\mathrm{hom}}(x',y',z')\,o(x',y',z')\,h(x-x',\,y-y',\,z-z')\,dx'\,dy'\,dz' \tag{2}$$

where $o(x,y,z)$ is a 3D representation of the absorption properties of the object and $h(x,y,z)$ is the Green's function solution to the photon diffusion equation for the homogeneous turbid medium and boundary conditions. Equation (2) shows that the light perturbed by the object is given by the 3D convolution of the imbedded object (weighted by the homogeneous light) with the Green's function. From Eqs. (1) and (2), the signal collected by the detector in the $z=z_o$ plane, $\phi(x,y,z_o)$, can be considered as the superposition of two components: a blurred component $\phi_{\mathrm{pert}}(x,y,z_o)$ from the light that is scattered by the object, and a background component $\phi_{\mathrm{hom}}(x,y,z_o)$ from the incident light scattered by the turbid medium. The inverse problem associated with imaging the imbedded object requires removing both the background component $\phi_{\mathrm{hom}}(x,y,z_o)$ and the blurring $h(x,y,z)$ from $\phi_{\mathrm{pert}}(x,y,z)$. Matson and Liu [3] have shown theoretically and experimentally that a backpropagation algorithm can be used to solve the image blur part of the problem when the light scattered by the object is measured in a plane external to the object (experimentally, external to the medium as well), such as is done with CCD cameras. In this case, under the assumption that the detection plane is perpendicular to the z axis and located at $z=z_o$ (see Fig. 1), Eq. (2) can be rewritten as [4]

Fig. 1. Schematic showing the geometry of the optical diffusion tomography system for the development in the paper. The imbedded object is represented by the cube.

$$\phi_{\mathrm{pert}}(x,y,z_o) = o_{2D}(x,y,z_1) * h(x,y,\Delta z) \tag{3}$$

where

$$o_{2D}(x,y,z_1) = \iiint_{\text{object support}} \phi_{\mathrm{hom}}(x',y',z')\,o(x',y',z')\,h(x-x',\,y-y',\,z_1-z')\,dx'\,dy'\,dz' \tag{4}$$

and where $*$ is the 2D convolution operator with respect to x and y, $z_1$ is the largest z value inside the object support, and the depth parameter $\Delta z = z_o - z_1$ is the distance between the part of the object closest to the detection plane and the detection plane. The difference between Eq. (2) and Eqs. (3) and (4) is that the integration in Eq. (2), which is over all space, is broken into two pieces. The first piece is the integration over the object support (Eq. (4)); the second is the propagation of the light perturbed by the object, $o_{2D}(x,y,z_1)$, to the detection plane by means of the angular spectrum solution to homogeneous source-free scalar wave propagation [5]. The angular spectrum propagation is expressed as the convolution of $o_{2D}(x,y,z_1)$ with $h(x,y,\Delta z)$, where the latter term is the two-dimensional inverse Fourier transform of the angular spectrum transfer function for the distance $\Delta z$ and is referred to as the system point spread function (PSF) throughout this paper. Equation (3) indicates that the image reconstruction problem can be viewed as a 2D deconvolution, where the desired reconstructed image is $o_{2D}(x,y,z_1)$; that is, the desired reconstruction is the projection of the 3D object structure into the $z=z_1$ plane. Obtaining the absorptive properties of the object from the deconvolved data requires solving Eq. (4) and, in general, requires multiple views [3].

Matson and Liu [3] also showed that a backpropagation algorithm can be used to determine $\Delta z$ from the data themselves instead of having to know this value a priori. Unfortunately, like many reconstruction methods, the backpropagation method requires prior knowledge of the structure of the mathematical model governing light propagation in turbid media, something that is not always available. A second disadvantage of the backpropagation algorithm is that, in order to tell which value of the depth parameter provides the best image, the user needs to have some idea of what the object should look like. Blind deconvolution is an alternative reconstruction method that does not have these restrictions. The method uses generic properties of the measured data, such as its band-limited nature, the convolution relationship between the measured data and the object perturbing the light, and the fact that light intensities are positive, to constrain the reconstruction. In this paper we discuss some preliminary results from applying blind deconvolution methods to optical diffusion tomography data.
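To make the forward model concrete, the following is a minimal numerical sketch of Eq. (3), with the PSF generated from the CW turbid-medium OTF introduced later in Eq. (9). The grid size, pixel pitch, object shape, depth, and optical coefficients are illustrative assumptions, not the experimental values, and the FFT product implements a circular convolution.

```python
import numpy as np

def turbid_otf(n, pitch, dz, alpha):
    """CW turbid-medium OTF of Eq. (9): H = exp(-dz*sqrt((2*pi*u)^2 + (2*pi*v)^2 + alpha))."""
    f = np.fft.fftfreq(n, d=pitch)                  # spatial frequencies in cycles/cm
    u, v = np.meshgrid(f, f, indexing="ij")
    return np.exp(-dz * np.sqrt((2 * np.pi * u) ** 2 + (2 * np.pi * v) ** 2 + alpha))

def forward_blur(o2d, pitch, dz, alpha):
    """Eq. (3): phi_pert = o2D * h, evaluated as a product in the Fourier domain."""
    H = turbid_otf(o2d.shape[0], pitch, dz, alpha)  # circular convolution; pad in practice
    return np.real(np.fft.ifft2(np.fft.fft2(o2d) * H))

# Illustrative absorbing disc on a 128x128 grid with 0.05 cm pixels.
n, pitch = 128, 0.05
yy, xx = np.mgrid[:n, :n]
o2d = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < 8 ** 2).astype(float)
phi_pert = forward_blur(o2d, pitch, dz=2.5, alpha=3 * 18.0 * 0.01)  # alpha ~ 3*mu_s'*mu_a
```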

2. Blind deconvolution algorithm

Blind deconvolution is a method for determining the characteristics of both the object being imaged and the system PSF from a single blurred image of the object. In its most basic form, this is an ill-posed, underdetermined problem that has a multitude of solutions, not all of which are physically meaningful. However, by applying prior knowledge about the imaging process to the problem, it is possible to constrain the set of solutions to a subset that is feasible and thus useful.

There are many different mathematical techniques for blind deconvolution, each with its own strengths and weaknesses. Here we use an iterative approach that is based on a forward model of the imaging process and a maximum-residual-likelihood [6] determination of the optimal model parameters; that is, we find the object [$o_{2D}(x,y)$] and PSF [$h(x,y)$] intensity distributions that minimize the cost function

$$\varepsilon = \sum_{(x,y)\neq(0,0)} \left[(r \otimes r)_{x,y}\right]^2 \tag{5}$$

where $r(x,y)$ are the weighted residuals, given by

$$r(x,y) = \frac{\left[d(x,y) - (o_{2D}*h)_{x,y}\right] s_d(x,y)}{\sqrt{(o_{2D}*h)_{x,y} + n(x,y)^2}} \tag{6}$$

Here $d(x,y)$ is the recorded image, $s_d(x,y)$ is a binary function that masks out bad pixels or regions of low signal-to-noise ratio in the recorded image, $n(x,y)$ is the detector read noise, and $\otimes$ is the correlation operator. The weighting term (the denominator) allows for both Poisson and Gaussian noise [7]. We have dropped the term $z_1$ from the argument list of $o_{2D}$ and the term $\Delta z$ from the argument list of $h$ for clarity. The iterative approach uses a conjugate gradient routine to find the intensity distributions that minimize $\varepsilon$.
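As an illustration of how this cost might be evaluated, the sketch below assumes an FFT-based circular convolution for $(o_{2D}*h)$ and a Poisson-plus-read-noise standard deviation in the denominator of Eq. (6); the function and variable names are ours, not the authors'.

```python
import numpy as np

def mrl_cost(d, o2d, h, s_d, read_noise):
    """Maximum-residual-likelihood cost of Eqs. (5) and (6)."""
    model = np.real(np.fft.ifft2(np.fft.fft2(o2d) * np.fft.fft2(h)))   # (o2D * h), circular
    # Eq. (6): residuals weighted by the Poisson-plus-read-noise standard deviation.
    r = (d - model) * s_d / np.sqrt(np.clip(model, 1e-12, None) + read_noise ** 2)
    # (r x r): autocorrelation of the residuals, computed via the Fourier transform.
    rr = np.real(np.fft.ifft2(np.abs(np.fft.fft2(r)) ** 2))
    rr[0, 0] = 0.0          # Eq. (5) sums over all lags except (0, 0)
    return np.sum(rr ** 2)
```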

When there is little or no information available about the object or PSF intensity distributions, it is necessary to model them in the most basic way: i.e., sample the distributions at the same resolution as the measurement image, and treat each sample (or pixel) as a free parameter. The standard constraints used by most blind deconvolution algorithms, including ours, are that: 1) the object and PSF are positive functions that extend, or at best are observable, only over a finite extent of image space, and 2) the PSF is band limited. We enforce these properties through the reparameterizations

$$o_{2D}(x,y) = \theta^2(x,y)\,s_o(x,y) \tag{7}$$

and

$$h(x,y) = (\psi^2 * \tau)_{x,y}\,s_h(x,y) \tag{8}$$

Here $s_o(x,y)$ and $s_h(x,y)$ are binary functions that designate the spatial extents of $o_{2D}(x,y)$ and $h(x,y)$, and $\theta(x,y)$ and $\psi(x,y)$ are the parameters to be optimized. The PSF reparameterization includes a correlation (band-limit) function $\tau(x,y)$ [8], a low-pass filter whose cutoff frequency is chosen to exclude all the noise-dominated Fourier components of the measured data.
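The reparameterizations of Eqs. (7) and (8) are simple to express in code. A minimal sketch, assuming the band-limit kernel tau and the binary support masks are precomputed arrays of the same size as the image:

```python
import numpy as np

def object_from_params(theta, s_o):
    """Eq. (7): positivity is enforced by squaring theta; s_o is the binary support mask."""
    return theta ** 2 * s_o

def psf_from_params(psi, tau, s_h):
    """Eq. (8): psi^2 is convolved with the band-limit kernel tau, then masked by s_h."""
    smoothed = np.real(np.fft.ifft2(np.fft.fft2(psi ** 2) * np.fft.fft2(tau)))
    return smoothed * s_h
```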

In some situations, however, we may have sufficient additional prior information to model the object and/or PSF with substantially fewer free parameters than are required by the pixel-by-pixel method. This is the case for imagery obtained through turbid media where the diffusion equation can be used for modeling the scattered light propagation [1,2]. We thus use a turbid medium optical transfer function (OTF), which is based on the diffusion equation in an infinite medium, as a means for modeling the PSF. For continuous-wave (CW) illumination, the turbid medium OTF is given by [9]

$$H(u,v) = \exp\!\left[-\Delta z\sqrt{(2\pi u)^2 + (2\pi v)^2 + \alpha}\,\right] \tag{9}$$

where α depends on the scattering and absorption characteristics of the turbid medium. The incoherent, band-limited PSF is given by the inverse Fourier transform

$$h(x,y) = N^{-2}\sum_{u,v} T(u,v)\,H(u,v)\,\exp\!\left(i2\pi[ux+vy]/N\right) \tag{10}$$

where $T(u,v)$ is the Fourier transform of $\tau(x,y)$ and $N^2$ is the number of pixels in the image. In practice, we model the PSF via the turbid medium OTF during the early part of the iteration, and then change to the pixel-by-pixel PSF model when the object reconstruction appears to be fairly well established. The reason for this is that Eq. (9) is quantitatively valid only for an infinite and homogeneous medium, and we are inherently dealing with a finite, inhomogeneous medium (due to the presence of an object). By switching to the pixel-by-pixel model during the later iterations, we help avoid artifacts in the reconstructed object due to shortcomings in the PSF model.
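A sketch of Eqs. (9) and (10); the ideal low-pass form chosen here for T(u,v) is our simplifying assumption (the paper requires only that the cutoff of τ exclude the noise-dominated Fourier components):

```python
import numpy as np

def bandlimited_psf(n, pitch, dz, alpha, f_cut):
    """Band-limited PSF from the CW turbid-medium OTF, per Eqs. (9) and (10)."""
    f = np.fft.fftfreq(n, d=pitch)                 # spatial frequencies in cycles/cm
    u, v = np.meshgrid(f, f, indexing="ij")
    H = np.exp(-dz * np.sqrt((2 * np.pi * u) ** 2 + (2 * np.pi * v) ** 2 + alpha))  # Eq. (9)
    T = (np.hypot(u, v) <= f_cut).astype(float)    # ideal low-pass stand-in for T(u,v)
    return np.real(np.fft.ifft2(T * H))            # Eq. (10); ifft2 supplies the 1/N^2 factor
```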

Equation (9) shows that the theoretical OTF has zero Fourier phase, implying that the Fourier phase of the measured data is a noisy realization of the object’s Fourier phase. We have thus found it beneficial to: 1) use the measured data as the initial estimate of the object, and 2) replace the Fourier phase of the reconstructed object, after a few iterations, with the Fourier phase of the data (over the region where the measured Fourier phase signal is above the noise level) and then continue the minimization.
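A sketch of the phase-replacement step in point 2; how snr_mask (marking the Fourier components whose measured phase is above the noise level) is constructed is left open here:

```python
import numpy as np

def replace_fourier_phase(obj_est, data, snr_mask):
    """Impose the data's Fourier phase on the object estimate where it is reliable."""
    O = np.fft.fft2(obj_est)
    D = np.fft.fft2(data)
    # Keep the estimate's Fourier amplitude; swap in the data's phase under the mask.
    phase = np.where(snr_mask, np.exp(1j * np.angle(D)), np.exp(1j * np.angle(O)))
    return np.real(np.fft.ifft2(np.abs(O) * phase))
```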

3. Experimental setup

The experimental data were collected using the transmission geometry¹ setup described in Matson and Liu [11] and shown in Fig. 2. Basically, a CW laser was used to illuminate a target that was either set against a turbid medium phantom or imbedded in one. The turbid medium phantoms were created using a resin mixed with TiO2 to provide scattering and a dye to provide absorption [11]. The absorption ($\mu_a$) and reduced scattering ($\mu_s'$) coefficients for the phantoms were 0.01 cm$^{-1}$ and 18 cm$^{-1}$, respectively. The scattered light was recorded using a SpectraSource 16-bit, 512×512 CCD camera and stored in a computer. The targets are shown in Fig. 3. The blurred images of the targets were produced by subtracting from the blurred target measurement a second measurement of the same turbid medium without a target present. We realize that it is generally not practical to measure and remove the background component of the measured signal, either in the field (surveillance) or in vivo (biomedical). As such, this approach has limited applicability. However, it does allow us to make a preliminary assessment of blind deconvolution as a potential tool for post-processing imagery obtained through a turbid medium.

Fig. 2. Schematic of the experimental setup used for the data acquisition.

Fig. 3. The different targets used to generate the experimental data. The first two targets, from left to right, were positioned behind a 2.5 cm thick phantom. The third target (far right) was imbedded in the center of a 5.5 cm thick phantom. From left to right: F-111 model aircraft, F-18 model aircraft, and Boeing 747 model aircraft. The F-111 and F-18 aircraft models, when illuminated and viewed through the turbid phantom, produce similar images (see Fig. 4). This provides a stringent test of the resolving capability of the two restoration algorithms considered in this work.

4. Reconstruction results

Using the algorithm described in Section 2, along with different input guesses for $\Delta z$ and $\alpha$, we have found that there are several $\{\Delta z, \alpha\}$ pairs that produce almost identical PSF profiles, and thus essentially the same values of the cost function. This degeneracy in the PSF is reminiscent of what Carasso [12] found when using Lévy distributions to describe the system OTF in his blind deconvolution algorithm. However, for high signal-to-noise ratio imagery (such as that presented here), we find that the degeneracy can be reduced by penalizing solutions with spatial frequencies beyond the cutoff frequency observed for the measured data. We do this by adding to the cost function $\varepsilon$ in Eq. (5) the penalty term [13]

$$\varepsilon_p = N^{-2}\sum_{u,v>\text{cutoff}} |H(u,v)|^2 + N^{-2}\sum_{u,v>\text{cutoff}} |F(u,v)|^2 \tag{11}$$
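A sketch of this penalty; we read $F(u,v)$ as the Fourier transform of the object estimate (our interpretation, by analogy with $H(u,v)$ for the PSF) and treat the cutoff as a radial spatial-frequency threshold:

```python
import numpy as np

def band_penalty(H, F, rho, f_cut):
    """Eq. (11): energy of the OTF and object spectra beyond the observed cutoff.

    H, F  : 2D Fourier transforms of the PSF and object estimates
    rho   : radial spatial-frequency grid matching H and F in shape
    f_cut : cutoff frequency observed in the measured data
    """
    out = rho > f_cut
    return (np.sum(np.abs(H[out]) ** 2) + np.sum(np.abs(F[out]) ** 2)) / H.size  # H.size = N^2
```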

Figure 4 shows the reconstructions for the targets shown in Fig. 3 using our blind deconvolution (BD) algorithm and the backpropagation (BP) algorithm of Matson and Liu [11]. We note that the OTFs for the measured data fall off extremely quickly with increasing spatial frequency. Thus, the best reconstructions that can be expected are low-frequency versions of the truth. In addition, because the targets can be considered purely absorptive, we measure only their shadow. Figure 4 shows that the morphology of the blind deconvolution restorations is better (visually) than that of the BP restorations. In particular, although both the BP and BD reconstructions allow the user to distinguish between the F-111 and F-18 targets, the BD result more clearly differentiates the tail structures of the two aircraft. The strong artifacts and distortion of the morphology in the BP reconstructions are a manifestation of both the type of spatial filter and the cutoff frequency used to regularize the inversion process [3]. By comparison, the BD algorithm is able to reduce the artifacts and distortion through the inclusion of prior knowledge about the imaging process in the reconstruction procedure.

From their work using BP techniques, Matson and Liu [3] concluded that the quality of the reconstructed image is directly related to the accuracy of the depth parameter used in the BP inverse filter. The best image quality is obtained when the measured data are inverted using the OTF calculated for the correct depth. This suggests that the OTF parameters estimated by the BD algorithm have the potential to provide information on the object location (depth) and the properties of the scattering medium. Our mean estimated value of $\Delta z$ for the targets located behind the turbid medium phantom, $\langle\Delta z\rangle = 2.3 \pm 0.2$ cm, is close to its expected value of 2.5 cm (the thickness of the phantom). In addition, the estimated value of $\Delta z$ for the (imbedded) Boeing 747 data, $\Delta z = 2.6$ cm, is in good agreement with its true location [11]. These results are encouraging, as the depth values obtained using BP are strongly dependent on the choice of regularization filter [3]. However, our estimated values of $\alpha$ are very different from the expected value ($3\mu_s'\mu_a$). This is most likely due to the fact that Eq. (9) is based on the assumption of an infinite background medium, while the data were generated using a finite slab as the background medium. The OTF model for a finite background medium requires additional terms to account for the boundaries of the medium.

Fig. 4. Columns, left to right: raw data, reconstructed object $o_{2D}(x,y;z_1)$ using BP, and using BD, respectively. Top row: F-111 model airplane; second row: F-18 model airplane; bottom row: 747 model.

5. Discussion

Our first results from applying BD techniques to imagery obtained through a turbid medium are encouraging. Blind deconvolution appears to be capable of providing a high-quality reconstruction of the object, and a good estimate of its location, without prior knowledge of either the characteristics of the turbid medium, or of the nature of the object. However, more work needs to be done to see if the reconstructed PSF can be a source of information about the properties of the turbid medium. In particular, the model for the turbid medium OTF needs to be improved.

As noted in the section on the experimental setup, before attempting any image reconstruction, we preprocessed the measured data to remove the component of the detected signal that was due to the light scattered by the turbid medium so as to leave just the light scattered from the target. We realize that it is generally not practical to do this, either in the field (surveillance), or in vivo (biomedical). In principle, simultaneous estimation of the homogeneous wave should be possible as it only requires the estimation of one extra parameter. Thus, we are currently investigating how to adapt our BD algorithm to estimate the background signal as well.

Acknowledgements

Stuart M. Jefferies, Kathy J. Schulze, and E. Keith Hege were supported under the Air Force Research Laboratory's Directed Energy contract F29601-96-D-0128. This work was funded in part by the National Reconnaissance Office under its Director's Innovation Initiative program and by the Air Force Office of Scientific Research.

Footnotes

1. Although a transmission geometry was used, a reflection geometry could just as easily have been employed [10].

References and Links

1. E. P. Zege, A. P. Ivanov, and I. L. Katsev, Image Transfer Through a Scattering Medium (Springer-Verlag, Berlin-Heidelberg, 1991), Chap. 6.

2. C. L. Matson, "A diffraction tomographic model of the forward problem using diffuse photon density waves," Opt. Express 1, 6–11 (1997), http://www.opticsexpress.org/oearchive/source/1884.htm.

3. C. L. Matson and H. Liu, "Backpropagation in turbid media," J. Opt. Soc. Am. A 16, 1254–1265 (1999).

4. C. L. Matson, "Deconvolution-based spatial resolution in optical diffusion tomography," Appl. Opt. 40, 5791–5801 (2001).

5. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, Boston, 1996), Chap. 3.

6. R. K. Pina and R. C. Puetter, "Incorporation of spatial information in Bayesian image reconstruction: the maximum residual likelihood criterion," Publ. Astron. Soc. Pac. 104, 1096–1103 (1992).

7. M. Lloyd-Hart, S. M. Jefferies, E. K. Hege, and J. R. P. Angel, "Wave front sensing with time-of-flight phase diversity," Opt. Lett. 26, 402–404 (2001).

8. D. G. Sheppard, B. R. Hunt, and M. W. Marcellin, "Iterative multi-frame super-resolution algorithms for atmospheric-turbulence-degraded imagery," J. Opt. Soc. Am. A 15, 978–992 (1998).

9. C. L. Matson, N. Clark, L. McMackin, and J. S. Fender, "Three-dimensional tumor localization in thick tissue with the use of diffuse photon-density waves," Appl. Opt. 36, 214–220 (1997).

10. X. Cheng and D. A. Boas, "Diffuse optical reflection tomography with continuous-wave illumination," Opt. Express 3, 118–123 (1998), http://www.opticsexpress.org/oearchive/source/5663.htm.

11. C. L. Matson and H. Liu, "Resolved object imaging and localization with the use of a backpropagation algorithm," Opt. Express 6, 168–174 (2000), http://www.opticsexpress.org/oearchive/source/19570.htm.

12. A. S. Carasso, "Direct blind deconvolution," SIAM J. Appl. Math. 61, 1980–2007 (2001).

13. S. M. Jefferies and J. C. Christou, "Restoration of astronomical images by iterative blind deconvolution," Astrophys. J. 415, 862–864 (1993).
