
A field theoretical restoration method for images degraded by non-uniform light attenuation: an application for light microscopy

Open Access

Abstract

Microscopy has become a de facto tool for biology. However, it suffers from a fundamental problem of poor contrast with increasing depth: the illuminating light is attenuated and scattered and hence cannot penetrate through thick samples. The resulting decay of light intensity due to attenuation and scattering varies exponentially across the image. Classical space-invariant deconvolution approaches alone are not suitable for restoring unevenly illuminated microscopy images. In this paper, we present a novel physics-based field theoretical approach to solve the contrast degradation problem of light microscopy images. We have confirmed the effectiveness of our technique through simulations as well as through experiments on real data.

©2009 Optical Society of America

1. Introduction

Microscopy [1] is an important optical imaging technique for biology. While there are many microscopy techniques, such as two-photon and single plane illumination microscopy, confocal microscopy has become a de facto tool for bioimaging. In confocal microscopy, out-of-focus light is eliminated through the use of a pinhole. Incident illuminating light passes through a pinhole and is focused into a small region of the sample. Thus only scattered light that travels along the same path as the incident illuminating light passes through the pinhole and is focused again at the light detector. The images acquired through a confocal microscope are sharper than those produced by conventional wide-field microscopes. However, the fundamental problem in confocal microscopy is light penetration. Incident light is attenuated and scattered and hence cannot penetrate through thick samples. Generally, images acquired from regions deep in the sample appear exponentially darker than those from regions near the surface. Difficulties in light penetration are, indeed, not restricted to confocal microscopy; other light microscopy techniques, such as single plane illumination microscopy and wide-field microscopy, suffer the same fate. The classical space-invariant deconvolution approaches [2], [3], [4] are not suitable to cope with this problem in microscopy imaging.

A seemingly unrelated area of research is the restoration of images degraded by atmospheric aerosols. This problem has been extensively studied [5], [6], [7], [8], [9], [10], [11], [12], [13] owing to important applications in outdoor vision such as surveillance, navigation, tracking and remote sensing [5], [10]. Similar restoration techniques have also found new applications in underwater vision [8], [9], specifically for surveillance of underwater cables and pipelines. Various restoration algorithms have been proposed based on physical models of light attenuation and light scattering (airlight) through a uniform medium. One of the earlier works [5] on image restoration requires accurate information about scene depths. Subsequent works circumvented the need for scene depths, but require multiple images to recover the information needed [8], [9], [10], [14]. Narasimhan and Nayar [10], [11], [12], [13] developed an interactive algorithm that extracts all the required information from only one degraded image; this method needs manual selection of the airlight color and a "good color region". A fundamental issue with these restoration techniques is the amplification of noise, which is elegantly handled by the regularization term in the variational approach proposed by Kaftory et al. [14].

Physics-based restoration methods such as those described above have many advantages over model-based methods of contrast enhancement (e.g. histogram equalization). Model-based methods [15], [16], [17] generally assume that the image properties are constant over the entire image, an assumption that is violated in weather-degraded images. Moreover, physical models are built upon the laws of physics, which we take as ground truth.

Physics-based restoration techniques have found many applications, and new applications continue to emerge. One remarkable aspect of such restoration techniques is their validity across several orders of magnitude of physical length scale: in aerial surveillance the physical length scale is of the order of ~10 km, and in underwater surveillance it is of the order of ~10 m.

In this paper, we link two unrelated areas of research: restoration of images from light microscopy and restoration of weather-degraded images. We propose a new application of physics-based restoration to light microscopy, which has a physical length scale of ~100 µm, thereby extending the length scales of physics-based restoration to eight orders of magnitude. We further improve the physics-based model with a field theoretical formulation.

Our proposed formulation is radically different from all existing physics-based restoration techniques in that we do not assume a constant extinction coefficient in the attenuating medium. Moreover, in our formalism, we make no distinction between the image object and the attenuating medium. We derive a general set of equations to handle any geometrical setup in the image acquisition; to use our method, one only needs to specify the details of the light source and the detection equipment, such as a camera. We propose a different formulation because the existing physics-based methods [5], [6], [7], [8], [9], [10], [11], [12], [13] cannot be applied to three-dimensional microscopy images. Firstly, existing methods "remove" the attenuating medium to retrieve a two-dimensional scene; in this paper, by contrast, the attenuating medium also contains the image information, and we want to restore the true signals of the medium instead of removing them. Secondly, existing methods assume a uniform attenuating medium, an assumption that is strongly violated in microscopy images.

2. Field theoretical formulation

Our approach is formulated on strong theoretical grounds: it is based on fundamental laws of physics, such as the conservation laws represented by the continuity equation. We use a field theoretical approach that has been well proven over centuries in physics.

Assume a region of interest Ω ⊂ ℝ³ that contains the whole imaging system, including the image object, the attenuating medium, the light sources and the detectors (e.g. camera). If the light sources originate from infinity, we consider them to originate from the boundary of the region of interest Ω; however, this is not a requirement of our formalism. Let r_s ∈ Ω_s be the set of points in the light sources and r_p be the locations of the voxels in the detector.

2.1. Photon density and light intensity

We first consider the mathematical model of the photon (light) density and light intensity fields. Let f(r) be the number of photons per unit volume and n(r) be the light intensity at a point r ∈ Ω. The total number of photons in an infinitesimal volume dV = dl dA (Fig. 1) is f(r) dl dA. In the time interval dt, the number of photons passing through the area dA is,

$$ n(\mathbf{r})\,dA = f(\mathbf{r})\,\frac{dl}{dt}\,dA = f(\mathbf{r})\,c\,dA \;\Rightarrow\; n(\mathbf{r}) = f(\mathbf{r})\,c \tag{1} $$

where c is the speed of light in the medium and n(r) is the number of photons passing through a unit area per unit time.

Fig. 1. Attenuation of light incident on a specimen. dl and dA indicate an infinitesimal length and area of the specimen, respectively.

2.2. Attenuation and the absorption coefficient

The degree of attenuation of light through a medium depends on the opacity of the medium as well as the distance traveled through the medium. Referring to Fig. 1, suppose light is incident along the x-axis at a point r. The differential change of intensity through the medium with an infinitesimal thickness dl is given by,

$$ \frac{dn(\mathbf{r})}{dl} = -\,n(\mathbf{r})\,\rho_{ab}(\mathbf{r}) \tag{2} $$

where ρ_ab(r) is the absorption coefficient of light at the point r. In several papers [5], [8], [14] the phrase "extinction coefficient" is used in place of "absorption coefficient". In general, ρ_ab is a function of the wavelength of light, ρ_ab = ρ_ab,λ. For clarity in our derivation, we shall leave out the subscript λ; generalization of our equations to handle multiple λ is tedious but trivial. For now, we shall assume a monochromatic formalism.

To calculate the total attenuation effect from a light source at r_s to a point r, integrate Eq. (2) from r_s to r,

$$ n(\mathbf{r}) = n(\mathbf{r}_s)\,\exp\!\Big(-\int_{\gamma(\mathbf{r}_s:\mathbf{r})} \rho_{ab}(\mathbf{r}')\,dl\Big) \tag{3} $$

γ(r_s : r) is a light ray joining r_s and r. Summing again over all rays from the light sources to r,

$$ n_A(\mathbf{r}) = \sum_{\mathbf{r}_s\in\Omega_s,\;\gamma(\mathbf{r}_s:\mathbf{r})} n(\mathbf{r}_s)\,\exp\!\Big(-\int_{\gamma(\mathbf{r}_s:\mathbf{r})} \rho_{ab}(\mathbf{r}')\,dl\Big) \tag{4} $$

where the subscript A denotes the attenuation component; n_A(r) is the total light intensity at r due to all light sources (r_s ∈ Ω_s). Eq. (4) states that light intensity decays exponentially in general, but the exponent may vary from point to point in the image.
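As a concrete illustration of Eqs. (3)-(4), the following Python sketch (our own illustrative code, not part of the original implementation; the sample spacing dl and the example absorption profile are assumed values) evaluates the attenuated intensity along one discretized ray through a medium with a spatially varying absorption coefficient.

```python
import numpy as np

def attenuate_along_ray(n_source, rho_ab_samples, dl):
    """Discrete form of Eq. (3): n(r) = n(r_s) * exp(-line integral of rho_ab along the ray)."""
    optical_depth = np.sum(rho_ab_samples) * dl   # approximates the line integral over gamma(r_s : r)
    return n_source * np.exp(-optical_depth)

# Example: a ray crossing a medium whose opacity increases with depth (assumed profile).
dl = 0.5                                              # path step in micrometres (assumed)
rho_ab = 0.01 * (1.0 + np.linspace(0.0, 2.0, 200))    # non-uniform absorption coefficient (assumed)
print(attenuate_along_ray(1.0, rho_ab, dl))           # attenuated intensity relative to the source
```

Summing such contributions over all rays γ(r_s : r) from the source set Ω_s gives n_A(r) of Eq. (4).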

2.3. Photon absorption and emission rates

To proceed further, we need to relate the rates of photon absorption and emission. To do this, we use the continuity equation [18] to relate the photon absorption rate to the absorption coefficient. We then introduce the emission coefficient, which is related to the absorption coefficient through the quantum yield of the fluorophores [19]. Let v̂ be the direction of the incident light. Using the continuity equation [18],

$$ \frac{\partial f(\mathbf{r})}{\partial t} + \nabla\cdot\big(f(\mathbf{r})\,\hat{v}\,c\big) = 0 \tag{5} $$

Using Eq. (1),

$$ \frac{\partial f(\mathbf{r})}{\partial t} = -\frac{dn(\mathbf{r})}{dl} \tag{6} $$

Relating the continuity equation to Eq. (2),

$$ \frac{\partial f(\mathbf{r})}{\partial t} = n(\mathbf{r})\,\rho_{ab}(\mathbf{r}) \tag{7} $$
Fig. 2. The total number of photons emitted by an infinitesimal volume dr′ is given by n(r′)ρ_em(r′)dr′. A fraction of this light reaches the point r; the number of photons per unit time is given by Eq. (8).

If the number of absorbed photons is equal to the number of emitted photons, then by Eq. (7) the rate of light emitted by an infinitesimal volume dr′ at r′ is given by n(r′)ρ_ab(r′)dr′. However, some light energy may be dissipated through heat, etc. We therefore use another symbol, ρ_em, to represent the emission coefficient, with the emission rate given by n(r′)ρ_em(r′)dr′ ≤ n(r′)ρ_ab(r′)dr′. The emission coefficient ρ_em is related to the absorption coefficient ρ_ab by the quantum yield q, with ρ_ab = q⁻¹ρ_em. In this paper, we use fluorescein and Hoechst 33342, for which q = 0.92 and 0.83, respectively.

2.4. Scattering and the emission coefficient

The medium scatters light in all directions; the scattered light can be absorbed and scattered again by particles in other parts of the medium.

Referring to Fig. 2, the infinitesimal incident light intensity received at r due to scattering of the light from an infinitesimal volume d r′ is given by,

$$ dn_S(\mathbf{r}) = \left(\frac{n(\mathbf{r}')\,\rho_{em}(\mathbf{r}')\,d\mathbf{r}'}{4\pi\,\lVert\mathbf{r}-\mathbf{r}'\rVert^{2}}\right) \exp\!\Big(-\int_{\gamma(\mathbf{r}':\mathbf{r})} \rho_{ab}(\mathbf{r}'')\,dl\Big) \tag{8} $$

where the subscript S denotes the scattering component and γ(r′ : r) is a light ray from r′ to r. The denominator of the first term is a geometric factor reflecting the geometry of 3D space; the numerator is the number of photons emitted per unit time by the volume element dr′. The second term represents the attenuation of light from r′ to r. Integrating over all r′ ∈ Ω, r′ ≠ r, the total scattered light received at r is

$$ n_S(\mathbf{r}) = \int_{\Omega,\;\mathbf{r}'\neq\mathbf{r}} \frac{n(\mathbf{r}')\,\rho_{em}(\mathbf{r}')\, \exp\!\big(-\int_{\gamma(\mathbf{r}':\mathbf{r})} \rho_{ab}(\mathbf{r}'')\,dl\big)}{4\pi\,\lVert\mathbf{r}-\mathbf{r}'\rVert^{2}}\; d\mathbf{r}' \tag{9} $$

2.5. Image formation

We can write the total light intensity at a point r∊Ω as a sum of attenuation and scattering components.

$$ \begin{aligned} n(\mathbf{r}) &= n_A(\mathbf{r}) + n_S(\mathbf{r}) \\ &= \sum_{\mathbf{r}_s\in\Omega_s,\;\gamma(\mathbf{r}_s:\mathbf{r})} n(\mathbf{r}_s)\, e^{-\int_{\gamma(\mathbf{r}_s:\mathbf{r})}\rho_{ab}(\mathbf{r}')\,dl} + \int_{\mathbf{r}'\neq\mathbf{r}} \frac{n(\mathbf{r}')\,\rho_{em}(\mathbf{r}')\, e^{-\int_{\gamma(\mathbf{r}':\mathbf{r})}\rho_{ab}(\mathbf{r}'')\,dl}}{4\pi\,\lVert\mathbf{r}-\mathbf{r}'\rVert^{2}}\, d\mathbf{r}' \end{aligned} \tag{10} $$

Finally, we relate our physical model to the observed image. The total amount of light emitted per unit time by an infinitesimal volume dr is ρ_em(r)n(r)dr. Suppose the detector collects part of this light to form the voxel r_p of the three-dimensional observed image u_0 (for example, in confocal microscopy); the intensity at r_p is then given by,

$$ u_0(\mathbf{r}_p) = \int_{\mathbf{r}\in\Omega,\;\gamma(\mathbf{r}:\mathbf{r}_p)} \alpha_\gamma\,\rho_{em}(\mathbf{r})\,n(\mathbf{r})\, e^{-\int_{\gamma}\rho_{ab}(\mathbf{r}')\,dl}\, d\mathbf{r} \tag{11} $$

where the integral is over all light rays from all points r ∈ Ω to r_p. The attenuation term appears again in this equation because light is attenuated when traveling from the medium location r to the pixel location r_p. The factor α_γ = α_γ(r, r_p) depends on the lensing system of the detector; the subscript γ indicates that α_γ depends on the path of the light.

The objective of imaging is to find out what objects are present in the region of interest Ω; in other words, we want to measure the optical properties of the materials in Ω. These properties are given by ρ_ab(r) and ρ_em(r). Thus we want to estimate ρ_ab(r) and ρ_em(r) given the observed image u_0(r_p). To this end, we want to solve Eq. (11) for ρ_ab(r) and ρ_em(r). At first sight, this equation seems non-invertible and its solution may not be unique. In later sections, we illustrate examples in which we solve Eq. (11) for confocal microscopy and for a side scattering configuration. At this point, we would like to highlight several observations:

1. Geometry: All geometrical information is embedded in the paths γ(r : r′), which represent light rays from the point r to r′.

2. Light source: The light source information is given by the summation over Ω_s and γ(r_s : r) in Eq. (4).

3. Airlight: The airlight effect [10] is included in our scattering term (Eq. (10)).

4. Non-unique solution: The solution of Eq. (11) is non-unique in general. Consider a case where Ω contains an opaque box and an image is taken of this box. Since the box is opaque, the values of ρ_ab and ρ_em within the box are undefined.

2.6. Discretization

We derive a matrix equation by discretizing the total light intensity (Eq. (10)) at each point r. We perform the discretization such that each finite element corresponds to one voxel of the image data. Let b_i = n_A(r_i) and b = (b_1, ⋯, b_N), where N is the total number of voxels. Next define u_i/ρ_em(r_i) = n(r_i) and u = (u_1, ⋯, u_N). Finally, define a matrix G with components G_ij = G(r_i, r_j),

$$ G(\mathbf{r}_i,\mathbf{r}_j) = \begin{cases} -\dfrac{\exp\!\big(-\sum_{\mathbf{r}_k\in\gamma(\mathbf{r}_j:\mathbf{r}_i)}\rho_{ab}(\mathbf{r}_k)\,\Delta r_k\big)}{4\pi\,\lVert\mathbf{r}_i-\mathbf{r}_j\rVert^{2}}\,\Delta V & i\neq j \\[2ex] \dfrac{1}{\rho_{em}(\mathbf{r}_i)} & i=j \end{cases} \tag{12} $$

Then Eq. (10) reduces to,

$$ \frac{u_i}{\rho_{em}(\mathbf{r}_i)} = b_i + \sum_{j\neq i} \frac{\exp\!\big(-\sum_{\mathbf{r}_k\in\gamma(\mathbf{r}_j:\mathbf{r}_i)}\rho_{ab}(\mathbf{r}_k)\,\Delta r_k\big)}{4\pi\,\lVert\mathbf{r}_i-\mathbf{r}_j\rVert^{2}}\,\Delta V\; u_j, \qquad\text{i.e.}\qquad G\cdot\mathbf{u} = \mathbf{b} \tag{13} $$

It will be demonstrated in later sections that this matrix equation is convenient for subsequent calculations.
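The following Python sketch shows one way the discretization of Eqs. (12)-(13) could be assembled for a small volume; it is a minimal illustration under simplifying assumptions (dense matrix, straight-line paths between voxel centres, and a two-point approximation of the line integral of ρ_ab), not the authors' implementation.

```python
import numpy as np

def build_G(points, rho_ab, rho_em, dV):
    """Assemble the matrix G of Eq. (12) for a small set of voxel centres.

    points -- (N, 3) array of voxel coordinates
    rho_ab -- (N,) absorption coefficients
    rho_em -- (N,) emission coefficients
    dV     -- voxel volume
    The line integral of rho_ab along gamma(r_j : r_i) is approximated here by the
    average of the endpoint values times the distance (a crude assumption).
    """
    N = len(points)
    G = np.zeros((N, N))
    for i in range(N):
        G[i, i] = 1.0 / rho_em[i]                          # diagonal term of Eq. (12)
        for j in range(N):
            if i == j:
                continue
            d = np.linalg.norm(points[i] - points[j])       # |r_i - r_j|
            optical_depth = 0.5 * (rho_ab[i] + rho_ab[j]) * d
            G[i, j] = -np.exp(-optical_depth) / (4.0 * np.pi * d**2) * dV
    return G

# With b_i = n_A(r_i) known from Eq. (4), u (where u_i = rho_em(r_i) n(r_i)) solves G u = b:
# u = np.linalg.solve(build_G(points, rho_ab, rho_em, dV), b)
```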

3. Confocal microscopy

Confocal microscopy [1] is one of the most important tools in bioimaging. Images acquired through a confocal microscope are much sharper than images from a conventional wide-field microscope. However, degradation by light attenuation is acute in confocal microscopy. This problem is commonly handled by either increasing the laser power or increasing the sensitivity of the photomultiplier tube [20]; both techniques are inadequate and have drawbacks: increasing the laser power accelerates photo-bleaching, and increasing the sensitivity of the photomultiplier tube adds noise. Umesh Adiga and Chaudhury [21] discussed the use of a simple thresholding method to separate background from foreground in order to correct for light attenuation along the depth of the image stack. Their technique assumes that the image voxels are isotropic (which is not true for confocal microscopy) and uses XOR contouring and morphing to virtually insert image slices into the stack to improve axial resolution.

Fig. 3 shows the geometry of the confocal microscopy setup. Incident light passes through the focusing lens and is focused at the point r_f. The summation over all light rays in Eq. (4) runs over all rays from the focusing lens, γ(r_s : r_f); we can take the area of the lens to be the set of points of the incident light sources Ω_s. Detected light travels back via the same paths through the focusing lens. Hence the summation over all light rays in Eq. (11) runs over the same paths γ(r_f : r_s) but in the opposite direction. Lastly, for clarity, define the symbol ρ(r) = ρ_ab(r) = q⁻¹ρ_em(r).

We can use an approximation to simplify Eqs. (4) and (11) by calculating the mean of ρ(r) over the disk area of the light cone for each z-stack (shaded gray in Fig. 4),

$$ \langle\rho\rangle(z) = \frac{\int_{\text{disk}} \rho(x,y,z)\,dx\,dy}{\int_{\text{disk}} dx\,dy} \tag{14} $$

Then Eq. (4) becomes

$$ \begin{aligned} n_A(\mathbf{r}) &= n_0 \sum_{\mathbf{r}_s\in\Omega_s;\;\gamma(\mathbf{r}_s:\mathbf{r})} \exp\!\Big(-\int_{\gamma(\mathbf{r}_s:\mathbf{r})}\rho(\mathbf{r}')\,dl\Big) \\ &\approx n_0\, \exp\!\Big(-\int_{z=0}^{\mathbf{r}_f}\langle\rho\rangle(z)\,dz\Big) \sum_{\mathbf{r}_s\in\Omega_s;\;\gamma(\mathbf{r}_s:\mathbf{r})} 1 \\ &= \beta\, n_0\, \exp\!\Big(-\int_{z=0}^{\mathbf{r}_f}\langle\rho\rangle(z)\,dz\Big) \end{aligned} \tag{15} $$

Here we assume a constant incident light intensity n_0 at the focusing lens. β = Σ_{r_s∈Ω_s, γ(r_s:r)} 1 is a complicated function of the light paths, but it is a constant number as long as the focal length of the focusing lens does not change. For confocal microscopy, only light emitted at r_f is collected by the photomultiplier (see Fig. 3). Hence, replacing ρ in Eq. (11) by ⟨ρ⟩ as defined in Eq. (14), we have,

$$ \begin{aligned} u_0(\mathbf{r}_p) &= \int_{\mathbf{r}\in\Omega,\;\gamma(\mathbf{r}:\mathbf{r}_p)} \alpha_\gamma\, q^{-1}\rho(\mathbf{r})\,n(\mathbf{r})\, e^{-\int_\gamma \rho(\mathbf{r}')\,dl}\, d\mathbf{r} \\ &\approx n(\mathbf{r}_f)\,\rho(\mathbf{r}_f)\, e^{-\int_{z=0}^{\mathbf{r}_f}\langle\rho\rangle\,dz} \sum_{\gamma(\mathbf{r}_f,\mathbf{r}_p)} q^{-1}\alpha_\gamma\,\Delta V_f \\ &= \alpha'\, u(\mathbf{r}_f)\, e^{-\int_{z=0}^{\mathbf{r}_f}\langle\rho\rangle(z)\,dz} \end{aligned} \tag{16} $$

ΔV_f is the confocal volume and α′ = Σ_{γ(r_f, r_p)} q⁻¹ α_γ ΔV_f is a complicated but constant number. We have also used u(r) = ρ(r)n(r).

An analytic solution can be obtained if we neglect the scattering terms (n_S ≪ n_A, so n(r) ≅ n_A(r)). Substituting Eqs. (15) and (16) into Eq. (13),

$$ \rho_A(\mathbf{r}_i) = \frac{u_{0i}}{\alpha'\beta n_0}\, \exp\!\Big(2\int_{z=0}^{z_i}\langle\rho\rangle(z)\,dz\Big) \tag{17} $$

ρ_A is the light emission coefficient we are solving for when the scattering terms are neglected. We now describe how ρ_A can be calculated from the observed image slice by slice through the z-stack, starting from the first slice.

1. For the first slice, z = 0, the integral in Eq. (17) evaluates to zero, so that ρ_A is proportional to the intensity of the observed image, i.e. ρ_A(r_i, z=0) = u_{0i}/(α′βn_0).

2. ρA for the second slice depends on ρA for the first slice,

$$ \rho_A(\mathbf{r}_i, z{=}1) = \frac{u_{0i}}{\alpha'\beta n_0}\, \exp\!\big[2\,\langle\rho_A\rangle(z{=}0)\,\Delta z\big] \tag{18} $$

Δz is the thickness of the discretized z-stack; ⟨ρ_A⟩(z=0) is an average value calculated from the values of ρ_A in the first slice.

3. ρA for the k-th slice is given by,

$$ \rho_A(\mathbf{r}_i, z{=}k) = \frac{u_{0i}}{\alpha'\beta n_0}\, \exp\!\Big(2\sum_{z=0}^{k-1}\langle\rho_A\rangle(z)\,\Delta z\Big) \tag{19} $$

Since we calculate the values of ρ_A slice by slice starting from the first slice, ρ_A has already been computed for all slices up to the (k−1)-th, and hence the exponent 2Σ_{z=0}^{k-1}⟨ρ_A⟩(z)Δz can be evaluated easily.

Thus we calculate ρ_A from the first to the last slice in sequence to obtain the whole restored image; a sketch of this slice-by-slice computation is given below.
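A minimal Python sketch of the slice-by-slice procedure (Eqs. (17)-(19)) follows. It assumes the calibrated constant α′βn_0 is known and, for simplicity, uses the mean over the whole slice in place of the mean over the light-cone disk; it is an illustration, not the authors' code.

```python
import numpy as np

def restore_attenuation_only(u0, alpha_beta_n0, dz):
    """Slice-by-slice evaluation of Eq. (19) (attenuation only, scattering neglected).

    u0            -- observed 3D stack indexed as u0[z, y, x]
    alpha_beta_n0 -- the calibrated constant alpha' * beta * n0
    dz            -- thickness of a z-slice
    Returns rho_A with the same shape as u0.
    """
    rho_A = np.zeros_like(u0, dtype=float)
    exponent = 0.0                                   # running sum of 2 <rho_A>(z) dz
    for k in range(u0.shape[0]):
        rho_A[k] = (u0[k] / alpha_beta_n0) * np.exp(exponent)   # Eq. (19) for slice k
        exponent += 2.0 * rho_A[k].mean() * dz       # slice mean stands in for the disk average <rho_A>(z=k)
    return rho_A
```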

Inclusion of the scattering term precludes an analytic solution; the problem can instead be solved numerically via,

$$ J(\rho) = \big\lvert\, \mathbf{b} - G\cdot\mathbf{u} \,\big\rvert, \qquad \rho^{*} = \operatorname*{arg\,min}_{\rho}\, J(\rho) \tag{20} $$

Since ρ > 0, we use the absolute value to avoid invoking the Karush-Kuhn-Tucker conditions. In the above equation, b_i = b(r_i) = n_A(r_i) (Eq. (15)) and u_i = u(r_i) = u_0(r_i) exp(∫_{z=0}^{r_f}⟨ρ⟩dz)/α′ (Eq. (16)). To reduce the computation time, we use sampling when calculating the mean of ρ(r) over the disk area of the light cone for each z-stack.

Fig. 3. Geometry for confocal microscopy. Light is focused into a point and scattered light is collected along the same path as the incident light. Hence, in a confocal geometry, the attenuation term sums over all light rays to and from the focusing lens to the focal point r_f.

Fig. 4. In the image acquisition process, the sample is scanned at discrete locations to generate z-stacks. The mean value of ρ can be calculated over the light cone area for each z-stack, as shown by the shaded area in this figure.

Eq. (20) can be solved numerically using the gradient descent method, because u_i = u_0(r_i) exp(∫_{z=0}^{r_f}⟨ρ⟩dz)/α′ can be evaluated numerically for every voxel. In practice, ρ_A can be used as the initial guess of ρ in the gradient descent. Through our numerical simulations, we found that ρ_A is a good approximation to ρ; using ρ_A as the initial condition reduces the local minimum problem in the gradient descent method.
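A minimal sketch of such a gradient descent is shown below, assuming J(ρ) is available as a callable that assembles b, G and u from a given ρ (e.g. via the discretization sketch of Sec. 2.6); the finite-difference gradient, step size and positivity handling are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def gradient_descent(J, rho0, step=1e-3, eps=1e-6, n_iter=200):
    """Minimise J(rho) = |b - G(rho) u(rho)| with plain gradient descent.

    J    -- callable returning the scalar objective of Eq. (20) for a given rho
    rho0 -- initial guess; we use rho_A from Eq. (17)
    The forward-difference gradient, step size and iteration count are illustrative
    choices, not values taken from the paper.
    """
    rho = rho0.astype(float).copy()
    for _ in range(n_iter):
        J0 = J(rho)
        grad = np.zeros_like(rho)
        for i in range(rho.size):                    # finite-difference gradient (slow but simple)
            rho_pert = rho.copy()
            rho_pert.flat[i] += eps
            grad.flat[i] = (J(rho_pert) - J0) / eps
        rho = np.abs(rho - step * grad)              # one simple way to keep rho > 0
    return rho
```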

4. Side scattering geometry

Our formalism can be most easily illustrated using the side scattering geometry, which is, in reality, the geometry of Single Plane Illumination Microscopy (SPIM) [22], [23], [24], [25], [26]. Fig. 5 shows the setup for side scattering. In this geometry, the incident light rays are constant and parallel. Denoting the constant incident intensity by n_0, Eq. (4) at a point r = (x, y) reduces to,

$$ n_A(\mathbf{r}) = n_0\, \exp\!\Big(-\int_{\gamma(\mathbf{r}_s:\mathbf{r})}\rho(\mathbf{r}')\,dl\Big) \tag{21} $$

where the integration is over the horizontal x-direction as shown in Fig. 5. As in the case of confocal microscopy, we use ρ = ρ_ab = q⁻¹ρ_em.

Fig. 5. Geometrical arrangement for side scattering: the light source originates from the side and illuminates one plane of the sample. Scattered light is collected in an orthogonal direction by a CCD camera.

Assume the scattered light travels directly to the CCD camera without attenuation. As in most camera setups, let there be a one-to-one correspondence between a pixel r_p in the CCD and a sample location r. Then Eq. (11) may be written as,

$$ u_0(\mathbf{r}) = \alpha'\,\rho(\mathbf{r})\,n(\mathbf{r}) = \alpha'\,u(\mathbf{r}) \tag{22} $$

where α′ represents the integrated effects of the quantum yield and the camera, including the summation over all rays. An analytic solution (Eq. (23)) can be obtained if we further ignore the scattering term,

$$ \rho_A(\mathbf{r}_i) = \frac{1}{\alpha' n_0}\, u_0(\mathbf{r}_i)\, \exp\!\Big(\sum_{\mathbf{r}_k\in\gamma(\mathbf{r}_s,\mathbf{r}_i)}\rho(\mathbf{r}_k)\,\Delta r_k\Big) \tag{23} $$

where the subscript A indicates that the approximate solution is obtained using the attenuation term only. With this approximation, Eq. (23) can be solved easily numerically; the results are given in the next section.

In our numerical experiments, we observed that the images restored using Eq. (23) and Eq. (20) differ by only about 1%, which suggests that our approximation n_S ≪ n_A is valid. A sketch of the marching solution of Eq. (23) is given below.
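The following Python sketch illustrates the marching solution of Eq. (23) for the side scattering geometry; the array layout and the constant α′n_0 being supplied as a single number are assumptions made for illustration, not the authors' code.

```python
import numpy as np

def restore_side_scattering(u0, alpha_n0, dx):
    """Column-by-column evaluation of Eq. (23), light entering from the left (x = 0).

    u0       -- observed 2D image indexed as u0[y, x]
    alpha_n0 -- the constant alpha' * n0 (its inverse is the tuning parameter of Sec. 5.3)
    dx       -- pixel spacing along the illumination direction
    Returns the restored coefficient map rho_A.
    """
    ny, nx = u0.shape
    rho_A = np.zeros_like(u0, dtype=float)
    optical_depth = np.zeros(ny)                     # running sum of rho_A * dx along each row
    for x in range(nx):                              # march away from the light source
        rho_A[:, x] = (u0[:, x] / alpha_n0) * np.exp(optical_depth)   # Eq. (23)
        optical_depth += rho_A[:, x] * dx
    return rho_A
```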

5. Results

We performed numerical calculations and compared our results to ground truths. Comparison with other physics-based restoration methods [5], [6], [7], [8], [9], [10], [11], [12], [13] is not possible because these methods cannot be applied to microscopy images: firstly, they are not designed to restore three-dimensional images; secondly, they assume a constant attenuating medium, an assumption that is strongly violated in microscopy images.

5.1. Validation and calibration

We validate our method on specially prepared samples for which we know the ground truth by experimental design. Image restoration is then performed and compared to the ground truth. In the experiment, we mix fluorescein and liquid gel on an orbital shaker until the gel hardens; in this way, we are certain that the sample is uniform throughout the 3D volume. However, the intensity profile of the image will not be uniform: due to attenuation, it decreases with depth. As shown in Fig. 6, the restored image has a uniform intensity profile (maximum intensity projection). At the same time, we calibrate our parameter with respect to the microscope; α′βn_0 = 181.27 gives the best result. The calibrated parameter value can be used for images taken with different laser intensities, as shown in the bottom row of Fig. 6 for 1.5n_0 and 2.0n_0. Let the value of the parameter be C = α′βn; the relationship between two parameter values (C_1, C_2) obtained with different laser intensities (n_1, n_2) is simply C_1/C_2 = n_1/n_2. The figures on the right of Fig. 6 show the 2D projections of the original and restored images for 1.5n_0 and 2.0n_0. We can see that the restored images are more uniformly illuminated.
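One simple way to carry out such a calibration is to scan candidate values of C = α′βn_0 and keep the one whose restored depth profile is flattest. The sketch below, which reuses the hypothetical restore_attenuation_only() helper from the Sec. 3 sketch and uses a per-slice maximum with the coefficient of variation as the flatness measure, is our own illustration of this idea, not the calibration code used in the paper.

```python
import numpy as np

def calibrate_C(u0, dz, candidates):
    """Pick the constant C = alpha' * beta * n0 whose restored depth profile is flattest.

    u0         -- observed stack of the uniform fluorescein/gel sample, u0[z, y, x]
    dz         -- z spacing
    candidates -- candidate values of C to try
    Relies on the restore_attenuation_only() sketch from Sec. 3; the per-slice maximum
    and the coefficient of variation are illustrative choices of profile and flatness.
    """
    best_C, best_cv = None, np.inf
    for C in candidates:
        rho_A = restore_attenuation_only(u0, C, dz)
        profile = rho_A.max(axis=(1, 2))                       # one value per z-slice
        cv = profile.std() / (profile.mean() + 1e-12)          # coefficient of variation = flatness measure
        if cv < best_cv:
            best_C, best_cv = C, cv
    return best_C

# Example call (assumed ranges): calibrate_C(u0, dz=0.2, candidates=np.linspace(90.0, 200.0, 56))
```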

Fig. 6. Maximum intensity projection (intensity profile) of 3D images. The top left figure shows that uniform illumination can be restored; the parameter calibration for the microscope setup is α′βn_0 = 181.27 (inset). Lower values, 121.51 and 90.02, resulted in over-compensation. The bottom left figure shows that we can use the calibrated value from the top figure to restore images taken with different laser intensities, i.e. 1.5 and 2.0 times the laser intensity used for the top figure. The figures on the right show the 2D projections of the original and restored images.

5.2. Confocal microscopy

To demonstrate the effectiveness of our algorithm, we restore 3D images of neural stem cells from mouse embryo, with nuclei stained with Hoechst 33342. The images were acquired using an Olympus Point Scanning Confocal FV1000 system. Imaging was done with a 60x water lens with a numerical aperture of 1.2. A 405 nm diode laser was used to excite the neurospheres stained with Hoechst. The sampling speed was set at 2 µs/pixel. The original microscope images are of size 512×512×n_z voxels with a resolution of 0.137 µm in the x- and y-directions and 0.2 µm in the z-direction, where n_z is the number of z-stacks in the images.

Fig. 7. Restoration results for a set of confocal microscopy images with 155 z-stacks. The parameter 1/α′βn_0 = 0.014995 gives the best restoration. In the original views the illumination is not uniform, whereas in our restored view the illumination becomes uniform. Maximum intensity projection graphs (averaging over the brightest 0.1% of voxels of each z-stack) of all z-stacks are also shown. Solid and dashed lines indicate the original and restored data, respectively.

Fig. 8. Restoration results for another set of confocal microscopy images, consisting of 163 z-stacks. The parameter 1/α′βn_0 = 0.014995 gives the best restoration. In the original views the illumination is not uniform, whereas in our restored view the illumination becomes uniform. Maximum intensity projection graphs (averaging over the brightest 0.1% of voxels of each z-stack) of all z-stacks are also shown. Solid and dashed lines indicate the original and restored data, respectively.

Fig. 9. Restoration of a synthetically degraded image to an image of uniform illumination. The degraded image shows non-uniform illumination with the maximum intensity projection falling off exponentially (assuming the light source comes from the left). The parameter 1/α′n_0 = 0.0095 gives the best restoration, in which the maximum intensity projection is flat.

To reduce the computation time, in our experiments we downsampled the original images to 256×256×n_z voxels by averaging the voxels in the x- and y-directions while maintaining the resolution in the z-direction. Figs. 7 and 8 show restoration results for 256×256×n_z voxel images. Here we used Eq. (17) to restore the images. The tuning parameter 1/α′βn_0 = 0.014995 gives optimal restoration results.

The confocal images shown in Figs. 7 and 8 have n_z = 155 and n_z = 163 z-stacks, respectively. Figs. 7(a) and 8(a) show the maximum intensity projection onto the yz-plane of the respective original images. Similarly, Figs. 7(b) and 8(b) show the maximum intensity projection onto the yz-plane of the respective restored images, and Figs. 7(c) and 8(c) show the maximum intensity projection (averaged over the top 0.1% of the brightest voxels in the xy-plane) onto the z-axis. The illuminating laser originates from the bottom, and one can easily observe that in the original images the voxels are much brighter at the bottom of the image, with the intensity dropping towards the top. After restoration, the illumination becomes uniform. The restored image is also darker on average; however, many image processing techniques are robust to the average voxel intensity. The achievement of our work is to restore the image to uniform illumination, after which other image enhancement methods such as histogram equalization can be used. Figs. 7(c) and 8(c) clearly show the difference between the intensity profiles of the original (solid lines) and restored (dashed lines) images. Over-exposed areas in the bottom z-stacks are correctly compensated by our restoration method.

5.3. Side scattering microscopy

We performed calculations for the side scattering geometry on synthetically degraded images. The synthetic image with non-uniform illumination (maximum intensity projection falling off exponentially, assuming the light source comes from the left) was generated from an image of uniform illumination. Fig. 9 shows restoration results for a 256×256 pixel image. Here we used Eq. (23) to restore the image. The tuning parameter 1/α′n_0 can be adjusted to obtain optimal results; n_0 is the incident light intensity and α′ is a geometric factor that is usually unknown. For small 1/α′n_0 the image is hardly restored, and for large 1/α′n_0 the attenuation effect is over-compensated. The optimal value of 1/α′n_0 for this image is 0.0095, at which the restored image is almost perfectly (uniformly) illuminated.
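For reference, the forward model used to generate such a synthetic degradation can be sketched as follows; the proportionality of the ground-truth ρ to the uniform image and the value of α′n_0 are assumptions of this illustration, not details taken from the paper.

```python
import numpy as np

def degrade_side_scattering(rho_true, alpha_n0, dx):
    """Forward model of Eqs. (21)-(22): parallel illumination entering from the left (x = 0).

    rho_true -- ground-truth coefficient map rho(x, y), here taken proportional to the uniform image
    alpha_n0 -- the constant alpha' * n0 (assumed value for illustration)
    dx       -- pixel spacing along the illumination direction
    Returns a synthetically degraded image u0 whose intensity falls off exponentially.
    """
    ny, nx = rho_true.shape
    u0 = np.zeros_like(rho_true, dtype=float)
    optical_depth = np.zeros(ny)
    for x in range(nx):
        u0[:, x] = alpha_n0 * rho_true[:, x] * np.exp(-optical_depth)   # Eq. (22) with n_A of Eq. (21)
        optical_depth += rho_true[:, x] * dx
    return u0

# Applying restore_side_scattering() (Sec. 4 sketch) with the same alpha_n0 and dx
# recovers rho_true from this synthetic degradation.
```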

6. Conclusions

In this paper, we addressed the fundamental light attenuation and scattering problem in light microscopy, which causes non-uniformly illuminated images, by proposing a physics-based field theoretical restoration model. Our method rests on strong theoretical grounds, as it is based on well proven fundamental laws of physics. We presented the complete mathematical formulation for the restoration of light microscopy images. We validated our method with a specially designed experiment and confirmed, through specific examples of the confocal and side scattering geometries, that our restoration method is effective in solving the problem of light attenuation and scattering.

Lastly, we note that existing physics-based approaches [5], [6], [7], [8], [9], [10], [11], [12], [13] cannot be used for comparison, as they are not formulated to restore a whole three-dimensional image volume, as required for microscopy images.

References and links

1. J. B. Pawley, ed., Handbook of Biological Confocal Microscopy, Third Edition (Springer, New York, 2005).

2. D. Kundur and D. Hatzinakos, "Blind image deconvolution," IEEE Signal Process. Mag., pp. 43–64 (May 1996).

3. P. Shaw, “Deconvolution in 3-D optical microscopy,” Histochem. J. 261573–6865 (1994). [CrossRef]  

4. P. Sarder and A. Nehorai, "Deconvolution methods for 3-D fluorescence microscopy images," IEEE Signal Process. Mag., pp. 32–45 (May 2006).

5. J. P. Oakley and B. L. Satherley, "Improving image quality in poor visibility conditions using a physical model for degradation," IEEE Trans. Image Process. 7(2), 167–179 (1998). [CrossRef]

6. K. Tan and J. P. Oakley, “Enhancement of color images in poor visibility conditions,” Proc. Int’l Conf. Image Process. 2, 788–791 (2000).

7. K. Tan and J. P. Oakley, "Physics-based approach to color image enhancement in poor visibility conditions," J. Opt. Soc. Am. A 18(10), 2460–2467 (2001).

8. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization based vision through haze,” Appl. Opt. 42(3), 511–525 (2003). [CrossRef]  

9. Y. Y. Schechner and N. Karpel, “Clear underwater vision,” Proc. IEEE Conf. Computer Vision and Pattern Recognition 1, 536–543 (2004).

10. S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Trans. Pattern Anal. Mach. Intell. 25(6), 713–724 (2003). [CrossRef]

11. S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” Int’l J. Computer Vision 48(3), 233–254 (2002). [CrossRef]  

12. S. G. Narasimhan and S. K. Nayar, “Removing weather effects from monochrome images,” Proc. IEEE Conf. Computer Vision and Pattern Recognition 2, 186–193 (2001).

13. S. G. Narasimhan and S. K. Nayar, "Chromatic framework for vision in bad weather," Proc. IEEE Conf. Computer Vision and Pattern Recognition 1, 598–605 (2000). [CrossRef]

14. R. Kaftory, Y. Y. Schechner, and Y. Y. Zeevi, “Variational distance-dependent image restoration,” Proc. IEEE Conf. Computer Vision and Pattern Recognition (2007).

15. S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," IEEE Trans. Pattern Anal. Mach. Intell. 6, 721–741 (1984). [CrossRef]

16. L. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D 60, 259–268 (1992). [CrossRef]

17. P. L. Combettes and J. C. Pesquet, “Image restoration subject to a total variation constraint,” IEEE Trans. Image Process. 13, 1213–1222 (2004). [CrossRef]   [PubMed]  

18. A. R. Paterson, A First Course in Fluid Dynamics (Cambridge University Press, 1989).

19. J. B. Pawley, Handbook of Biological Confocal Microscopy (Springer, 1995).

20. M. Capek, J. Janacek, and L. Kubinova, “Methods for compensation of the light attenuation with depth of images captured by a confocal microscope,” Microscopy Res. Tech. 69, 624–635 (2006). [CrossRef]  

21. P. S. Umesh Adiga and B. B. Chaudhury, "Some efficient methods to correct confocal images for easy interpretation," Micron 32, 363–370 (2001). [CrossRef]

22. K. Greger, J. Swoger, and E. H. K. Stelzer, "Basic building units and properties of a fluorescence single plane illumination microscope," Rev. Sci. Instrum. 78, 023705 (2007). [CrossRef] [PubMed]

23. J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, “Optical sectioning deep inside live embryos by selective plane illumination microscopy,” Science 305, 1007–1009 (2004). [CrossRef]   [PubMed]  

24. P. J. Verveer, J. Swoger, F. Pampaloni, K. Greger, M. Marcello, and E. H. K. Stelzer, "High-resolution three-dimensional imaging of large specimens with light sheet-based microscopy," Nature Methods 4(4), 311–313 (2007).

25. P. J. Keller, F. Pampaloni, and E. H. K. Stelzer, “Life sciences require the third dimension,” Curr. Opin. Cell Biol. 18, 117–124 (2006). [CrossRef]   [PubMed]  

26. J. G. Ritter, R. Veith, J. Siebrasse, and U. Kubitscheck. “High-contrast single-particle tracking by selective focal plane illumination microscopy,” Opt. Express 16(10), 7142–7152 (2008). [CrossRef]  
