Optica Publishing Group

Automatic estimation of point-spread-function for deconvoluting out-of-focus optical coherence tomographic images using information entropy-based approach

Open Access

Abstract

This paper proposes an automatic point spread function (PSF) estimation method to de-blur out-of-focus optical coherence tomography (OCT) images. The method uses the Richardson-Lucy deconvolution algorithm to deconvolve noisy defocused images with a family of Gaussian PSFs of different beam spot sizes. The best beam spot size is then estimated automatically from the discontinuity of the information entropy of the recovered images, so no prior knowledge of the parameters or PSF of the OCT system is required for deconvolution. The model does not account for diffraction or the coherent scattering of light by the sample. A series of experiments were performed on digital phantoms, a custom-built phantom doped with microspheres, fresh onion, and the human fingertip in vivo to demonstrate the performance of the proposed method. The method may also be combined with other deconvolution algorithms for PSF estimation and image recovery.

©2011 Optical Society of America

1. Introduction

Optical coherence tomography (OCT) is a non-invasive, micrometer-resolution, three-dimensional optical imaging technique capable of capturing backscattered photons from within optically scattering media such as biological tissue [1, 2]. One advantage of OCT is that the axial and lateral resolutions are decoupled from each other. The axial resolution of an OCT system is determined by the spectral bandwidth of the light source and is defined as half the coherence length of the source, which can be expressed by

$$R_a = \frac{l_c}{2} = \frac{0.44\,\lambda_0^2}{\Delta\lambda}$$
where lc is the coherence length, λ0 is the center wavelength and Δλ is the spectral full-width at half maximum (FWHM) of the light source.

The lateral resolution of the OCT system is determined by the objective lens that is used to deliver/focus the light onto the sample and is expressed by

$$R_l = \frac{1.22\,\lambda_0}{2\,NA_{obj}}$$
where NAobj is the numerical aperture (NA) of the objective lens; the lateral resolution is inversely proportional to NAobj. On the other hand, the depth-of-focus (DOF) is given by
$$Z_{DOF} = \frac{2\,\lambda_0\,n}{NA_{obj}^2}$$
where n is the average refractive index of the sample. In general, the lateral resolution can be enhanced by increasing the numerical aperture. Unfortunately, increasing the numerical aperture reduces the DOF. The relationship between the lateral resolution and DOF is schematically illustrated in Fig. 1 for the cases of low (left figure) and high (right figure) NA. From the illustrations, we can appreciate that only the portion of the 3D image within the DOF exhibits the desired lateral resolution, while that in the de-focal plane is blurred. Several dynamic focusing methods exist in the literature to mitigate the compromise between lateral resolution and DOF. In general, these methods can be divided into hardware-based and software-based (digital) approaches.
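The three relations above can be collected into a short numerical sketch (Python; wavelengths, bandwidth and resolutions in SI units). The example values are those of the 1300 nm system described later in Section 4:

```python
def axial_resolution(lam0, dlam):
    # Ra = lc / 2 = 0.44 * lam0^2 / dlam
    return 0.44 * lam0**2 / dlam

def lateral_resolution(lam0, na):
    # Rl = 1.22 * lam0 / (2 * NA_obj)
    return 1.22 * lam0 / (2.0 * na)

def depth_of_focus(lam0, na, n=1.0):
    # Z_DOF = 2 * lam0 * n / NA_obj^2
    return 2.0 * lam0 * n / na**2

# 1300 nm source, 80 nm bandwidth, NA = 0.05 (the Section 4 system):
ra = axial_resolution(1300e-9, 80e-9)   # ~9.3 um in air (formula value)
rl = lateral_resolution(1300e-9, 0.05)  # ~15.9 um, i.e. the ~16 um quoted later
```

Note the trade-off the text describes: doubling the NA halves Rl but quarters Z_DOF.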

Fig. 1 Schematic illustration on the effect of numerical aperture on the desired lateral resolution of OCT images, in the case of (a) low NA, and (b) high NA.

1.1 Hardware-based approaches

Lexer et al. [3] investigated an optical setup to shift the focus of the beam illuminating the object through the object depth without changing the path length in the corresponding interferometer arm and achieved 10 μm lateral resolution over an object depth of 430 μm in human cornea. Qi et al. [4] designed a high-speed dynamic focus control system based on a micro-electro-mechanical mirror, in which the silicon nitride deformable mirror shifted the focus of the sample beam of the OCT interferometer in synchrony with the coherence-gate scan in the reference arm. As a result, the coherence gate remained at the beam focus during the whole imaging process. Divetia et al. [5] used a 1 mm liquid-filled polymer lens for endoscopic OCT applications that dynamically scanned the depth of focus by changing the hydraulic pressure within the lens, which enabled dynamic control of the focal depth without the need for articulated parts. This configuration was shown to have a resolving power of 5 μm over an object depth of 2.5 mm. Xie et al. [6] developed a gradient index (GRIN) lens rod based probe for endoscopic spectral domain OCT with dynamic focus tracking. The focus of their endoscopic OCT probe could be dynamically adjusted at 500 mm/s without changing the reference arm length to obtain high-quality OCT images for contact or non-contact tissue applications. The dynamic focusing range of the probe was between 0 and 7.5 mm without moving the probe beam itself, while the imaging depth was 2.8 mm. To overcome the trade-off between lateral resolution and focusing depth, Ding et al. [7] incorporated an axicon lens with a top angle of 160° into the sample arm of an interferometer and maintained 10 μm lateral resolution over a focusing depth of 6 mm. Holmes et al. [8] presented a multi-beam OCT system that overcomes the problem of limited lateral resolution inherent in single-beam Fourier domain OCT and reduces speckle noise.
However, the hardware-based methods require a special hardware configuration in the system design, which can limit the scanning speed and raise the cost of the OCT system.

1.2 Digital approaches

There also exist several digital methods in the literature to compensate for image degradation outside the focal depth zone by solving inverse scattering and deconvolution problems. Ralston et al. [9] proposed a novel computational image-formation technique called interferometric synthetic aperture microscopy (ISAM) by solving the inverse scattering problem for interference microscopy. ISAM could achieve reconstructed volumes with a resolution in all planes equivalent to that achieved only at the focal plane in conventional high-resolution microscopy. Yu et al. [10] developed a two-dimensional scalar diffraction model to simulate the wave propagation process from out-of-focus scatterers within the short coherence gate of the OCT system, so that high-resolution details could be recovered from outside the depth-of-field region with minimal loss of lateral resolution. These two methods account for the diffraction and the coherent scattering of light by the sample, and are thus the most accurate representations of the sample’s microstructure outside the focus region, with ISAM being the most elegant. However, they are extremely sensitive to the phase stability of the measurements, particularly for in vivo imaging where sample movement is inevitable. Therefore, there is still a need to seek alternatives to mitigate this issue. In this pursuit, the best solution is to use the amplitude information of the OCT signals because it is insensitive to the optical phases of the system. Over the last decade, efforts have been made to explore the amplitude information of the OCT signals for the purpose of de-blurring OCT images.

Yasuno et al. [11] applied a numerical deconvolution method to cancel lateral defocus in Fourier-domain OCT using a depth-dependent lateral PSF and applied the method to OCT images of the in vivo human anterior eye segment. Kulkarni et al. [12] proposed a linear shift-invariant system model to describe coherent light-specimen interactions in OCT images and applied an iterative deconvolution algorithm to enhance the sharpness of images of biological structures. A constrained iterative restoration algorithm was utilized to estimate the impulse response of the system model using prior knowledge of the properties of that response [13]. The algorithm was applied to a defocused OCT image of a fresh onion sample, and the deconvolved OCT image revealed polygonal cellular structure in the specimen. Ralston et al. [14] proposed a method for reducing transverse blurring and improving the transverse resolution in OCT using Gaussian beam deconvolution. First, the direct inverse problem was investigated using two types of regularization, truncated singular value decomposition and Tikhonov. Then, an iterative expectation-maximization algorithm, the Richardson-Lucy algorithm, with a beam-width-dependent iteration scheme was developed. The dynamically iterative Richardson-Lucy algorithm reduced the transverse blurring by improving the transverse PSF for sparse scattering samples in regions up to two times larger than the confocal region of the lens. Wiener and Richardson-Lucy OCT image restoration algorithms were also implemented and compared, and the images deconvolved using Richardson-Lucy had better contrast and quality [15]. Besides, J. M. Schmitt [16] derived a CLEAN deconvolution kernel to provide an effective way of improving resolution and contrast in optical coherence tomography. Rolland et al. [17] developed a Gabor-based fusion technique, based on the concept of the inverse local Fourier transform and Gabor's signal expansion, to produce a high lateral resolution image throughout the depth of imaging.

Since these previous digital methods require prior knowledge of the optical parameters or PSF of the optical system, an initial calibration experiment is needed. Tomlins et al. [18,19] made PSF phantoms doped with a small quantity of calibrated spherical micro-particles to estimate the Gaussian PSF parameters and investigated the potential PSF-degrading effects of optical dispersion and spherical aberration. Tomlins et al. [20] also adopted femtosecond laser subsurface micro-inscription techniques to fabricate an OCT test artifact for validating the resolution performance of a commercial OCT system. The methods of [18,19] were generalized by Woolliams et al. [21] to estimate the 2D PSF at multiple locations across the B-scans, to account for the variation of the PSF as a function of both depth and lateral position. Agrawal et al. [22] presented a PSF measurement approach using a nanoparticle-embedded phantom, towards the development of standardized test methods. However, the optical parameters or the PSF of the OCT system had to be calculated by performing a separate experiment on a specialized phantom, and the calibrated parameters may not be valid for complex heterogeneous structures.

In this paper, using the forward OCT model previously developed by Ralston et al. [14], we propose an automatic PSF estimation method based on the discontinuity of information entropy and the Richardson-Lucy deconvolution algorithm to improve image contrast and quality. The advantage of this method is that there is no need to implement new optical hardware or to measure the optical parameters of the OCT system; importantly, it is relatively insensitive to the phase instability of the measurements. Since the Richardson-Lucy algorithm tends to concentrate energy near boundaries, it provides a good approximation to cellular boundaries and subcellular features and is robust to noise. Therefore, combining the PSF estimation method with the Richardson-Lucy algorithm can estimate the PSF of OCT at different defocal planes automatically and objectively while decreasing the transverse blurring.

2. Image recovery principle and algorithms

In the Gaussian optics model, the lateral PSF of OCT is approximated by [14]:

$$h(x,y) = \exp\!\left(-\frac{2(x^2+y^2)}{W_z^2}\right)$$
where Wz is the depth-dependent beam spot size given by
$$W_z = W_0\left[1 + \left(\frac{z}{z_R}\right)^2\right]^{1/2}$$
$$z_R = \frac{\pi W_0^2\, n}{\lambda_0}$$
where W0 is the minimum value of Wz, which occurs at z = 0; z is the axial coordinate, and z = zR at the boundary of the confocal region (Rayleigh range). The assumption in (5) is that the Gaussian beam propagates in a direction normal to the lens and that the refractive index n is constant along the beam. In practice, the beam profile deviates from this ideal model and the transverse resolution is not constant, because the index of refraction varies within the sample. The beam profile also depends on the depth and the focal region of the lens.
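The depth dependence of the beam spot size can be sketched directly from these two expressions (Python; any consistent length unit):

```python
import math

def beam_spot_size(z, w0, lam0, n=1.0):
    # Wz = W0 * [1 + (z/zR)^2]^(1/2), with Rayleigh range zR = pi*W0^2*n/lam0.
    z_r = math.pi * w0**2 * n / lam0
    return w0 * math.sqrt(1.0 + (z / z_r) ** 2)

# At the focal plane the spot size equals W0; one Rayleigh range away it
# has grown by a factor of sqrt(2), which bounds the confocal region.
```

For an assumed W0 of 8 µm at 1300 nm (roughly half the ~16 µm lateral resolution of the system in Section 4), zR ≈ 155 µm, so the confocal region 2zR ≈ 310 µm, the same order as that system's ~350 µm depth of focus.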

In an OCT system, the final image at each depth location, I(x,y), is the convolution of the lateral PSF h(x,y) with the sample profile O(x,y), which can be formulated as

$$I(x,y) = O(x,y) * h(x,y) + n = \int_y \int_x O(x - x_0,\, y - y_0)\, h(x_0, y_0)\, dx_0\, dy_0 + n$$
where n is additive noise and tissue heterogeneity noise [23]. By estimating the best beam spot size Wz at each depth location, the lateral resolution at out-of-focus depth locations can be improved by solving an inverse problem.
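This forward model can be simulated directly: blur a hypothetical sparse scatterer map with the Gaussian lateral PSF and add noise (a NumPy/SciPy sketch; the scatterer positions, weights and the Wz value are arbitrary illustration choices):

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# Hypothetical en-face sample profile O(x,y): two point scatterers.
obj = np.zeros((64, 64))
obj[20, 20] = 1.0
obj[40, 44] = 0.8

# Gaussian lateral PSF h(x,y) with an assumed beam spot size Wz (in pixels).
wz = 5.0
ax = np.arange(-16, 17)
xx, yy = np.meshgrid(ax, ax)
h = np.exp(-2.0 * (xx**2 + yy**2) / wz**2)
h /= h.sum()  # unit-sum kernel preserves total intensity

# I(x,y) = O(x,y) * h(x,y) + n
img = fftconvolve(obj, h, mode="same") + 0.001 * rng.standard_normal(obj.shape)
```

Each delta in `obj` is spread into a Gaussian spot of width Wz, which is exactly the defocus blur the rest of the paper seeks to invert.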

2.1 Image recovery flow diagram

The flow diagram of the proposed method for recovering defocused FD-OCT images is given in Fig. 2.

Fig. 2 Flow diagram for recovering defocused OCT images.

Step 1: The non-linear spectral interferogram captured in an A-scan is rescaled into linear k-space, followed by a Fast Fourier Transform (FFT) to obtain the A-line complex information (phase and amplitude) along the z axis, whose amplitude represents the structural information. The 3D structure of a sample is acquired by two-dimensional scanning of the probe beam with a paired galvanometer mirror.

Steps 2 and 3: The 3D structural image is resampled along the depth (z direction) to obtain a sequence of en-face images I(x,y) at different depth locations, followed by a Gaussian low-pass filter (LPF). Here, the Gaussian LPF reduces the noise levels in the image, including the system noise and the spurious effects of speckle, so that the recovered image becomes smooth. It should be noted that applying a Gaussian LPF modifies the lateral PSF of the OCT system as follows:

$$I' = f * I = f * (h * O + n) = (f * h) * O + f * n \approx (f * h) * O = h' * O$$
where f is the impulse response of the Gaussian LPF, I is the raw image, h is the lateral PSF, O is the object being imaged (the sample), n is additive noise, I' is the filtered image and h' is the lateral PSF after low-pass filtering. In the experiments, the low-pass filter was empirically selected such that its bandwidth is approximately 5 times broader than that of the system PSF.

Steps 4 and 5: The Richardson-Lucy deconvolution method requires prior knowledge of the lateral PSF, h(x,y), which is a function of the actual beam spot size Wza. In order to estimate the best choice of Wz at each depth location, the filtered en-face images I'(x,y) are deconvolved with the Richardson-Lucy algorithm using a family of Gaussian PSFs with stepped beam spot sizes Wz ∈ [Wz1, Wz2], and the evaluation function defined in subsection 2.3 is analyzed. The evaluation function tends to have a discontinuity around Wz = Wza, so the PSF (the best choice of Wz) is obtained by searching for a peak of the Laplacian of the evaluation function.

The Richardson-Lucy algorithm is an iterative method based on maximum-likelihood estimation for Poisson-distributed data, with the update

$$O_{n+1}(x,y) = O_n(x,y)\left[h'(x,y) * \frac{I'(x,y)}{h'(x,y) * O_n(x,y)}\right]$$
where I'(x,y) is the degraded image after the Gaussian LPF, O_n(x,y) is the estimate of the original image after n iterations, h'(x,y) is the modified lateral PSF (symmetric here, so convolution with the flipped kernel coincides with convolution with h' itself) and h'(x,y) * O_n(x,y) is referred to as the re-blurred image. The iterations stop when the difference between O_{n+1} and O_n falls below a threshold.
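A minimal Richardson-Lucy implementation following this update rule (Python/NumPy sketch; the kernel size, iteration count and uniform initial estimate are implementation choices, not prescribed by the text):

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, wz):
    # Lateral PSF exp(-2(x^2+y^2)/Wz^2) on a size x size grid, unit sum.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-2.0 * (xx**2 + yy**2) / wz**2)
    return h / h.sum()

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    # Multiplicative update O_{n+1} = O_n * [psf_flipped * (I / (psf * O_n))];
    # for the symmetric Gaussian PSF the flipped kernel equals the kernel.
    est = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblur = fftconvolve(est, psf, mode="same")
        est = est * fftconvolve(image / (reblur + eps), psf_flip, mode="same")
    return est
```

Deconvolving a blurred point scatterer with the correct Wz re-concentrates the energy at the scatterer location, which is the sharpening behavior the entropy-based search in subsection 2.3 exploits.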

Step 6: Each original en-face image is recovered using the Richardson-Lucy algorithm and the estimated lateral PSF (Wz).

2.2 Discontinuity of the recovered image around Wz = Wza

The output of an OCT imaging system is described mathematically by convolving its PSF with the input image. Let us assume that a sufficiently small object can be modeled as a Dirac delta function. Then, the output of the imaging system is its lateral PSF:

$$I(x,y) = O(x,y) * h(x,y) = \delta(x,y) * h(x,y) = h(x,y) = \exp\!\left(-\frac{2(x^2+y^2)}{W_{za}^2}\right)$$
where I is the received image, O is the object profile, h is the lateral PSF and Wza is the actual beam spot size. When the received image I is deconvolved using a different beam spot size Wz, it can be described as follows:
$$I(x,y) = O'(x,y) * h'(x,y) = O'(x,y) * \exp\!\left(-\frac{2(x^2+y^2)}{W_z^2}\right)$$
where O'(x,y) is the recovered image when the beam spot size is Wz. Equation (12) is obtained from Eq. (10) and Eq. (11), and O'(x,y) can be solved using the two-dimensional convolution theorem, as in Eqs. (12) and (13).

$$\exp\!\left(-\frac{2(x^2+y^2)}{W_{za}^2}\right) = O'(x,y) * \exp\!\left(-\frac{2(x^2+y^2)}{W_z^2}\right)$$
$$O'(x,y) = \frac{2\,W_{za}^2}{\pi W_z^2\left(W_{za}^2 - W_z^2\right)} \exp\!\left(-\frac{2(x^2+y^2)}{W_{za}^2 - W_z^2}\right)$$

It can be observed that the recovered image O'(x,y) has a discontinuity at Wz = Wza.

According to the sampling property of the delta function, any input image structure can be modeled as a weighted superposition of delta functions.

$$O(x,y) = \sum_{n=1}^{N} \sum_{m=1}^{M} A_{nm}\, \delta(x - x_n,\, y - y_m)$$
where Anm is the weight (intensity) of the image at location (n, m). By linearity, the result for a single delta function generalizes to this case. Therefore, the recovered image O'(x,y) is also discontinuous at Wz = Wza. This is the theoretical basis for estimating the actual Wza. In order to find the discontinuity of the recovered image around Wz = Wza, an evaluation function based on information entropy is defined, and the discontinuity of this evaluation function is used to estimate the actual Wza.
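The superposition argument can be checked numerically: convolving a weighted sum of deltas with the PSF equals the same weighted sum of shifted PSF copies (NumPy sketch; the positions and weights are arbitrary and are kept away from the image border so no cropping occurs):

```python
import numpy as np
from scipy.signal import fftconvolve

# Odd-sized Gaussian PSF so that 'same'-mode convolution stays centered.
wz = 3.0
ax = np.arange(-8, 9)
xx, yy = np.meshgrid(ax, ax)
h = np.exp(-2.0 * (xx**2 + yy**2) / wz**2)

# O(x,y) as a weighted superposition of delta functions.
deltas = {(20, 25): 1.0, (35, 30): 0.5}
obj = np.zeros((64, 64))
for (r, c), a in deltas.items():
    obj[r, c] = a

# Convolving the whole image ...
blurred = fftconvolve(obj, h, mode="same")

# ... equals the weighted sum of individually shifted PSFs (linearity).
manual = np.zeros_like(obj)
for (r, c), a in deltas.items():
    manual[r - 8:r + 9, c - 8:c + 9] += a * h
```

Because the two constructions agree, the single-delta discontinuity analysis above carries over to arbitrary image structures.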

2.3 Evaluation function for PSF estimation

The information entropy of the recovered image O'(x,y) and of the blurry out-of-focus raw image I(x,y) can be expressed by

$$E(O') = -\sum_a p(O'(a)) \log\!\big(p(O'(a))\big)$$
$$E(I) = -\sum_b p(I(b)) \log\!\big(p(I(b))\big)$$
where p(O'(a)) is the marginal probability of the image O', i.e., the probability that an intensity value of the image equals a. The joint entropy of the O' and I images, which is a measure of the uncertainty associated with them, can be expressed by
$$E(O', I)_{joint} = -\sum_{a,b} p_{I,O'}(a,b) \log\!\big(p_{I,O'}(a,b)\big)$$
where p_{I,O'}(a,b) is the joint probability distribution of the O' and I images, i.e., the probability that the intensity value of image O' equals a while the intensity value of image I equals b. Also, the mutual information entropy of the O' and I images can be expressed by

$$E(O', I)_{mutual} = E(O') + E(I) - E(O', I)_{joint}$$

The evaluation function is defined by

$$F(W_z) = \frac{E(O') - E(O', I)_{joint}}{E(O')}$$
which is a function of Wz ∈ [Wz1, Wz2]. Because of the discontinuity of the recovered image O', the evaluation function tends to have a discontinuity at Wz = Wza. This discontinuity appears as a peak in the Laplacian (second derivative) of the function, which is used to locate it. Typical curves of the evaluation function and of its Laplacian are displayed in Fig. 3.

Fig. 3 The peak of the Laplacian of the evaluation function (b), corresponding to the discontinuous point of the evaluation function (a), is used to locate the actual beam spot size Wza.

2.4 Image recovery

If the object en-face image O(x,y) is located in the focal plane (Wz = W0), the received image is given by

$$I_0(x,y) = O(x,y) * h_0(x,y) = O(x,y) * \exp\!\left(-\frac{2(x^2+y^2)}{W_0^2}\right)$$

Assuming that the object en-face image O(x,y) is located in the out-of-focus plane (Wz = Wza), the received image will be given by

$$I(x,y) = O(x,y) * h(x,y) = O(x,y) * \exp\!\left(-\frac{2(x^2+y^2)}{W_{za}^2}\right)$$

The focused image I0(x,y) can be recovered from the out-of-focus image I(x,y) by filtering the out-of-focus image with a compensation function g(x,y) as follows:

$$I_0(x,y) = g(x,y) * I(x,y) = g(x,y) * O(x,y) * h(x,y) = O(x,y) * h_0(x,y)$$
which yields a compensation function for image recovery

$$g(x,y) = \frac{2\,W_{za}^2}{\pi W_0^2\left(W_{za}^2 - W_0^2\right)} \exp\!\left(-\frac{2(x^2+y^2)}{W_{za}^2 - W_0^2}\right)$$

3. Simulation studies

In order to show the performance of the proposed method, we performed a series of simulations on digital phantoms. First, a Dirac delta function digital phantom was generated and manually defocused at different Wza values. For each Wza scenario, the Richardson-Lucy algorithm was used to recover the image over a family of Wz values, and the discontinuity (peak of the Laplacian) of the evaluation function was measured to estimate the beam spot size. Figure 4(a) shows a 3D plot of the evaluation function for different Wza values, where the x axis is Wz, the y axis is Wza and the z axis (intensity) is the evaluation function. It can be observed that the discontinuity of the evaluation function occurs at Wz = Wza. The Laplacian of the evaluation function is shown in Fig. 4(b), where the x axis is Wz, the y axis is Wza and the z axis is the Laplacian of the evaluation function. The peak values are observed around Wz = Wza, as expected.

Fig. 4 When the digital phantom was defocused at different Wza, (a) the discontinuous point of the evaluation function approximately equals Wza for all Wza values, and (b) the peak of the Laplacian of the evaluation function approximately equals Wza for all Wza values.

Figure 5(a) shows another digital phantom, which simulates the cellular structure of onion. The image was manually defocused at Wza = 3 (Fig. 5(b)) and the proposed method was used to recover it; the recovered image is shown in Fig. 5(f). In order to study the effectiveness of the proposed method in suppressing noise, the image was degraded by Poisson and speckle noise. Figures 5(c-e) show the defocused image degraded by speckle noise, Poisson noise, and both speckle and Poisson noise, respectively. Figures 5(g-i) show the recovered images for the different noise conditions. The multiplicative speckle noise was zero-mean uniformly distributed random noise with variance 0.05.

Fig. 5 The performance of the proposed method on a digital phantom. (a) Original image. (b) Out of focus image. Out of focus image was then degraded with (c) speckle noise, (d) Poisson noise and (e) both speckle and Poisson noise. Recovered images: (f) without the presence of noise. With the presence of different noise conditions, (g) speckle noise, (h) Poisson noise and (i) both speckle and Poisson noise.

The experiment was repeated for different defocused depth locations (Wza), and the difference between the actual and estimated beam spot sizes (estimation error) was measured under different noise conditions. Figure 6 shows the error in estimating the actual Wza under different noise conditions. It can be observed that the algorithm effectively suppresses the effect of noise, and the results are comparable with those of the noise-free experiment. The proposed method was particularly effective when the image was only slightly defocused (Wza ∈ [4, 9]). In general, the estimation error was acceptable and the proposed method could effectively estimate the beam spot size in the presence of different noise conditions.

Fig. 6 Estimation errors when the digital phantom was manually defocused at different Wza for different noise conditions. The maximal absolute error is about 0.2 within the Wza range of 1 to 12.

4. Experimental results

The OCT system used to acquire the 3D volume data is shown in Fig. 7 and is similar to that of a previous study [24]. Briefly, the system used a superluminescent diode light source with a central wavelength of 1300 nm and a bandwidth of 80 nm, providing a 12 µm axial resolution in air. The light is split into two paths in a fiber-based Michelson interferometer. One beam is coupled onto a stationary reference mirror, and the second beam is focused with a collimating lens and a focal lens (NA = 0.05) onto the sample, with a theoretical lateral resolution of ~16 μm and a depth of focus of ~350 μm. The focal spot on the sample was scanned using a pair of galvanometer mirrors mounted in the sample arm. The spectral interferogram between the reference light and the light backscattered from the sample was sent to a home-built spectrometer via an optical circulator. The spectrometer consisted of a transmission grating (1200 lines/mm), a camera lens with a focal length of 100 mm, and a 1024-element line-scan InGaAs detector (capable of a line rate of 47 kHz), with a wavelength coverage of 200 nm. A pilot laser is also coupled into the interferometer for beam guidance.

Fig. 7 Schematic of the OCT system used in this study.

In order to show the performance of the proposed method on real OCT images, three sets of experiments were performed on: 1) a phantom doped with microspheres that simulate the PSF of the system, 2) a fresh onion, to show the efficiency of the proposed method on a still sample, and 3) an in vivo tissue sample, to demonstrate whether the proposed method has potential for in vivo imaging applications. For the results presented below, the processing time of one frame image (512×512) was 1.23 s on an HP dv6 laptop computer (processor: Intel i3, 2.4 GHz; RAM: 6 GB) with 10 iterations. Also note that during the image reconstruction process, no threshold was applied to the images. Only when the OCT images were displayed was a fixed threshold (equal to the noise level in the raw OCT image) applied to better show the final images.

First, a phantom was made from gelatin doped with microspheres having a predominant diameter of 300 nm. Since the microspheres are small relative to the system resolution, they can be approximated by delta functions and the phantom can be modeled as a superposition of delta functions. A 3D OCT volume image was obtained from the phantom, with the focus of the system set to the center of the phantom.

Figure 8(a) shows one B-scan of the phantom, where the center of the image is in focus but the upper and lower portions have degraded lateral resolution and the microspheres look blurry. The proposed automatic PSF estimation and image recovery method (see Eqs. (22) and (23)) was then applied to the 3D volume data at different depth locations, and the recovered image of the same B-scan is shown in Fig. 8(b). Note that the blurred out-of-focus areas are now in focus after applying the proposed algorithm to the experimental data. In addition, we can also recover the raw B-scan image (Fig. 8(a)) to the object profile (Fig. 8(c)), which consists of delta functions when the microspheres are sufficiently small, by deconvolution (see Eq. (7)) using the estimated beam spot size Wza at different depth locations. Note that some connected microspheres in the raw B-scan image are separated and the image quality is further improved.

Fig. 8 (a) Original image where the focus is in the center and the upper and lower part of the image is out of focus. (b) De-blurred image. (c) Recovered image knowing that each point is a delta function.

Next, an experiment was performed on a fresh onion, and 3D OCT images with 256 A-scans and 256 B-scans were acquired in the focus area and 1.5 mm away from the focal point. Figure 9(a) shows one en face image of the onion at the focal depth; the proposed method was applied to improve the contrast and resolution of the image in the presence of noise. Figure 9(b) shows the image recovered from Fig. 9(a). Then, with the probe beam kept unchanged, the sample was shifted downwards by 1.5 mm. Figure 9(c) is the resulting image, corresponding to the same depth location of the sample as shown in Fig. 9(a). The proposed method was applied to recover that image (Fig. 9(c)), and the result is shown in Fig. 9(d). The similarity between Fig. 9(d) and Fig. 9(b) shows the onion structures at the same depth location.

Fig. 9 (a) The en face image of the onion layer at the focal depth, (c) the same en face image as (a), defocused by shifting the sample downwards by 1.5 mm; (b) and (d) are the recovered noise-suppressed images from (a) and (c), respectively. The axial and lateral resolutions are 12 µm and 16 µm, respectively.

The recovered images of onion layers at different depths are shown in Fig. 10, where Figs. 10(a-d) show the defocused onion layers at 1500 µm, 1604 µm, 1725 µm and 1823 µm, respectively. Figures 10(e-h) show the recovered images at the corresponding depth locations in the onion.

Fig. 10 (a-d) Out-of-focus noisy images at different depth locations: 1500 µm, 1604 µm, 1725 µm and 1823 µm, respectively. (e-h) Recovered images at the corresponding depth locations. The axial and lateral resolutions are 12 µm and 16 µm, respectively.

To show the discontinuity during the recovery process, Fig. 11 gives 2D plots of the PSF estimation curve for recovering Fig. 10(a) into Fig. 10(e), displaying how the evaluation function and the Laplacian of the evaluation function change with the beam spot size Wz. It can be observed that the discontinuity of the evaluation function and the peak value of its Laplacian occur at Wz = 4.0, meaning that the estimated beam size was 4.0 for this en face image.

Fig. 11 PSF estimation curve for recovering the image from Fig. 10(a) to Fig. 10(e): (a) the change of the evaluation function with the beam spot size Wz and its discontinuity (pointed by arrow) and (b) the corresponding Laplacian of the evaluation function with the beam spot size Wz and its peak value.

The theoretical beam spot size Wza is compared with the estimated beam spot size in the onion at different depths in Fig. 12, where the x axis is the depth in µm and the y axis is the beam spot size in µm. It can be observed that the estimated beam spot sizes agree with the expected theoretical values. The maximum estimation error is 4.2 µm and the maximum relative error is 12.8% for depth locations between 1.5 mm and 2.3 mm.

Fig. 12 The theoretical beam spot size at different depths (dashed line) in the fresh onion sample is compared with estimated beam spot size (solid line). The maximum estimation error is 4.2 µm and the maximum relative error is 12.8% within a depth from 1.5 mm to 2.3 mm.

Lastly, preliminary experiments were performed on OCT images acquired from the human fingertip using the system described in Fig. 7, to demonstrate whether the proposed method is applicable in vivo. For this set of experiments, the system ran at 120 frames per second. The 3D OCT images consisted of 256 A-scans and 200 B-scans, covering an area of 1.5×1.2 mm² on the skin surface. The data were first obtained from the focus region. Then, with the rest of the system setup fixed, the finger was moved 500 µm away from the focal point, and another 3D image was acquired. Because the system spatial resolution was about 16 μm, the image quality enhancement within the dermis was not easily appreciated. For this reason, we show here the ability of the proposed method to enhance the imaging quality of the de-focused sweat glands. Figure 13(a) shows one typical en-face image located in the focus region, dissected from the 3D volume data set. This en-face image represents a plane cutting through the upward-oriented sweat glands at approximately 90 degrees, in which the sweat glands (shown as bright spots) and the skin ridges are clearly seen. Figure 13(b) shows one defocused en-face image at approximately the same location as in Fig. 13(a). The proposed method was then used to de-blur the defocused image. The result is given in Fig. 13(c), where it can be seen that the recovered image has qualitatively the same quality as the focused image. Quantitatively, the estimated entropies of the three images were 6.9699 (Fig. 13(a)), 7.0128 (Fig. 13(b)), and 6.9356 (Fig. 13(c)), respectively. These preliminary results demonstrate that the proposed approach is efficient in de-blurring in vivo OCT images. Further refinement of the current model for in vivo imaging applications is still in progress.

Fig. 13 The en-face images obtained from fingertip of a human volunteer, showing a plane cut through the sweat glands at ~90°. (a) Image in focal area. (b) Defocused image after moving ~0.5 mm away from the focal point. (c) Recovered image from (b).

5. Conclusion

We proposed an automatic PSF estimation method for recovering defocused OCT images based on the discontinuity of information entropy. A family of beam spot sizes was used to recover the original image using the Richardson-Lucy algorithm, and the maximum of the Laplacian of the introduced evaluation function was used to find the best PSF. The performance of the proposed method was first tested on digital phantoms and then on a custom-built phantom, onion cells, and the human finger in vivo. The performance of the method was quantified by knowing the parameters of the system and comparing the estimated parameters with their real values. It was observed that the proposed method effectively recovers defocused images in the presence of noise and extends the imaging depth range to 2.8 mm. This method can be used to recover defocused OCT images where the PSF is not known, especially when the index of refraction at different depths of the sample is unknown and not fixed. The proposed method uses an approximate OCT forward model that is valid when the NA of the OCT system is relatively small. In the future, the combination of the proposed PSF estimation method with an accurate OCT forward model, such as that described in interferometric synthetic aperture microscopy [9], will be explored to account for the diffraction and the coherent scattering of light by the sample, so that a better representation of the sample’s microstructure outside the focus region can be obtained. The proposed image recovery approach assumes that the PSF exhibits linear shift invariance throughout an en face plane, which was achieved by confining the beam scanning to the central region of the objective lens.

The proposed PSF estimation method can also be combined with other deconvolution algorithms, such as Wiener filtering, for PSF estimation and image recovery. Furthermore, the proposed PSF estimation and image deconvolution method is not limited to OCT and may be useful for de-blurring defocused images in other confocal optical systems.
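As one concrete alternative, a frequency-domain Wiener deconvolution can reuse an estimated Gaussian PSF directly. The sketch below is illustrative rather than the paper's implementation; the spot size, image size, and regularization constant `k` are assumed values that would normally be tuned to the noise level.

```python
import numpy as np
from scipy.signal import fftconvolve

def pad_psf(psf, shape):
    """Zero-pad the PSF to the image shape and roll its center to the
    origin so the resulting filter introduces no spatial shift."""
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    return np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                   axis=(0, 1))

def wiener_deconvolve(image, psf, k=1e-3):
    """Deblur `image` with the known `psf`; `k` regularizes against noise."""
    H = np.fft.fft2(pad_psf(psf, image.shape))
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + k)  # Wiener transfer function
    return np.real(np.fft.ifft2(W * G))

# Demo: blur a single point scatterer with a Gaussian PSF and recover it.
ax = np.arange(15) - 7
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-2.0 * (xx**2 + yy**2) / 4.0**2)
psf /= psf.sum()

obj = np.zeros((64, 64))
obj[32, 32] = 1.0
blurred = fftconvolve(obj, psf, mode="same")
restored = wiener_deconvolve(blurred, psf)

r, c = np.unravel_index(np.argmax(restored), restored.shape)
print(int(r), int(c))  # the restored peak returns to the scatterer location
```

Unlike Richardson-Lucy, this is a single non-iterative filtering step, which is why it pairs naturally with a PSF search loop that must evaluate many candidate spot sizes.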

Acknowledgments

This work was supported in part by research grants from the National Institutes of Health (NIH) (R01HL093140 and R01HL093140S).

References and links

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).

2. P. H. Tomlins and R. K. Wang, “Theory, developments and applications of optical coherence tomography,” J. Phys. D Appl. Phys. 38(15), 2519–2535 (2005).

3. F. Lexer, C. K. Hitzenberger, W. Drexler, S. Molebny, H. Sattmann, M. Sticker, and A. F. Fercher, “Dynamic coherent focus of OCT with depth-independent transversal resolution,” J. Mod. Opt. 46, 541–553 (1999).

4. B. Qi, A. O. Himmer, L. M. Gordon, X. D. Yang, L. D. Dickensheets, and I. A. Vitkin, “Dynamic focus control in high-speed optical coherence tomography based on a micro-electromechanical mirror,” Opt. Commun. 232(1-6), 123–128 (2004).

5. A. Divetia, T.-H. Hsieh, J. Zhang, Z. Chen, M. Bachman, and G.-P. Li, “Dynamically focused optical coherence tomography for endoscopic applications,” Appl. Phys. Lett. 86(10), 103902 (2005).

6. T. Xie, S. Guo, Z. Chen, D. Mukai, and M. Brenner, “GRIN lens rod based probe for endoscopic spectral domain optical coherence tomography with fast dynamic focus tracking,” Opt. Express 14(8), 3238–3246 (2006).

7. Z. H. Ding, H. W. Ren, Y. H. Zhao, J. S. Nelson, and Z. P. Chen, “High-resolution optical coherence tomography over a large depth range with an axicon lens,” Opt. Lett. 27(4), 243–245 (2002).

8. J. Holmes and S. Hattersley, “Image blending and speckle noise reduction in multi-beam OCT,” Proc. SPIE 7168, 71681N (2009).

9. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007).

10. L. Yu, B. Rao, J. Zhang, J. Su, Q. Wang, S. Guo, and Z. Chen, “Improved lateral resolution in optical coherence tomography by digital focusing using two-dimensional numerical diffraction method,” Opt. Express 15(12), 7634–7641 (2007).

11. Y. Yasuno, J. I. Sugisaka, Y. Sando, Y. Nakamura, S. Makita, M. Itoh, and T. Yatagai, “Non-iterative numerical method for laterally superresolving Fourier domain optical coherence tomography,” Opt. Express 14(3), 1006–1020 (2006).

12. M. D. Kulkarni, C. W. Thomas, and J. A. Izatt, “Image enhancement in optical coherence tomography using deconvolution,” Electron. Lett. 33(16), 1365–1367 (1997).

13. R. K. Wang, “Resolution improved optical coherence-gating tomography for imaging biological tissue,” J. Mod. Opt. 46, 1905–1913 (1999).

14. T. S. Ralston, D. L. Marks, F. Kamalabadi, and S. A. Boppart, “Deconvolution methods for mitigation of transverse blurring in optical coherence tomography,” IEEE Trans. Image Process. 14(9), 1254–1264 (2005).

15. Y. Liu, Y. Liang, G. Mu, and X. Zhu, “Deconvolution methods for image deblurring in optical coherence tomography,” J. Opt. Soc. Am. A 26(1), 72–77 (2009).

16. J. M. Schmitt, “Restoration of optical coherence images of living tissue using the CLEAN algorithm,” J. Biomed. Opt. 3(1), 66–75 (1998).

17. J. P. Rolland, P. Meemon, S. Murali, K. P. Thompson, and K. S. Lee, “Gabor-based fusion technique for optical coherence microscopy,” Opt. Express 18(4), 3632–3642 (2010).

18. P. H. Tomlins, P. Woolliams, M. Tedaldi, A. Beaumont, and C. Hart, “Measurement of the three-dimensional point-spread function in an optical coherence tomography imaging system,” Proc. SPIE 6847, 68472Q (2008).

19. P. H. Tomlins, R. A. Ferguson, C. Hart, and P. D. Woolliams, “Point-spread function phantoms for optical coherence tomography,” NPL Report OP 2 (National Physical Laboratory, 2009).

20. P. D. Woolliams, R. A. Ferguson, C. Hart, A. Grimwood, and P. H. Tomlins, “Spatially deconvolved optical coherence tomography,” Appl. Opt. 49(11), 2014–2021 (2010).

21. P. H. Tomlins, G. N. Smith, P. D. Woolliams, J. Rasakanthan, and K. Sugden, “Femtosecond laser micro-inscription of optical coherence tomography resolution test artifacts,” Biomed. Opt. Express 2(5), 1319–1327 (2011).

22. A. Agrawal, T. J. Pfefer, N. Gilani, and R. Drezek, “Three-dimensional characterization of optical coherence tomography point spread functions with a nanoparticle-embedded phantom,” Opt. Lett. 35(13), 2269–2271 (2010).

23. R. K. Wang and Z. Ma, “Real-time flow imaging by removing texture pattern artifacts in spectral-domain optical Doppler tomography,” Opt. Lett. 31(20), 3001–3003 (2006).

24. Z. Zhi, Y. Jung, Y. Jia, L. An, and R. K. Wang, “Highly sensitive imaging of renal microcirculation in vivo using ultrahigh sensitive optical microangiography,” Biomed. Opt. Express 2(5), 1059–1068 (2011).
