Compressive sensing with dispersion compensation on non-linear wavenumber sampled spectral domain optical coherence tomography

Open Access

Abstract

We propose a novel compressive sensing (CS) method for spectral domain optical coherence tomography (SDOCT). By replacing the widely used uniform discrete Fourier transform (UDFT) matrix with a new sensing matrix, a modification of the non-uniform discrete Fourier transform (NUDFT) matrix, it is shown that undersampled non-linear wavenumber spectral data can be used directly in the CS reconstruction. Thus k-space grid filling and k-linear mask calibration, which were proposed in previous studies of CS in SDOCT (CS-SDOCT) to obtain linear wavenumber sampling from the non-linear wavenumber interferometric spectra, are no longer needed. The NUDFT matrix is modified to promote the sparsity of the reconstructed A-scans by making them symmetric while preserving the values of the desired half. In addition, we show that dispersion compensation can be implemented by multiplying the frequency-dependent correcting phase directly with the real spectra, eliminating the need to construct the complex component of the real spectra. This enables the incorporation of dispersion compensation into the CS reconstruction by adding the correcting term to the modified NUDFT matrix. With this new sensing matrix, A-scans with dispersion compensation can be reconstructed from undersampled non-linear wavenumber spectral data by CS reconstruction. Experimental results show that the proposed method can achieve high quality imaging with dispersion compensation.

© 2013 Optical Society of America

1. Introduction

Optical coherence tomography (OCT) is widely used as a routine medical diagnosis and screening tool in many clinical applications [1, 2]. Over the past decade, Fourier domain OCT (FDOCT) has shown superior sensitivity and imaging speed and has been replacing conventional time domain OCT (TDOCT) [3, 4].

Currently, FDOCT image generation algorithms require sampling in the spectral domain at or beyond the Nyquist rate, which results in a high sampling rate for images requiring both large depth and high axial resolution. In spectral domain OCT (SDOCT), this requires a high-resolution spectrometer with a large linear array camera, while a laser source with a high digitizer rate is necessary in swept source OCT (SSOCT). Such CCD/CMOS cameras and laser sources are usually expensive, and the large number of samples significantly increases the acquisition time. In clinical applications of FDOCT where high speed is desired, longer acquisition time makes the imaging susceptible to unavoidable motion artifacts.

Recently, compressive sensing (CS) has been studied extensively since it requires only a portion of the whole data for image reconstruction that satisfies image quality criteria [5, 6]. If the signal has a sparse representation and the sensing matrix satisfies the restricted isometry property (RIP) [7], CS can obtain exact or accurate reconstruction from highly undersampled data. Applications of CS in FDOCT (CS-FDOCT) have been proposed [8–16] and high quality imaging has been achieved with a significantly reduced amount of data compared to the Nyquist rate requirement.

Previous works on CS-FDOCT require linear wavenumber k-space undersampling because a uniform discrete Fourier transform (UDFT) matrix is used as the sensing matrix. This is inherited from conventional FDOCT image generation algorithms. However, in most FDOCT systems, the spectra are linear in wavelength and non-linear in wavenumber. Two methods have been proposed to obtain undersampled linear wavenumber data from the non-linear wavenumber spectra in SDOCT. In [9], a k-space grid filling method is used to remap the undersampled non-linear wavenumber data to linear wavenumber pixels according to a pre-calibrated functional dependency of wavenumber on pixel index. This method requires spectral calibration with numerical interpolation and is itself complicated and time-consuming. Another method uses a pre-calibrated k-linear random mask, which enables obtaining a linear wavenumber subset by randomly undersampling directly from the non-linear wavenumber spectra [14]. The random mask is generated from the indices of the maximum and minimum points of the spectra obtained by placing a single reflector in the sample arm. This method still needs pre-calibration. Also, even if only a slight change in the sampling rate is desired, the whole calibration process needs to be repeated. Besides, its sampling rate has an upper bound because of the non-uniform nature of the wavenumber sampling.

Several approaches have been proposed to reconstruct OCT images from the non-linear wavenumber whole spectra [17–23]. The non-uniform discrete or fast Fourier transform (NUDFT/NUFFT) is easy to implement and results in high quality images [20–23]. Applications of CS with the NUDFT/NUFFT matrix as the sensing matrix have also been studied, mainly for non-Cartesian sampling in magnetic resonance imaging (MRI) [24–27], which use the non-linear wavenumber spectral data directly. However, as will be shown in Section 3.1, CS with the undersampled NUDFT/NUFFT matrix as the sensing matrix cannot be applied directly to the non-linear wavenumber undersampling of FDOCT spectra because the reconstructed A-scan will have less sparsity in the spatial domain, which requires much more k-space sampling. Here, a signal is considered sparse when most of the coefficients needed to represent it are close to 0. In this paper, we modify the NUDFT matrix to promote the sparsity of the A-scans by making them symmetric while preserving the intensity of the desired part of the A-scans. Therefore, the modified NUDFT matrix can be used as the sensing matrix in CS reconstruction on the non-linear wavenumber sampling of the FDOCT signal.

Dispersion in FDOCT introduces a frequency-dependent phase to the Fourier components which degrades the axial resolution and reduces sensitivity [1]. Dispersion compensation methods have been successfully implemented in both hardware and software [28–31]. One widely used method, proposed in [28], first resamples the non-linear wavenumber spectra with numerical interpolation; then constructs the complex representation of the real spectra with the Hilbert transform; and finally corrects the dispersion phase of the linear wavenumber complex signal. However, dispersion compensation has never been discussed in the context of CS reconstruction of the FDOCT signal. In this paper, we show that dispersion compensation can be implemented by multiplying the dispersion correcting term directly with the non-linear wavenumber real spectra. This enables the incorporation of dispersion compensation into CS reconstruction by adding the dispersion correcting term to the modified NUDFT matrix. Dispersion compensation then becomes a by-product of CS reconstruction.

The main focus of this paper is the study of the novel sensing matrix (the MNUDFT matrix in Eq. (13)) with which a dispersion compensated A-scan can be reconstructed from undersampled non-linear wavenumber spectral data. The organization of this paper is as follows: Section 2 demonstrates the mathematical background of UDFT, NUDFT and CS in FDOCT; CS with the modified NUDFT matrix is proposed in Section 3; in Section 4, dispersion compensation is incorporated into CS-FDOCT on the non-linear wavenumber undersampling; experimental results on SDOCT images are shown in Section 5, with discussion in Section 6 and conclusions in Section 7.

2. Mathematical background

2.1. Uniform discrete Fourier transform

In most FDOCT systems, the spectra are non-linear in wavenumber, which requires pre-processing procedures such as numerical interpolation to convert the dataset to be linear in wavenumber if UDFT is to be applied. Denote y = [y0, y1,..., yN−1]T as the non-linear wavenumber k-space spectra (real valued) and ŷ = [ŷ0, ŷ1,..., ŷN−1]T as the linear wavenumber spectral data (real valued) obtained from y. The superscript T denotes the transpose. N is the whole signal length (no undersampling). Denote x = [x0, x1,..., xN−1]T as the A-scan reconstructed from y and x̂ = [x̂0, x̂1,..., x̂N−1]T as the A-scan reconstructed from ŷ. k = [k0, k1,..., kN−1]T is the non-linear wavenumber corresponding to y while k̂ = [k̂0, k̂1,..., k̂N−1]T is the linear wavenumber corresponding to ŷ. Then x̂ can be obtained from ŷ through the inverse UDFT:

$$\hat{x}_n = \frac{1}{N}\sum_{m=0}^{N-1}\hat{y}_m \exp\!\left(i\,\frac{2\pi}{\Delta \hat{k}}(\hat{k}_m-\hat{k}_0)\, n\right) = \frac{1}{N}\sum_{m=0}^{N-1}\hat{y}_m \exp\!\left(i\,\hat{\omega}_m\, n\right) \tag{1}$$
for n ∈ [0,...,N − 1]. i is the imaginary unit. Δk̂ = k̂N−1 − k̂0 is the wavenumber range. ω̂m = 2π/N × m. The last equality in Eq. (1) holds because k̂ contains linear wavenumber: Δk̂ can be written as Δk̂ = N × δk̂ and k̂m − k̂0 = m × δk̂.
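As a concrete illustration, the following minimal Python/NumPy sketch implements Eq. (1) on a synthetic linear-in-wavenumber spectrum; the grid and the test spectrum are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Minimal sketch of Eq. (1): inverse UDFT of a real, linear-in-wavenumber spectrum.
N = 2048
m = np.arange(N)
omega_hat = 2 * np.pi / N * m                      # linear angular frequency, 2*pi/N * m
y_hat = 1.0 + 0.5 * np.cos(omega_hat * 150)        # toy real spectrum with one "reflector" at bin 150

n = np.arange(N)
# x_hat[n] = (1/N) * sum_m y_hat[m] * exp(i * omega_hat[m] * n)
x_hat = np.exp(1j * np.outer(n, omega_hat)) @ y_hat / N

# For a linear wavenumber grid this is just the inverse DFT.
assert np.allclose(x_hat, np.fft.ifft(y_hat))

a_scan = np.abs(x_hat[: N // 2])                   # only the first half is displayed
```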

2.2. Non-uniform discrete Fourier transform

It has been shown that the A-scan (x) can be reconstructed from the non-linear wavenumber spectra (y) through inverse NUDFT [20–23]. To avoid any numerical instability in computing the inverse NUDFT matrix, [23] uses the forward NUDFT matrix instead:

$$x_n = \frac{1}{N}\sum_{m=0}^{N-1} y_m \exp\!\left(i\,\frac{2\pi}{\Delta k}(k_m-k_0)\, n\right) = \frac{1}{N}\sum_{m=0}^{N-1} y_m \exp\!\left(i\,\omega_m\, n\right) \tag{2}$$
for n ∈ [0,...,N − 1]. Δk = kN−1 − k0 is the wavenumber range (Δk = Δk̂). ωm = 2π/Δk × (km − k0). In standard FDOCT where y and ŷ are real values, only the first halves of x and x̂ are displayed. In other words, n ∈ [0,...,N/2−1] in Eqs. (1) and (2). Compared to the interpolation-UDFT method, the inverse NUDFT method has several advantages: it is simple to implement and immune to the interpolation error which results in increased background noise and side-lobes, especially at larger image depth [23].
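A corresponding sketch of Eq. (2) is given below; the wavelength grid, and hence the non-linear wavenumber grid, is a stand-in for a real spectrometer calibration, and the spectrum is synthetic.

```python
import numpy as np

# Sketch of Eq. (2): A-scan from a non-linear-in-wavenumber spectrum via the
# (forward-form) NUDFT matrix.
N = 2048
lam = np.linspace(900e-9, 790e-9, N)               # pixels linear in wavelength
k = 2 * np.pi / lam                                 # therefore non-linear (and ascending) in wavenumber
omega = 2 * np.pi / (k[-1] - k[0]) * (k - k[0])     # omega_m = 2*pi/Delta_k * (k_m - k_0)

rng = np.random.default_rng(0)
y = rng.standard_normal(N)                          # stand-in for the real spectrum

n = np.arange(N)
# x[n] = (1/N) * sum_m y[m] * exp(i * omega[m] * n)
x = np.exp(1j * np.outer(n, omega)) @ y / N
a_scan = np.abs(x[: N // 2])                        # only the first half is displayed
```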

In the rest of this paper, if not specified, inverse NUDFT refers to transformation with the inverse NUDFT matrix which has the same form as the forward NUDFT matrix in Eq. (2). Constructing a new sensing matrix by starting from the forward NUDFT matrix, instead of the strict inverse NUDFT matrix, will be discussed in Section 6.

2.3. Compressive sensing in FDOCT

In CS-FDOCT, an A-scan can be reconstructed from undersampled spectral data by solving the following optimization problem, provided it is sparse in some domain:

$$\underset{g}{\operatorname{minimize}}\ \|Wg\|_1 \quad \text{s.t.} \quad \|F_u g - z_u\|_2 \leq \varepsilon \tag{3}$$
where g is the desired A-scan signal in spatial domain. W is the sparsifying operator which will transform g to a sparse representation, such as the wavelet transform matrix. If g is sparse in spatial domain, W shall be the identity matrix. Fu is the undersampled sensing matrix and zu is the undersampled k-space data. ε controls the fidelity of the reconstruction to the sampled data or, equivalently, it reflects the noise level of zu; ε ≈ 0 for noise-free data.

If Fu is the undersampled UDFT matrix, zu has to be linear wavenumber sampling which is the case in traditional CS-FDOCT. However, as is mentioned in Section 1, acquisition of linear wavenumber sampling from the non-linear wavenumber spectra of FDOCT requires either k-space grid filling with spectral calibration or k-linear random mask calibration.

If the undersampled NUDFT matrix is used as the sensing matrix, zu is no longer required to be linear in wavenumber, and the non-linear wavenumber spectra of FDOCT can be undersampled directly at an arbitrary sampling rate and used in CS reconstruction.
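As a concrete illustration of Eq. (3) with the undersampled NUDFT matrix, the sketch below forms Fu by keeping a random subset of rows of the forward NUDFT matrix and recovers a synthetic sparse A-scan with a simple ISTA iteration on the unconstrained (lasso) surrogate of Eq. (3). The paper itself uses SPGL1 (see Section 5); the stand-in solver, the sizes, the grid, and the test signal here are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch: random row undersampling of the forward NUDFT matrix as F_u,
# then ISTA on the lasso surrogate of Eq. (3) with W = identity (sparse in the
# spatial domain). Not the SPGL1 pipeline used in the paper.
rng = np.random.default_rng(1)
N = 256
k = 2 * np.pi / np.linspace(900e-9, 790e-9, N)          # toy non-linear wavenumber grid
omega = 2 * np.pi / (k[-1] - k[0]) * (k - k[0])
n = np.arange(N)

F = np.exp(-1j * np.outer(omega, n))                    # forward NUDFT: spatial domain -> k-space
g_true = np.zeros(N)
g_true[[20, 55, 90]] = [1.0, 0.6, 0.3]                  # sparse synthetic A-scan
z = F @ g_true                                          # full k-space data

keep = np.sort(rng.choice(N, size=N // 3, replace=False))   # ~33% random undersampling
F_u, z_u = F[keep], z[keep]

t = 1.0 / np.linalg.norm(F_u, 2) ** 2                   # step size 1/L
lam = 0.05
g = np.zeros(N, dtype=complex)
for _ in range(500):                                    # ISTA: gradient step + soft threshold
    r = g + t * (F_u.conj().T @ (z_u - F_u @ g))
    g = r * np.maximum(1.0 - t * lam / np.maximum(np.abs(r), 1e-12), 0.0)

print(np.flatnonzero(np.abs(g) > 0.1))                  # ideally recovers the support {20, 55, 90}
```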

3. CS with the modified NUDFT matrix on the non-linear wavenumber sampling

3.1. A-scan sparsity of FDOCT using UDFT and NUDFT

CS reconstruction on the undersampled non-linear wavenumber spectral data with the NUDFT matrix as the sensing matrix has already been successfully implemented, mainly on non-Cartesian sampling of MRI [24–27]. Traditional CS-FDOCT on the UDFT matrix and the linear wavenumber sampling also shows that A-scans of FDOCT are sparse enough and can be reconstructed from highly undersampled data [8–16].

However, applying the NUDFT matrix and non-linear wavenumber sampling to CS-FDOCT would be a problem since the sparsity of the reconstructed A-scan is much less than that of the A-scan obtained from traditional CS-FDOCT with the UDFT matrix and linear wavenumber sampling. According to CS theory, the sampling rate for an accurate reconstruction depends highly on the A-scan’s sparsity. Decreased A-scan sparsity will require much more sampling.

The sparsities of the A-scans with both methods are compared using the whole k-space spectra. If UDFT is applied, the A-scan (x̂) can be obtained from the linear wavenumber spectra (ŷ) through Eq. (1). Because the elements of ŷ are real, x̂ is symmetric:

$$\hat{x}_{N-n} = \frac{1}{N}\sum_{m=0}^{N-1}\hat{y}_m \exp\!\left(i\,\frac{2\pi}{N} m (N-n)\right) = \frac{1}{N}\sum_{m=0}^{N-1}\hat{y}_m \exp\!\left(-i\,\frac{2\pi}{N} m\, n\right) = (\hat{x}_n)^* \tag{4}$$
for n ∈ [1,...,N/2 − 1] because m is integer and exp(i2πm) = 1. (x̂n)* is the conjugate of x̂n. This conjugate property implies that the intensity of x̂N−n is the same as that of x̂n for n ∈ [1,...,N/2 − 1]. Figure 1(a) shows the plot of an A-scan (belonging to a mouse paw scanning) obtained by applying inverse UDFT to ŷ which is obtained from the non-linear wavenumber spectra (y) using cubic interpolation.


Fig. 1 Sparsity comparison of A-scans by applying (a) inverse UDFT to the linear wavenumber whole spectra (ŷ), (b) inverse NUDFT to the non-linear wavenumber whole spectra (y), (c) modified inverse NUDFT to the non-linear wavenumber whole spectra (y)


If NUDFT is applied, the A-scan (x) can be obtained from the non-linear wavenumber spectra (y) through Eq. (2). Because y is non-linear in wavenumber, x is no longer symmetric:

$$x_{N-n} = \frac{1}{N}\sum_{m=0}^{N-1} y_m \exp\!\left(i\,\omega_m (N-n)\right) = \frac{1}{N}\sum_{m=0}^{N-1} y_m \exp\!\left(-i\,\omega_m n\right)\exp\!\left(i\,\omega_m N\right) \neq (x_n)^* \tag{5}$$
for n ∈ [1,...,N/2 − 1] because usually exp(iωmN) ≠ ±1. Figure 1(b) shows the plot of the A-scan obtained by applying inverse NUDFT to y.

As can be seen in Fig. 1(a) and 1(b), the first halves of x̂ and x, i.e., [x̂0,..., x̂N/2−1] and [x0,..., xN/2−1], have similar sparsity. But the sparsity of the second half of x is much less than that of x̂. The decrease of sparsity implies that more sampling is required to reconstruct x than x̂ using CS.

Although a specific example from a mouse paw scanning is displayed in Fig. 1, according to experimental results with different samples, it is quite universal that x has less sparsity than x̂. The influence of the decreased sparsity usually cannot be eliminated by using larger ε in CS reconstruction: in Fig. 1(b), [x0.75N,..., xN−1] has higher intensity than most of [x0,..., xN/2−1], which implies that most of [x0,..., xN/2−1] will receive a bigger penalty than [x0.75N,..., xN−1] during the reconstruction and is more likely to be zero with larger ε. Although the second halves of both x and x̂ will not be displayed for the standard FDOCT system, their sparsity will greatly influence the reconstruction of the FDOCT signal with CS since Eq. (3) tries to find the solution that minimizes the l1-norm of the whole A-scan.

3.2. The modified NUDFT matrix

The sparsity of the second half of x has a strong influence on the CS reconstruction of the whole x. In standard FDOCT, however, we do not care about the intensity of this undisplayed half of x as long as its sparsity is high enough. It would be ideal if the second half of x were always zero; however, that is not generally true for arbitrary y.

It has already been shown in previous work of CS-SDOCT that x̂ can be accurately reconstructed with a relatively small sample size [9, 14, 16]. So our motivation is: since the first halves of x and x̂ have similar sparsity and x̂ can be reconstructed by CS, if the second half of x is symmetric to its first half, x and x̂ will have similar sparsity. Then x can be accurately reconstructed with almost the same amount of sampling required to reconstruct x̂ by CS. We find that this idea can be realized by modifying the NUDFT matrix.

The inverse NUDFT transformation in Eq. (2) can be written in matrix form as follows:

$$\begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_{N-1} \end{bmatrix} = \begin{bmatrix} h(0,0) & h(1,0) & \cdots & h(N-1,0) \\ \vdots & \vdots & & \vdots \\ h(0,N/2) & h(1,N/2) & \cdots & h(N-1,N/2) \\ \vdots & \vdots & & \vdots \\ h(0,N-1) & h(1,N-1) & \cdots & h(N-1,N-1) \end{bmatrix} \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{bmatrix} \tag{6}$$
where h(p, q) = exp(iωp × q).

It is easy to see that the values of the bottom half of x ([xN/2,..., xN−1]) are determined only by the bottom half of the inverse NUDFT matrix and y. Therefore, a symmetric reconstructed A-scan (x′) can be obtained by modifying the bottom half of the inverse NUDFT matrix:

$$\begin{bmatrix} x'_0 \\ x'_1 \\ \vdots \\ x'_{N-1} \end{bmatrix} = \begin{bmatrix} h(0,0) & h(1,0) & \cdots & h(N-1,0) \\ h(0,1) & h(1,1) & \cdots & h(N-1,1) \\ \vdots & \vdots & & \vdots \\ h(0,N/2-1) & h(1,N/2-1) & \cdots & h(N-1,N/2-1) \\ h(0,N/2) & h(1,N/2) & \cdots & h(N-1,N/2) \\ \mathbf{h(0,N/2-1)^*} & \mathbf{h(1,N/2-1)^*} & \cdots & \mathbf{h(N-1,N/2-1)^*} \\ \vdots & \vdots & & \vdots \\ \mathbf{h(0,1)^*} & \mathbf{h(1,1)^*} & \cdots & \mathbf{h(N-1,1)^*} \end{bmatrix} \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{bmatrix} \tag{7}$$
where h(p, q)* = exp(−iωp × q) is the conjugate of h(p, q). Elements from the second row to the last row of the bottom half of the transformation matrix (rows in bold) are conjugate to those of the symmetric rows in the top half, e.g., the elements of the row corresponding to x′N/2+1 are conjugate to the elements of the row corresponding to x′N/2−1. It can be proved that x′ is symmetric:
$$x'_{N-n} = \frac{1}{N}\sum_{m=0}^{N-1} y_m\, h(m,n)^* = \left(\frac{1}{N}\sum_{m=0}^{N-1} y_m\, h(m,n)\right)^{\!*} = (x'_n)^* \tag{8}$$
which implies that x′N−n and x′n have the same intensity for n ∈ [1,...,N/2 − 1]. Thus x′ is symmetric. It is also easy to see that [x′0, x′1,..., x′N/2−1] is the same as [x0, x1,..., xN/2−1]. Thus, the proposed modification preserves the intensity of the desired part of the A-scan. The first and (N/2 + 1)th rows of the modified inverse NUDFT matrix are unchanged, which makes x′ and x̂ have the same symmetric structure. Figure 1(c) shows the plot of x′, whose sparsity is similar to that of x̂ and much higher than that of x. The proposed method shows good performance at promoting the sparsity of the A-scan.

The modified inverse NUDFT matrix in Eq. (7), however, cannot be used directly as the sensing matrix in CS-FDOCT because the sensing matrix should transform data from the spatial domain to k-space according to Eq. (3). Thus its inverse matrix, i.e., the modified NUDFT matrix, is needed. Based on the fact that the inverse NUDFT matrix in Eq. (6) is indeed the forward NUDFT matrix instead of the strict inverse NUDFT matrix, the modified NUDFT matrix can be easily obtained by taking the conjugate transpose of the transformation matrix in Eq. (7).

Using the undersampled modified NUDFT matrix as the sensing matrix, an A-scan can be reconstructed with high accuracy by CS from undersampled non-linear wavenumber spectral data.
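The construction of Section 3.2 can be summarized in a few lines of NumPy. The sketch below builds the modified inverse NUDFT matrix of Eq. (7), numerically checks the conjugate symmetry of Eq. (8) and the preservation of the displayed half, and forms the MNUDFT sensing matrix as the conjugate transpose; the grid and the spectrum are illustrative stand-ins.

```python
import numpy as np

# Sketch of Eqs. (7)-(8) and of the MNUDFT sensing matrix. Grid/spectrum are toys.
rng = np.random.default_rng(2)
N = 512
k = 2 * np.pi / np.linspace(900e-9, 790e-9, N)          # toy non-linear wavenumber calibration
omega = 2 * np.pi / (k[-1] - k[0]) * (k - k[0])
n = np.arange(N)

H = np.exp(1j * np.outer(n, omega)) / N                 # inverse NUDFT matrix (row n, column m)
H_mod = H.copy()
H_mod[N // 2 + 1:] = np.conj(H[1:N // 2][::-1])         # Eq. (7): mirror the bottom rows as conjugates

y = rng.standard_normal(N)                              # real non-linear-k spectrum (stand-in)
x = H @ y                                               # plain inverse NUDFT A-scan
x_mod = H_mod @ y                                       # modified, symmetric A-scan

assert np.allclose(x_mod[: N // 2], x[: N // 2])                        # displayed half preserved
assert np.allclose(x_mod[N // 2 + 1:], np.conj(x_mod[1:N // 2][::-1]))  # Eq. (8) symmetry

F_mnudft = H_mod.conj().T        # MNUDFT sensing matrix; undersample its rows for CS in Eq. (3)
```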

4. CS with dispersion compensation

4.1. Dispersion compensation with NUDFT on non-linear wavenumber real spectra

Dispersion degrades FDOCT image quality by introducing a frequency-dependent phase term to the Fourier components of the signal. One widely used method to compensate the dispersion is proposed in [28]: it first resamples the non-linear wavenumber spectra with numerical interpolation; generates the imaginary part of the signal with the Hilbert transform; then corrects the phase of the linear wavenumber complex signal to compensate dispersion; and finally applies UDFT to the corrected spectrum to obtain the A-scan. However, this method cannot be applied to undersampled spectral data because both the interpolation and the Hilbert transform require the whole spectra, which makes it very difficult to obtain undersampled dispersion compensated linear wavenumber spectral data. Also, this method cannot be applied as post-processing to the reconstructed A-scan with dispersion because it corrects the dispersion in k-space, not in the spatial domain. Thus it is difficult to incorporate this widely used dispersion compensation method into CS-FDOCT directly.

Therefore, we propose and validate a dispersion compensation method that first multiplies the correcting phase directly with the non-linear wavenumber real spectra and then applies the NUDFT to the corrected spectrum to reconstruct the A-scan. This method eliminates the need for interpolation and the Hilbert transform, which are used to transform the spectra to be first linear in wavenumber and then complex. It will be shown in Section 4.3 that, after transformation, the proposed dispersion compensation method has a form similar to that of the inverse NUDFT-based FDOCT image generation algorithm in Eq. (6). The transformation matrix is then modified in the same way as in Section 3.2 to promote the A-scan's sparsity, and its undersampled matrix can be used in CS reconstruction on the non-linear wavenumber real sampling to obtain a dispersion compensated A-scan.

Multiplying the correcting phase directly with the non-linear wavenumber real spectra can be written as:

$$\begin{aligned} I_{\mathrm{comp}}(\omega_m) &= \mathrm{Re}\left\{2\sum_n \sqrt{S_n(\omega_m) S_r(\omega_m)}\,\exp\!\big(i[\omega_m\tau_n+\Phi(\omega_m)]\big)\right\}\times \exp\!\big(i\,\Phi(\omega_m)\big)\\ &= \left(\sum_n A_n(\omega_m)\Big(\exp\!\big(i[\omega_m\tau_n+\Phi(\omega_m)]\big)+\exp\!\big(-i[\omega_m\tau_n+\Phi(\omega_m)]\big)\Big)\right)\exp\!\big(i\,\Phi(\omega_m)\big)\\ &= \underbrace{\sum_n A_n(\omega_m)\exp\!\big(-i\,\omega_m\tau_n\big)}_{A_1} + \underbrace{\sum_n A_n(\omega_m)\exp\!\big(i[\omega_m\tau_n+2\Phi(\omega_m)]\big)}_{A_2} \end{aligned} \tag{9}$$
for m ∈ [0,...,N − 1]. Icomp(ω) is the corrected spectrum. Sn(ω) is the intensity of light reflected from the n-th layer in the sample; Sr(ω) is the intensity of light reflected from the reference arm; τn is the optical group delay of the n-th reflection relative to the reference light path. Φ(ω) is the dispersion term. An(ωm) substitutes √(Sn(ωm)Sr(ωm)) from the second line onward. The first factor after the first equal sign is the interferometric signal with some degree of dispersion (i.e., y in the previous sections), while the second factor is the dispersion compensation term. Compensating second and third order dispersion is usually sufficient, where Φ(ω) = −a2(ω − ω*)² − a3(ω − ω*)³. ω* is the central angular frequency; a2 and a3 are constants. In the last line, the term A1 is the desired dispersion compensated spectrum. However, the proposed method also introduces an undesired term in Icomp(ω): A2.

The inverse NUDFT is then applied to the corrected spectrum Icomp(ω). Denote the results of applying the inverse NUDFT to A1 and A2 as T1 and T2, respectively. The resulting A-scan is the sum of T1 and T2. In a standard FDOCT system, only the first half of the A-scan will be shown. The first half of T1 is the desired dispersion compensated A-scan with better resolution, while the first half of T2 degrades the A-scan's resolution. However, as will be shown below, the intensity of the first half of T2 is relatively small compared to that of the first half of T1. In other words, T1 dominates the displayed half of the A-scan and the resolution degradation caused by T2 has little effect.
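A minimal sketch of this compensation step follows: a toy real interferogram carries the dispersion phase Φ, is multiplied by the correcting phase as in Eq. (9) above, and is then passed through the inverse NUDFT of Eq. (2). The wavelength grid, reflector depth, and a2/a3 values are illustrative assumptions, not the calibrated system values.

```python
import numpy as np

# Sketch of Section 4.1 on a toy single-reflector interferogram: apply the
# correcting phase of Eq. (9), then the inverse NUDFT of Eq. (2).
c = 2.998e8
N = 2048
lam = np.linspace(900e-9, 790e-9, N)                 # pixels linear in wavelength
k = 2 * np.pi / lam                                   # non-linear, ascending wavenumber
omega = 2 * np.pi / (k[-1] - k[0]) * (k - k[0])       # NUDFT phase variable of Eq. (2)

w_opt = 2 * np.pi * c / lam                           # optical angular frequency per pixel
w_c = 2 * np.pi * c / 845e-9                          # central frequency (845 nm source)
a2, a3 = 157e-30, 170e-45                             # 157 fs^2, 170 fs^3 (illustrative)
Phi = -a2 * (w_opt - w_c) ** 2 - a3 * (w_opt - w_c) ** 3

tau1 = 300                                            # toy reflector depth bin
y = np.cos(omega * tau1 + Phi)                        # real spectrum carrying dispersion

I_comp = y * np.exp(1j * Phi)                         # Eq. (9): apply the correcting phase
n = np.arange(N)
x_c = np.exp(1j * np.outer(n, omega)) @ I_comp / N    # inverse NUDFT of the corrected spectrum
a_scan = np.abs(x_c[: N // 2])                        # compensated A-scan, sharp peak near bin 300
```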

4.2. Experimental validation

To illustrate this domination effect, a simulation with only one reflector (n = 1) is shown in Fig. 2, which plots the intensity ratio |T1(τ1)|/|T2(τ1)| for different reflector positions τ1 (from 1 to N/2−1). The simulation is done with different levels of dispersion (a2 ∈ {−500, −250, −100, 0, 100, 250, 500} fs2; a3 = 0) to demonstrate that the intensity of T1 is much higher than that of T2 regardless of the level of dispersion. ω is obtained from the SDOCT system used in the study. Sn and Sr are set to 1 since their values do not influence the plot.
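The sketch below reproduces this kind of simulation numerically: it builds A1 and A2 for a single reflector as in Eq. (9), applies the inverse NUDFT to each, and evaluates the ratio |T1(τ1)|/|T2(τ1)| in dB. The wavenumber grid here is an assumed stand-in for the spectrometer calibration used in the paper.

```python
import numpy as np

# Hedged sketch of the Fig. 2 simulation: one reflector at depth bin tau1.
c = 2.998e8
N = 2048
lam = np.linspace(900e-9, 790e-9, N)
k = 2 * np.pi / lam
omega = 2 * np.pi / (k[-1] - k[0]) * (k - k[0])
w_opt = 2 * np.pi * c / lam
w_c = 2 * np.pi * c / 845e-9
n = np.arange(N)
inv_nudft = np.exp(1j * np.outer(n, omega)) / N       # inverse NUDFT of Eq. (2)

def ratio_db(tau1, a2_fs2):
    Phi = -(a2_fs2 * 1e-30) * (w_opt - w_c) ** 2      # a3 = 0, as in Fig. 2
    A1 = np.exp(-1j * omega * tau1)                   # desired term, S_n = S_r = 1
    A2 = np.exp(1j * (omega * tau1 + 2 * Phi))        # undesired term
    T1, T2 = inv_nudft @ A1, inv_nudft @ A2
    return 20 * np.log10(np.abs(T1[tau1]) / np.abs(T2[tau1]))

for a2 in (-500, -250, -100, 0, 100, 250, 500):       # fs^2 values used in Fig. 2
    print(a2, round(ratio_db(400, a2), 1))            # ratio at one example reflector position
```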


Fig. 2 Plot of |T1(τ1)|/|T2(τ1)| versus different reflector position τ1. Simulation is done with different level of dispersion (a2 ∈ {−500, −250, −100, 0, 100, 250, 500} fs2; a3 = 0).


As can be seen in Fig. 2, most values of |T1(τ1)|/|T2(τ1)| are above 30 dB when the reflector is in the displayed half of the A-scan. T1 dominates T2 under various dispersion conditions. The curve without dispersion shows a higher ratio, as expected.

The sensitivity roll-off of the proposed dispersion compensation method on the non-linear wavenumber real spectra (y) is compared with those obtained by applying (1) the inverse UDFT to the linear wavenumber real spectra (ŷ) without dispersion compensation; (2) the inverse NUDFT to y without dispersion compensation; (3) the dispersion compensation method in [28]. Here 124 A-scans with 2048 pixels each were averaged for each position. A 2 cm water cell was inserted into the reference arm of the interferometer to introduce a large dispersion mismatch (both y and ŷ contain dispersion). The dispersion coefficients were empirically set as a2 = 460 fs2 and a3 = 134 fs3. The results are shown in Fig. 3 (the intensity of each A-scan is normalized). Figure 3(b) shows a flatter sensitivity roll-off with smaller background noise and side-lobes at larger image depth than Fig. 3(a), which is consistent with the observation in [20, 23]. Peaks in Figs. 3(c) and 3(d) are much sharper than those in Figs. 3(a) and 3(b), indicating a FWHM of 4 μm against 22 μm. Figure 3(d) also shows better sensitivity than Fig. 3(c) as well as less background noise and fewer side-lobes. Thus, the proposed method shows good potential at compensating dispersion and outperforms the dispersion compensation method in [28].


Fig. 3 Sensitivity roll-off of systems applying (a) inverse UDFT to the linear wavenumber real spectra without dispersion compensation, (b) inverse NUDFT to the non-linear wavenumber real spectra without dispersion compensation, (c) dispersion compensation method in [28], (d) proposed dispersion compensation method on the non-linear wavenumber real spectra. A 2cm water cell is inserted to introduce large dispersion mismatch.


The above two calculations show that the influence of the undesired term T2 is relatively small compared with T1 on the displayed half of the reconstructed A-scan and that multiplying the correcting term directly with the non-linear wavenumber real spectra can achieve a satisfying dispersion compensation effect.

4.3. Incorporation of dispersion compensation to CS-FDOCT

The dispersion compensation method proposed in Section 4.1 can be written in matrix form as:

$$\begin{bmatrix} x^c_0 \\ x^c_1 \\ \vdots \\ x^c_{N-1} \end{bmatrix} = \begin{bmatrix} h(0,0) & h(1,0) & \cdots & h(N-1,0) \\ \vdots & \vdots & & \vdots \\ h(0,N/2) & h(1,N/2) & \cdots & h(N-1,N/2) \\ \vdots & \vdots & & \vdots \\ h(0,N-1) & h(1,N-1) & \cdots & h(N-1,N-1) \end{bmatrix}\left(\begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{bmatrix} .\!* \begin{bmatrix} e^{i\Phi(\omega_0)} \\ e^{i\Phi(\omega_1)} \\ \vdots \\ e^{i\Phi(\omega_{N-1})} \end{bmatrix}\right) \tag{10}$$
where xc = [x0c, x1c,..., xN−1c]T is the dispersion-compensated A-scan; “.*” stands for component-wise multiplication. The transformation matrix is the inverse NUDFT matrix.

Denote exp(iΦ(ωn)) as Φn for n ∈ [0, 1,...,N − 1], then Eq. (10) can be rewritten by incorporating the dispersion compensation term into the inverse NUDFT matrix:

$$\begin{bmatrix} x^c_0 \\ x^c_1 \\ \vdots \\ x^c_{N-1} \end{bmatrix} = \begin{bmatrix} h(0,0)\Phi_0 & h(1,0)\Phi_1 & \cdots & h(N-1,0)\Phi_{N-1} \\ \vdots & \vdots & & \vdots \\ h(0,N/2)\Phi_0 & h(1,N/2)\Phi_1 & \cdots & h(N-1,N/2)\Phi_{N-1} \\ \vdots & \vdots & & \vdots \\ h(0,N-1)\Phi_0 & h(1,N-1)\Phi_1 & \cdots & h(N-1,N-1)\Phi_{N-1} \end{bmatrix} \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{bmatrix} \tag{11}$$
where h(p, q)Φn = exp(iωp × q) × exp(iΦ(ωn)). Although the transformation matrix in Eq. (11) does not have the same form as the inverse NUDFT matrix in Eq. (2), it could still be considered an inverse NUDFT matrix because the phases in each row are non-linear in wavenumber.

Eq. (11) gives a transformation between the dispersion-compensated A-scan and the non-linear wavenumber real spectra, building the foundation of the CS reconstruction. However, it cannot be used directly in CS-FDOCT because of the decreased A-scan sparsity problem mentioned in Section 3.1. Thus the transformation matrix in Eq. (11) also needs modification to make the resulting A-scan symmetric for higher sparsity:

$$\begin{bmatrix} x^{\prime c}_0 \\ x^{\prime c}_1 \\ \vdots \\ x^{\prime c}_{N-1} \end{bmatrix} = \begin{bmatrix} h(0,0)\Phi_0 & h(1,0)\Phi_1 & \cdots & h(N-1,0)\Phi_{N-1} \\ h(0,1)\Phi_0 & h(1,1)\Phi_1 & \cdots & h(N-1,1)\Phi_{N-1} \\ \vdots & \vdots & & \vdots \\ h(0,N/2-1)\Phi_0 & h(1,N/2-1)\Phi_1 & \cdots & h(N-1,N/2-1)\Phi_{N-1} \\ h(0,N/2)\Phi_0 & h(1,N/2)\Phi_1 & \cdots & h(N-1,N/2)\Phi_{N-1} \\ (h(0,N/2-1)\Phi_0)^* & (h(1,N/2-1)\Phi_1)^* & \cdots & (h(N-1,N/2-1)\Phi_{N-1})^* \\ \vdots & \vdots & & \vdots \\ (h(0,1)\Phi_0)^* & (h(1,1)\Phi_1)^* & \cdots & (h(N-1,1)\Phi_{N-1})^* \end{bmatrix} \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{bmatrix} \tag{12}$$
The modification also preserves the value of the desired half of the A-scan: [x′0c, x′1c,..., x′N/2−1c]T = [x0c, x1c,..., xN/2−1c]T.

The sensing matrix required for CS reconstruction can be obtained in the same way as in Section 3.2: take the conjugate transpose of the transformation matrix in Eq. (12):

$$\begin{bmatrix} (h(0,0)\Phi_0)^* & \cdots & (h(0,N/2-1)\Phi_0)^* & \cdots & h(0,N/2-1)\Phi_0 & \cdots & h(0,1)\Phi_0 \\ \vdots & & \vdots & & \vdots & & \vdots \\ (h(N-1,0)\Phi_{N-1})^* & \cdots & (h(N-1,N/2-1)\Phi_{N-1})^* & \cdots & h(N-1,N/2-1)\Phi_{N-1} & \cdots & h(N-1,1)\Phi_{N-1} \end{bmatrix} \tag{13}$$
With the undersampled matrix of Eq. (13), a dispersion compensated A-scan can be reconstructed from undersampled non-linear wavenumber spectral data. It is worth mentioning that, with the proposed sensing matrix, dispersion compensation becomes a by-product of the CS reconstruction and no additional dispersion compensation procedure is needed.
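The steps of Eqs. (11)–(13) can be assembled in a few lines; the sketch below builds the dispersion-aware inverse NUDFT matrix, mirrors its bottom half, takes the conjugate transpose as the MNUDFT sensing matrix, and keeps a random subset of its rows. The grid and a2/a3 values are illustrative stand-ins for the calibrated system values.

```python
import numpy as np

# Sketch of Eqs. (11)-(13): dispersion-aware inverse NUDFT, mirrored version,
# and the MNUDFT sensing matrix as the conjugate transpose.
rng = np.random.default_rng(3)
c = 2.998e8
N = 1024
lam = np.linspace(900e-9, 790e-9, N)
k = 2 * np.pi / lam
omega = 2 * np.pi / (k[-1] - k[0]) * (k - k[0])
w_opt = 2 * np.pi * c / lam
w_c = 2 * np.pi * c / 845e-9
a2, a3 = 157e-30, 170e-45
Phi_n = np.exp(1j * (-a2 * (w_opt - w_c) ** 2 - a3 * (w_opt - w_c) ** 3))   # Phi_n factors

n = np.arange(N)
H = np.exp(1j * np.outer(n, omega)) * Phi_n[None, :] / N   # Eq. (11): column m scaled by Phi_m
H_mod = H.copy()
H_mod[N // 2 + 1:] = np.conj(H[1:N // 2][::-1])            # Eq. (12): mirror the bottom half

M = H_mod.conj().T                                          # Eq. (13): MNUDFT sensing matrix
keep = np.sort(rng.choice(N, size=N // 2, replace=False))   # e.g. 50% random undersampling
F_u = M[keep, :]                                            # plug into Eq. (3) as F_u
```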

5. Experimental results

To evaluate the effect of the proposed method, k-space data from an SDOCT system are used. The system uses a spectrometer having a 12-bit CMOS line scan camera (EM4, e2v, USA) with 2048 pixels at a 70 kHz line rate. A superluminescent laser diode (SLED) is used as the light source, which provides an output power of 10 mW and an effective bandwidth of 105 nm centered at 845 nm. The experimental axial resolution of the system is 4.0 μm in air while the transversal resolution is approximately 12 μm. All animal studies were conducted in accordance with the Johns Hopkins University Animal Care and Use Committee Guidelines. Spectral data were post-processed with MATLAB® R2012b on a desktop with an Intel® Core™ 2 Duo CPU (E8400, 3.0 GHz), 4 GB RAM, and the Windows® 7 64-bit operating system. The CS reconstruction algorithm is SPGL1 with default parameters [32, 33].

The proposed sensing matrix in Eq. (13) (denoted as the MNUDFT matrix) is applied to CS reconstruction on the undersampled non-linear wavenumber real spectra. For comparison purposes, the following results are also evaluated: 1) the original image obtained by applying NUDFT to 100% of the non-linear wavenumber real spectra; 2) the image reconstructed using CS on the undersampled non-linear wavenumber real spectra with the NUDFT matrix as the sensing matrix.

The undersampled non-linear wavenumber spectral data is obtained by applying a pseudo-random mask to the original spectra. Variable density random sampling [12] is used to generate this mask. This evaluation method is widely used in the studies of CS-SDOCT [9, 14, 16]. A CCD camera with randomly addressable pixels [9,3436] can be used to practically implement random undersampling of spectral data in an SDOCT system.
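For illustration, the sketch below shows one simple way to draw such a variable-density random mask, with the sampling probability highest near the center of the spectrum and tapering toward the edges; this is a hedged stand-in and not necessarily the exact density profile used in [12].

```python
import numpy as np

# Hedged sketch of a variable-density random mask over the spectral pixels.
def variable_density_mask(N, rate, power=2.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.linspace(-1.0, 1.0, N)
    pdf = (1.0 - np.abs(t)) ** power + 0.02        # denser near the spectrum center (assumption)
    pdf *= rate * N / pdf.sum()                    # scale so the expected count is ~ rate*N
    return rng.random(N) < np.clip(pdf, 0.0, 1.0)  # boolean keep/drop decision per pixel

mask = variable_density_mask(2048, rate=0.4)       # ~40% of the spectral pixels kept
keep = np.flatnonzero(mask)                        # indices used to undersample y and the sensing matrix
```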

The same undersampled data, reconstruction domain, and ε are used in the CS reconstructions of the same object. According to experimental results, the OCT signal is sparse in the spatial domain in most cases. However, it is usually sparser in the wavelet domain than in the spatial domain. The reconstruction domain is chosen to optimize the reconstruction result. Besides, the sampling rate is chosen to balance the size of the spectral data and the image quality, while ε is selected to balance the loss of useful information and the reduction of noise. The dispersion compensation parameters a2 and a3 are set empirically.

To carry out a quantitative assessment of the reconstruction results of the different methods, the local contrast and signal-to-noise ratio (SNR) are computed. Their definitions are as follows:

$$\text{local contrast} = \frac{\mu_o}{\mu_b} \tag{14}$$
$$\mathrm{SNR} = 20\times\log_{10}\!\left(\frac{\sqrt{\frac{1}{N_o}\sum_{(i,j)\in \text{object}} I(i,j)^2}}{\sqrt{\frac{1}{N_b}\sum_{(i,j)\in \text{background}} I(i,j)^2}}\right) \tag{15}$$
where No and Nb are the numbers of pixels in the selected object region and background region, respectively. As shown in Figs. 4, 5, and 6, the areas in the red rectangles are the selected object regions while the green rectangle areas are the selected background regions. They have the same size. I(i, j) is the intensity. μo and μb are the mean intensities of the object region and background region, respectively.
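A minimal sketch of these two metrics on rectangular regions of a linear-intensity B-scan follows; the rectangle coordinates are placeholders, not the actual regions marked in the figures, and the square roots follow Eq. (15) as reconstructed above.

```python
import numpy as np

# Minimal sketch of Eqs. (14)-(15) on placeholder object/background rectangles.
def local_contrast(img, obj, bg):
    (r0, r1, c0, c1), (s0, s1, d0, d1) = obj, bg
    return img[r0:r1, c0:c1].mean() / img[s0:s1, d0:d1].mean()        # Eq. (14)

def snr_db(img, obj, bg):
    (r0, r1, c0, c1), (s0, s1, d0, d1) = obj, bg
    rms_o = np.sqrt(np.mean(img[r0:r1, c0:c1] ** 2))
    rms_b = np.sqrt(np.mean(img[s0:s1, d0:d1] ** 2))
    return 20 * np.log10(rms_o / rms_b)                               # Eq. (15)

# Example usage with same-size regions, as in the paper (coordinates are hypothetical):
# lc  = local_contrast(b_scan, obj=(100, 150, 200, 300), bg=(350, 400, 200, 300))
# snr = snr_db(b_scan, obj=(100, 150, 200, 300), bg=(350, 400, 200, 300))
```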


Fig. 4 B-scans of a mouse paw. (a) original image obtained by applying NUDFT to 100% of the acquired non-linear wavenumber spectra; (b) CS reconstruction result with the NUDFT matrix from 40% of the acquired non-linear wavenumber spectra; (c) CS reconstruction result with the MNUDFT matrix from 40% of the acquired non-linear wavenumber spectra; The scale bars represent 100μm. Image size in pixel is 450 × 1000



Fig. 5 B-scans of a mouse cornea; (a) original image obtained by applying NUDFT to 100% of the acquired non-linear wavenumber spectra; (b) CS reconstruction result with the NUDFT matrix from 37.5% of the acquired non-linear wavenumber spectra; (c) CS reconstruction result with the MNUDFT matrix from 37.5% of the acquired non-linear wavenumber spectra; (d), (e) and (f) are zoom in of the cyan rectangle areas in (a), (b) and (c) respectively. The scale bars represent 100μm. Image size in pixel is 700 × 1000.



Fig. 6 B-scans of a polymer-layered phantom with 2.4cm water induced dispersion; (a) original image obtained by applying NUDFT to 100% of the acquired non-linear wavenumber spectra; (b) CS reconstruction result with the NUDFT matrix from 50% of the acquired non-linear wavenumber spectra; (c) CS reconstruction result with the MNUDFT matrix from 50% of the acquired non-linear wavenumber spectra; (d) image obtained by applying the forward MNUDFT matrix to 100% of the acquired non-linear wavenumber spectra. The scale bars represent 100μm. Image size in pixel is 450 × 1000.


The comparison is first implemented on the mouse paw scanning, which contains several layers. The CS reconstructions use 40% of the spectral data; W is the four-level Daubechies-4 wavelet transform matrix. The compensation parameters are set as a2 = 157 fs2 and a3 = 170 fs3. All images are shown in the same dynamic range. Figure 4(b) exhibits poor quality because of the decreased A-scan sparsity problem. It achieves accurate reconstruction for pixels with high intensity but loses information for pixels with relatively low intensity. The layer beneath the surface is difficult to see due to the CS reconstruction error, as pointed out by the arrows. Figure 4(c) has much better quality and is very close to Fig. 4(a), which uses a 100% sampling rate. Besides, Fig. 4(c) shows an obvious dispersion compensation effect with clear and thin tissue boundaries compared to Figs. 4(a) and 4(b). The overall contrast of Fig. 4(c) is better than that of Fig. 4(a) with a 100% sampling rate because CS is well known to be good at reducing noise [15, 37]. The local contrast and SNR of Figs. 4(a), 4(b) and 4(c) are listed in Table 1, which shows that CS reconstruction with the MNUDFT matrix obtains better image quality.


Table 1. Local contrast and SNR of the B-scans of mouse paw in Fig. 4

The mouse cornea images are shown in Fig. 5. The CS reconstructions use a 37.5% sampling rate. W is the identity matrix. a2 = 120 fs2 and a3 = 100 fs3. Figure 5(c) is very close to Fig. 5(a), while Fig. 5(b) shows obvious artifacts and information loss. Regions of interest (ROI) are extracted from the reconstructed images (cyan rectangles in Figs. 5(a), 5(b) and 5(c)). Figure 5(f) shows that CS with the MNUDFT matrix preserves almost all the structures in the original image, which uses a 100% sampling rate, while Fig. 5(e) is devoid of fine details. Table 2 lists the local contrast and SNR of Figs. 5(a), 5(b) and 5(c).


Table 2. Local contrast and SNR of the B-scans of mouse cornea in Fig. 5

To give an in-depth assessment of the dispersion compensation effect of CS with the MNUDFT matrix, a 2.4 cm water cell was placed in the reference arm when imaging a polymer-layered phantom to intentionally introduce a large dispersion mismatch between the two arms. The CS reconstructions use a 50% sampling rate. W is the four-level Daubechies-4 wavelet transform matrix. a2 = 575 fs2 and a3 = 295 fs3. The B-scan obtained by applying the forward MNUDFT matrix to 100% of the acquired spectra (Eq. (12)) is added in Fig. 6(d) to show that MNUDFT can also achieve obvious dispersion compensation when used in traditional image generation. Figure 6(a) shows much larger dispersion than the previous cases. Figure 6(b) is devoid of fine details and there are obvious artifacts in the area outside the phantom. The dispersion compensated images, Figs. 6(c) and 6(d), are much clearer than Fig. 6(a), especially near the upper surface. Both of them show obvious dispersion reduction, which validates the proposed method, while CS uses only 50% of the acquired spectra and achieves better image quality, as shown in Table 3.


Table 3. Local contrast and SNR of the B-scans of polymer-layered phantom in Fig. 6

6. Discussion

The MNUDFT matrix is created by “mirroring” its first half: setting the columns of the right half to be conjugate to the corresponding columns of the left half. This modification itself is not unique. Looking at the MNUDFT matrix in Eq. (13), one can easily create another sensing matrix by changing the order of the columns of its right half. Although the resulting A-scan will no longer be symmetric, this does not change the performance of the method since its l1-norm is unchanged and the displayed half of the A-scan is preserved.

Although, according to Eqs. (7) and (12), setting the entire bottom half of the transformation matrix to zero would maximize the sparsity of the undisplayed half of the A-scan, one cannot simply drop the information in this way since it would make the entire right half of the MNUDFT matrix zero. Denote the proposed MNUDFT matrix as H. It implies y = H x′, where y is the acquired spectra with non-linear wavenumber and x′ is the A-scan. If the right half of H is all zero, then no matter what value the bottom half of x′ takes (including the desired zero value), y = H x′ no longer holds if the first half of x′ is unchanged. Thus the basic principle is violated. In addition, the new sensing matrix should not have any two columns/rows the same, as is required by the standard CS theory [5, 6].

The construction of the MNUDFT matrix starts from the forward NUDFT matrix instead of the strict inverse NUDFT matrix. This substitution does not change the effect of the proposed method because the forward NUDFT matrix is only used to demonstrate what the A-scan would be after the modification. The usage of the forward NUDFT matrix to compute the A-scan has already been validated by the experiments in [20, 23]. The MNUDFT matrix can then be easily obtained because the modification is done on its conjugate transpose matrix. After all, the matrix that will be used in CS reconstruction as the sensing matrix is the MNUDFT matrix. The selected CS reconstruction algorithm relies on the sensing matrix and its conjugate transpose, not its inverse. We start from the forward NUDFT matrix instead of the strict inverse NUDFT matrix because it helps to demonstrate the motivation of the proposed method.

Dispersion compensation coefficients a2 and a3 used in this paper were obtained empirically from the system in the study. However, these two parameters can be obtained automatically using an iterative procedure which optimizes the sharpness of the reconstructed image, as in [28].

Another interesting subject is incorporating dispersion compensation directly into UDFT-based CS-FDOCT on undersampled linear wavenumber spectral data. No interpolation is needed in this case. The Hilbert transform cannot be applied to undersampled data, but this can be overcome by multiplying the dispersion compensation term directly with the real spectra. However, the remaining processing actually becomes CS reconstruction with the NUDFT matrix: the transformation equation on 100% spectral data (ŷ) in this case is very similar to Eq. (11) except that the non-linear angular frequency ω is replaced by the linear angular frequency ω̂. Denote the transformation matrix as D. Because Φ(ω̂) is non-equispaced, the phase of an arbitrary row in D is no longer equispaced. Thus D is a NUDFT matrix, which may introduce low sparsity to the resulting A-scan. Thus, modification of D is still needed to improve the sparsity of the resulting A-scan, especially when the dispersion is large.

We have shown that CS reconstruction with the proposed MNUDFT matrix on undersampled non-linear wavenumber data represents a simpler approach compared to traditional CS-FDOCT based on the UDFT matrix and undersampled linear wavenumber data. The time for the CS reconstructions in both cases is almost the same. However, many properties, such as the difference in required sampling rate and the robustness to noise of UDFT- and MNUDFT-based CS-FDOCT, still need more study.

There is some difference when choosing the CS reconstruction algorithm because the MNUDFT matrix is not unitary. Several CS reconstruction algorithms, such as NESTA [38] and C-SALSA [39], cannot be used or are difficult to use to solve the CS optimization problem defined in Eq. (3), because they require either FuTFu = I or an explicit (I − FuTFu)−1 (FuT is the conjugate transpose of Fu). SPGL1 is chosen as the reconstruction algorithm, in part because it does not require a unitary sensing matrix.

7. Conclusion

In this paper, we propose a novel CS-FDOCT method based on the MNUDFT matrix, which can be applied to non-linear wavenumber spectral sampling instead of the linear wavenumber sampling used in traditional CS-FDOCT with the UDFT matrix. In addition, dispersion compensation by multiplying the correcting term directly with the non-linear wavenumber real spectra is proposed and incorporated into the sensing matrix of CS. Experimental results show that the proposed method achieves high quality images with good dispersion reduction.

Acknowledgments

We thank Dr. Dedi Tong for his help in mouse experiments. This work was partially supported by NIH/NIE 1R01EY021540-01A1 and NSF/ERC MIRTHE. Yong Huang is partially supported by the China Scholarship Council (CSC).

References and links

1. W. Drexler and J. G. Fujimoto, Optical coherence tomography: Technology and Applications (Springer, Berlin, Germany, 2008) [CrossRef]  .

2. A.F. Fercher, W. Drexler, C.K. Hitzenberger, and T. Lasser, “Optical coherence tomography - principles and applications,” Rep. Prog. Phys. , 66(2), 239–303 (2003) [CrossRef]  .

3. R. Leitgeb, C. Hitzenberger, and A. Fercher, “Performance of fourier domain vs. time domain optical coherence tomography,” Opt. Express , 11(8), 889–894 (2003) [CrossRef]  .

4. M. Choma, M. Sarunic, C. Yang, and J. Izatt, “Sensitivity advantage of swept source and Fourier domain optical coherence tomography,” Opt. Express , 11(18), 2183–2189 (2003) [CrossRef]   [PubMed]  .

5. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory , 52(4), 1289–1306 (2006) [CrossRef]  .

6. E. J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory , 52(2), 489–509 (2006) [CrossRef]  .

7. E. J. Candes and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies?” IEEE Trans. Inf. Theory , 52(12), 5406–5425 (2006) [CrossRef]  .

8. N. Mohan, I. Stojanovic, W.C. Karl, B.E.A. Saleh, and M.C. Teich, “Compressed sensing in optical coherence tomography,” in Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XVII, SPIE , 7570, 75700L (2010) [CrossRef]  .

9. X. Liu and J. U. Kang, “Compressive SD-OCT: the application of compressed sensing in spectral domain optical coherence tomography,” Opt. Express , 18(21), 22010–22019 (2010) [CrossRef]   [PubMed]  .

10. E. Lebed, P. J. Mackenzie, M. V. Sarunic, and F. M. Beg, “Rapid volumetric OCT image acquisition using compressive sampling,” Opt. Express , 18(29), 21003–21012 (2010) [CrossRef]   [PubMed]  .

11. M. Young, E. Lebed, Y. Jian, P. J. Mackenzie, M. F. Beg, and M. V. Sarunic, “Real-time high-speed volumetric imaging using compressive sampling optical coherence tomography,” Biomed. Opt. Express , 2(9), 2690–2697 (2011) [CrossRef]  .

12. X. Liu and J. U. Kang, “Sparse OCT: Optimizing compressed sensing in spectral domain optical coherence tomography,” in Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XVII, SPIE , 7904, 79041CL (2011).

13. L. Fang, S. Li, Q. Nie, J. A. Izatt, C. A. Toth, and S. Farsiu, “Sparsity based denoising of spectral domain optical coherence tomography images,” Biomed. Opt. Express 3(5), 927–942 (2012) [CrossRef]   [PubMed]  .

14. N. Zhang, T. Huo, C. Wang, T. Chen, J. Zheng, and P. Xue, “Compressed sensing with linear-in-wavenumber sampling in spectral-domain optical coherence tomography,” Opt. Lett. 37(15), 3075–3077 (2012) [CrossRef]   [PubMed]  .

15. D. Xu, N. Vaswani, Y. Huang, and J. U. Kang, “Modified compressive sensing optical coherence tomography with noise reduction,” Opt. Lett. 37(20), 4209–4211 (2012) [CrossRef]   [PubMed]  .

16. S. Schwartz, C. Liu, A. Wong, D. A. Clausi, P. Fieguth, and K. Bizheva, “Energy-guided learning approach to compressive sensing,” Opt. Express 21(1), 329–344 (2013) [CrossRef]   [PubMed]  .

17. J. Ke and E. Lam, “Image reconstruction from nonuniformly spaced samples in spectral-domain optical coherence tomography,” Biomed. Opt. Express 3, 741–752 (2012) [CrossRef]   [PubMed]  .

18. M. Jeon, J. Kim, U. Jung, C. Lee, W. Jung, and S. A. Boppart, “Full-range k-domain linearization in spectral-domain optical coherence tomography,” Appl. Opt. 50, 1158–1162 (2011) [CrossRef]   [PubMed]  .

19. H. K. Chan and S. Tang, “High-speed spectral domain optical coherence tomography using non-uniform fast Fourier transform,” Biomed. Opt. Express 1, 1309–1319 (2010) [CrossRef]  .

20. K. Zhang and J. U. Kang, “Real-time 4D signal processing and visualization using graphics processing unit on a regular nonlinear-k Fourier-domain OCT system,” Opt. Express 18(11), 11772–11784 (2010) [CrossRef]   [PubMed]  .

21. S.S. Sherif, C. Flueraru, Y. Mao, and S. Change, “Swept source optical coherence tomography with nonuniform frequency domain sampling,” Biomedical Optics, OSA, Technical Digest (CD)(Optical Society of America, 2008), paper BMD86 [CrossRef]  .

22. K. Wang, Z. Ding, T. Wu, C. Wang, J. Meng, M. Chen, and L. Xu, “Development of a non-uniform discrete Fourier transform based high speed spectral domain optical coherence tomography system,” Opt. Express 17(14), 12121–12131 (2009) [CrossRef]   [PubMed]  .

23. S. Vergnole, D. Levesque, and G. Lamouche, “Experimental validation of an optimized signal processing method to handle non-linearity in swept-source optical coherence tomography,” Opt. Express 18(12), 10446–10461 (2010) [CrossRef]   [PubMed]  .

24. M. Lustig and J. M. Pauly, “SPIRiT: iterative self-consistent parallel imaging reconstruction from arbitrary k-space,” Magn. Reson. Med. , 64, 457–471 (2010) [PubMed]  .

25. F. Knoll, G. Schultz, K. Bredies, D. Gallichan, M. Zaitsev, J. Hennig, and R. Stollberger, “Reconstruction of undersampled radial PatLoc imaging using total generalized variation,” Magn. Reson. Med. 37(15), in Press (2012).

26. E. Aboussouan, L. Marinelli, and E. Tan, “Non-cartesian compressed sensing for diffusion spectrum imaging,” Proc. Intl. Soc. Mag. Recon. Med. , 19, 1919 (2011).

27. X. Chen, M. Salerno, F. H. Epstein, and C. H. Meyer, “Accelerated multi-TI spiral MRI using compressed sensing with temporal constraints,” Proc. Intl. Soc. Mag. Recon. Med. 19, 4369 (2011).

28. M. Wojtkowski, V.J. Srinivasan, T.H. Ko, J.G. Fujimoto, A. Kowalczyk, and J.S. Duker, “Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation,” Opt. Express 12(11), 2404–2422 (2004) [CrossRef]   [PubMed]  .

29. Y. Chen and X. Li, “Dispersion management up to the third order for real-time optical coherence tomography involving a phase or frequency modulator,” Opt. Express 12(24), 5968–5978 (2004) [CrossRef]   [PubMed]  .

30. D.L. Marks, A.L. Oldenburg, J.J. Reynolds, and S.A. Boppart, “Digital algorithm for dispersion correction in optical coherence tomography for homogeneous and stratified media,” Appl. Opt. 42(2), 204–217 (2003) [CrossRef]   [PubMed]  .

31. K. Zhang and J. U. Kang, “Real-time numerical dispersion compensation using graphics processing unit for Fourier-domain optical coherence tomography,” Electron. Lett. , 47(5), 309–310 (2011) [CrossRef]  .

32. E. van den Berg and M.P. Friedlander, “Probing the Pareto frontier for basis pursuit solutions,” SIAM Journal on Scientific Computing , 31(2), 890–912 (2008) [CrossRef]  .

33. E. van den Berg and M.P. Friedlander, “SPGL1: a solver for large-scale sparse reconstruction”, http://www.cs.ubc.ca/labs/scl/spgl1(2007).

34. S. M. Potter, A. Mart, and J. Pine, “High-speed CCD movie camera with random pixel selection for neurobiology research,” Proc. SPIE 2869, 243–253 (1997).

35. S. P. Monacos, R. K. Lam, A. A. Portillo, and G. G. Ortiz, “Design of an event-driven random-access-windowing CCD-based camera,” Proc. SPIE 4975, 115 (2003) [CrossRef]  .

36. B. Dierickx, D. Scheffer, G. Meynants, W. Ogiers, and J. Vlummens, “Random addressable active pixel image sensors,” Proc. SPIE 2950, 2–7 (1996) [CrossRef]  .

37. M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: the application of compressed sensing for rapid MR imaging,” Magn. Reson. Med. 58(6), 1182–1195 (2007) [CrossRef]  .

38. S. Becker, J. O. Robin, and E. J. Candes, “NESTA: a fast and accurate first-order method for sparse recovery,” Technical report, California Institute of Technology (2009).

39. M. Afonso, J. Bioucas-Dias, and M. Figueiredo, “An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Trans. on Image Proc. 20(3), 681–695 (2011) [CrossRef]  .
