Optica Publishing Group

Axial localization using time reversal multiple signal classification in optical scanning holography

Open Access

Abstract

This paper presents a method to identify the axial location of targets in an optical scanning holography (OSH) system. By combining the time reversal (TR) technique with the multiple signal classification (MUSIC) method in OSH, the axial location can be detected with high resolution. Both simulation and experimental work have been carried out to verify the feasibility of the proposed method.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

As one of the digital holography techniques [1,2], optical scanning holography (OSH) can acquire and process three-dimensional (3D) volumetric information via raster scanning [3]. It has emerged as a promising candidate for detecting incoherent emissions such as fluorescence [4], and has found wide application in areas such as microscopy [5–7], remote sensing [8,9], and image encryption [10,11].

Unlike other 3D imaging methods such as confocal microscopy or time-domain optical coherence tomography, OSH does not require axial scanning. Both the intensity and the phase information of a 3D object are recorded in a two-dimensional (2D) hologram. It is therefore necessary to recover individual planar images of the object from the hologram, a process known as sectioning, or sectional image reconstruction. The traditional sectioning method computes the convolution of the acquired hologram with the Fresnel zone plate (FZP) at the focused depth, and suffers from large undesired residual signals from other sections, i.e., defocus noise. To eliminate the defocus noise, many sectioning methods have been proposed, including inverse imaging [12,13], Wiener filtering [14], and the use of a random-phase pupil [15]. Other methods based on double measurements of the hologram have further improved the depth resolution, such as the use of a dual-wavelength laser [16], double-location detection [17], and a reconfigurable pupil [18].

All these sectioning methods require prior knowledge of the depth location of the focused section, which is hard to obtain in certain applications [19]. To locate objects, an autofocusing method based on the Wigner distribution has been proposed to extract the depth information of a 3D object [20]. Zhang et al. utilize edge information to determine the target FZP automatically [21]. Algorithms based on diffraction-free vortices and frequency analysis have also been shown to locate objects effectively from digital holograms [22,23]. The structure tensor has also been used for effective autofocusing [24].

In this paper, we propose an alternative autofocusing approach in OSH that extends it with time-reversal (TR) imaging; we refer to it as TR-OSH. TR imaging can locate targets as well as achieve super-resolution in turbid media, and has received considerable interest in the past few years [25,26]. We then combine TR imaging with multiple signal classification (MUSIC), a subspace-based method for locating unknown targets [27]. The aim of the proposed method is to detect the targets and retrieve their axial locations.

This paper is organized as follows. In Section 2, we first introduce the OSH system and analyze the principle of TR-OSH. The theory of axial localization using TR-MUSIC is also explained in this section. Simulation results are then presented in Section 3 to demonstrate the feasibility of the presented method. Section 4 shows the experimental results, while Section 5 gives some concluding remarks.

2. Principle

2.1. Optical scanning holography

Figure 1 illustrates an OSH system [3]. The output of the laser is divided into two beams by the beam splitter (BS1). One of the beams is modulated by the acousto-optic frequency shifter (AOFS) and then passes through the pupil p1(x, y) and lens L2, after which the wavefront is planar. Meanwhile, the other becomes a spherical wavefront after passing through the pupil p1(x, y) and lens L1. The two beams are then combined by BS2 to scan the object. Lens L3 collects both the transmitted and the diffracted light from the object. The electrical signal generated by the photodetector (PD) is sent to the electrical demodulation unit, which contains a band-pass filter (BPF), two electrical multipliers, and two low-pass filters. After processing, a 2D hologram containing the 3D volume information of the object is recorded in a computer [3].


Fig. 1 OSH system setup [3]. BS, beam splitter; M, mirror; AOFS, acousto-optic frequency shifter; LPF, low-pass filter; BPF, band-pass filter; p1(x, y), p2(x, y) are pupils.


In an OSH system, typically, we set the pupil functions to be p1(x, y) = 1 and p2(x, y) = δ(x, y). In this way, the optical transfer function (OTF) can be expressed as

$$\mathcal{H}(k_x,k_y;z)=\exp\left[\frac{jz}{2k_0}\left(k_x^2+k_y^2\right)\right]\tag{1}$$

where $j=\sqrt{-1}$, $k_0$ is the wavenumber of the laser source, $k_x$ and $k_y$ are the horizontal and vertical spatial frequency coordinates, and z is the distance between the X–Y scanner and the object. The corresponding impulse response is

$$h(x,y;z)=\frac{jk_0}{2\pi z}\exp\left[\frac{jk_0\left(x^2+y^2\right)}{2z}\right]\tag{2}$$

where x and y are the horizontal and vertical spatial coordinates [3].
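As a quick numerical sanity check, Eqs. (1) and (2) can be evaluated on a discrete grid. The grid size, pixel pitch, and depth below are illustrative assumptions, not the paper's experimental parameters:

```python
import numpy as np

# Illustrative parameters (not from the paper's experiments)
wavelength = 632.8e-9            # He-Ne wavelength (m)
k0 = 2 * np.pi / wavelength      # wavenumber of the laser source
z = 34e-3                        # scanner-to-object distance (m)
n, pitch = 100, 10e-6            # samples per side, pixel pitch (m)

# Spatial-frequency grid (rad/m) for Eq. (1)
k = 2 * np.pi * np.fft.fftfreq(n, d=pitch)
kx, ky = np.meshgrid(k, k)
H = np.exp(1j * z / (2 * k0) * (kx**2 + ky**2))   # OTF, a pure phase factor

# Spatial grid for Eq. (2): the Fresnel-zone-plate impulse response
x = (np.arange(n) - n // 2) * pitch
X, Y = np.meshgrid(x, x)
h = 1j * k0 / (2 * np.pi * z) * np.exp(1j * k0 * (X**2 + Y**2) / (2 * z))

print(np.allclose(np.abs(H), 1.0))   # True: the OTF is all-pass in magnitude
```

Since the OTF is a pure phase factor, recording the hologram preserves the object's spatial-frequency content in magnitude, which is what makes the later inversion and localization possible.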

For a 3D object with a complex field Γ(x, y; z), the recorded hologram g(x, y) is expressed as

$$g(x,y)=\int_{-\infty}^{+\infty}\left|\Gamma(x,y;z)\right|^{2}\otimes h(x,y;z)\,dz,\tag{3}$$

where ⊗ is the convolution operation. The corresponding frequency domain expression is

$$G(k_x,k_y)=\int_{-\infty}^{+\infty}\mathcal{F}\left\{\mathcal{O}(x,y;z)\right\}\,\mathcal{H}(k_x,k_y;z)\,dz,\tag{4}$$

where $\mathcal{F}$ represents the Fourier transform, $G(k_x,k_y)=\mathcal{F}\{g(x,y)\}$ is the Fourier transform of the hologram, and $\mathcal{O}(x,y;z)=|\Gamma(x,y;z)|^{2}$. In a discrete OSH system, we assume that the object can be discretized into N sections along the z-axis. Equation (4) thus becomes

$$G(k_x,k_y)=\sum_{i=1}^{N}\mathcal{F}\left\{\mathcal{O}(x,y;z_i)\right\}\times\mathcal{H}(k_x,k_y;z_i)\tag{5}$$

where i denotes the i-th section.
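The discrete per-section sum above translates directly into code: each section is Fourier transformed, multiplied by the OTF at its depth, and the products are summed. The sketch below assumes two hypothetical point sections; all sizes and depths are illustrative:

```python
import numpy as np

wavelength = 632.8e-9
k0 = 2 * np.pi / wavelength
n, pitch = 100, 10e-6

k = 2 * np.pi * np.fft.fftfreq(n, d=pitch)
kx, ky = np.meshgrid(k, k)

def otf(z):
    """Eq. (1): OSH optical transfer function at depth z."""
    return np.exp(1j * z / (2 * k0) * (kx**2 + ky**2))

# Two intensity sections O(x, y; z_i), each holding one point target
depths = [34e-3, 34.4e-3]
sections = []
for px, py in [(10, 5), (40, 14)]:
    O = np.zeros((n, n))
    O[py, px] = 1.0
    sections.append(O)

# Per-section sum: spectrum of each section weighted by its OTF
G = sum(np.fft.fft2(O) * otf(z) for O, z in zip(sections, depths))
g = np.fft.ifft2(G)              # complex 2-D hologram g(x, y)
```

The resulting complex hologram g(x, y) superposes one FZP per point target, which is the structure the TR matrices in the next subsection exploit.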

2.2. Time reversal optical scanning holography

Time reversal optical scanning holography combines two techniques: TR imaging, which is based on time reversal invariance [28], and multiple signal classification (MUSIC), which retrieves the locations of targets in turbid media.

Suppose there are M point targets in the object. The hologram g(xs, ys) generated by the source located at (xs, ys) is

$$g(x_s,y_s)=\sum_{i=1}^{N}\mathcal{O}(x_s,y_s;z_i)\otimes h(x_s,y_s;z_i)=\sum_{i=1}^{N}\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty}\mathcal{O}(x,y;z_i)\,h(x-x_s,y-y_s;z_i)\,dx\,dy=\sum_{m=1}^{M}\mathcal{O}(x_m,y_m,z_m)\,h(x_m-x_s,y_m-y_s,z_m),\tag{6}$$

where m denotes the m-th point, and $x_m$, $y_m$, and $z_m$ are the corresponding coordinates.

In TR-OSH, the recorded hologram can be regarded as the response matrix K, where

$$K=\mathcal{F}\{g(x_s,y_s)\}.\tag{7}$$

As a consequence of the reciprocity of light propagation, the TR matrix can be constructed to represent light propagation from source positions to detector positions and back ($T_{SDDS}$), or the opposite way from detectors to sources and back ($T_{DSSD}$).

In the time domain, the impulse response representing light propagation from detectors to sources is given by

$$h(x,y;-z)=\frac{jk_0}{2\pi(-z)}\exp\left[\frac{jk_0\left(x^2+y^2\right)}{2(-z)}\right]=-\frac{jk_0}{2\pi z}\exp\left[-\frac{jk_0\left(x^2+y^2\right)}{2z}\right],\tag{8}$$

where −z = c × (−t), c is the speed of light, and −t stands for the time reversal operation. By comparing Eq. (2) and Eq. (8), we have

$$h(x,y;-z)=h^{*}(x,y;z)\tag{9}$$

where * represents the conjugate operation. It can be observed from this equation that the time reversal operation causes a space reversal, or counter-propagation, effect in the OSH system. A similar relationship can be obtained based on Eq. (1) and Eq. (8), i.e.,

$$\mathcal{H}(k_x,k_y;-z)=\mathcal{H}^{*}(k_x,k_y;z)\tag{10}$$
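The conjugation relation h(x, y; −z) = h*(x, y; z) above is easy to verify numerically; the grid and depth below are arbitrary illustrative choices:

```python
import numpy as np

wavelength = 632.8e-9
k0 = 2 * np.pi / wavelength
z = 34e-3
x = np.linspace(-5e-4, 5e-4, 128)
X, Y = np.meshgrid(x, x)

def h(z):
    """Eq. (2): Fresnel impulse response; z < 0 means back-propagation."""
    return 1j * k0 / (2 * np.pi * z) * np.exp(1j * k0 * (X**2 + Y**2) / (2 * z))

# Time reversal (z -> -z) equals conjugation of the forward response
print(np.allclose(h(-z), np.conj(h(z))))   # True
```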

Thus, in the spatial domain, the TR matrix $T_{DSSD}$ can be expressed as

$$T_{DSSD}=\mathcal{F}^{-1}\left\{KK^{H}\right\}=g(x_s,y_s)\otimes g'(x_s,y_s),\tag{11}$$

where $g'(x_s,y_s)=\mathcal{F}^{-1}\{K^{H}\}$, and H is the conjugate transpose operation. By using the singular value decomposition (SVD), g(xs, ys) can be expressed as [29]

$$g(x_s,y_s)=\sum_{m=1}^{M}\mathcal{O}(x_m,y_m,z_m)\,h(x_m-x_s,y_m-y_s,z_m)+\sum_{m=M+1}^{N_1}0\cdot h(x_m-x_s,y_m-y_s,z_m)=\sum_{m=1}^{M}v_x(m)\,\mathcal{O}(x_m,y_m,z_m)\,v_y^{T}(m)\tag{12}$$

where $v_x$ and $v_y$ are the column vectors in the x and y directions, and T is the transpose operation. According to Eq. (11) and Eq. (12), the TR matrix thus becomes

$$T_{DSSD}=\left[\sum_{m=1}^{M}v_x(m)\,\mathcal{O}(x_m,y_m;z_m)\,v_y^{T}(m)\right]\left[\sum_{n=1}^{M}v_y^{*}(n)\,\mathcal{O}^{*}(x_n,y_n;z_n)\,v_x^{H}(n)\right]=\sum_{m=1}^{M}v_x(m)\left|\mathcal{O}(x_m,y_m,z_m)\right|^{2}\left\|v_y(m)\right\|^{2}v_x^{H}(m)\tag{13}$$

where | · | denotes the absolute value and ‖ · ‖ the norm. We have made the approximation that $v_y^{T}(m)\,v_y^{*}(n)=0$ when m ≠ n. In a similar way, the TR matrix $T_{SDDS}$ can be expressed as

$$T_{SDDS}=\mathcal{F}^{-1}\left\{K^{H}K\right\}=g'(x_s,y_s)\otimes g(x_s,y_s)=\left[\sum_{m=1}^{M}v_y^{*}(m)\,\mathcal{O}^{*}(x_m,y_m;z_m)\,v_x^{H}(m)\right]\left[\sum_{n=1}^{M}v_x(n)\,\mathcal{O}(x_n,y_n;z_n)\,v_y^{T}(n)\right]=\sum_{m=1}^{M}v_y^{*}(m)\left|\mathcal{O}(x_m,y_m,z_m)\right|^{2}\left\|v_x(m)\right\|^{2}v_y^{T}(m).\tag{14}$$

Due to the Hermitian nature of the TR matrices $T_{DSSD}$ and $T_{SDDS}$, there exists a set of $N_1$-dimensional orthonormal eigenvectors with a common set of non-negative real eigenvalues. For M point-like targets located in the object, there would be M dominant eigenvalues λ, with $\lambda_m>0$ for m = 1, . . ., M, and $\lambda_m\approx 0$ for m > M. The eigenvalues and eigenvectors can be derived from the equations shown below, where

$$T_{DSSD}\,v_x(m)=\left|\mathcal{O}(x_m,y_m,z_m)\right|^{2}\left\|v_y(m)\right\|^{2}\left\|v_x(m)\right\|^{2}v_x(m)\\ T_{SDDS}\,v_y^{*}(m)=\left|\mathcal{O}(x_m,y_m,z_m)\right|^{2}\left\|v_x(m)\right\|^{2}\left\|v_y(m)\right\|^{2}v_y^{*}(m).\tag{15}$$

Note that $v_x(m)$ and $v_y^{*}(m)$ are the eigenvectors, and the eigenvalues are

$$\lambda_m=\left|\mathcal{O}(x_m,y_m,z_m)\right|^{2}\left\|v_y(m)\right\|^{2}\left\|v_x(m)\right\|^{2},\quad m=1,\ldots,M.\tag{16}$$

The rank of the TR matrix is M, and the dimensions of the signal subspaces of $T_{DSSD}$ and $T_{SDDS}$ are both equal to M. Thus we have

$$\left\langle v_x(m=1,\ldots,M),\,v_x(m=M+1,\ldots,N_1)\right\rangle\approx 0\\ \left\langle v_y^{*}(m=1,\ldots,M),\,v_y^{*}(m=M+1,\ldots,N_1)\right\rangle\approx 0\tag{17}$$
where $\langle a,b\rangle=a^{H}b$ is the inner product of vectors. One can deduce from Eq. (17) that the inner products between the signal subspace and the noise subspace are approximately zero.
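Because the Fresnel impulse response of Eq. (2) is separable in x and y, the hologram of M point targets is numerically a rank-M matrix, so the TR matrices have exactly M dominant eigenvalues, which is the subspace split described above. A small sketch with two hypothetical targets (all parameters illustrative):

```python
import numpy as np

wavelength = 632.8e-9
k0 = 2 * np.pi / wavelength
n, pitch = 64, 10e-6
coords = (np.arange(n) - n // 2) * pitch

def chirp(c, z):
    """One separable factor of the Fresnel impulse response, centred at c."""
    return np.exp(1j * k0 * (coords - c)**2 / (2 * z))

# M = 2 point targets: each contributes one rank-1 (outer-product) term
targets = [(-2e-4, 1e-4, 34e-3), (1e-4, -1.5e-4, 34.4e-3)]
g = sum(np.outer(chirp(ym, z), chirp(xm, z)) for xm, ym, z in targets)

# Eigenvalues of T = g g^H are the squared singular values of g
s = np.linalg.svd(g, compute_uv=False)
print(s[:4] / s[0])   # two dominant values, the rest near machine precision
```

The sharp drop after the M-th singular value is what lets MUSIC split the eigenvectors into signal and noise subspaces.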

The MUSIC method is next used to locate the targets. As mentioned above, the object can be discretized into N layers along the z-axis, and each layer contains N1 × N1 squares. The locations of targets can be determined by computing

$$Q_x(X_p,z_i)=\sum_{m=M+1}^{N_1}\left|\left(v_x(m)\right)^{T}v_1^{*}(X_p,z_i)\right|^{2}\\ Q_y(X_p,z_i)=\sum_{m=M+1}^{N_1}\left|\left(v_y^{*}(m)\right)^{T}v_2(X_p,z_i)\right|^{2}\tag{18}$$

where $X_p$ represents the horizontal position of the test target, and $v_1^{*}(X_p,z_i)$ and $v_2(X_p,z_i)$ are the eigenvectors of $K(X_p)K^{H}(X_p)$ and $K^{H}(X_p)K(X_p)$, respectively. One can deduce from Eq. (18) that the squared sums of inner products $Q_x(X_p,z_i)$ and $Q_y(X_p,z_i)$ would vanish if $X_p$ is a true target.

Equation (19) is then used to calculate the pseudo-spectrum.

$$P_x(X_p,z_i)=\left\|v_1(X_p,z_i)\right\|^{2}/Q_x(X_p,z_i)\\ P_y(X_p,z_i)=\left\|v_2(X_p,z_i)\right\|^{2}/Q_y(X_p,z_i)\tag{19}$$

where $P_x(X_p,z_i)$ and $P_y(X_p,z_i)$ are the pseudo-spectra in the x and y directions, respectively, and $\|v_1(X_p,z_i)\|^{2}$ and $\|v_2(X_p,z_i)\|^{2}$ are used for normalization. The true positions of the targets can thus be determined by

$$P(X_p,z_i)=P_x(X_p,z_i)\,P_y(X_p,z_i),\tag{20}$$

where $P(X_p,z_i)$ represents the pseudo-spectrum, and its poles correspond to the target locations.

It is worth noting that, if multiple point-like targets are located on the same layer, i.e., share the same z coordinate, Eq. (20) may also introduce erroneous locations in addition to the true ones. For example, if two point-like targets are located at (x1, y1) and (x2, y2) with the same z, two fake targets at (x1, y2) and (x2, y1) would emerge when calculating Eq. (20).

However, these fake points do not damage the result if only the axial location is required, because the pseudo-spectrum only has non-zero values at the section distances where targets exist. One can accumulate the pseudo-spectrum $P(X_p,z_i)$ on each section along the z-axis, such that

$$R(z_i)=\sum_{p=1}^{N_1^{2}}P(X_p,z_i),\tag{21}$$

where $R(z_i)$ represents the accumulated pseudo-spectrum along the z-axis. Local maxima of $R(z_i)$ emerge only at the axial locations where targets exist.
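The whole localization pipeline can be sketched in a minimal one-axis form for a single point target: project a test vector at each candidate depth onto the noise subspace, and the normalized reciprocal of that energy peaks at the true depth. The grid, depths, and the 1-D simplification are illustrative assumptions, not the paper's full 2-D implementation:

```python
import numpy as np

wavelength = 632.8e-9
k0 = 2 * np.pi / wavelength
n, pitch = 64, 10e-6
coords = (np.arange(n) - n // 2) * pitch

def chirp(c, z):
    """1-D factor of the separable Fresnel impulse response."""
    return np.exp(1j * k0 * (coords - c)**2 / (2 * z))

# Single point target at (x0, y0) and depth z_true: a rank-1 hologram
x0, y0, z_true = 0.0, 0.0, 34e-3
g = np.outer(chirp(y0, z_true), chirp(x0, z_true))

# Noise subspace: left singular vectors beyond the M = 1 signal vector
U = np.linalg.svd(g)[0]
noise = U[:, 1:]

def pseudo_spectrum(z):
    """MUSIC pseudo-spectrum along one axis: large when the test vector
    is (nearly) orthogonal to the noise subspace, i.e. at a true depth."""
    v = chirp(y0, z)
    q = np.sum(np.abs(noise.conj().T @ v)**2)
    return np.linalg.norm(v)**2 / q

zs = np.linspace(33.5e-3, 34.5e-3, 101)
R = np.array([pseudo_spectrum(z) for z in zs])
print(zs[np.argmax(R)])   # the peak sits at the true depth, 34 mm
```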

3. Simulation

In this section, we analyze the proposed method with both single-point and multi-point targets. Its application to complex objects is also presented.

3.1. Single point target

To verify the feasibility of the proposed method, a single point target is first considered. In the simulation, we assume the wavelength of the laser source is λ = 632.8 nm. The object has a section size of 1 mm × 1 mm, and is sampled at 100 × 100 pixels. The single point target is placed at (10, 5) in the xy plane, and its axial location is z = 34 mm. The generated FZP and the corresponding hologram in the frequency domain are shown in Figs. 2(a) and 2(b), respectively.


Fig. 2 (a) Fresnel zone plate, (b) the generated hologram in the frequency domain.


As expected, the size of the response matrix K in the simulation is 100 × 100. The TR matrices $T_x=KK^{H}$ and $T_y=K^{H}K$ are formed with the same size. Thus 100 eigenvalues with 100 eigenvectors can be found.

The leading 40 eigenvalues of the TR matrix are shown in Fig. 3(a) on a logarithmic scale. One can see that only the first eigenvalue dominates, which indicates that the dimension of the signal subspace, and hence the number of targets, is 1. Figure 3(b) shows the 2D slice of the calculated pseudo-spectrum at z = 34 mm. One can easily find a single target at the exact location (10, 5) in this figure.


Fig. 3 (a) The leading 40 eigenvalues of the TR matrix, (b) 2D slice of the pseudo spectrum at z = 34 mm.


The corresponding axial localization of the object is shown in Fig. 4. One can see that there is only one local maximum on the curve, at z = 34 mm, which identifies the axial location of the object.


Fig. 4 The accumulated pseudo-spectrum along the z-axis.


3.2. Multi-point target

In this subsection, multi-point targets are tested. Targets located in different sections and in the same section are both considered.

We first deal with the case of two point targets in two sections. One of the points is at (10, 5) with z1 = 34 mm, while the other is at (40, 14) with z2 = 34.4 mm. The sizes of the response matrix K and the TR matrices remain 100 × 100.

Figure 5(a) shows the first 40 computed eigenvalues in descending order of magnitude. One can see that the first two eigenvalues are much larger than the others. This indicates that there are two targets in the object, and that the dimension of the signal subspace is 2. The calculated pseudo-spectra at z1 and z2 are shown in Figs. 5(b) and 5(c), respectively. A point target at (10, 5) can be identified at z = 34 mm, as can the point target at (40, 14) at z = 34.4 mm.


Fig. 5 (a) The first 40 eigenvalues of time reversal matrix, (b) and (c) are the pseudo spectrum at z1 = 34 mm and z2 = 34.4 mm, respectively.


The accumulated pseudo-spectrum along the z-axis is shown in Fig. 6. The two local maxima at z1 = 34 mm and z2 = 34.4 mm show where the targets can be found along the axial direction.


Fig. 6 The accumulated pseudo spectrum along the z-axis.


The situation of two point targets located in the same section is then studied. In the simulation, the two points are at (10, 5) and (40, 14) in the xy plane, while the axial location is z = 34 mm for both. The distribution of eigenvalues again shows that two targets can be detected, as seen in Fig. 7(a). The corresponding pseudo-spectrum is shown in Fig. 7(b). One can observe that, besides the targets at (10, 5) and (40, 14), two fake points at (10, 14) and (40, 5) also emerge. The fake points are a consequence of the cross product in Eq. (20). However, as mentioned above, fake points are only generated on section planes where real targets exist, so they do not affect the result of the axial localization, as shown in Fig. 7(c). A local maximum at z = 34 mm gives exactly the axial location of the targets.


Fig. 7 (a) The leading 40 eigenvalues of the TR matrix, (b) 2D slice of the pseudo spectrum at z = 34 mm, and (c) the accumulated pseudo spectrum along the z-axis.


We then consider the situation of nine point targets in a simulated object. The known positions of these nine targets are listed in Table 1. One can see that the points are distributed over three different sections, with four points at z1 = 100 μm, three points at z2 = 101 μm, and two points at z3 = 102 μm. The section distance is set to 1 μm. No random noise is added.


Table 1. The coordinates of the nine points

The calculated pseudo-spectrum along the z-axis is shown in Fig. 8. One can observe three local maxima on the curve, which shows that the targets are located in three different sections. The retrieved axial locations are 100 μm, 101 μm, and 102 μm, which match the simulated model very well.


Fig. 8 The accumulated pseudo-spectrum along the z-axis.


3.3. Resolution analysis

In this subsection, the resolution of the proposed method is analyzed. One can deduce from Eq. (1) and Eq. (4) that the hologram depends strongly on the wavelength λ and the section distance z, and the hologram itself is crucial for the resolution of TR-MUSIC. The wavelength of the laser source is λ = 632.8 nm, and the focal lengths of lenses L1 and L2 are set to f = 50 mm. The object is of size 100 μm × 100 μm, and is sampled at 16 × 16 pixels. Two point targets are placed at (10, 5) with different axial locations.

In the first stage, we set the axial locations of the targets at around 34 mm. The accumulated pseudo-spectra along the z-axis are shown in Fig. 9, with (a) z1 = 34 mm, z2 = 35 mm, (b) z1 = 34 mm, z2 = 34.4 mm, and (c) z1 = 34 mm, z2 = 34.04 mm. One can observe that, as the two targets get closer to each other, the full width at half maximum (FWHM) becomes larger. When they are located at z1 = 34 mm and z2 = 34.04 mm, the two targets cannot be distinguished. The measured resolution with z1 = 34 mm is around 0.25 mm.


Fig. 9 Accumulated pseudo-spectrum along the z-axis, with (a) z1 = 34 mm, z2 = 35 mm, (b) z1 = 34 mm, z2 = 34.4 mm, and (c) z1 = 34 mm, z2 = 34.04 mm.


The two targets are then placed closer to the scanning mirror, with (a) z1 = 100 μm, z2 = 100.1 μm, (b) z1 = 100 μm, z2 = 100.01 μm, and (c) z1 = 100 μm, z2 = 100.001 μm. The accumulated pseudo-spectra are shown in Fig. 10. One can see that the two targets can still be clearly located even with a 0.01 μm section difference. The measured resolution with z1 = 100 μm is around 0.003 μm.


Fig. 10 Accumulated pseudo-spectrum along the z-axis, with (a) z1 = 100μm, z2 = 100.1μm, (b) z1 = 100μm, z2 = 100.01μm, and (c) z1 = 100μm, z2 = 100.001μm.


Table 2 lists the measured resolutions for different values of z1, and the relationship is also plotted in Fig. 11. One can observe that the depth resolution improves as z1 gets smaller. This is due to the intrinsic characteristic of the OSH system that the FZP has more opaque and transparent zones when the object is closer to the scanning mirror [18,30].


Table 2. Measured resolution for different values of z1.


Fig. 11 The relationship between resolution and the value of z1.


We then analyze the influence of wavelength on resolution. In this stage, the objects are located at z1 = 34 mm and z2 = 34.4 mm, with a section distance of 0.4 mm. The objects are scanned with different wavelengths, and the accumulated pseudo-spectra along the z-axis are shown in Fig. 12. It can be observed that the two distinguishable peaks in Figs. 12(a) and 12(b) become barely resolved as the wavelength λ increases. The measured resolutions with different wavelengths are listed in Table 3. It is clear from these data that the depth resolution would be improved by employing a shorter-wavelength source in the TR-based OSH system.


Fig. 12 Accumulated pseudo-spectrum along the z-axis, with (a) λ = 405 nm, (b) λ = 632 nm, and (c) λ = 780 nm.



Table 3. Measured resolution for different values of the wavelength.

3.4. Noise analysis

System noise is also considered in this subsection. Two point targets located at (10, 5) with z1 = 100 μm are used in the simulation. The object is of size 100 μm × 100 μm, and is sampled at 16 × 16 pixels. The relationship between the resolution and the signal-to-noise ratio (SNR) is shown in Fig. 13. One can observe that the resolution degrades from 0.0035 μm to 0.015 μm as the SNR decreases from 100 dB to 10 dB, which demonstrates good noise robustness. This is due to the intrinsic characteristic of MUSIC, which divides the information into a signal subspace and a noise subspace.
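The robustness claim can be illustrated with a toy rank-1 hologram: adding complex Gaussian noise at a prescribed SNR barely perturbs the dominant (signal) eigenvalue, while the noise eigenvalues stay far below it. The stand-in chirp vector, seed, and SNR value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(g, snr_db):
    """Add complex white Gaussian noise at the requested SNR (in dB)."""
    p_sig = np.mean(np.abs(g)**2)
    p_noise = p_sig / 10**(snr_db / 10)
    w = rng.normal(size=g.shape) + 1j * rng.normal(size=g.shape)
    return g + np.sqrt(p_noise / 2) * w

# Rank-1 hologram of a single point target (stand-in chirp vector)
n = 64
a = np.exp(1j * np.linspace(0, 30, n)**2)
g = np.outer(a, a)

s = np.linalg.svd(add_noise(g, 10), compute_uv=False)
print(s[0] / s[1])   # signal singular value sits well above the noise floor
```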


Fig. 13 The relationship between the resolution and SNR.


The impact of noise on the signal subspace and the noise subspace is then analyzed. In the simulation, the cases of SNR equal to 10 dB, 30 dB, and 60 dB are considered. The eigenvalue distributions of the signal subspace and the noise subspace are shown in Fig. 14. The relative value Ere of each point is calculated by Ere = Ei/Eav, where i = 1, 2, . . ., 16, Ei is the calculated i-th eigenvalue, and Eav is the average of the i-th eigenvalue over the different SNRs. It can be observed that the relative values in the signal subspace are almost equal to 1, whereas the relative values in the noise subspace are quite dispersed. This indicates that the signal subspace is barely influenced by the noise, which is not the case for the noise subspace. The high resolution stems from the orthogonality of the signal subspace and the noise subspace in the TR-MUSIC method.


Fig. 14 The distribution of eigenvalues with different SNR.


3.5. Computing cost analysis

TR-MUSIC computes the pseudo-spectrum by treating every spatial point as a test point, which leads to a high computational cost. Suppose there is a point P(x, y) on a certain 2D plane, as illustrated in Fig. 15. In the proposed method, we first calculate the x-axis response of point P, and then the y-axis response. By multiplying the two, we obtain the final 2D pseudo-spectrum of the plane. However, as can be observed from the figure, the calculated x-axis and y-axis responses always cross the diagonal at (x, x) and (y, y), respectively. So, instead of testing every spatial point, we can calculate the pseudo-spectrum of only the diagonal elements at each depth location, and the number of positions that need to be calculated is reduced from N1² to N1. This greatly improves the efficiency of the proposed algorithm, which has been verified via simulation. The computer is configured with an Intel(R) Core(TM) i5-7500 CPU @ 3.40 GHz and 8 GB RAM. Suppose there is a single point at (10, 5) with z = 34 mm. The computing costs with different matrix dimensions are listed in Table 4. It can be seen from this table that the high computational cost of traditional TR-MUSIC can be overcome by computing the pseudo-spectrum of the diagonal elements only.
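The bookkeeping behind the diagonal shortcut can be sketched as follows: rather than evaluating the pseudo-spectrum at all N1² test points of a section, only the N1 diagonal points (x, x) are evaluated, since the x- and y-responses of any target are guaranteed to cross the diagonal. The counts below are purely illustrative; no optics is simulated:

```python
# Number of pseudo-spectrum evaluations per section, full grid vs diagonal
N1 = 100
full_grid = [(x, y) for x in range(N1) for y in range(N1)]   # every point
diagonal = [(x, x) for x in range(N1)]                       # diagonal only

print(len(full_grid), len(diagonal))   # 10000 vs 100: an N1-fold reduction
```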


Fig. 15 The schematic diagram of the points response in x and y directions.



Table 4. Computational cost with different matrix sizes.

3.6. The application in complex objects

In addition to point targets, the application of the proposed method to complex objects has also been considered. In the simulation, two sections containing complex patterns are used, as shown in Figs. 16(a) and 16(b). The sections are of size 1 mm × 1 mm, and are sampled at 128 × 128 pixels. The axial locations of the two sections are z1 = 31 mm and z2 = 32 mm, respectively. Figure 16(c) shows the recorded hologram.


Fig. 16 (a) section 1 with z1 = 31mm, (b) section 2 with z2 = 32mm, and (c) the generated hologram.


The eigenvalues of the TR matrix are shown in Fig. 17(a). One can observe that the distribution of the eigenvalues is more complicated than that of the point targets. In this situation, the signal subspace and the noise subspace can be separated by using the L-curve method [31]. The calculated pseudo-spectra along the z-axis are shown in Fig. 17(b), in which the red line represents the autofocusing method using entropy minimization [32], while the blue one stands for the proposed technique. Entropy minimization is based on measuring the blur and randomness of sectional images, and the local minima of the entropy curve denote the axial locations. It can be seen that the entropy minimization method fails to locate the two sections, while the two local maxima of the proposed method give exactly the axial locations of the object. This indicates that the proposed method can also be applied to objects containing complex patterns.


Fig. 17 (a) The eigenvalues of the TR matrix, (b) the accumulated diagonal elements along the z-axis.


We have also compared the two methods with different section distances. In the simulation, we keep z1 = 31 mm, while the section distances are set to 10 mm, 4 mm, and 1 mm. The corresponding results are shown in Fig. 18. One can see that the local minimum on the red curve vanishes as the section distance decreases, while the local maximum on the blue curve of the proposed method remains dominant even when the section distance decreases to 1 mm.


Fig. 18 The axial localization distribution of TR-MUSIC and entropy minimization, (a) z1 = 31 mm, z2 = 41 mm, (b) z1 = 31 mm, z2 = 35 mm, (c) z1 = 31 mm, z2 = 32 mm.


It is worth noting that the computing time of entropy minimization is 0.16 s, while that of TR-MUSIC is 4.7 s, using the same computer as described in the last subsection. Compared to the entropy method, the proposed method requires more computing time, but features higher resolution.

4. Experiments

The proposed TR-OSH method has also been verified experimentally. In the experiment, a He-Ne laser with wavelength centered at λ = 632.8 nm is used. The object has two sections located at z1 = 87 cm and z2 = 107 cm, respectively, and has been sampled at 500 × 500 pixels. The recorded hologram is shown in Fig. 19, including the real part (a) and the imaginary part (b). To retrieve the axial locations, we have utilized both the proposed method and entropy minimization. The eigenvalue distribution is shown in Fig. 20(a), and the results for axial localization are shown in Fig. 20(b). The computing time of entropy minimization is 0.5 s, and that of TR-MUSIC is 223 s. It can be observed from Fig. 20(b) that only one local minimum exists for the entropy minimization method, with a retrieved section distance of z = 104 cm. The corresponding sectioning result, obtained using the inverse imaging method, is shown in Fig. 22. One can observe from this figure that large defocus noise exists, which indicates that the entropy minimization method fails to determine the axial localization of the object. For the proposed method, by contrast, the two local maxima on the curve clearly indicate the locations of the two sections. The reconstructed sectional images obtained with the inverse imaging method [13] are shown in Figs. 21(a) and 21(b). As can be seen from this figure, the proposed method succeeds in detecting the correct location of each section.


Fig. 19 (a) real part, and (b) imaginary part of the hologram.



Fig. 20 (a) The eigenvalues of the TR matrix, (b) the accumulated diagonal elements along the z-axis.



Fig. 21 Reconstructed sectional images at (a) z1, and (b) z2.



Fig. 22 Reconstructed image by using the entropy minimization and inverse image.


5. Conclusions

The TR-MUSIC algorithm can be used to determine the axial location of targets in OSH, which is vital for sectional image reconstruction. In this paper, we use the spatial focusing property of TR and the high-precision property of the MUSIC algorithm to realize axial localization. Simulation results show that the proposed method can find the axial location of point-like targets as well as complex patterns. The feasibility of the method has also been verified via experiment. Due to the temporal and spatial focusing properties of the time reversal technique, the proposed method achieves high resolution. Meanwhile, the improved diagonal algorithm makes it more time-efficient than the traditional TR-MUSIC method.

Funding

National Natural Science Foundation of China (NSFC) (61361166008, 61107018); Research Grants Council, University Grants Committee, Hong Kong (RGC, UGC) (HKU 17203217, N_HKU714/13).

References and links

1. J. Rosen and G. Brooker, “Non-scanning motionless fluorescence three-dimensional holographic microscopy,” Nat. Photonics 2, 190–195 (2008). [CrossRef]  

2. J. Rosen, N. Siegel, and G. Brooker, “Theoretical and experimental demonstration of resolution beyond the rayleigh limit by finch fluorescence microscopic imaging,” Opt. Express 19, 26249 (2011). [CrossRef]  

3. T.-C. Poon, Optical Scanning Holography with MATLAB (Springer, 2007). [CrossRef]  

4. B. W. Schilling, T.-C. Poon, G. Indebetouw, B. Storrie, K. Shinoda, Y. Suzuki, and M. H. Wu, “Three-dimensional holographic fluorescence microscopy,” Opt. Lett. 22, 1506–1508 (1997). [CrossRef]  

5. J. Swoger, M. Martínez-Corral, J. Huisken, and E. H. K. Stelzer, “Optical scanning holography as a technique for high-resolution three-dimensional biological microscopy,” J. Opt. Soc. Am. A 19, 1910–1918 (2002). [CrossRef]  

6. G. Indebetouw and W. Zhong, “Scanning holographic microscopy of three-dimensional fluorescent specimens,” J. Opt. Soc. Am. A 23, 1699–1707 (2006). [CrossRef]  

7. A. C. S. Chan, K. K. Tsia, and E. Y. Lam, “Subsampled scanning holographic imaging (SuSHI) for fast, non-adaptive recording of three-dimensional objects,” Optica 3, 911–917 (2016). [CrossRef]  

8. B. W. Schilling and G. C. Templeton, “Three-dimensional remote sensing by optical scanning holography,” Appl. Opt. 40, 5474–5481 (2001). [CrossRef]  

9. T. Kim, T.-C. Poon, and G. Indebetouw, “Depth detection and image recovery in remote sensing by optical scanning holography,” Opt. Eng. 41, 1331–1338 (2002).

10. T.-C. Poon, T. Kim, and K. Doh, “Optical scanning cryptography for secure wireless transmission,” Appl. Opt. 42, 6496–6503 (2003).

11. H. Di, K. Zheng, X. Zhang, E. Y. Lam, T. Kim, Y. S. Kim, T.-C. Poon, and C. Zhou, “Multiple-image encryption by compressive holography,” Appl. Opt. 51, 1000–1009 (2012).

12. E. Y. Lam, X. Zhang, H. Vo, T.-C. Poon, and G. Indebetouw, “Three-dimensional microscopy and sectional image reconstruction using optical scanning holography,” Appl. Opt. 48, H113–H119 (2009).

13. X. Zhang, E. Y. Lam, and T.-C. Poon, “Reconstruction of sectional images in holography using inverse imaging,” Opt. Express 16, 17215–17226 (2008).

14. T. Kim, “Optical sectioning by optical scanning holography and a Wiener filter,” Appl. Opt. 45, 872–879 (2006).

15. Z. Xin, K. Dobson, Y. Shinoda, and T.-C. Poon, “Sectional image reconstruction in optical scanning holography using a random-phase pupil,” Opt. Lett. 35, 2934–2936 (2010).

16. J. Ke, T.-C. Poon, and E. Y. Lam, “Depth resolution enhancement in optical scanning holography with a dual-wavelength laser source,” Appl. Opt. 50, H285–H296 (2011).

17. H. Ou, T.-C. Poon, K. K. Wong, and E. Y. Lam, “Depth resolution enhancement in double-detection optical scanning holography,” Appl. Opt. 52, 3079–3087 (2013).

18. H. Ou, T.-C. Poon, K. K. Y. Wong, and E. Y. Lam, “Enhanced depth resolution in optical scanning holography using a configurable pupil,” Photon. Res. 2, 64 (2014).

19. Z. Ren, N. Chen, and E. Y. Lam, “Extended focused imaging and depth map reconstruction in optical scanning holography,” Appl. Opt. 55, 1040–1047 (2016).

20. T. Kim and T.-C. Poon, “Autofocusing in optical scanning holography,” Appl. Opt. 48, H153–H159 (2009).

21. X. Zhang, E. Y. Lam, T. Kim, Y. S. Kim, and T.-C. Poon, “Blind sectional image reconstruction for optical scanning holography,” Opt. Lett. 34, 3098–3100 (2009).

22. S. Oh, C.-Y. Hwang, I. K. Jeong, S.-K. Lee, and J.-H. Park, “Fast focus estimation using frequency analysis in digital holography,” Opt. Express 22, 28926 (2014).

23. P. Bouchal and Z. Bouchal, “Non-iterative holographic axial localization using complex amplitude of diffraction-free vortices,” Opt. Express 22, 30200 (2014).

24. Z. Ren, N. Chen, and E. Y. Lam, “Automatic focusing for multisectional objects in digital holography using the structure tensor,” Opt. Lett. 42, 1720 (2017).

25. B. Wu, W. Cai, M. Alrubaiee, M. Xu, and S. K. Gayen, “Time reversal optical tomography: locating targets in a highly scattering turbid medium,” Opt. Express 19, 21956–21976 (2011).

26. B. Wu, W. Cai, and S. K. Gayen, “Three-dimensional localization of fluorescent targets in turbid media using time reversal optical tomography,” Appl. Phys. Lett. 101, 251103 (2012).

27. A. J. Devaney, E. A. Marengo, and F. K. Gruber, “Time-reversal-based imaging and inverse scattering of multiply scattering point targets,” J. Acoust. Soc. Am. 118, 3129–3138 (2005).

28. M. Fink, “Time reversed acoustics,” Phys. Today 50(3), 34–40 (1997).

29. D. Kalman, “A singularly valuable decomposition: the SVD of a matrix,” Coll. Math. J. 27, 2–23 (1996).

30. J. Ke, T.-C. Poon, and E. Y. Lam, “Depth resolution enhancement in optical scanning holography with a dual-wavelength laser source,” Appl. Opt. 50, H285–H296 (2011).

31. B. Wu, W. Cai, M. Alrubaiee, M. Xu, and S. K. Gayen, “Three dimensional time reversal optical tomography,” in SPIE BiOS (2011), pp. 242–243.

32. Z. Ren, N. Chen, A. Chan, and E. Y. Lam, “Autofocusing of optical scanning holography based on entropy minimization,” in Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2015), paper DT4A.4.



Figures (22)

Fig. 1 OSH system setup [3]. BS, beam splitter; M, mirror; AOFS, acousto-optic frequency shifter; LPF, low-pass filter; BPF, band-pass filter; p1(x, y) and p2(x, y) are pupils.
Fig. 2 (a) Fresnel zone plate; (b) the generated hologram in the frequency domain.
Fig. 3 (a) The leading 40 eigenvalues of the TR matrix; (b) 2D slice of the pseudo-spectrum at z = 34 mm.
Fig. 4 The accumulated pseudo-spectrum along the z-axis.
Fig. 5 (a) The first 40 eigenvalues of the time reversal matrix; (b) and (c) the pseudo-spectrum at z1 = 34 mm and z2 = 34.4 mm, respectively.
Fig. 6 The accumulated pseudo-spectrum along the z-axis.
Fig. 7 (a) The leading 40 eigenvalues of the TR matrix; (b) 2D slice of the pseudo-spectrum at z = 34 mm; (c) the accumulated pseudo-spectrum along the z-axis.
Fig. 8 The accumulated pseudo-spectrum along the z-axis.
Fig. 9 Accumulated pseudo-spectrum along the z-axis, with (a) z1 = 34 mm, z2 = 35 mm; (b) z1 = 34 mm, z2 = 34.4 mm; (c) z1 = 34 mm, z2 = 34.04 mm.
Fig. 10 Accumulated pseudo-spectrum along the z-axis, with (a) z1 = 100 μm, z2 = 100.1 μm; (b) z1 = 100 μm, z2 = 100.01 μm; (c) z1 = 100 μm, z2 = 100.001 μm.
Fig. 11 The relationship between the resolution and the value of z1.
Fig. 12 Accumulated pseudo-spectrum along the z-axis, with (a) λ = 405 nm; (b) λ = 632 nm; (c) λ = 780 nm.
Fig. 13 The relationship between the resolution and the SNR.
Fig. 14 The distribution of eigenvalues at different SNR levels.
Fig. 15 Schematic diagram of the point responses in the x and y directions.
Fig. 16 (a) Section 1 at z1 = 31 mm; (b) section 2 at z2 = 32 mm; (c) the generated hologram.
Fig. 17 (a) The eigenvalues of the TR matrix; (b) the accumulated diagonal elements along the z-axis.
Fig. 18 Axial localization distributions of TR-MUSIC and entropy minimization, with (a) z1 = 31 mm, z2 = 41 mm; (b) z1 = 31 mm, z2 = 35 mm; (c) z1 = 31 mm, z2 = 32 mm.
Fig. 19 (a) Real part and (b) imaginary part of the hologram.
Fig. 20 (a) The eigenvalues of the TR matrix; (b) the accumulated diagonal elements along the z-axis.
Fig. 21 Reconstructed sectional images at (a) z1 and (b) z2.
Fig. 22 Reconstructed image using entropy minimization and inverse imaging.

Tables (4)

Table 1 The coordinates of the nine points.
Table 2 Different values of z1 and the corresponding resolution.
Table 3 Different wavelengths and the corresponding resolution.
Table 4 Computational cost for different matrix sizes.

Equations (21)


(1) \mathcal{H}(k_x, k_y; z) = \exp\left[\frac{jz}{2k_0}\left(k_x^2 + k_y^2\right)\right]

(2) h(x, y; z) = \frac{jk_0}{2\pi z} \exp\left[\frac{jk_0 (x^2 + y^2)}{2z}\right]

(3) g(x, y) = \int_{-\infty}^{+\infty} |\Gamma(x, y; z)|^2 \otimes h(x, y; z)\, \mathrm{d}z,

(4) G(k_x, k_y) = \int_{-\infty}^{+\infty} \mathcal{F}\{\mathcal{O}(x, y; z)\}\, \mathcal{H}(k_x, k_y; z)\, \mathrm{d}z,

(5) G(k_x, k_y) = \sum_{i=1}^{N} \mathcal{F}\{\mathcal{O}(x, y; z_i)\} \times \mathcal{H}(k_x, k_y; z_i)

(6) g(x_s, y_s) = \sum_{i=1}^{N} \mathcal{O}(x_s, y_s; z_i) \otimes h(x_s, y_s; z_i) = \sum_{i=1}^{N} \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} \mathcal{O}(x, y; z_i)\, h(x - x_s, y - y_s; z_i)\, \mathrm{d}x\, \mathrm{d}y = \sum_{m=1}^{M} \mathcal{O}(x_m, y_m; z_m)\, h(x_m - x_s, y_m - y_s; z_m),

(7) K = \mathcal{F}\{g(x_s, y_s)\}.

(8) h(x, y; -z) = \frac{jk_0}{2\pi(-z)} \exp\left[\frac{jk_0 (x^2 + y^2)}{2(-z)}\right] = -\frac{jk_0}{2\pi z} \exp\left[-\frac{jk_0 (x^2 + y^2)}{2z}\right],

(9) h(x, y; -z) = h^*(x, y; z)

(10) \mathcal{H}(k_x, k_y; -z) = \mathcal{H}^*(k_x, k_y; z)

(11) T_{\mathrm{DSSD}} = \mathcal{F}^{-1}\{K K^H\} = g(x_s, y_s) \star g(x_s, y_s),

(12) g(x_s, y_s) = \sum_{m=1}^{M} \mathcal{O}(x_m, y_m; z_m)\, h(x_m - x_s, y_m - y_s; z_m) + \sum_{m=M+1}^{N_1} 0 \cdot h(x_m - x_s, y_m - y_s; z_m) = \sum_{m=1}^{M} v_x(m)\, \mathcal{O}(x_m, y_m; z_m)\, v_y^T(m)

(13) T_{\mathrm{DSSD}} = \left[\sum_{m=1}^{M} v_x(m)\, \mathcal{O}(x_m, y_m; z_m)\, v_y^T(m)\right] \left[\sum_{n=1}^{M} v_y^*(n)\, \mathcal{O}^*(x_n, y_n; z_n)\, v_x^H(n)\right] = \sum_{m=1}^{M} v_x(m)\, |\mathcal{O}(x_m, y_m; z_m)|^2\, \|v_y(m)\|^2\, v_x^H(m)

(14) T_{\mathrm{SDDS}} = \mathcal{F}^{-1}\{K^H K\} = g(x_s, y_s) \star g(x_s, y_s) = \left[\sum_{m=1}^{M} v_y^*(m)\, \mathcal{O}^*(x_m, y_m; z_m)\, v_x^H(m)\right] \left[\sum_{n=1}^{M} v_x(n)\, \mathcal{O}(x_n, y_n; z_n)\, v_y^T(n)\right] = \sum_{m=1}^{M} v_y^*(m)\, |\mathcal{O}(x_m, y_m; z_m)|^2\, \|v_x(m)\|^2\, v_y^T(m).

(15) T_{\mathrm{DSSD}}\, v_x(m) = |\mathcal{O}(x_m, y_m; z_m)|^2\, \|v_y(m)\|^2\, \|v_x(m)\|^2\, v_x(m), \qquad T_{\mathrm{SDDS}}\, v_y^*(m) = |\mathcal{O}(x_m, y_m; z_m)|^2\, \|v_x(m)\|^2\, \|v_y(m)\|^2\, v_y^*(m).

(16) \lambda_m = |\mathcal{O}(x_m, y_m; z_m)|^2\, \|v_y(m)\|^2\, \|v_x(m)\|^2, \qquad m = 1, \dots, M

(17) \langle v_x(m = 1, \dots, M),\, v_x(m = M+1, \dots, N_1) \rangle = 0, \qquad \langle v_y^*(m = 1, \dots, M),\, v_y^*(m = M+1, \dots, N_1) \rangle = 0

(18) Q_x(X_p, z_i) = \sum_{m=M+1}^{N_1} \left| \left(v_x(m)\right)^T v_1^*(X_p, z_i) \right|^2, \qquad Q_y(X_p, z_i) = \sum_{m=M+1}^{N_1} \left| \left(v_y^*(m)\right)^T v_2(X_p, z_i) \right|^2

(19) P_x(X_p, z_i) = \|v_1(X_p, z_i)\|^2 / Q_x(X_p, z_i), \qquad P_y(X_p, z_i) = \|v_2(X_p, z_i)\|^2 / Q_y(X_p, z_i)

(20) P(X_p, z_i) = P_x(X_p, z_i)\, P_y(X_p, z_i),

(21) R(z_i) = \sum_{p=1}^{N_1^2} P(X_p, z_i),
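The pseudo-spectrum construction above (eigendecomposition of the hologram-based TR matrices, projection of trial steering vectors onto the noise subspaces to form Q, then P and the accumulated R(z)) can be sketched numerically. The following is a minimal, noise-free toy model, not the paper's implementation: the grid size, pitch, wavelength, and target depths are illustrative assumptions, and the object is reduced to two on-axis point scatterers so that the hologram is simply a sum of two Fresnel zone plates.

```python
import numpy as np

# Toy parameters (assumptions for illustration, not the paper's settings).
N1 = 64                    # the hologram is sampled on an N1 x N1 grid
wavelength = 632e-9        # metres
k0 = 2 * np.pi / wavelength
dx = 10e-6                 # sampling pitch in metres
depths = (31e-3, 32e-3)    # true axial positions of the two point targets

x = (np.arange(N1) - N1 // 2) * dx
X, Y = np.meshgrid(x, x)

def fzp(z):
    """Free-space point response h(x, y; z), i.e. the Fresnel zone plate."""
    return (1j * k0 / (2 * np.pi * z)) * np.exp(1j * k0 * (X**2 + Y**2) / (2 * z))

# Hologram g(x_s, y_s): each point target contributes one FZP at its depth.
g = fzp(depths[0]) + fzp(depths[1])

# One SVD of g supplies the eigenvectors of both TR matrices: the left
# singular vectors are the v_x(m) of T_DSSD = g g^H, the right singular
# vectors the v_y(m) of T_SDDS = g^H g.
U, s, Vh = np.linalg.svd(g)
M = len(depths)                # signal-subspace dimension (known here;
Ux_noise = U[:, M:]            # in practice read off the eigenvalue drop)
Vy_noise = Vh[M:, :].conj().T  # noise-subspace eigenvectors of T_SDDS

def accumulated_pseudo_spectrum(z):
    """R(z): MUSIC pseudo-spectrum summed over all trial points at depth z."""
    alpha = k0 / (2 * z)
    # Row p is the 1D steering vector for a trial point at lateral x[p].
    V1 = np.exp(1j * alpha * (x[None, :] - x[:, None]) ** 2)
    # Q_x, Q_y: squared projections onto the noise subspaces; they collapse
    # towards zero only when the trial point coincides with a true target.
    Qx = np.sum(np.abs(V1.conj() @ Ux_noise) ** 2, axis=1)
    Qy = np.sum(np.abs(V1 @ Vy_noise) ** 2, axis=1)
    Px, Py = N1 / Qx, N1 / Qy        # ||steering vector||^2 = N1
    # P = Px * Py summed over the N1 x N1 trial grid factorizes:
    return Px.sum() * Py.sum()

z_scan = np.linspace(30e-3, 33e-3, 61)          # 0.05 mm depth steps
R = np.array([accumulated_pseudo_spectrum(z) for z in z_scan])
peaks = np.sort(z_scan[np.argsort(R)[-2:]])     # two sharpest peaks
print(peaks * 1e3)                              # recovered depths in mm
```

For this noise-free toy case the two largest values of R(z) fall on the two assumed target depths, illustrating how the accumulated pseudo-spectrum isolates axial positions without any axial scanning.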