Optica Publishing Group

Three-dimensional tomographic reconstruction using Voronoi weighting

Open Access

Abstract

Three-dimensional tomographic reconstruction requires careful selection of the illumination angles, often under certain measurement constraints. When the angular distribution must be nonuniform, appropriate selection of the reconstruction weights is necessary. We show that Voronoi weighting can significantly improve the fidelity of optical diffraction tomography.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Quantitative phase imaging (QPI) has been widely used for imaging phase objects [1]. Being label-free and non-invasive, refractive-index (RI) tomographic imaging techniques are used for reconstructing a wide class of transparent objects, such as biological cells, soft tissues, and optical fibers [2–5]. A typical tomographic experiment begins with a data acquisition process, in which the target object is illuminated from various directions and the diffracted fields (or projections) are recorded for reconstruction [6].

When the wavelength of the imaging beam, $\lambda $, is much smaller than the size of inhomogeneous features in the imaged object, diffraction effects can be neglected, and reconstruction methods based on computerized tomography (CT) are well established [7,8]. Conversely, when the wavelength is comparable to the feature size in the imaged object, the diffractive nature of light must be accounted for. In this regime, optical diffraction tomography (ODT) can be implemented using linear and iterative inversions [9,10].

In general, tomographic reconstructions are sensitive to the selection of illumination angles. In most cases, reconstructions suffer from the so-called “missing cone problem”, caused primarily by the limited range of accessible illumination angles [11]. This well-known problem has been addressed using optimized regularization [12] and deep learning algorithms [13].

For three-dimensional (3D) imaging, another potential problem arises when the illumination angles are not uniformly distributed over the surface of a unit sphere. In such scenarios, the nonuniform angular coverage may lead to artifacts if the nonuniformity is not properly accounted for. A less flexible solution, which applies a transfer function computed using the Ewald sphere construction, was proposed for conical illumination [14]. In this paper, we expand our previously presented study [15] with a mathematical analysis of this problem and propose a modification to the standard ODT algorithm in which the contribution of each angle is precisely calculated as its solid-angle share of the total angular coverage on the unit sphere.

We begin by showing the effect of angular sampling using a simple ODT reconstruction model and then propose a technique that uses a Voronoi diagram on the surface of the unit sphere to provide a weight for each illumination angle's contribution in the reconstruction process, thereby reducing the effects of nonuniform sampling. A centroidal Voronoi tessellation was previously used to quantify the maximum area of each beamlet's diffraction pattern in single-shot ptychography using a Fermat-spiral pinhole distribution [16]. Here, we validate our proposed Voronoi weighting (VW) technique in a series of simulated experiments for various illumination angle selections and demonstrate signal-to-noise ratio (SNR) improvements of the reconstructed RI of up to 5.49 dB. Although this study focuses on ODT, Voronoi weighting can be applied to any reconstruction algorithm that includes an integration over a set of incident angles, and hence helps to improve the accuracy of 3D tomographic reconstructions in general.

2. Mathematical model

A reconstruction algorithm represents an inversion of a forward model relating the diffracted fields to the object RI distribution. In this section, we first discuss several tomographic reconstruction algorithms including filtered back-projection (FBPJ) [17], filtered back-propagation (FBPP) [18], and a modified version of the latter for off-axis reconstructions [9]. We then introduce the proposed mathematical modification to the existing algorithms.

The tomographic image configuration is shown in Fig. 1. The object with RI profile $n({x,y,z} )$ is illuminated by a set of plane waves directed at different angles, and the diffracted field is collected for each direction. For an illumination direction characterized by elevation and azimuthal angles $({\theta ,\varphi } )$, respectively, the incident and diffracted fields, ${u_o}({x^{\prime},y^{\prime},z^{\prime}} )$ and $u({x^{\prime},y^{\prime},z^{\prime};\theta ,\varphi } )$, are expressed in the rotated "local" coordinates $({x^{\prime},y^{\prime},z^{\prime}} )$, which are related to the fixed "global" coordinates $({x,y,z} )$ by the following rotation matrices:

$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} \cos(-\varphi) & \sin(-\varphi) & 0 \\ -\sin(-\varphi) & \cos(-\varphi) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos(-\theta) & 0 & -\sin(-\theta) \\ 0 & 1 & 0 \\ \sin(-\theta) & 0 & \cos(-\theta) \end{bmatrix} \begin{bmatrix} \cos\varphi & \sin\varphi & 0 \\ -\sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}. \tag{1}$$
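As a numerical sanity check, the composite rotation of Eq. (1) can be assembled directly; the sketch below uses our own helper names (not from the paper) and verifies only the generic properties of a rotation matrix:

```python
import numpy as np

def Rz(a):
    """Rotation about the z axis, matching the z-blocks of Eq. (1)."""
    return np.array([[np.cos(a),  np.sin(a), 0.0],
                     [-np.sin(a), np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def Ry(a):
    """Rotation about the y axis, matching the middle block of Eq. (1)."""
    return np.array([[np.cos(a), 0.0, -np.sin(a)],
                     [0.0,       1.0,  0.0],
                     [np.sin(a), 0.0,  np.cos(a)]])

def local_from_global(theta, phi):
    # Composite matrix of Eq. (1): [x', y', z']^T = Rz(-phi) Ry(-theta) Rz(phi) [x, y, z]^T
    return Rz(-phi) @ Ry(-theta) @ Rz(phi)
```

Being a product of elementary rotations, the result is orthogonal with unit determinant, which provides a quick consistency check on any implementation.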


Fig. 1. Schematic of the object rotation tomographic image configuration.


2.1 Tomographic reconstruction

When imaging low-contrast objects whose feature sizes are much greater than the wavelength of the illuminating plane wave, optical diffraction is negligible. Hence, measurement at each illumination angle produces a projection of the object function in accordance with the projection-slice theorem [19]. This class of objects is appropriate for CT, and reconstruction is implemented by means of the FBPJ algorithm [17]. When the illuminating wavelength is comparable to the size of the imaged features, diffraction cannot be ignored. In the case of weakly scattering objects, the first Born or Rytov approximation leads to a linear relationship between the measured diffracted fields and the object function $f(x,y,z) = (2\pi/\lambda)^2[n^2(x,y,z) - n_b^2]$, where $n_b$ represents the RI of the background medium. Then, a simple ODT algorithm such as FBPP can be used for reconstruction [18]. For this paper, we elect to use the first Rytov approximation due to its versatility [20].

When reconstructing an off-axis object, a modified FBPP algorithm, namely the extended-depth-of-focus (EDOF)-FBPP, is preferred since it offers considerable improvement in the reconstruction of features at locations away from the center of rotation [9]. This Rytov-based EDOF-FBPP algorithm involves the following steps:

  • i) Each measured diffracted field is backpropagated through the background medium to obtain the field ${u_b}({x^{\prime},y^{\prime},z^{\prime};\theta ,\varphi } )$ needed to compute the Rytov complex phase, ${\Phi _R}({x^{\prime},y^{\prime},z^{\prime};\theta ,\varphi } )$, as follows
    $$\Phi_R(x',y',z';\theta,\varphi) = \ln\frac{u_b(x',y',z';\theta,\varphi)}{u_o(x',y',z')}. \tag{2}$$
  • ii) The Rytov complex phase is unwrapped and filtered in the Fourier domain,
    $$\Pi_{EDOF}(k_{x'},k_{y'},z';\theta,\varphi) = \Phi_R(k_{x'},k_{y'},z';\theta,\varphi)\,|k_r|, \qquad k_r = \sqrt{k_{x'}^2 + k_{y'}^2}, \tag{3}$$
    where $(k_{x'},k_{y'})$ represent the spatial frequencies along the $(x',y')$ axes, respectively. Moreover, $\Pi_{EDOF}(k_{x'},k_{y'},z';\theta,\varphi)$ and $\Phi_R(k_{x'},k_{y'},z';\theta,\varphi)$ represent the 2D Fourier transforms of $\Pi_{EDOF}(x',y',z';\theta,\varphi)$ and $\Phi_R(x',y',z';\theta,\varphi)$, respectively, with respect to the rotated coordinates $(x',y')$.
  • iii) Finally, the calculated π-images, ${\Pi _{EDOF}}({x^{\prime},y^{\prime},z^{\prime};\theta ,\varphi } )$, are summed over all illumination angles to reconstruct the object function
    $$f(x,y,z) = -j\left(\frac{2 n_b}{\lambda}\right)\int_0^{4\pi} \Pi_{EDOF}(x',y',z';\theta,\varphi)\, d\Omega, \tag{4}$$
    where $\Omega$ represents the solid angle (in steradians), whose differential element relates to the illumination angles $(\theta,\varphi)$ through the relation:
    $$\int_0^{4\pi} d\Omega \equiv \int_0^{2\pi}\!\!\int_0^{\pi} \sin\theta\, d\theta\, d\varphi. \tag{5}$$

It is worth mentioning that the right-hand side of Eq. (4) represents a generalized 3D form of the 2D equation used in Refs. [9,10]; the 2D form can be recovered by substituting $\theta = \frac{\pi}{2}$ into Eq. (5) and resubstituting into Eq. (4).
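Under simplifying assumptions (scalar fields on a uniform transverse grid, and with the 2D phase unwrapping of step ii) omitted for brevity), the per-view processing of steps i) and ii) can be sketched as follows; the function name and arguments are ours, not from the paper:

```python
import numpy as np

def edof_view(u_b, u_o, dx):
    """Single-view contribution: Rytov phase (Eq. (2)) followed by
    the |k_r| filter of Eq. (3)."""
    ratio = u_b / u_o
    # i) complex Rytov phase; real part = log-amplitude, imaginary part = phase.
    #    In practice the phase must be unwrapped in 2D before filtering.
    phi_R = np.log(np.abs(ratio)) + 1j * np.angle(ratio)
    # ii) apply the ramp filter |k_r| in the transverse Fourier domain
    ny, nx = phi_R.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kr = np.hypot(kx[None, :], ky[:, None])
    return np.fft.ifft2(np.fft.fft2(phi_R) * kr)
```

Step iii) then sums such images over all views with the per-angle weights of the discretized form of Eq. (4). Note that the ramp filter zeroes the DC component of each view.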

2.2 Voronoi diagram weighting

While implementing the reconstruction algorithm based on Eq. (4), the integral becomes a Riemann summation over $N$ illumination angles,

$$f(x,y,z) = -j\left(\frac{2 n_b}{\lambda}\right)\sum_{n=1}^{N} \Pi_{EDOF}(x',y',z';n)\, \Delta\Omega_n. \tag{6}$$

If all angles were uniformly distributed on the surface of a unit sphere, their solid-angle partitions would be equal, and the contributions from all illumination angles could be summed with equal weighting (EW). However, EW is not usually valid in ODT of 3D objects because complete and uniform coverage of the unit sphere is difficult to achieve [6], due to the limitations of angular scanning configurations and the finite numerical apertures of the lenses used.

First, the finite numerical aperture leads to incomplete angular coverage of the illuminated 3D object, limiting illumination to a maximum elevation angle ${\theta _{max}}$. Hence, under EW, the weight used to sum the back-projections from the $N$ illumination angles is $\varDelta {\Omega _n} = 2\pi ({1 - \cos {\theta _{max}}} )/N$.
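As an illustrative sketch (our own helper, not from the paper), the EW partition of the accessible spherical cap is:

```python
import numpy as np

def ew_weights(N, theta_max):
    """Equal-weighting solid angles: the cap's total solid angle
    2*pi*(1 - cos(theta_max)) is split evenly over N directions."""
    return np.full(N, 2.0 * np.pi * (1.0 - np.cos(theta_max)) / N)
```

By construction the weights sum to the solid angle of the cap, so the normalization of Eq. (6) is preserved regardless of $N$.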

Additionally, many popular illumination scenarios are not uniformly distributed on the surface of the unit sphere, so precise, non-equal weights proportional to the solid angle assigned to the Riemann partition of each illumination angle are needed. The assignment of the solid angle in Eq. (5) to each illumination angle is not unique in 3D for nonuniform angular distributions. We propose to define the solid angle associated with each illumination angle using the Voronoi diagram mapped onto the surface of the unit sphere [15,21].

The Voronoi diagram is a partition of a 2D surface into areas, called Voronoi cells, surrounding specific points, known as seeds, such that all locations within a cell are closer to its seed than to any other seed. In the 2D case, the edges of each cell are formed where circles centered at each seed intersect circles from neighboring seeds, as shown in Fig. 2. To assign weights, or solid angles, to illumination angles in the 3D case, the Voronoi map is drawn onto the 2D surface of the unit sphere.


Fig. 2. Formation of the Voronoi diagram in: (a-c) 2D, and (d-f) 3D cases. In the 2D case, we begin by (a) the under-weighting case, where not all edges have been fully formed. As the circles expand more, all edges are fully formed and (b) the contiguous weighting case is reached. Finally, (c) the “unwanted” over-weighting case is reached if the circles continue to expand, surpassing the correct weighting. In the 3D case, (d) a nonuniform annular illumination distribution in the back focal plane of the imaging system is shown. Finally, the Voronoi diagram for the annular set in: (e) the over-weighting, and (f) the contiguous weighting cases are illustrated.


The formation of the Voronoi diagram can be described as follows. As the circles expand and begin to intersect, we first reach the case in which some edges have formed while others have not. This represents the "under-weighting" case shown in Fig. 2(a). As the radii of the intersecting circles increase, more edges are formed. At a certain radius, all inter-space gaps are filled, as illustrated in Fig. 2(b), corresponding to the "contiguous weighting" case. If the circles continue to expand, the inner cells are unaffected, but the outer cells gain incorrect excess "over-weighting", as depicted in Fig. 2(c). Therefore, the "contiguous weighting" case is the best option for assigning the solid angle, or weight, of each illumination angle. At contiguous weighting, the outer perimeter of the Voronoi map (i.e., the convex hull) is obtained at the minimum cell radius for which no inter-space gap remains within the perimeter.

By applying the same concept in the 3D case and replacing the intersecting circles with intersecting spheres, we can build the Voronoi diagram for 3D angular distributions. To illustrate proper formation of the Voronoi diagram, we use an illumination configuration consisting of 5 latitude circles of 24 angles each, equally spaced azimuthally, as shown in Fig. 2(d) on the back focal plane (BFP) of the imaging system. When the radius of the spheres is too large, over-weighting of the outer cells is clearly visible in Fig. 2(e). Contiguous weighting is shown in Fig. 2(f). The difference between the Voronoi diagrams shown in Figs. 2(e) and 2(f) arises from the curved edges limiting the outer cells in the "contiguous weighting" case, Fig. 2(f), compared to the outer cells extending to the south pole in the "over-weighting" case, Fig. 2(e) [22]. A block diagram illustrating the main steps of the proposed Voronoi weighting is shown in Fig. 3. We modified the initial Voronoi diagram from [23] to illustrate these steps and applied them before the inversion algorithm to determine the Voronoi weight corresponding to each illumination direction. The use of the Voronoi weights in the inversion algorithm involves simple multiplication, so its impact on the speed of the inversion algorithm is insignificant; moreover, the computational time for determining these weights is less than a second on a standard desktop computer.


Fig. 3. Block diagram of the main steps in the calculation of the Voronoi weights.

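A minimal sketch of the weight calculation can be built on SciPy's spherical Voronoi tessellation. Note one important caveat: SciPy tessellates the whole sphere, so for angles confined to a cap the outer cells extend toward the south pole, i.e., this sketch yields the "over-weighting" case of Fig. 2(e); reproducing the contiguous-weighting perimeter requires the curved-edge limiting described above, which is not implemented here.

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

def voronoi_solid_angles(theta, phi):
    """Solid angle of the Voronoi cell around each illumination direction
    (theta = elevation from the z axis, phi = azimuth), computed on the
    full unit sphere."""
    pts = np.column_stack([np.sin(theta) * np.cos(phi),
                           np.sin(theta) * np.sin(phi),
                           np.cos(theta)])
    sv = SphericalVoronoi(pts, radius=1.0)
    # spherical-polygon areas on the unit sphere are solid angles (steradians)
    return sv.calculate_areas()
```

Because the tessellation covers the entire sphere, the returned solid angles sum to $4\pi$; clipping the outer cells to the accessible cap would redistribute that excess.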

We applied the same steps to other 3D illumination patterns, each with 120 illumination angles. Their distributions on the BFP of the imaging system and their corresponding contiguous-weighting Voronoi diagrams are depicted in Fig. 4: a three-axis star pattern in Fig. 4(a) [24]; a single spiral distribution evenly spaced along the curvilinear abscissa in Fig. 4(b) [24,25]; a spiral-latitude pattern consisting of a spiral trajectory terminated by a latitude circle at the edge of the accessible BFP in Fig. 4(c) [26]; and a periodic grid distribution on the BFP in Fig. 4(d) [24,27]. This latter grid, although fully uniform on the BFP, exhibits a slight nonuniformity on the surface of the unit sphere, and hence Voronoi diagram weighting is still applicable to this pattern.


Fig. 4. Various illumination angular distributions on the BFP of the imaging system and their related Voronoi diagram weighting including: (a) star grid with three axes, (b) equally spaced single spiral pattern, (c) spiral-latitude pattern and (d) periodic distribution pattern. Finally, (e) 1D and (f) 2D randomized versions of the annular grid.


The proposed illumination scenarios are frequently used in experimental setups [25–27]. However, they are still not truly uniform in terms of angular spacing on the surface of the unit sphere. Our proposed VW provides a method to enhance the fidelity of reconstruction in these nonuniform cases, provided the angles are known.

Finally, in Figs. 4(e) and 4(f), the 1D and 2D random sets represent versions of the annular grid with 5 latitude circles – that of Fig. 2(d) – randomized over only its azimuthal angles, and over both its elevation and azimuthal angles, respectively.
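For concreteness, the annular grid of Fig. 2(d) and its 1D-random variant of Fig. 4(e) can be generated as follows; the exact ring elevations are our assumption, since the paper specifies only 5 latitude circles of 24 angles each within the accessible cap:

```python
import numpy as np

def annular_grid(n_rings=5, n_az=24, theta_max=np.deg2rad(65.0), rng=None):
    """Annular illumination grid (elevation, azimuth pairs).
    Pass rng to randomize the azimuths, as in the '1D random' variant."""
    rings = theta_max * np.arange(1, n_rings + 1) / n_rings  # assumed spacing
    theta = np.repeat(rings, n_az)
    phi = np.tile(np.linspace(0.0, 2.0 * np.pi, n_az, endpoint=False), n_rings)
    if rng is not None:
        phi = rng.uniform(0.0, 2.0 * np.pi, theta.size)  # randomize azimuths only
    return theta, phi
```

The 2D-random variant of Fig. 4(f) would additionally draw the elevations at random within the accessible cap.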

3. Simulation results and discussion

In this section, we report the results of several tests that demonstrate the effectiveness of VW for reconstruction using the Rytov-based EDOF-FBPP algorithm. The tested objects include: two microspheres of real RI, a 3D modified Shepp-Logan phantom of real RI, and a microsphere of real RI containing two absorbing inclusions. The illumination wavelength is $\lambda = 1\;\mathrm{\mu m}$ and the pixel size along the three global axes is $0.1\;\mathrm{\mu m}$. The background RI is 1.563. We use the wide-angle beam propagation method (WABPM) [28] as the numerical solver for calculating the scattered optical fields. A normalization factor, defined as the ratio of the full illumination coverage ($4\pi$) to the total solid angle encompassing the actual illumination directions, is used to account for the limited angular range of the illumination. In VW reconstructions, the total solid angle of illumination is that subtended by all Voronoi cells. In EW reconstructions, it is approximated by the solid angle subtended by the spherical cap up to ${\theta _{max}}$, equal to 65° for the illumination scenarios under evaluation. For simplicity, a detailed reconstruction comparison for the annular grid is provided for each tested object; comparison results for all other tested illumination scenarios are then tabulated at the end of each example.

It is worth mentioning that the limited angular accessibility discussed above applies to imaging systems based on the illumination scanning configuration (ISC) [2,29]. Although the reconstruction algorithm used in this paper assumes an object rotation configuration (ORC) [29,30], an ISC-to-ORC conversion is possible. One way to perform this conversion is to first refocus each ISC scattered field to the $z = 0$ plane in the background medium. Next, the field is propagated from the $z = 0$ plane to a rotated plane, $z^{\prime} = 0$, in local coordinates, following the steps laid out in [31] for propagating scalar fields (or, alternatively, in [32] for vector fields). Lastly, a standard on-axis propagation solver can be used to propagate the field through the background medium to the output boundary of the simulation volume, $z^{\prime} = d$.
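The refocusing step can be implemented with a standard angular-spectrum propagator; the sketch below is a minimal scalar version (our own helper, with evanescent components discarded), not the tilted-plane transform of [31]:

```python
import numpy as np

def angular_spectrum(u, dx, dz, lam, n_b):
    """Propagate a scalar field u by a distance dz through a homogeneous
    background of index n_b (use a negative dz to refocus to z = 0)."""
    k = 2.0 * np.pi * n_b / lam
    ny, nx = u.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    kz2 = k**2 - kx[None, :]**2 - ky[:, None]**2
    kz = np.sqrt(np.maximum(kz2, 0.0))          # evanescent waves dropped
    H = np.exp(1j * kz * dz) * (kz2 > 0)        # propagation transfer function
    return np.fft.ifft2(np.fft.fft2(u) * H)
```

For an on-axis plane wave, this reduces to multiplication by $e^{jk\,dz}$, which makes for a simple correctness check.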

As a measure of the fidelity of reconstruction, we compute the SNR in dB:

$$SNR = 10\log_{10}\!\left( \|n_t - n_b\|_2^2 \,\big/\, \|n_t - n_r\|_2^2 \right), \tag{7}$$
where ${n_t}$ and ${n_r}$ represent the RI of the true and reconstructed object, respectively. The reconstruction algorithm is implemented in MATLAB 2021b and the simulations are performed on a computer equipped with an Intel Core i7-12700K 3.6 GHz CPU and 128 GB of RAM.
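The fidelity metric of Eq. (7) translates directly into code; the sketch below (our own helper) also handles complex RI via the modulus:

```python
import numpy as np

def snr_db(n_t, n_r, n_b):
    """SNR of Eq. (7): signal energy relative to reconstruction error (dB)."""
    num = np.sum(np.abs(n_t - n_b) ** 2)
    den = np.sum(np.abs(n_t - n_r) ** 2)
    return 10.0 * np.log10(num / den)
```

By construction, a reconstruction equal to the uniform background scores 0 dB, and a perfect reconstruction diverges to infinity.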

3.1 Two micro-spheres phantom

This phantom contains two microspheres, each of 2-$\mathrm{\mu m}$ radius, separated by 8 $\mathrm{\mu m}$ along the z axis, as illustrated in Fig. 5(a). The RI is real with ${n_t} = 1.583$, and the RI of the background medium is ${n_b} = 1.563$. Reconstructed images based on the annular grid shown in Fig. 2(d), using EW and VW, are compared in Figs. 5(b) and 5(c). Notably, the blurring along the z direction evident in the EW reconstruction, a consequence of the missing cone problem, is clearly eliminated in the VW reconstruction. The profiles shown in Fig. 5(d) also illustrate the effectiveness of VW.


Fig. 5. Reconstructions of object 1 (two spheres with real RI) using equal weighting (EW) and Voronoi weighting (VW) and annular-grid illumination. (a)-(c) Slices in the y-z and y-x planes of the original, EW-reconstructed, and VW-reconstructed objects. (d) 1D profiles along dashed lines.


The reconstruction SNR was calculated within a cubic region of interest (ROI) with a side length of $20\;\mathrm{\mu m}$, centered midway between the two microspheres. The VW reconstruction SNR is 5.16 dB greater than that of EW when annular-grid illumination is used. However, the improvement is sensitive to the degree of uniformity of the illumination configuration, as can be seen from the EW and VW reconstruction SNRs listed in Table 1. The annular grid and its two randomized versions show the largest improvement, while the periodic pattern provides minimal improvement. One possible reason for the large improvement in the annular-grid case is its low optical transfer function fill factor in the frequency domain, as pointed out in [24]. To examine these differences, we calculated the normalized log-scale point spread function (PSF) in the y-z plane for both the annular grid and the periodic pattern, as shown in Fig. 6. To calculate the PSF, an object consisting of a single pixel located at the center is reconstructed using both scenarios under both weighting cases (EW and VW). Two additional copies of this pixel at $y,z = +5\;\mathrm{\mu m}$ were added to assess the shift invariance of the PSF. For the annular grid, a blurring artifact is noticeable along the z axis in the EW case, Fig. 6(a), which is clearly eliminated in the VW case, Fig. 6(b). The PSFs of the periodic pattern in the two cases – EW and VW – are almost identical, as illustrated in Figs. 6(c) and 6(d), consistent with the minimal improvement for this pattern. The effect of changing the total number of illumination angles has also been investigated: the VW reconstruction SNR was found to be 5.26 and 3.46 dB greater than that of EW when using the annular grid with only 60 and 30 acquired projections, respectively.


Table 1. SNR (dB) of reconstructions of object 1 based on equal weighting (EW) and Voronoi weighting (VW) for various configurations of illumination angles.


Fig. 6. Point spread function (PSF) for (a-b) the annular-grid illumination using (a) equal weighting (EW) and (b) Voronoi weighting (VW), and for (c-d) the periodic pattern using (c) EW and (d) VW.


3.2 3D modified Shepp-Logan phantom

This 3D test phantom is a Shepp-Logan distribution modified by Schabel [33], used previously in [15]. The object extends from $-5$ to $+5\;\mathrm{\mu m}$ in all dimensions. The RI is real, and the index contrast is $\varDelta n = \max ({{n_t} - {n_b}} )= 0.006$. To assess the effectiveness of reconstruction for off-axis objects, three microspheres of radius $1\;\mathrm{\mu m}$, each located $6.5\;\mathrm{\mu m}$ from the center, are added along the three global axes. The RI distribution of the object in three orthogonal planes passing through the origin is shown in Fig. 7(a).


Fig. 7. Reconstructions of object 2 (3D modified Shepp Logan phantom with real RI) using equal weighting (EW) and Voronoi weighting (VW) and annular-grid illumination. (a)-(c) Slices in the y-z, x-z and y-x planes of the original, EW-reconstructed, and VW-reconstructed objects. (d) 1D profiles along dashed lines.


Reconstructed images based on the annular grid of Fig. 2(d), using EW and VW, are compared in Figs. 7(b) and 7(c). Again, the blurring along the z direction evident in the EW reconstruction, a consequence of the missing cone problem, is largely eliminated in the VW reconstruction. Moreover, the off-axis microspheres become clearly distinguishable. As the profiles in the top row of Fig. 7(d) indicate, the separation between the microsphere and the central object – around $1.5\;\mathrm{\mu m}$ long – is clearly identified in the VW reconstruction (solid red) while nearly absent in the EW reconstruction (solid blue). The overall SNR within the same cubic ROI used for phantom 1 improved by 5.49 dB. The efficacy of VW was also tested for various illumination configurations, and the results are listed in Table 2. As for phantom 1, the annular grid and its two randomized versions show the largest improvement, while the periodic grid provides no improvement.


Table 2. SNR (dB) of reconstructions of object 2 based on equal weighting (EW) and Voronoi weighting (VW) for various configurations of illumination angles.

3.3 Complex sphere with two absorbing inclusions

This phantom is a microsphere of radius $4\;\mathrm{\mu m}$ with two embedded smaller microspheres, each of radius $1.6\;\mathrm{\mu m}$, located at $z = \pm 2\;\mathrm{\mu m}$. The RI has a real component of 1.583 everywhere within the large microsphere and an imaginary component of 0.01 and 0.02 in the right and left spheres, respectively, as depicted in Fig. 8(a).


Fig. 8. Reconstructions of object 3 (dielectric sphere with two absorbing inclusions) using equal weighting (EW) and Voronoi weighting (VW) and annular-grid illumination. (a)-(c) Slices in the y-z and y-x planes of the original, EW-reconstructed, and VW-reconstructed objects. (d) 1D profiles along dashed lines. Real (up) and imaginary (down) components of RI are shown.


Reconstructed images based on the annular illumination grid of Fig. 2(d), using EW and VW, are compared in Figs. 8(b) and 8(c). Here again, the blurred region along the z direction is suppressed considerably. The SNR, computed within a spherical ROI of radius $4.25\;\mathrm{\mu m}$ surrounding the microsphere for both the real and imaginary components of the RI, is increased by the use of VW, as shown in Fig. 8(c). Moreover, the two absorbing inclusions are clearly separated in the VW reconstruction. The improvement is also manifested in the profiles shown in Fig. 8(d).

It is notable that a crosstalk effect between the real and imaginary components is observed in both the EW and VW reconstructions. This problem can be addressed by use of the error-subtraction iterative algorithm [34], but it is beyond the scope of this study.

Finally, dependence of the improvement in SNR provided by VW on the illumination configuration, as tabulated in Table 3, is similar to that observed for phantoms 1 and 2. VW offers its greatest enhancement for nonuniform illumination and has an insignificant effect for uniform illumination.


Table 3. SNR (dB) of reconstructions of real and imaginary components of object 3 based on equal weighting (EW) and Voronoi weighting (VW) for various configurations of illumination angles.

4. Conclusion

In this paper, we presented a modified scheme for ODT reconstruction under nonuniform illumination angles, applying Voronoi weighting to the contribution of each angle to the reconstruction. These weights are proportional to the solid angle assigned to each discretized filtered back-propagated (or back-projected) term in the reconstruction algorithm. We introduced the Voronoi diagram as a graphical method to assign a solid angle to each illumination angle of a nonuniform angular scenario. Additionally, we limited the extent of the Voronoi diagram by precisely calculating curved edges to account for the limited accessibility of the imaging system. We applied VW to three simulated objects and showed that, in nearly all cases, VW enhances the SNR of the reconstruction. For certain configurations, such as the annular grid, VW was shown to enhance the reconstruction SNR by as much as 5.49 dB, demonstrating a significant impact on reconstruction quality. The degree of enhancement for each illumination configuration depends on the uniformity of the Voronoi weights: the annular grid exhibited the highest SNR improvements, while the uniform-BFP grid provided the least. In general, the proposed weighting scheme can be applied to enhance the performance of any reconstruction algorithm that ideally requires integration over a continuum of directions in 3D space but can realistically use only a finite set of directions. In particular, this approach has important implications for improving the accuracy of other 3D optical or acoustic tomographic reconstructions.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1(4), 153–156 (1969). [CrossRef]  

2. Y. Sung, W. Choi, C. Fang-Yen, et al., “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17(1), 266–277 (2009). [CrossRef]  

3. K. Kim, J. Yoon, S. Shin, et al., “Optical diffraction tomography techniques for the study of cell pathophysiology,” J. Biomed. Photon. Eng. 2(2), 1–16 (2016).

4. K. Kim, K. S. Kim, H. Park, et al., “Real-time visualization of 3-D dynamic microscopic objects using optical diffraction tomography,” Opt. Express 21(26), 32269–32278 (2013). [CrossRef]  

5. A. D. Yablon, “Multi-wavelength optical fiber refractive index profiling by spatially resolved Fourier transform spectroscopy,” J. Lightwave Technol. 28(4), 360–364 (2010).

6. A. Kuś, W. Krauze, P. L. Makowski, et al., “Holographic tomography: hardware and software solutions for 3D quantitative biomedical imaging (Invited paper),” ETRI J. 41(1), 61–72 (2019).

7. T. M. Buzug, “Computed Tomography,” in Springer Handbook of Medical Technology (Springer, 2011), pp. 311–342.

8. J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances, 3rd ed. (SPIE, 2015).

9. J. Kostencka and T. Kozacki, “Computational and experimental study on accuracy of off-axis reconstructions in optical diffraction tomography,” Opt. Eng. 54(2), 024107 (2015).

10. S. Fan, S. Smith-Dryden, J. Zhao, et al., “Optical fiber refractive index profiling by iterative optical diffraction tomography,” J. Lightwave Technol. 36(24), 5754–5763 (2018).

11. J. Lim, K. Lee, K. H. Jin, et al., “Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography,” Opt. Express 23(13), 16933 (2015).

12. Y. Sung and R. R. Dasari, “Deterministic regularization of three-dimensional optical diffraction tomography,” J. Opt. Soc. Am. A 28(8), 1554 (2011).

13. H. Chung, J. Huh, G. Kim, et al., “Missing cone artifact removal in ODT using unsupervised deep learning in the projection domain,” IEEE Trans. Comput. Imaging 7, 747–758 (2021).

14. J. Kostencka, T. Kozacki, A. Kuś, et al., “Holographic tomography with scanning of illumination: space-domain reconstruction for spatially invariant accuracy,” Biomed. Opt. Express 7(10), 4086–4101 (2016).

15. J. A. B. Aziz, S. Smith-Dryden, G. Li, et al., “Voronoi weighting for optical diffraction tomography using nonuniform illumination angles,” in 2023 IEEE Photonics Conference (IPC) (2023), pp. 1–2.

16. D. Goldberger, J. Barolak, C. G. Durfee, et al., “Three-dimensional single-shot ptychography,” Opt. Express 28(13), 18887–18898 (2020).

17. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (SIAM, 2001).

18. A. J. Devaney, “A filtered backpropagation algorithm for diffraction tomography,” Ultrason. Imag. 4(4), 336–350 (1982).

19. D. H. Garces, W. T. Rhodes, and N. M. Peña, “Projection-slice theorem: a compact notation,” J. Opt. Soc. Am. A 28(5), 766–769 (2011).

20. B. Chen and J. J. Stamnes, “Validity of diffraction tomography based on the first Born and the first Rytov approximations,” Appl. Opt. 37(14), 2996–3006 (1998).

21. P. A. Penczek, J. Zhu, and J. Frank, “A common-lines based method for determining orientations for N > 3 particle projections simultaneously,” Ultramicroscopy 63(3-4), 205–218 (1996).

22. A. Okabe, B. Boots, and K. Sugihara, “Nearest neighbourhood operations with generalized Voronoi diagrams: a review,” Int. J. Geogr. Inf. Syst. 8(1), 43–71 (1994).

23. B. Luong, “Voronoi Sphere,” MATLAB Central File Exchange (2024), https://www.mathworks.com/matlabcentral/fileexchange/40989-voronoi-sphere.

24. A. M. Taddese, N. Verrier, M. Debailleul, et al., “Optimizing sample illumination scanning in transmission tomography diffractive microscopy,” Appl. Opt. 60(6), 1694–1704 (2021).

25. Y. Sung, W. Choi, N. Lue, et al., “Stain-free quantification of chromosomes in live cells using regularized tomographic phase microscopy,” PLoS One 7(11), e49502 (2012).

26. S. Chowdhury, M. Chen, R. Eckert, et al., “High-resolution 3D refractive index microscopy of multiple-scattering samples from intensity images,” Optica 6(9), 1211–1219 (2019).

27. J. M. Soto, J. A. Rodrigo, and T. Alieva, “Optical diffraction tomography with fully and partially coherent illumination in high numerical aperture label-free microscopy,” Appl. Opt. 57(1), A205–A214 (2018).

28. K. Kawano and T. Kitoh, Introduction to Optical Waveguide Analysis: Solving Maxwell’s Equations and the Schrödinger Equation (Wiley, 2001).

29. C. Park, S. Shin, and Y. Park, “Generalized quantification of three-dimensional resolution in optical diffraction tomography using the projection of maximal spatial bandwidths,” J. Opt. Soc. Am. A 35(11), 1891–1898 (2018).

30. M. Ziemczonok, A. Kuś, P. Wasylczyk, et al., “3D-printed biological cell phantom for testing 3D quantitative phase imaging systems,” Sci. Rep. 9(1), 18872 (2019).

31. T. Tommasi and B. Bianco, “Frequency analysis of light diffraction between rotated planes,” Opt. Lett. 17(8), 556–558 (1992).

32. S. Zhang, D. Asoubar, C. Hellmann, et al., “Propagation of electromagnetic fields between non-parallel planes: a fully vectorial formulation and an efficient implementation,” Appl. Opt. 55(3), 529–538 (2016).

33. M. Schabel, “3D Shepp-Logan phantom,” MATLAB Central File Exchange (2024), https://www.mathworks.com/matlabcentral/fileexchange/9416-3d-shepp-logan-phantom.

34. S. Fan, S. Smith-Dryden, G. Li, et al., “Reconstructing complex refractive-index of multiply-scattering media by use of iterative optical diffraction tomography,” Opt. Express 28(5), 6846–6858 (2020).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (8)

Fig. 1. Schematic of the object-rotation tomographic imaging configuration.
Fig. 2. Formation of the Voronoi diagram in (a-c) 2D and (d-f) 3D cases. In the 2D case, we begin with (a) the under-weighting case, where not all edges have been fully formed. As the circles expand further, all edges are fully formed and (b) the contiguous weighting case is reached. Finally, (c) the “unwanted” over-weighting case is reached if the circles continue to expand, surpassing the correct weighting. In the 3D case, (d) a nonuniform annular illumination distribution in the back focal plane of the imaging system is shown, and the Voronoi diagram for the annular set is illustrated in (e) the over-weighting and (f) the contiguous weighting cases.
Fig. 3. Block diagram of the main steps in the calculation of the Voronoi weights.
Fig. 4. Various illumination angular distributions on the BFP of the imaging system and their associated Voronoi-diagram weightings, including: (a) star grid with three axes, (b) equally spaced single-spiral pattern, (c) spiral-latitude pattern, and (d) periodic distribution pattern. Finally, (e) 1D and (f) 2D randomized versions of the annular grid.
Fig. 5. Reconstructions of object 1 (two spheres with real RI) using equal weighting (EW) and Voronoi weighting (VW) with annular-grid illumination. (a)-(c) Slices in the y-z and y-x planes of the original, EW-reconstructed, and VW-reconstructed objects. (d) 1D profiles along the dashed lines.
Fig. 6. Point spread function (PSF) for (a-b) the annular-grid illumination using (a) equal weighting (EW) and (b) Voronoi weighting (VW), and for (c-d) the periodic pattern using (c) EW and (d) VW.
Fig. 7. Reconstructions of object 2 (3D modified Shepp-Logan phantom with real RI) using equal weighting (EW) and Voronoi weighting (VW) with annular-grid illumination. (a)-(c) Slices in the y-z, x-z, and y-x planes of the original, EW-reconstructed, and VW-reconstructed objects. (d) 1D profiles along the dashed lines.
Fig. 8. Reconstructions of object 3 (dielectric sphere with two absorbing inclusions) using equal weighting (EW) and Voronoi weighting (VW) with annular-grid illumination. (a)-(c) Slices in the y-z and y-x planes of the original, EW-reconstructed, and VW-reconstructed objects. (d) 1D profiles along the dashed lines. Real (top) and imaginary (bottom) components of the RI are shown.

Tables (3)


Table 1. SNR (dB) of reconstructions of object 1 based on equal weighting (EW) and Voronoi weighting (VW) for various configurations of illumination angles.


Table 2. SNR (dB) of reconstructions of object 2 based on equal weighting (EW) and Voronoi weighting (VW) for various configurations of illumination angles.


Table 3. SNR (dB) of reconstructions of real and imaginary components of object 3 based on equal weighting (EW) and Voronoi weighting (VW) for various configurations of illumination angles.

Equations (7)


$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix} \begin{bmatrix} \cos\varphi & \sin\varphi & 0 \\ -\sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}.$$
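The coordinate rotation above composes an azimuthal rotation, a polar tilt, and the inverse azimuthal rotation. A minimal NumPy sanity check is sketched below, assuming the standard right-handed forms $R_z(\varphi)\,R_y(\theta)\,R_z(-\varphi)$ (the sign conventions are reconstructed, and the helper name `rotate_coords` is ours, not from the paper):

```python
import numpy as np

def rotate_coords(v, theta, phi):
    """Apply the composite rotation R_z(phi) @ R_y(theta) @ R_z(-phi)
    to a 3-vector v (theta: polar tilt, phi: azimuth, in radians)."""
    c, s = np.cos(phi), np.sin(phi)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    ct, st = np.cos(theta), np.sin(theta)
    Ry = np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])
    # R_z(-phi) is the transpose (inverse) of the orthogonal matrix R_z(phi)
    return Rz @ Ry @ Rz.T @ np.asarray(v, dtype=float)

# Tilting the optical (z) axis by 90 degrees at zero azimuth maps it onto x.
print(np.round(rotate_coords([0.0, 0.0, 1.0], np.pi / 2, 0.0), 12))
```

Because each factor is orthogonal, the composite rotation preserves vector norms, which is a quick consistency check for any implementation.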
$$\Phi_R(x, y, z; \theta, \varphi) = \ln \frac{u_b(x, y, z; \theta, \varphi)}{u_o(x, y, z)}.$$
$$\Pi_{EDOF}(k_x, k_y, z; \theta, \varphi) = \Phi_R(k_x, k_y, z; \theta, \varphi)\, |k_r|, \qquad k_r = \sqrt{k_x^2 + k_y^2},$$
$$f(x, y, z) = j \left( \frac{2 n_b}{\lambda} \right) \int_0^{4\pi} \Pi_{EDOF}(x, y, z; \theta, \varphi)\, d\Omega,$$
$$\int_0^{4\pi} d\Omega \equiv \int_0^{2\pi} \int_0^{\pi} \sin\theta \, d\theta \, d\varphi.$$
$$f(x, y, z) = j \left( \frac{2 n_b}{\lambda} \right) \sum_{n=1}^{N} \Pi_{EDOF}(x, y, z; n)\, \Delta\Omega_n.$$
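In the discretized sum above, each illumination direction $n$ contributes with solid-angle weight $\Delta\Omega_n$; for Voronoi weighting, that weight is the area of direction $n$'s Voronoi cell on the unit sphere, rather than the equal-weighting value $4\pi/N$. A minimal sketch using SciPy's `SphericalVoronoi` (the random direction set here is purely illustrative, not the paper's angular grids):

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

# Illustrative nonuniform set of illumination directions (unit vectors).
rng = np.random.default_rng(1)
dirs = rng.normal(size=(60, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Delta_Omega_n: area of each direction's Voronoi cell on the unit sphere.
sv = SphericalVoronoi(dirs, radius=1.0)
sv.sort_vertices_of_regions()
delta_omega = sv.calculate_areas()

# The cells tile the sphere, so the weights sum to the full 4*pi steradians;
# equal weighting would instead assign 4*pi/N to every direction.
print(np.isclose(delta_omega.sum(), 4 * np.pi))
```

Directions packed densely in one region then receive small weights and sparse directions receive large ones, which is exactly the compensation the weighted sum needs.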
$$\mathrm{SNR} = 10 \log_{10} \left( \left\| n_t - n_b \right\|_2^2 \big/ \left\| n_t - n_r \right\|_2^2 \right),$$
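The SNR metric above compares the object's contrast against the background with the reconstruction error. A short sketch, assuming a base-10 logarithm (dB) and the L2 norm, with $n_t$, $n_b$, $n_r$ the true, background, and reconstructed indices (the helper name `snr_db` is ours):

```python
import numpy as np

def snr_db(n_t, n_b, n_r):
    """SNR (dB) = 10*log10( ||n_t - n_b||_2^2 / ||n_t - n_r||_2^2 )."""
    num = np.sum(np.abs(n_t - n_b) ** 2)  # signal: object contrast over background
    den = np.sum(np.abs(n_t - n_r) ** 2)  # noise: reconstruction error
    return 10.0 * np.log10(num / den)

n_t = np.array([1.50, 1.36, 1.33])  # illustrative true RI profile
n_b = np.full(3, 1.33)              # homogeneous background
# Reconstructing nothing but the background gives exactly 0 dB.
print(snr_db(n_t, n_b, n_b))  # -> 0.0
```

Any reconstruction closer to the truth than the bare background yields a positive SNR, and a perfect reconstruction drives the denominator to zero (infinite SNR), so higher is better.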