
Deep-SMOLM: deep learning resolves the 3D orientations and 2D positions of overlapping single molecules with optimal nanoscale resolution

Open Access

Abstract

Dipole-spread function (DSF) engineering reshapes the images of a microscope to maximize the sensitivity of measuring the 3D orientations of dipole-like emitters. However, severe Poisson shot noise, overlapping images, and simultaneously fitting high-dimensional information (both orientation and position) greatly complicate image analysis in single-molecule orientation-localization microscopy (SMOLM). Here, we report a deep-learning based estimator, termed Deep-SMOLM, that achieves superior 3D orientation and 2D position measurement precision within 3% of the theoretical limit (3.8° orientation, 0.32 sr wobble angle, and 8.5 nm lateral position using 1000 detected photons). Deep-SMOLM also demonstrates state-of-the-art estimation performance on overlapping images of emitters, e.g., a 0.95 Jaccard index for emitters separated by 139 nm, corresponding to a 43% image overlap. Deep-SMOLM accurately and precisely reconstructs 5D information of both simulated biological fibers and experimental amyloid fibrils from images containing highly overlapped DSFs, at a speed ~10 times faster than iterative estimators.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Single-molecule orientation-localization microscopy (SMOLM) is a versatile tool for visualizing interactions between biomolecules and the architecture of the structures they create; it simultaneously measures the 3D orientations and positions of individual fluorescent molecules with nanoscale resolution. Researchers have used molecular orientations [1] to elucidate the architectures of amyloid aggregates [2–4], the organization of proteins in actin filaments [5,6], and changes in lipid membrane polarity and fluidity induced by cholesterol [4,7,8]. To effectively use limited photon budgets in single-molecule (SM) imaging, dipole-spread functions (DSFs), i.e., the vectorial extension of the optical microscope’s point-spread function, must be engineered to encode additional information about a molecule’s 3D orientation [5,9–15]. However, simultaneously estimating the 3D orientation and position of an emitter is challenging because 1) it is difficult to estimate SM parameters in 5-dimensional space (3D orientation, wobble, 2D position) without getting trapped in local minima created by severe Poisson shot noise [4,14], 2) engineered DSFs have larger footprints and cause SM images to overlap frequently, and 3) dim emitters are difficult to detect when large DSFs spread photons across many camera pixels.

To estimate SM orientations, existing techniques match noisy experimental images to a pre-computed library of sampled DSFs [16] or construct parametric fits using models of the imaging system [5,9–11,13,17,18]. These methods suffer either from 1) reduced precision due to finite sampling and/or DSF approximation or 2) a heavy computational burden during the slow, iterative optimization process. In addition, estimating parameters of dim SMs whose images overlap is extremely challenging. Fundamentally, estimating 3D orientation and 2D position is equivalent to deconvolving five-dimensional features from two-dimensional noisy images. Convolutional neural networks have achieved great success in shift-invariant pattern recognition [19–21]. Early neural networks for measuring orientation are limited to images containing one emitter [22,23]. Recently, DeepSTORM3D and DECODE have been developed for estimating the 3D positions of single molecules, even for a high density of emitters whose images overlap [24,25]. However, techniques capable of high-dimensional estimation, i.e., measuring five or more parameters, from overlapping images of dense emitters are still missing.

In this paper, we propose a deep-learning based estimator, termed Deep-SMOLM, for simultaneously estimating the 3D orientations and 2D positions of single molecules. Through carefully designed synergy between the physical forward imaging model of the microscope and the neural network architecture, Deep-SMOLM estimates both SM positions and orientations on par with the theoretical best-possible precision at a speed ~10 times faster than iterative estimators. Moreover, Deep-SMOLM accurately and precisely analyzes overlapping images of dense emitters in both realistic imaging simulations and in experimental imaging of amyloid fibrils. To the best of our knowledge, Deep-SMOLM is the first deep-learning based estimator capable of estimating 5D information from overlapping images of single molecules.

2. Methods

We represent the mean orientation of a dipole-like emitter [26–28] using a polar angle $\theta$ and an azimuthal angle $\phi$ in spherical orientation space (Fig. 1(a)). During a camera exposure, the dipole rotates or “wobbles” through a range of directions represented by a solid angle $\Omega$ (Fig. 1(a)) [29,30]. Traditional unpolarized microscope images contain little information about the 3D orientation of a dipole [1,12,31,32]. To remedy this deficiency, we use a polarization-sensitive microscope that splits fluorescence into x- and y-polarized detection channels, along with a pixOL phase mask placed at the pupil or back focal plane (Fig. S14) [14]. The resulting pixOL DSF changes dramatically for molecules oriented in various directions (Fig. 1(b)).


Fig. 1. Estimating 3D orientations and 2D positions of single molecules (SMs) using Deep-SMOLM. (a) The orientation of a dipole-like emitter is parameterized by a polar angle $\theta \in [0,90^{\circ }]$, an azimuthal angle $\phi \in (-180^{\circ },180^{\circ }]$, and a wobble solid angle $\Omega \in [0,2\pi ]$ sr. (b) Simulated (left, red) x- and (right, blue) y-polarized images captured by a polarization-sensitive microscope with a pixOL phase mask [14] of emitters with orientations $[\theta,\phi,\Omega ]$ shown in (a). Emitter 1: $\Omega =2\pi$ sr; emitter 2: $[0^{\circ }, 0^{\circ }, 0~\text {sr}]$; emitter 3: $[45^{\circ }, 0^{\circ }, 0~\text {sr}]$; emitter 4: $[90^{\circ }, 0^{\circ }, 0~\text {sr}]$. (c) Schematic of Deep-SMOLM. (i) A set of (top, red) x- and (bottom, blue) y-polarized images of size $N\times P$ is input to (ii) the neural network, which outputs (iii) six images $\boldsymbol {h}$ of size $6N\times 6P$ (Eqn. 3). Each detected emitter is represented by a 2D Gaussian spot ((iii) inset) located at corresponding positions across the six images. An emitter’s 2D position $\hat {\boldsymbol {r}}$ is encoded into the center position of the Gaussian pattern, and the signal-weighted moments are encoded as the intensities of the Gaussian patterns across the six images $\boldsymbol {h}$. (iv) A post-processing algorithm transforms the Deep-SMOLM images into a list of SMs, each with a measured 2D position $\hat {\boldsymbol {r}}$, intensity $\hat {s}$, and 3D orientation $\left [\hat {\theta }, \hat {\phi }, \hat {\Omega }\right ]$.


To estimate these orientations, Deep-SMOLM must detect a DSF above background in the presence of Poisson shot noise and estimate the 3D orientation and 2D position of a molecule based on the shape of the DSF. Directly estimating orientation angles is generally ill-conditioned: the mean orientation direction $[\theta,\phi ]$ is periodic [33,34] and occasionally degenerate, e.g., for an isotropic emitter ($\Omega =2\pi$ sr), the mean orientation angle $[\theta, \phi ]$ is undefined. To mitigate these issues, Deep-SMOLM estimates the brightness-weighted orientational second moments $\boldsymbol {m}=[\langle \mu _x^{2}\rangle, \langle \mu _y^{2}\rangle, \langle \mu _z^{2}\rangle, \langle \mu _x\mu _y\rangle, \langle \mu _x\mu _z\rangle, \langle \mu _y\mu _z\rangle ] \in \mathbb {R}^{6}$ instead of orientation angles (see SI Section 1 for the relation between orientation angles and second moments). The image of an SM produced by the microscope is linear in these second moments, as modeled by vectorial wave diffraction [31,35], such that

$$\boldsymbol{I}=s\sum_{l=1}^{6} {\mathbf{B}}_l m_l+\boldsymbol{b},$$
where $\boldsymbol {I} \in \mathbb {R}^{N\times P \times 2}$ is the measured fluorescence intensity within a pair of x- and y-polarized images with ${N\times P}$ pixels, $s$ is the number of signal photons detected from the emitter, and $\boldsymbol {b}$ is the background in each pixel. The $l^{\text {th}}$ basis image $\mathbf {B}_{l}\in \mathbb {R}^{N \times P \times 2}$ corresponds to the imaging system’s response to the $l^{\text {th}}$ orientational second moment $m_l$ (Fig. S10). Importantly, while the image $\boldsymbol {I}$ of an SM is linear with respect to the second moments $\boldsymbol {m}$, the second moments are nonlinear with respect to the orientation angles (SI Section 1 and Eqn. S1). Moreover, there exists a unique one-to-one mapping between orientational second moments $\boldsymbol {m}$ and SM images $\boldsymbol {I}$ (Eqn. 1), but a single image can be produced by multiple orientation angles $[\theta,\phi,\Omega ]$.
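
As a concrete illustration, the following Python sketch converts orientation angles into second moments and synthesizes a noisy image pair via Eqn. 1. The angle-to-moment mapping below assumes a rotational-constraint (wobble-in-a-cone style) model in the spirit of Refs. [14,42], in which $\gamma = 1 - 3\Omega/(4\pi) + \Omega^{2}/(8\pi^{2})$ scales the fixed-dipole moments toward the isotropic value of $1/3$; see SI Section 1 (Eqn. S1) for the exact relation used by Deep-SMOLM. The basis-image array `B` is assumed to be precomputed from the vectorial imaging model.

```python
import numpy as np

def angles_to_moments(theta, phi, omega):
    """Map a mean orientation (theta, phi, in radians) and wobble solid angle
    omega (sr) to the six second moments m = [<mux^2>, <muy^2>, <muz^2>,
    <mux muy>, <mux muz>, <muy muz>]. Assumes the rotational-constraint model
    gamma = 1 - 3*omega/(4*pi) + omega^2/(8*pi^2), which gives gamma = 1 for
    a fixed dipole (omega = 0) and gamma = 0 for an isotropic emitter
    (omega = 2*pi sr); see SI Section 1 for the exact relation."""
    mux = np.sin(theta) * np.cos(phi)
    muy = np.sin(theta) * np.sin(phi)
    muz = np.cos(theta)
    gamma = 1 - 3 * omega / (4 * np.pi) + omega**2 / (8 * np.pi**2)
    iso = (1 - gamma) / 3  # isotropic contribution to the diagonal moments
    return np.array([gamma * mux**2 + iso, gamma * muy**2 + iso,
                     gamma * muz**2 + iso, gamma * mux * muy,
                     gamma * mux * muz, gamma * muy * muz])

def render_sm_image(B, m, s=1000.0, b=2.0, rng=None):
    """Simulate Eqn. 1, I = s * sum_l B_l m_l + b, with Poisson shot noise.
    B: precomputed basis images of shape (6, N, P, 2); m: six moments;
    s: signal photons; b: background photons per pixel."""
    mean = s * np.tensordot(m, B, axes=1) + b   # expected photons, (N, P, 2)
    rng = np.random.default_rng() if rng is None else rng
    return rng.poisson(np.clip(mean, 0, None))  # Poisson shot noise
```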

Extending the forward model (Eqn. 1) to images containing $Q$ emitters with the $q^{\text {th}}$ emitter located at $\boldsymbol {r}_q=[x_q,y_q]$, we have

$$\begin{aligned} \boldsymbol{I}(\boldsymbol{r}')&=\sum_{l=1}^{6} \mathbf{B}_{l}(\boldsymbol{r}';\boldsymbol{r}) \circledast {\left(\sum_{q}{s_q m_{q,l}\bar{\delta}(\boldsymbol{r}-\boldsymbol{r}_q)}\right)} + \boldsymbol{b}\\ &=\sum_{l=1}^{6} \mathbf{B}_{l}(\boldsymbol{r}';\boldsymbol{r}) \circledast \boldsymbol{u}_{l}(\boldsymbol{r}) + \boldsymbol{b}, \end{aligned}$$
where $\bar {\delta }(\boldsymbol {r})$ is the 2D Dirac delta function, $m_{q,l}$ is the $l^{\text {th}}$ orientational second moment of the $q^{\text {th}}$ emitter, and $\circledast$ is the convolution operator. Thus, both the 3D orientations and 2D positions of all emitters within a camera frame can be explicitly and uniquely represented by $\boldsymbol {u}(\boldsymbol {r})$, a six-dimensional vector field defined for all possible molecule positions $\boldsymbol {r}$. Each entry of $\boldsymbol {u}_l(\boldsymbol {r})$ corresponds to one brightness-weighted orientational second moment. Deep-SMOLM specifically leverages the linearity of the imaging model in Eqn. 2 to estimate brightness-weighted orientational second moment images $\boldsymbol {u}(\boldsymbol {r})$ for improved estimation performance.
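
Under the same assumptions, the multi-emitter model (Eqn. 2) amounts to a per-channel convolution of each basis image with a sparse moment map $\boldsymbol {u}_l$. A minimal sketch, assuming emitters fall on integer pixel positions:

```python
import numpy as np
from scipy.signal import fftconvolve

def render_scene(B, emitters, N, P, b=2.0, rng=None):
    """Simulate Eqn. 2 for Q emitters: each basis image B_l is convolved
    with the brightness-weighted moment map u_l built from delta functions
    on the pixel grid. B: (6, N, P, 2) basis images; emitters: list of
    (row, col, s, m) with integer pixel positions, signal s, and moments m."""
    u = np.zeros((6, N, P))
    for (row, col, s, m) in emitters:
        u[:, row, col] += s * np.asarray(m)     # s_q * m_{q,l} * delta
    mean = np.full((N, P, 2), float(b))         # background b per pixel
    for l in range(6):
        for ch in range(2):                     # x- and y-polarized channels
            mean[..., ch] += fftconvolve(u[l], B[l, ..., ch], mode="same")
    rng = np.random.default_rng() if rng is None else rng
    return rng.poisson(np.clip(mean, 0, None))  # Poisson shot noise
```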

It is impossible for a network to estimate $\boldsymbol {u}(\boldsymbol {r})$ containing Dirac delta functions within a continuous 2D domain $\boldsymbol {r}$. Inspired by Deep-STORM and DeepSTORM3D [24,36], we designed Deep-SMOLM to instead generate six Gaussian-blurred images $\boldsymbol {h}_l(\boldsymbol {r})$, each corresponding to a brightness-weighted second moment image, as

$$\boldsymbol{h}_l(\boldsymbol{r}) = \boldsymbol{u}_l(\boldsymbol{r}) \circledast \boldsymbol{g}, \quad l\in\{1,\ldots,6\},$$
where $\boldsymbol {g}$ is a Gaussian kernel with a standard deviation of 1 pixel, represented by a $7\times 7$ matrix. The pixel-grid spacing is $9.7$ nm, comparable to the localization precision of single-molecule imaging. Thus, Deep-SMOLM represents each detected emitter by 2D Gaussian spots co-located across its six output images $\boldsymbol {h}_l(\boldsymbol {r})$ (Figs. 1(c)(iii) and S5). To compile a list of position and orientation estimates, each corresponding to a detected SM, a post-processing algorithm simply identifies and crops each Gaussian pattern. The SM’s 2D position is computed from the center of the Gaussian pattern, and the SM’s orientation and intensity are measured from the amplitudes of the Gaussian patterns across the six images $\boldsymbol {h}_l(\boldsymbol {r})$ (Fig. 1(c)(iv); see SI Section 2.iii for more details).
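
The following sketch illustrates this round trip: it renders the six target images $\boldsymbol {h}_l$ with sub-pixel Gaussian centers and then decodes a molecule list from them. The detection threshold and the assumptions that spots lie at least 3 pixels from the image border and never overlap within a 7×7 window are simplifications for illustration; the actual post-processing is described in SI Section 2.iii. Note that because $\mu _x^{2}+\mu _y^{2}+\mu _z^{2}=1$, the first three output channels sum to a signal-proportional map, which is convenient for both peak finding and recovering $\hat {s}$.

```python
import numpy as np
from scipy.ndimage import maximum_filter

UP = 6          # lateral upsampling factor of the output grid
PIX_NM = 9.7    # output pixel spacing in nm

def make_targets(emitters, N, P, sigma=1.0):
    """Render the six target images h_l (Eqn. 3). Each emitter is given as
    (row, col, s, m): a float position on the 6N x 6P grid, signal s, and
    six second moments m. The Gaussian is sampled around the true sub-pixel
    center, so the spot centroid encodes the continuous 2D position."""
    h = np.zeros((6, UP * N, UP * P))
    for (r, c, s, m) in emitters:
        r0, c0 = int(round(r)), int(round(c))
        rr, cc = np.meshgrid(np.arange(r0 - 3, r0 + 4),
                             np.arange(c0 - 3, c0 + 4), indexing="ij")
        g = np.exp(-((rr - r)**2 + (cc - c)**2) / (2 * sigma**2))
        g /= g.sum()
        h[:, r0 - 3:r0 + 4, c0 - 3:c0 + 4] += s * np.asarray(m)[:, None, None] * g
    return h

def decode(h, thresh=50.0):
    """Invert the encoding: find local maxima of the total-signal map, then
    read each spot's sub-pixel center (weighted centroid) and its six
    amplitudes from a 7x7 crop. The threshold (photons) is an assumed value."""
    intensity = h[:3].sum(axis=0)   # <mux^2>+<muy^2>+<muz^2> = 1, so this is ~s*g
    peaks = (intensity == maximum_filter(intensity, size=7)) & (intensity > thresh)
    H, W = intensity.shape
    molecules = []
    for r0, c0 in zip(*np.nonzero(peaks)):
        if not (3 <= r0 < H - 3 and 3 <= c0 < W - 3):
            continue                             # skip border spots for brevity
        crop = h[:, r0 - 3:r0 + 4, c0 - 3:c0 + 4]
        w = crop[:3].sum(axis=0)                 # signal-proportional weights
        dr, dc = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4), indexing="ij")
        r_hat = (r0 + (w * dr).sum() / w.sum()) * PIX_NM   # sub-pixel position
        c_hat = (c0 + (w * dc).sum() / w.sum()) * PIX_NM
        sm = crop.sum(axis=(1, 2))               # brightness-weighted moments s*m_l
        s_hat = sm[:3].sum()                     # since mux^2 + muy^2 + muz^2 = 1
        molecules.append((r_hat, c_hat, s_hat, sm / s_hat))
    return molecules
```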

We use a fully convolutional neural network architecture adapted from DeepSTORM3D [24] (Figs. 1(c)(ii) and S1). The network feeds the paired input images through several convolutional layers, upsamples the feature maps laterally by a factor of 6, and then processes them with a few more convolutional layers to output six images. Thus, the network estimates spatial maps of the six coefficients, or weights, associated with our linear forward imaging model (Eqn. 1); we believe the straightforward linearity of this estimation task increases the robustness of the network. We train the network using 30K simulated images, each containing a random number of molecules, with a density between 0.6 and 1.2 molecules$\cdot \mu \text {m}^{-2}$, placed at random positions with random orientations (SI Section 4.ii). The images contain both well-separated and overlapped DSFs. The network is optimized by minimizing the loss given by

$$\ell(\hat{\boldsymbol{h}},\boldsymbol{h})= \frac{1}{LK}\sum_{l=1}^{L}\sum_{k=1}^{K}{\left( \hat{\boldsymbol{h}}_l^{k}- \boldsymbol{h}_l^{k} \right)^{2}},$$
where $[\cdot ]_l^{k}$ represents the $k^{\text {th}}$ pixel of the $l^{\text {th}}$ image, $\boldsymbol {h}$ are the ground-truth images, and $\hat {\boldsymbol {h}}$ are the output images from the network. Notably, it is generally difficult to design loss functions for high-dimensional estimation, especially when those dimensions do not share the same units, i.e., degrees for the mean orientation angles, steradians for the wobble angle $\Omega$, and nanometers for the 2D position, because these parameters affect the input data in complex and disparate ways. Deep-SMOLM measures 3D orientation by estimating the orientational second moments, each of which affects the final DSF shape approximately equally (Eqn. 1). In addition, position and orientation are encoded into the output images in an orthogonal and easily distinguishable manner, namely the spatial location and intensity of Gaussian patterns, respectively. Therefore, this encoding strategy simplifies the design of Deep-SMOLM’s loss function (Eqn. 4). The same loss function is used throughout all training epochs to evaluate performance. Full network training takes ~2 h on an NVIDIA GeForce RTX 2080 Ti GPU with 11 GB of memory.
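
A compact PyTorch sketch of this design follows. The layer counts, channel widths, upsampling mode, and optimizer settings are illustrative assumptions rather than the published configuration, but the structure (convolutional features, 6× lateral upsampling, a six-channel output head, and the pixel-wise squared-error loss of Eqn. 4) mirrors the description above.

```python
import torch
import torch.nn as nn

class DeepSMOLMNet(nn.Module):
    """Minimal fully convolutional sketch in the spirit of the
    DeepSTORM3D-derived architecture (Figs. 1(c)(ii) and S1)."""
    def __init__(self, width=64):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU())
        self.features = nn.Sequential(block(2, width), block(width, width),
                                      block(width, width))
        self.head = nn.Sequential(block(width, width), block(width, width),
                                  nn.Conv2d(width, 6, 3, padding=1))

    def forward(self, x):                                 # x: (batch, 2, N, P)
        x = self.features(x)                              # convolutional features
        x = nn.functional.interpolate(x, scale_factor=6)  # 6x lateral upsampling
        return self.head(x)                               # (batch, 6, 6N, 6P) images h

model = DeepSMOLMNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # optimizer is an assumption
loss_fn = nn.MSELoss()  # pixel-wise squared error averaged over all pixels (Eqn. 4)

def training_step(images, targets):
    """images: (batch, 2, N, P) polarized pairs; targets: (batch, 6, 6N, 6P)."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because all six output channels share the same units (brightness-weighted moments), a plain mean-squared error needs no per-channel weighting; this is the practical payoff of the encoding strategy described above.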

3. Results

3.1. Estimation accuracy and precision

In contrast to image-based algorithms, which output discrete position estimates and are therefore limited to pixel-level precision [24,36,37], Deep-SMOLM encodes the 2D position of an SM into the center of a Gaussian spot in its output images, and a post-processing algorithm outputs a continuous 2D position estimate $\boldsymbol {r}=[x,y]$. To test localization accuracy, we place four emitters at various locations with respect to the output pixel grid (spacing $=9.7$ nm), namely at $0$ nm, $2.9$ nm, $5.9$ nm, and $8.8$ nm. For each emitter, we generate 2000 noisy images, each containing one emitter with a random orientation sampled from a uniform distribution and with 1000 detected signal photons and 2 background photons/pixel. The estimated positions from Deep-SMOLM for the four emitters are centered at their ground-truth positions (Fig. 2(a)) with an average bias $x_{\text {bias}}$ of −0.14 nm.


Fig. 2. Precision of Deep-SMOLM for estimating 3D orientations and 2D positions of SMs. (a) Estimated lateral position $x$ for emitters located at (blue) $x=y=0$ nm, (red) 2.9 nm, (yellow) 5.9 nm, and (green) 8.8 nm. For each case, 2000 noisy images are simulated containing an SM located at the designed position with a random orientation. (b-d) Deep-SMOLM measurement performance, as quantified by mean angular standard deviation $\sigma _\delta$ (MASD, Eqn. S8), wobble angle precision $\sigma _\Omega$, and lateral precision $\sigma _r$ averaged uniformly over all $\theta$. Solid line: Deep-SMOLM precision, dashed line: Cramér-Rao bound precision, purple: $\Omega =0$, green: $\Omega =2$ sr. (e) Simulated noiseless (top, red) x- and (bottom, blue) y-polarized images containing two emitters separated by distances of (left to right) 1 nm, 139 nm, 414 nm, and 620 nm. Magenta dot: center position for each emitter. (f-i) Detection rate, precision, and accuracy of Deep-SMOLM for estimating positions and orientations from images containing two emitters at various separations. (f) Deep-SMOLM (black) Jaccard index and the corresponding number of (orange solid) true-positive (TP), (orange dash) false-negative (FN), and (orange dot) false-positive (FP) emitters. (g) Deep-SMOLM (black) precision $\sigma _r$ and (orange) accuracy $r-r_0$ for estimating 2D position $\boldsymbol {r}$. (h) Deep-SMOLM (black) orientation precision $\sigma _\xi$ and (orange) absolute mean orientation bias $\xi$ (Eqn. 5). (i) Deep-SMOLM (black) precision $\sigma _\Omega$ and (orange) accuracy $\Omega -\Omega _0$ for measuring wobble angle $\Omega$.


We quantify Deep-SMOLM’s precision using simulated images of single emitters whose positions and orientations are random; for each orientation, 200 images are used to calculate the precision (Figs. 2(b-d) and S6). We use the mean angular standard deviation $\sigma _{\delta }$ (MASD, Eqn. S8) to quantify the combined precision for measuring $\theta$ and $\phi$. Deep-SMOLM achieves an average 3D orientation precision $\sigma _\delta$ of $3.8^{\circ }$ and an average wobble angle precision $\sigma _\Omega$ of 0.32 sr for emitters with a wobble angle $\Omega$ of $0$ or $2$ sr and with 1000 signal photons and 2 background photons detected per pixel. Deep-SMOLM also estimates 2D position with an average precision $\sigma _r$ of 8.5 nm. For comparison, we compute the Cramér–Rao bound (CRB), which quantifies the best-possible estimation precision for any unbiased estimator [38]. For estimators with insignificant bias, the CRB serves as a measure of optimal performance but may not be a true lower bound [39]. Averaged over emitters with $\Omega =0$ sr and $\Omega =2$ sr, Deep-SMOLM’s precision is close to the CRB ($0\%$ and $3\%$ better than the CRB for 3D orientation and wobble angle, respectively, and $2\%$ worse than the CRB for 2D position), indicating that Deep-SMOLM performs nearly optimally.

To quantify the bias $\xi$ in mean orientation, we calculate the (non-negative) angular distance between Deep-SMOLM’s estimated orientation $\left [\hat {\theta },\hat {\phi }\right ]$ and the ground-truth orientation [$\theta,\phi$], given by

$$\xi = \arccos \left( \left[ \sin \hat{\theta} \cos \hat{\phi}, \sin \hat{\theta} \sin \hat{\phi}, \cos \hat{\theta} \right] \left[ \sin \theta \cos \phi, \sin \theta \sin \phi, \cos \theta \right]^{T} \right),$$
where the superscript $T$ denotes a matrix-vector transpose. We note that Deep-SMOLM shows bias in estimating 3D orientations (an average mean orientation bias $\xi$ of $1.3^{\circ }$ and average wobble angle bias $\Omega -\Omega _0$ of $0.13$ sr, Fig. S6); this bias could enable Deep-SMOLM to achieve slightly better estimation precision than the CRB [39] (Fig. S6).
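
In code, Eqn. 5 is simply the angle between the two unit dipole vectors; a minimal NumPy implementation:

```python
import numpy as np

def angular_bias(theta_hat, phi_hat, theta, phi):
    """Angular distance xi (Eqn. 5, returned in degrees) between an estimated
    and a ground-truth mean orientation (angles in radians)."""
    mu_hat = np.array([np.sin(theta_hat) * np.cos(phi_hat),
                       np.sin(theta_hat) * np.sin(phi_hat),
                       np.cos(theta_hat)])
    mu = np.array([np.sin(theta) * np.cos(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(theta)])
    # clip guards against round-off pushing the dot product outside [-1, 1]
    return np.degrees(np.arccos(np.clip(mu_hat @ mu, -1.0, 1.0)))
```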

Robust single-molecule imaging, especially in vivo, necessitates an estimation algorithm that can reliably detect and estimate parameters from emitters whose images overlap [40,41]. Early algorithms for simultaneously measuring SM orientations and positions either cannot cope with image overlap [9,11,23], are very computationally expensive [18], or can become stuck in local minima, leading to correlated orientation and position biases [14]. To make Deep-SMOLM robust to these imaging conditions, we train it using simulated images containing both well-separated and overlapped DSFs corrupted by Poisson shot noise (SI Section 4.ii).

We validate Deep-SMOLM’s ability to analyze overlapping images of SMs by simulating images containing 2 molecules separated by various distances (0-1000 nm), each rotationally fixed ($\Omega =0$ sr) with a random mean orientation, with 1000 signal photons and 2 background photons detected per pixel. Deep-SMOLM achieves a Jaccard index $>0.95$ as long as the two emitters are separated by at least 139 nm, corresponding to an average $43\%$ area overlap in their DSFs (Fig. 2(e,f), see SI Section 3 for details on the Jaccard index and DSF overlap). More surprisingly, Deep-SMOLM achieves accuracy and precision on par with non-overlapping emitters when analyzing images of emitters separated by just $414$ nm, corresponding to a $17\%$ overlap in their DSFs (Fig. 2(e-i), see Fig. S15 for performance at a lower signal-to-background ratio (SBR)).
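
For reference, a sketch of a Jaccard-index computation of the kind described above: estimated and ground-truth positions are paired by minimum total distance, and pairs within a tolerance radius count as true positives. The 50 nm tolerance and the Hungarian matching used here are illustrative assumptions; see SI Section 3 for the criterion actually used.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def jaccard_index(est, gt, tol=50.0):
    """est, gt: arrays of 2D positions (nm), shapes (Ne, 2) and (Ng, 2).
    Returns TP / (TP + FP + FN) after optimal pairwise matching."""
    est, gt = np.atleast_2d(est), np.atleast_2d(gt)
    d = np.linalg.norm(est[:, None, :] - gt[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(d)       # minimum-total-distance pairing
    tp = int(np.sum(d[rows, cols] <= tol))      # matched pairs within tolerance
    fp, fn = len(est) - tp, len(gt) - tp        # unmatched detections / truths
    return tp / (tp + fp + fn)
```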

3.2. 5D imaging of simulated biological fibers

To validate Deep-SMOLM for imaging a complex and densely labelled structure, we devised a synthetic structure containing nine 1D fibers, as shown in Figs. 3(b) and S12. We designed the 3D orientations of the emitters to vary systematically, with polar angles $\theta$ shown in Fig. 3(b) and azimuthal angles perpendicular to each fiber. In addition, all emitters are fixed in orientation ($\Omega =0$ sr, see Fig. S17 for imaging emitters with wobble $\Omega =2$ sr). The emitters have a broad signal distribution with a mean of 1000 photons and 2 background photons detected per pixel (see Fig. S11(a) for the photon distribution). Each simulated x- and y-polarized image (SI Section 4) analyzed by Deep-SMOLM contains a random sampling of molecules with a high activation density; each frame contains 7 to 15 emitters (the diameter of the structure is $2~\mu$m; see SI Section 4.iii for details on image generation and Visualization 1 for simulated single-molecule images). Importantly, the emitter density is lowest near the vertical midpoint of the structure, while a high density of emitters exists at the top and bottom “poles” where the fibers intersect.


Fig. 3. Deep-SMOLM 5D imaging of a model structure of 1D fibers. (a)(top) Simulated raw image compared to (bottom) images reconstructed from Deep-SMOLM estimates. Magenta dots: center position of each SM. (b) Synthetic structure containing nine 1D fibers color-coded with the ground truth polar angle $\theta _{\text {GT}}$. (c) Deep-SMOLM measured wobble angle $\hat {\Omega }$ (ground truth $\Omega _{\text {GT}}=0$ sr). (d) Estimated polar angle $\hat {\theta }$. (e) Emitters within the white box shown in (b). Colormap: estimated signals $\hat {s}$ (photons). (f) Estimated azimuthal angle $\hat {\phi }$, where the length and direction of each line depict the magnitude of the in-plane orientation $\sin \hat {\theta }$ and direction of estimated azimuthal angle $\hat {\phi }$, respectively. The ground truth orientations are perpendicular to the fibers. (g) Wobble angle estimation bias $\left |\hat {\Omega }-0\right |$ versus mean orientation estimation bias $\xi$ (Eqn. 5). (Right) Distribution of wobble angle estimation bias and (top) mean orientation estimation bias. Scalebars: (c,f) 200 nm, (e) 50 nm.


Despite the extremely high degree of overlap in DSFs (Fig. 3(a)), Deep-SMOLM detects and localizes each emitter with excellent detection efficiency, achieving a 0.84 Jaccard index near the middle of the structure, where fibers are more separated, versus a 0.77 Jaccard index near the poles, where DSF overlaps dominate (Fig. S16(a)). Deep-SMOLM readily resolves all nine fibers (Fig. 3(e)). However, it cannot resolve fibers less than ~10 nm apart due to the low SBR of the dataset; Deep-SMOLM achieves a mean spatial resolution of $\sigma _r=9.2$ nm for an average of 1000 photons detected (Fig. S16(b)).

Deep-SMOLM’s 3D orientation estimates also match the ground truth very well (Fig. 3(c,d,f,g)). The median estimation bias in 3D orientation is $\xi =4.7^{\circ }$, and the median bias in wobble is $0.13 \pi$ sr, as expected from the impact of Poisson shot noise and measuring absolute (non-negative) bias in $\xi$ and $\Omega$ [14,42]. Furthermore, Deep-SMOLM achieves excellent orientation precision (a standard deviation $\sigma _\xi$ in orientation bias of $7.1^{\circ }$ and a wobble angle precision $\sigma _\Omega$ of $0.5$ sr, Figs. 3(g) and S16(c,d)), despite frequent DSF overlaps in the raw data (Visualization 1).

3.3. Experimental 5D imaging of amyloid fibrils

Amyloid aggregates are linked to various diseases, e.g., Parkinson’s and Alzheimer’s disease [43,44]. Previous studies have shown that upon binding transiently to amyloid fibrils, Nile red (NR) orients itself parallel to their backbones [3,4,15]. Thus, imaging NR enables us to validate Deep-SMOLM’s performance for 5D SM imaging of orientations and positions from experimental data.

In typical biological imaging experiments, both optical aberrations and drift of the objective’s focal plane (FP) affect the image of an SM. To mitigate microscope aberrations, we train Deep-SMOLM using DSFs calibrated [45] from experimental microscope images (SI Section 5.i and Fig. S13). To accommodate possible FP drift in experimental data, we train Deep-SMOLM using simulated images of emitters captured with various FP positions $z\in (-150,150)$ nm, where $z=0$ denotes focusing at the coverslip interface.

To validate Deep-SMOLM’s performance for analyzing images containing overlapped DSFs, we use NR at a high concentration, producing a high average blinking density of 0.6 emitters per µm$^{2}$ (Visualization 2). Compared to diffraction-limited imaging (Fig. 4(b)), Deep-SMOLM’s reconstructed localization microscopy image resolves the network of intertwined amyloid fibrils with superior detail (Fig. 4(a)). Based on Fourier ring correlation [46], Deep-SMOLM attains a spatial resolution of $\sigma _r=11.0$ nm (SI Section 3.vi and Fig. S9). Indeed, fibrils spaced 55 nm apart are clearly resolved (Fig. 4(a) inset).
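
For readers unfamiliar with the metric, a minimal FRC sketch following Ref. [46] is shown below: the localizations are split into two disjoint halves, each half is rendered into an image, and the normalized cross-correlation is computed over rings in Fourier space; the resolution is read off where the curve crosses a chosen threshold (e.g., 1/7). This sketch assumes square images large enough that every ring is populated; it is not the exact pipeline used in SI Section 3.vi.

```python
import numpy as np

def frc(img1, img2, n_rings=64):
    """Fourier ring correlation between two reconstructions rendered from
    disjoint halves of the localizations. Returns ring spatial frequencies
    (cycles/pixel) and FRC values."""
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    n = img1.shape[0]                        # assumes square images
    y, x = np.indices(img1.shape)
    r = np.hypot(x - n // 2, y - n // 2)     # radius in frequency pixels
    bins = np.linspace(0, n // 2, n_rings + 1)
    idx = np.digitize(r.ravel(), bins)
    num = np.real(f1 * np.conj(f2)).ravel()
    d1 = (np.abs(f1)**2).ravel()
    d2 = (np.abs(f2)**2).ravel()
    curve = np.array([num[idx == k].sum()
                      / np.sqrt(d1[idx == k].sum() * d2[idx == k].sum())
                      for k in range(1, n_rings + 1)])
    freqs = 0.5 * (bins[:-1] + bins[1:]) / n  # ring centers, cycles per pixel
    return freqs, curve
```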


Fig. 4. 5D SMOLM images of Nile red (NR) transiently bound to A$\beta$42 amyloid fibrils. (a) SMLM reconstruction compared to (b) the corresponding diffraction-limited image. Colorbars: (a) signal photons for each detected emitter and (b) photons per $58.5 \times 58.5 \text { nm}^{2}$ pixel. (a) Inset: distribution of molecule positions $x$ along the white line in (a). Red line: double-Gaussian fit. (c) Spatial map (colorbar: deg) and (d) overall distribution of NR polar angles $\hat {\theta }$. (e) Spatial map (colorbar: sr) and (f) overall distribution of NR wobble angles $\hat {\Omega }$. (g) Spatial map of azimuthal angles $\hat {\phi }$ for NR with polar angle $\hat {\theta }>60^{\circ }$ and wobble angle $\Omega <2$ sr. Each SM is represented as a 1 nm filled circle in (a,c,d) and represented as a 2 nm filled circle in (g). (h,i) All NR positions and orientations detected within the dotted white boxes in (g), depicted as line segments. Their lengths and directions indicate the magnitude of their in-plane orientations $\sin (\hat {\theta })$ and their azimuthal orientations $\hat {\phi }$, respectively. Colorbar: $\hat {\phi }$ (deg). (j) Azimuthal angle $\hat {\phi }$ distribution of all NR molecules within each solid white box in (g-i). Scale bars: (a-c,e,g) 1 µm, (h,i) 200 nm.


Deep-SMOLM’s orientation estimates show that 75$\%$ of NR’s polar orientations $\hat {\theta }$ are greater than $60^{\circ }$, i.e., parallel to the coverslip (Fig. 4(c,d)). However, when bound to dense entangled fibrils, some NR molecules have smaller polar angles $\hat {\theta }$, as indicated by arrows in Fig. 4(c), implying that fibrils tilt out of the coverslip plane where they overlap with one another. In addition, 72% of NR molecules have a wobble angle $\hat {\Omega }$ smaller than 2 sr, indicating that they rigidly bind to fibrils (Fig. 4(e,f)).

Examining Deep-SMOLM’s measurements of NR azimuthal angles $\hat {\phi }$, we find that NR is well-aligned with the long axis of each amyloid fibril (Fig. 4(g-j)); the average orientation of NR varies smoothly as the amyloid fibrils bend and curve, as shown in Figs. 4(g-j) and especially in Fig. 4(h). Within crowded regions containing entangled fibrils, Deep-SMOLM accurately resolves NR orientations aligned with each individual fiber, as shown in Fig. 4(h,i); accordingly, multiple peaks are present in the histograms of azimuthal angles $\hat {\phi }$ (Fig. 4(j)(5-6)). Notably, although the DSFs in these entangled regions are highly overlapped, they are still accurately estimated (Visualization 3).

4. Discussion and conclusion

Here, we demonstrate a deep learning-based estimator, called Deep-SMOLM, for simultaneously estimating 3D orientations and 2D positions of single molecules from a microscope implementing an engineered dipole-spread function [14]. Compared to traditional optimization approaches, Deep-SMOLM achieves superior estimation precision for both 3D orientation and 2D position that is on average within 3$\%$ of the best-possible precision (Fig. 2(b-d)). In general, designing a loss function for high-dimensional estimation, and in particular balancing weights among multiple parameters, is challenging. We attribute the superior performance of Deep-SMOLM to the linearity of estimating brightness-weighted orientational second moments from noisy SM images (Fig. 1(c) and Eqn. 2); in contrast, directly estimating orientation angles $[\theta,\phi,\Omega ]$ is ill-conditioned and unstable. Importantly, for high-performance DSFs, each brightness-weighted orientational moment contributes approximately equally to the final DSF shape [12,14]. Further, we have designed 3D orientations and 2D positions to be orthogonally encoded into the intensities and spatial positions, respectively, of Gaussian spots within Deep-SMOLM’s output images (Fig. 1(c)). These design strategies make tuning weights among the six output images unnecessary for training Deep-SMOLM (SI Section 2.ii), and the resulting training among 5-dimensional estimates is well-balanced (Fig. S4).

As demonstrated on both simulated structures (Fig. 3) and experimental amyloid fibrils (Fig. 4), Deep-SMOLM shows excellent performance for estimating overlapped DSFs (Fig. 2(e-i)), e.g., at a density of ~0.6 emitter/µm$^{2}$, which is ~2 times denser than that allowed by traditional optimization-based algorithms. This capability should allow Deep-SMOLM to achieve a ~2$\times$ speed-up in SMOLM data acquisition by enabling fluorescent probes to blink at higher rates and to be used at higher concentrations. Moreover, Deep-SMOLM requires relatively little training time and data (~2 h and 30,000 noisy images containing ~330,000 total emitters). Once trained, Deep-SMOLM estimates 3D orientations and 2D positions ~10 times faster on a per-frame basis than iterative algorithms like RoSE-O [18] (~30 seconds for Deep-SMOLM, ~6 minutes for RoSE-O for analyzing 1,000 frames). Taking into account the fewer raw frames needed to compute reliable SMOLM reconstructions, Deep-SMOLM exhibits an overall $20 \times$ speed-up in computation time to obtain the same total number of localizations versus iterative algorithms. We note that correlations in SM blinking between frames [47] can also be used to further enhance Deep-SMOLM’s performance for estimating overlapped emitters. We anticipate that Deep-SMOLM will enable fast high-dimensional SMOLM imaging of dynamic processes and potentially discover structural changes on the sub-minute timescale.

In the near future, we plan to extend Deep-SMOLM to simultaneously estimate the 3D positions and 3D orientations of overlapping molecules; this analysis is inherently much more complex than 2D SMOLM due to the strong sensitivity of pixOL’s basis images to the axial position of each emitter [14]. Further, especially when imaging in vivo cellular structures in 3D, estimators need improved robustness against model mismatch, e.g., accommodating non-uniform background and field- and depth-dependent optical aberrations [13,48]. However, exhaustively training a network to anticipate all practical perturbations of an imaging system is extremely challenging. Thus, simultaneously learning model mismatch together with estimating 3D position and 3D orientation could enhance Deep-SMOLM’s performance in challenging imaging conditions. This adaptive approach may be key to ensuring that networks are sufficiently generalizable for in vivo super-resolution imaging and could unlock the full potential of SMOLM for cellular and tissue-scale imaging.

Funding

National Institute of General Medical Sciences (R35GM124858); National Science Foundation (ECCS-1653777).

Acknowledgments

The authors thank Hesam Mazidi, Oumeng Zhang, Joseph O’Sullivan, Joseph Culver, Carlos Fernandez-Granda, Qing Qu, Weiyan Zhou, Chunhui Yang, Aahana Bajracharya, Charlie Chen, and Xiyao Jin for helpful suggestions and comments. We are also grateful to Tianben Ding for help with amyloid aggregation. Amyloid-$\beta$ peptides were synthesized and purified by Dr. James I. Elliott (ERI Amyloid Laboratory, Oxford, CT).

Disclosures

The authors declare no conflicts of interest.

Data availability

The Deep-SMOLM algorithm, forward model, training data, validation data, and experimental data are available via GitHub [49], OSF [50], and by request.

Supplemental document

See Supplement 1 for supporting content.

References

1. M. P. Backlund, M. D. Lew, A. S. Backer, S. J. Sahl, and W. E. Moerner, “The role of molecular dipole orientation in single-molecule fluorescence microscopy and implications for super-resolution imaging,” ChemPhysChem 15(4), 587–599 (2014).

2. J. A. Varela, M. Rodrigues, S. De, P. Flagmeier, S. Gandhi, C. M. Dobson, D. Klenerman, and S. F. Lee, “Optical structural analysis of individual α-synuclein oligomers,” Angew. Chem. Int. Ed. 57(18), 4886–4890 (2018).

3. T. Ding, T. Wu, H. Mazidi, O. Zhang, and M. D. Lew, “Single-molecule orientation localization microscopy for resolving structural heterogeneities within amyloid fibrils,” Optica 7(6), 602–607 (2020).

4. T. Ding and M. D. Lew, “Single-molecule localization microscopy of 3D orientation and anisotropic wobble using a polarized vortex point spread function,” J. Phys. Chem. B 125(46), 12718–12729 (2021).

5. V. Curcio, L. A. Alemán-Castañeda, T. G. Brown, S. Brasselet, and M. A. Alonso, “Birefringent Fourier filtering for single molecule coordinate and height super-resolution imaging with dithering and orientation,” Nat. Commun. 11(1), 5307 (2020).

6. C. V. Rimoli, C. A. Valades-Cruz, V. Curcio, M. Mavrakis, and S. Brasselet, “4polar-STORM polarized super-resolution imaging of actin filament organization in cells,” Nat. Commun. 13(1), 301 (2022).

7. J. Lu, H. Mazidi, T. Ding, O. Zhang, and M. D. Lew, “Single-molecule 3D orientation imaging reveals nanoscale compositional heterogeneity in lipid membranes,” Angew. Chem. Int. Ed. 59(40), 17572–17579 (2020).

8. O. Zhang, W. Zhou, J. Lu, T. Wu, and M. D. Lew, “Resolving the three-dimensional rotational and translational dynamics of single molecules using radially and azimuthally polarized fluorescence,” Nano Lett. 22(3), 1024–1031 (2022).

9. M. P. Backlund, M. D. Lew, A. S. Backer, S. J. Sahl, G. Grover, A. Agrawal, R. Piestun, and W. E. Moerner, “Simultaneous, accurate measurement of the 3D position and orientation of single molecules,” Proc. Natl. Acad. Sci. 109(47), 19087–19092 (2012).

10. A. S. Backer, M. P. Backlund, A. R. von Diezmann, S. J. Sahl, and W. E. Moerner, “A bisected pupil for studying single-molecule orientational dynamics and its application to three-dimensional super-resolution microscopy,” Appl. Phys. Lett. 104(19), 193701 (2014).

11. O. Zhang, J. Lu, T. Ding, and M. D. Lew, “Imaging the three-dimensional orientation and rotational mobility of fluorescent emitters using the Tri-spot point spread function,” Appl. Phys. Lett. 113(3), 031103 (2018).

12. O. Zhang and M. D. Lew, “Single-molecule orientation localization microscopy II: a performance comparison,” J. Opt. Soc. Am. A 38(2), 288–297 (2021).

13. C. N. Hulleman, R. Ø. Thorsen, E. Kim, C. Dekker, S. Stallinga, and B. Rieger, “Simultaneous orientation and 3D localization microscopy with a Vortex point spread function,” Nat. Commun. 12(1), 5934 (2021).

14. T. Wu, J. Lu, and M. D. Lew, “Dipole-spread-function engineering for simultaneously measuring the 3D orientations and 3D positions of fluorescent molecules,” Optica 9(5), 505–511 (2022).

15. O. Zhang, Z. Guo, Y. He, T. Wu, M. D. Vahey, and M. D. Lew, “Six-dimensional single-molecule imaging with isotropic resolution using a multi-view reflector microscope,” bioRxiv (2022).

16. D. Patra, I. Gregor, and J. Enderlein, “Image analysis of defocused single-molecule images for three-dimensional molecule orientation studies,” J. Phys. Chem. A 108(33), 6836–6841 (2004).

17. F. Aguet, S. Geissbühler, I. Märki, T. Lasser, and M. Unser, “Super-resolution orientation estimation and localization of fluorescent dipoles using 3-D steerable filters,” Opt. Express 17(8), 6829–6848 (2009).

18. H. Mazidi, E. S. King, O. Zhang, A. Nehorai, and M. D. Lew, “Dense super-resolution imaging of molecular orientation via joint sparse basis deconvolution and spatial pooling,” in 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) (IEEE, 2019), pp. 325–329.

19. J. Gu, Z. Wang, J. Kuen, L. Ma, A. Shahroudy, B. Shuai, T. Liu, X. Wang, G. Wang, J. Cai, and T. Chen, “Recent advances in convolutional neural networks,” Pattern Recognit. 77, 354–377 (2018).

20. A. Dhillon and G. K. Verma, “Convolutional neural network: a review of models, methodologies and applications to object detection,” Prog. Artif. Intell. 9(2), 85–112 (2020).

21. J. Geng, X. Zhang, S. Prabhu, S. H. Shahoei, E. R. Nelson, K. S. Swanson, M. A. Anastasio, and A. M. Smith, “3D microscopy and deep learning reveal the heterogeneity of crown-like structure microenvironments in intact adipose tissue,” Sci. Adv. 7(8), eabe2480 (2021).

22. Y. Zhang, L. Gu, H. Chang, W. Ji, Y. Chen, M. Zhang, L. Yang, B. Liu, L. Chen, and T. Xu, “Ultrafast, accurate, and robust localization of anisotropic dipoles,” Protein & Cell 4(8), 598–606 (2013).

23. P. Zhang, S. Liu, A. Chaurasia, D. Ma, M. J. Mlodzianoski, E. Culurciello, and F. Huang, “Analyzing complex single-molecule emission patterns with deep learning,” Nat. Methods 15(11), 913–916 (2018).

24. E. Nehme, D. Freedman, R. Gordon, B. Ferdman, L. E. Weiss, O. Alalouf, T. Naor, R. Orange, T. Michaeli, and Y. Shechtman, “DeepSTORM3D: dense 3D localization microscopy and PSF design by deep learning,” Nat. Methods 17(7), 734–740 (2020).

25. A. Speiser, L.-R. Müller, P. Hoess, U. Matti, C. J. Obara, W. R. Legant, A. Kreshuk, J. H. Macke, J. Ries, and S. C. Turaga, “Deep learning enables fast and dense single-molecule localization with high accuracy,” Nat. Methods 18(9), 1082–1090 (2021).

26. L. Novotny and B. Hecht, Principles of Nano-Optics (Cambridge University Press, 2012), Chap. 10.

27. T. Chandler, H. Shroff, R. Oldenbourg, and P. La Rivière, “Spatio-angular fluorescence microscopy I. Basic theory,” J. Opt. Soc. Am. A 36(8), 1334–1345 (2019).

28. T. Chandler, H. Shroff, R. Oldenbourg, and P. La Rivière, “Spatio-angular fluorescence microscopy II. Paraxial 4f imaging,” J. Opt. Soc. Am. A 36(8), 1346–1360 (2019).

29. S. Stallinga, “Effect of rotational diffusion in an orientational potential well on the point spread function of electric dipole emitters,” J. Opt. Soc. Am. A 32(2), 213–223 (2015).

30. A. S. Backer and W. E. Moerner, “Determining the rotational mobility of a single molecule from a single image: a numerical study,” Opt. Express 23(4), 4255–4276 (2015).

31. M. Böhmer and J. Enderlein, “Orientation imaging of single molecules by wide-field epifluorescence microscopy,” J. Opt. Soc. Am. B 20(3), 554–559 (2003).

32. O. Zhang and M. D. Lew, “Single-molecule orientation localization microscopy I: fundamental limits,” J. Opt. Soc. Am. A 38(2), 277–287 (2021).

33. J. Foytik and V. K. Asari, “A two-layer framework for piecewise linear manifold-based head pose estimation,” Int. J. Comput. Vis. 101(2), 270–287 (2013).

34. P. Fischer, A. Dosovitskiy, and T. Brox, “Image orientation estimation with convolutional networks,” in German Conference on Pattern Recognition, vol. 9358 (Springer, 2015), pp. 368–378.

35. M. A. Lieb, J. M. Zavislan, and L. Novotny, “Single-molecule orientations determined by direct emission pattern imaging,” J. Opt. Soc. Am. B 21(6), 1210–1215 (2004).

36. E. Nehme, L. E. Weiss, T. Michaeli, and Y. Shechtman, “Deep-STORM: super-resolution single-molecule microscopy by deep learning,” Optica 5(4), 458–464 (2018).

37. C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).

38. J. Chao, E. Sally Ward, and R. J. Ober, “Fisher information theory for parameter estimation in single molecule microscopy: tutorial,” J. Opt. Soc. Am. A 33(7), B36–B57 (2016).

39. C. L. Matson and A. Haji, “Biased Cramér-Rao lower bound calculations for inequality-constrained estimators,” J. Opt. Soc. Am. A 23(11), 2702–2713 (2006).

40. D. Sage, H. Kirshner, T. Pengo, N. Stuurman, J. Min, S. Manley, and M. Unser, “Quantitative evaluation of software packages for single-molecule localization microscopy,” Nat. Methods 12(8), 717–724 (2015).

41. D. Sage, T.-A. Pham, H. Babcock, T. Lukes, T. Pengo, J. Chao, R. Velmurugan, A. Herbert, A. Agrawal, S. Colabrese, A. Wheeler, A. Archetti, B. Rieger, R. Ober, G. M. Hagen, J.-B. Sibarita, J. Ries, R. Henriques, M. Unser, and S. Holden, “Super-resolution fight club: assessment of 2D and 3D single-molecule localization microscopy software,” Nat. Methods 16(5), 387–395 (2019).

42. O. Zhang and M. D. Lew, “Fundamental limits on measuring the rotational constraint of single molecules using fluorescence microscopy,” Phys. Rev. Lett. 122(19), 198301 (2019).

43. E. Cohen, J. Bieschke, R. M. Perciavalle, J. W. Kelly, and A. Dillin, “Opposing activities protect against age-onset proteotoxicity,” Science 313(5793), 1604–1610 (2006).

44. M. Serra-Batiste, M. Ninot-Pedrosa, M. Bayoumi, M. Gairí, G. Maglia, and N. Carulla, “Aβ42 assembles into specific β-barrel pore-forming oligomers in membrane-mimicking environments,” Proc. Natl. Acad. Sci. 113(39), 10866–10871 (2016).

45. B. Ferdman, E. Nehme, L. E. Weiss, R. Orange, O. Alalouf, and Y. Shechtman, “VIPR: vectorial implementation of phase retrieval for fast and accurate microscopic pixel-wise pupil estimation,” Opt. Express 28(7), 10179–10198 (2020).

46. N. Banterle, K. H. Bui, E. A. Lemke, and M. Beck, “Fourier ring correlation as a resolution criterion for super-resolution microscopy,” J. Struct. Biol. 183(3), 363–367 (2013).

47. A. Saguy, O. Alalouf, N. Opatovski, S. Jang, M. Heilemann, and Y. Shechtman, “DBlink: dynamic localization microscopy in super spatiotemporal resolution via deep learning,” bioRxiv (2022).

48. F. Xu, D. Ma, K. P. MacPherson, S. Liu, Y. Bu, Y. Wang, Y. Tang, C. Bi, T. Kwok, A. A. Chubykin, P. Yin, S. Calve, G. E. Landreth, and F. Huang, “Three-dimensional nanoscopy of whole cells and tissues with in situ point spread function retrieval,” Nat. Methods 17(5), 531–540 (2020).

49. M. D. Lew and T. Wu, “Deep-SMOLM: Deep learning resolves the 3D orientations and 2D positions of overlapping single molecules with optimal nanoscale resolution,” GitHub (2022), https://github.com/Lew-Lab/Deep-SMOLM.

50. M. D. Lew and T. Wu, “Deep-SMOLM: Deep learning resolves the 3D orientations and 2D positions of overlapping single molecules with optimal nanoscale resolution,” OSF (2022), https://osf.io/x6p8r/?view_only=b263a8693c5e4418a0b962df31ca0101.

Supplementary Material (4)

Supplement 1: Supplement 1
Visualization 1: SM detection and position-orientation estimation using Deep-SMOLM for simulated biological fibers shown in Fig. 3. (Top) Simulated raw polarized images (red: x-polarized and blue: y-polarized) are compared to (bottom) images reconstructed using the 3
Visualization 2: SM detection and position-orientation estimation using Deep-SMOLM for experimental fibrils shown in Fig. S15(a). (Top) Polarized images (red: x-polarized and blue: y-polarized) collected from the microscope are compared to (bottom) images reconstruct
Visualization 3: SM detection and position-orientation estimation using Deep-SMOLM for experimental intertwined amyloid fibrils shown in Fig. S15(b). (Top) Polarized images (red: x-polarized and blue: y-polarized) collected from the microscope are compared to (bottom
