## Abstract

Sub-diffraction resolution imaging has played a pivotal role in biological research by visualizing key, but previously unresolvable, sub-cellular structures. Unfortunately, applications of far-field sub-diffraction resolution are currently divided between fluorescent and coherent-diffraction regimes, and a multimodal sub-diffraction technique that bridges this gap has not yet been demonstrated. Here we report that structured illumination (SI) allows multimodal sub-diffraction imaging of both coherent quantitative-phase (QP) and fluorescence. Due to SI’s conventionally fluorescent applications, we first demonstrate the principle of SI-enabled three-dimensional (3D) QP sub-diffraction imaging with calibration microspheres. Image analysis confirmed enhanced lateral and axial resolutions over diffraction-limited QP imaging, and established striking parallels between coherent SI and conventional optical diffraction tomography. We next introduce an optical system utilizing SI to achieve 3D sub-diffraction, multimodal QP/fluorescent visualization of A549 biological cells fluorescently tagged for F-actin. Our results suggest that SI has a unique utility in studying biological phenomena with significant molecular, biophysical, and biochemical components.

© 2017 Optical Society of America

## 1. Introduction

Optical microscopy has played a crucial role in advancing the frontiers of biological sciences by allowing high-resolution, non-invasive visualization of important biological samples. Although developments and advances in optical design and manufacturing have made available high-resolution objectives with unprecedented numerical aperture (NA), microscopy faces a fundamental physical diffraction limit that can preclude visualization of important sub-cellular features in biological samples [1, 2]. In response, several imaging techniques have been developed which allow far-field sub-diffraction resolution imaging using a variety of unique and innovative mechanisms [3, 4].

Sub-diffraction imaging techniques introduced thus far operate in two main regimes: 1) imaging via spatially-coherent diffraction, or 2) imaging via spatially-incoherent fluorescence. Synthetic aperture (SA) is a popular choice for imaging in the first regime, and operates by using oblique illuminations to spatiotemporally encode a wider frequency support into the final image than directly allowed by the microscope’s physical aperture [5–7]. Applications of SA have resulted in both high-resolution imaging, where (Sparrow) resolutions of < 100 nm have been achieved [8], and high-throughput imaging, where gigapixel-scale images with resolutions > 5x over the diffraction limit have been obtained [9,10].

Sub-diffraction techniques for imaging in the fluorescent regime, often referred to as “super-resolution” techniques, typically require the sample’s fluorescent labels to have either photoswitching or depletion capabilities. Photoactivated localization microscopy (PALM) is a prominent example that uses photoswitchable fluorophores to localize individual emitters at sub-diffraction resolutions per acquisition, before combining them into a final super-resolved image [11]. Stimulated emission depletion (STED) is another successful example that utilizes point-scanning to directly minimize the size of the scanned focal spot by saturated stimulated emission with two synchronized ultrafast laser sources [12]. Such techniques have found tremendous success in biological imaging and have achieved resolutions well below 100 nm.

These two regimes operate on fundamentally different mechanisms, and thus enable distinct and complementary biological observations. Fluorescence imaging is the standard for molecular-specific, background-free, cellular imaging, and has enabled great insights into gene expression, protein interaction, cytoskeletal organization, endocytic dynamics, organelle structures, intracellular transport, cytokinesis, and general intracellular dynamics [13–18]. Coherent-diffraction imaging is the choice technique to image unstained and minimally prepared cellular samples with endogenous contrast to extract quantitative and biologically relevant parameters. Examples of coherent-diffraction imaging include quantitative-phase (QP) and light-scattering imaging, which can noninvasively probe structural, mechanical, biophysical, and biochemical properties of cells, and have been used for analysis of whole-cell morphology, mass, shear stiffness, refractive-index/optical-path-length (OPL), dispersion spectroscopy, and absorption/scattering [19–23]. Important biomedical applications even include non-invasive and quantitative measurements of hemoglobin oxygenation saturation [24] and detection of different cancer types [25].

Past works considering the synergistic combination of both fluorescent and coherent-diffraction have introduced imaging systems that efficiently combined the two regimes into 2D multimodal imaging solutions [26, 27]. More recently, this concept has also been extended to 3D, where a hybrid system was introduced that combined optical diffraction tomography (enabled with optical-tweezers for sample rotation) with confocal fluorescence microscopy [28]. This recent demonstration achieved 3D multimodal imaging of refractive-index and optically-sectioned fluorescence. However, though these past works have introduced the importance of multimodal imaging capability towards biological imaging, they were conducted at diffraction-limited resolutions.

Generally, sub-diffraction resolution enhances biological insight in both fluorescent and coherent-diffraction regimes – however, because the two regimes are fundamentally different, a sub-diffraction resolution method applicable to both has been difficult to find. This poses an obstacle in microscopy, as users conventionally must choose between either unimodal sub-diffraction or multimodal diffraction-limited imaging. Such a choice can prevent a synergistic, multimodal analysis of individual sub-cellular components beyond the diffraction limit and can hinder a cohesive understanding of biological morphology and function.

In this work, we demonstrate that structured illumination (SI) microscopy is a sub-diffraction technique compatible with both coherent-diffraction and fluorescent imaging. Indeed, SI is already an established technique which is conventionally associated with fluorescent super-resolution and has become popular due to its speed, cost, and simplicity [13–15]. However, SI also has applications in coherent imaging and has shown promise for sub-diffraction quantitative-phase (QP) imaging of diffractive samples [29–32]. Because SI enables fluorescent and coherent-diffraction sub-diffraction imaging, it offers a unique capability for biologists to conduct multimodal studies that probe relationships between the biophysical/biochemical properties of a cell and its molecular properties/processes, at sub-diffraction resolutions. In the following sections, we experimentally demonstrate, for the first time to our knowledge, a single optical system that uses SI to generate 3D QP and fluorescence visualizations at sub-diffraction resolutions, with potential applications for future multimodal analysis of sub-diffraction cellular components.

## 2. Theoretical framework

#### 2.1 Imaging transfer functions

Diffraction theory describes the maximum lateral spatial frequencies ${k}_{\parallel ,F}$ and ${k}_{\parallel ,QP}$ and maximum axial spatial frequencies ${k}_{\perp ,F}$ and ${k}_{\perp ,QP}$, respectively, that can be observed through a microscope for both conventional fluorescence and QP microscopies. For the case where QP and fluorescent modalities share the same physical microscope and the light used for QP imaging serves the dual purpose of also being the excitation for fluorescent imaging, these maximum spatial frequencies are given by ${k}_{\parallel ,F}=2\text{NA}/{\lambda}_{em}$, ${k}_{\parallel ,QP}=\text{NA}/{\lambda}_{ex}$, ${k}_{\perp ,F}=n(1-\mathrm{cos}\theta )/{\lambda}_{em}$, and ${k}_{\perp ,QP}=n(1-\mathrm{cos}\theta )/{\lambda}_{ex}$ [33]. Here, ${\lambda}_{ex}$ and ${\lambda}_{em}$ are the excitation and emission wavelengths used for QP and fluorescent imaging, respectively, *n* is the index of refraction of the medium, $\theta $ is the maximum half-angle of light that the detection objective supports, and $\text{NA}=n\mathrm{sin}\theta $ is the detection objective’s numerical aperture. In three-dimensional Fourier space, these spatial frequencies set the bounds of observable spatial frequency content that can be passed through the microscope (note that the spatial frequency bounds for QP and fluorescence are only applicable in the electric-field and optical intensity regimes, respectively, so direct comparisons are inappropriate). In accordance with diffraction theory, these regions of observable spatial frequencies define the systems’ transfer functions (TF) and take the shape of a spherical cap, mathematically a subsection of Ewald’s sphere, in QP imaging and a torus-like structure in fluorescence imaging (Fig. 1) [2, 34].
Note that ${k}_{\perp ,QP}$ does not equate to QP imaging’s axial resolution – because QP imaging’s TF has infinitesimal axial frequency support, the sample’s diffracted wave vector with the ${k}_{\perp ,QP}$ axial component propagates through the whole image volume and offers little optical sectioning [35, 36].
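As a concrete check of these definitions, a short numerical sketch (our own illustration; the function and parameter names are ours) confirms that the axial extent of the coherent spherical-cap TF equals $n(1-\mathrm{cos}\theta )/{\lambda}_{ex}$:

```python
import numpy as np

def ewald_cap_axial_extent(NA, n, wavelength):
    """Max axial spatial frequency of the coherent (QP) transfer function,
    i.e., the axial sag of the Ewald-sphere cap of radius n/wavelength."""
    k0 = n / wavelength            # radius of the Ewald sphere
    k_par_max = NA / wavelength    # lateral cutoff k_{parallel,QP}
    # axial sag of the cap at its edge: k0 - sqrt(k0^2 - k_par_max^2)
    return k0 - np.sqrt(k0**2 - k_par_max**2)

# Check against n*(1 - cos(theta))/wavelength with sin(theta) = NA/n.
NA, n, lam = 1.0, 1.33, 0.488      # example values (wavelength in um)
theta = np.arcsin(NA / n)
assert np.isclose(ewald_cap_axial_extent(NA, n, lam),
                  n * (1 - np.cos(theta)) / lam)
```

With these example values (NA = 1.0 and an assumed aqueous immersion index n ≈ 1.33), the reciprocal of this axial extent is ≈ 1.1 um, consistent with the tomographic axial diffraction limit quoted later for the microsphere experiments.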

The spatial frequencies outside the QP and fluorescent TFs are typically unobservable and are the target for visualization by sub-diffraction resolution imaging. SI achieves this by aliasing information into the TF by illuminating the sample with a spatially modulated illumination pattern. Because this aliasing effect happens naturally when two spatial patterns overlap, regardless of fluorescent or diffractive imaging, SI can serve as a generalized platform for multimodal sub-diffraction imaging, up to a resolution gain factor of 2. Nonlinear SI, which uses fluorescent nonlinearities to achieve resolution gains of >2, is not considered here [37]. In this work, we demonstrate that SI allows 3D sub-diffraction multimodal QP and fluorescent imaging. We refer the reader to the work by Gustafsson *et al.* [34], which beautifully illustrates the framework behind SI for 3D super-resolution fluorescent imaging – in the next section, we introduce an analogous approach for QP imaging.

#### 2.2 Principle of SI-enabled 3D QP visualization

The SI framework for sub-diffraction resolution imaging is typically modelled mathematically as a modulation of the spatial frequencies in the sample by those in an illumination pattern, which is then imaged through a system that operates under constraints of linearity and translation-invariance. The illumination’s modulation of the sample’s spatial frequencies results in resolvable “beat” frequencies (i.e., Moiré patterns) that allow reconstruction of sample spatial frequencies that fall outside the system’s diffraction limit [38]. This mathematical treatment does not make any fundamental assumptions on coherence, and is thus equally applicable to both fluorescent and coherent-diffraction imaging (as long as complex electric-field is imaged in the coherent imaging case so that the requirement for linear and translation-invariant imaging is satisfied). Indeed, SI’s ability to enable sub-diffraction resolution coherent QP-microscopy (SI-QPM) in two dimensions has been clearly demonstrated [29–32].

To intuitively extend the conceptual framework towards 3D coherent imaging, we note that for coherent-diffraction imaging, SI is equivalent to multiplexed oblique illumination, where each tilted plane wave in the illumination’s plane-wave-decomposition contributes linearly to the electric-field at the image plane [39]. It follows from Fourier theory that because on-axis plane wave illumination results in a spherical-cap TF centered at the origin of frequency space, tilted plane-wave illumination, mathematically described with a phase ramp, displaces the TF in frequency space by a lateral offset set by the tilt angle. This concept is identical to and ubiquitously used in angle-scanning optical diffraction tomography (ODT) [22, 33, 40].

SI-QPM’s main difference from ODT is in implementation – because SI linearly multiplexes tilted illuminations onto the sample to achieve its structured pattern, the imaged field is a linear superposition of the sample’s spatial frequencies from correspondingly shifted TFs. In the case of a sinusoidal structured pattern, phase shifting the pattern allows for an analytical solution for these spatial frequencies, which can then be digitally shifted to their correct regions in 3D frequency space [37, 38]. Because the coherent TF is a spherical cap with infinitesimal axial frequency support, simply illuminating with structured patterns with spatial frequencies at the maximum-allowed magnitude, as is typically done to maximize lateral resolution gain in fluorescent SI, leaves much of the axial frequency space uncovered. To thoroughly cover 3D frequency-space, the tilt angle of the illuminations must be incremented. Conventionally, ODT accomplishes this by scanning the tilt angle of the illumination beam directly with a pair of scanning mirrors. SI can achieve the same effect by illuminating with spatial sinusoids of varying spatial frequencies (Fig. 2).
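The spectral multiplexing described above can be sketched numerically (our illustration, with arbitrary stand-in data): multiplying a coherent field by a sinusoid of frequency $f_0$ produces two half-weight copies of its spectrum shifted by $\pm f_0$, which is exactly the frequency-space displacement that the phase-shifting procedure later disentangles:

```python
import numpy as np

# Stand-in 1D "sample field" (random, for illustration only).
N = 256
x = np.arange(N)
k0 = 32                                  # illumination frequency, in DFT bins
rng = np.random.default_rng(0)
sample = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Structured illumination: multiply the field by a sinusoid.
modulated = sample * np.cos(2 * np.pi * k0 * x / N)

# The spectrum of the modulated field is two half-weight copies of the
# sample spectrum, circularly shifted by +/- k0 bins.
S = np.fft.fft(sample)
M = np.fft.fft(modulated)
assert np.allclose(M, 0.5 * (np.roll(S, k0) + np.roll(S, -k0)))
```

Varying `k0` here corresponds to varying the spatial frequency of the illumination pattern, i.e., to incrementing the effective tilt angle as described in the text.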

Coherent SI and ODT are conventionally considered separate imaging techniques, and so their connection as intuitively explained above may not be immediately obvious. For the interested reader, the Appendix contains a rigorous formulation of the theory for SI-enabled 3D QP sub-diffraction imaging, built upon the fundamental principles of 3D fluorescent SI super-resolution as introduced by Gustafsson *et al.* [34]. Though this formulation was derived independently of ODT’s conventional framework, its final conclusions, summarized in Figs. 8(g)-8(j) of the Appendix, are mathematically identical to those of ODT.

#### 2.3 Reporting diffraction-limited resolution

Reporting an imaging system’s resolution usually involves some metric of “minimum-resolvable distance”. We choose to report theoretical diffraction-limited resolution in terms of the Abbe limit, which gives the cycle-period of the highest spatial frequency of the sample that is transferred by the microscope onto the image plane. In our case of DC-centered, symmetric, and filled lateral TFs, diffraction-limited lateral Abbe resolutions are the reciprocals of the spatial frequency bounds described above – QP and fluorescent imaging have Abbe lateral resolutions of ${d}_{\parallel ,QP}=1/{k}_{\parallel ,QP}={\lambda}_{ex}/\text{NA}$ and ${d}_{\parallel ,F}=1/{k}_{\parallel ,F}={\lambda}_{em}/2\text{NA}$ in the electric-field and optical-intensity domains, respectively. As mentioned previously, due to its infinitesimal axial TF extent, conventional QP imaging has little axial resolution [35, 36]. SI-enabled 3D QP, however, has an expected Abbe axial resolution in electric-field of ${d}_{\perp ,QP}=1/{k}_{\perp ,QP}={\lambda}_{ex}/n(1-\mathrm{cos}\theta )$. Similarly, fluorescent imaging has an Abbe axial resolution of ${d}_{\perp ,F}=1/{k}_{\perp ,F}={\lambda}_{em}/n(1-\mathrm{cos}\theta )$ in optical intensity.
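These Abbe limits reduce to simple arithmetic. The following sketch (our code; the function name and the dictionary keys are ours) evaluates them for the cell-imaging system of Section 4.3 (NA = 1.3, n = 1.51, excitation 488 nm, emission 545 nm):

```python
import numpy as np

def abbe_limits(NA, n, lam_ex, lam_em):
    """Abbe lateral/axial limits for coherent QP (electric field) and
    fluorescence (intensity), per the expressions in the text."""
    theta = np.arcsin(NA / n)              # maximum collection half-angle
    ax = n * (1 - np.cos(theta))           # shared axial-support factor
    return {
        "d_par_QP": lam_ex / NA,           # lambda_ex / NA
        "d_par_F":  lam_em / (2 * NA),     # lambda_em / 2NA
        "d_ax_QP":  lam_ex / ax,           # lambda_ex / [n(1 - cos theta)]
        "d_ax_F":   lam_em / ax,           # lambda_em / [n(1 - cos theta)]
    }

# Section 4.3 system: NA = 1.3 objective, n = 1.51 immersion oil (all in nm).
d = abbe_limits(NA=1.3, n=1.51, lam_ex=488.0, lam_em=545.0)
# -> approximately 375, 210, 658, and 735 nm, respectively
```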

We choose the Abbe standard for “minimum-resolvable distance” because of its straightforward applicability to both coherent and incoherent modalities. Other popular alternatives that involve basic parameters of the image’s point-spread-function (FWHM, Rayleigh, Sparrow) have complex interpretations in coherent imaging [1, 41–44]. In the case of intensity-based coherent imaging, where the imaged intensity is nonlinearly related to sample structure, such interpretations may be outright misleading. Even in the case of electric-field imaging via holographic or computational methods, accurate interpretations must incorporate considerations of the system’s coherence properties or the sample’s phase dependences.

We also note that in the case of incoherent diffraction-based imaging, such as brightfield, darkfield, phase-contrast, or differential-interference contrast (DIC) microscopies, the lateral Abbe diffraction limit is generally considered to be ${d}_{\parallel ,inc}=\lambda /({\text{NA}}_{illum}+\text{NA})$, where $\lambda $ is the generalized imaging wavelength and ${\text{NA}}_{illum}$ is the numerical aperture of the illumination objective (i.e., condenser lens) that sets the angular range available for illumination [45, 46]. In the case where the illumination and detection objectives have equal numerical apertures, such that ${\text{NA}}_{illum}=\text{NA}$, the resolution limit ${d}_{\parallel ,inc}=\lambda /2\text{NA}$ matches the resolution limit ${d}_{\parallel ,F}$ of fluorescent microscopy. Because typical QP microscopies use on-axis plane wave illumination, it is tempting to conclude that the resolution limit ${d}_{\parallel ,QP}=\lambda /\text{NA}$ is simply a special instance of ${d}_{\parallel ,inc}$ under the constraint ${\text{NA}}_{illum}=0$. One could then naturally argue that it is unfair to consider $\lambda /\text{NA}$ as the coherent lateral diffraction limit when the sample could conceivably be illuminated with the full angular range allowed by the illumination objective to directly achieve $\lambda /2\text{NA}$ resolution imaging, without the addition of SI. To respond to this, we assert that the general expression for ${d}_{\parallel ,inc}$ is applicable only in the case of incoherent (or at best, partially-coherent) illumination, and thus is not an appropriate standard for coherent imaging. Indeed, simply illuminating the sample with coherent waves spanning the full range of illumination angles allowed by the illumination objective equates to illuminating the sample with a speckle pattern, which is unsuitable for widefield imaging. 
Hence, the general standard for coherent widefield QP imaging, which routinely achieves sub-wavelength phase-accuracies and visualizes sub-nanometer fluctuations, is to illuminate the sample with a single on-axis coherent beam (i.e., ${\text{NA}}_{illum}=0$), and consider the resulting resolution limit of ${d}_{\parallel ,QP}=\lambda /\text{NA}$ as the coherent system’s diffraction limit [5, 9, 36, 46–50].

## 3. Methods

#### 3.1 Optical system

The general design of our system is illustrated in Fig. 3 below. As shown, our system uses single-mode broadband light (NKT Photonics, EXW-6) spectrally filtered to 488 ± 12.5 nm (Semrock, FF01-482/25), which serves the dual purpose of being the illumination and excitation beam for QP and fluorescent imaging, respectively. This light was collimated and passed through a polarizing beam splitter (PBS, Thorlabs, PBS251) before being incident onto an amplitude spatial light modulator (SLM, Holoeye, HED 6001). Because the SLM is nematic, and thus accepts non-binary inputs, sinusoidal patterns were programmed directly into the SLM. Because the SLM is pixel-addressable, the sinusoidal patterns’ translations, rotations, and spatial-frequency magnitudes could be tuned with no mechanical movement. In the SLM’s coordinate space, the minimum spatial period of written patterns was limited to ~27 um, corresponding to approximately 3.4 SLM pixels (SLM pixel pitch = 8.0 um) and thus satisfying the Nyquist limit for SLM pixel sampling. The pattern written onto the SLM was passed through the first 4f system (L1 → L2) before being imaged through the second 4f system (L3 → OBJ) onto the sample. To ensure faithful generation of a sinusoidal pattern at the sample, extraneous diffraction orders resulting from the SLM’s pixelation were spatially filtered out by an adjustable iris diaphragm (F, Thorlabs, ID25) placed in the Fourier plane of the first 4f system. The focal length of L3 was chosen so that the desired ± 1 diffraction orders arising from a ~27 um spatial-period sinusoidal pattern would be refocused to points near the opposite edges of the back focal plane of OBJ. Diffraction and fluorescence from the sample are collected in transmission through a detection objective (matched in NA to the imaging objective) and magnified by the next 4f system (OBJ → L4).
The sample’s fluorescence signal is split from the diffraction signal by a dichroic mirror (DM, Thorlabs, DMLP505) and further spectrally filtered (SF, Thorlabs, FEL500) before being imaged onto the first camera (CMOS-F, Pixelink). The sample’s diffraction signal, after being split from the fluorescence via the DM, was passed through a diffraction-phase setup, where a grating (DG, Edmund Optics, Ronchi 60 lpmm) was placed at the plane conjugate to the sample image to split the signal into various diffraction orders. A mask (M) was positioned at the Fourier plane of DG to physically block the −1st diffraction order while completely passing the +1st order. The mask also contained a 20 um pinhole (PH, Edmund Optics, 52-869) to spatially filter the 0th order and generate a uniform-wavefront reference beam to interfere with the +1st diffraction order at the second camera (CMOS-QP, Pixelink). Due to broadband common-path off-axis interference, high temporal phase stability and low coherent noise (noise variance of < 0.001 rad for widefield imaging) were achieved [29]. Figure 3(c) shows in more detail how the mask was positioned relative to the diffraction orders arising from DG and the SLM.

For experiments involving calibration microspheres (Figs. 4 and 5), a 60X, 1.0 NA Nikon Physiology objective lens was used for imaging. For the experiments involving A549 cells (Fig. 6), this objective was replaced with a 40X, 1.3 NA Zeiss objective lens. The focal length of the tube lens L4 was chosen to set the system magnification such that a coherent diffraction-limited spot was sampled by ~6 × 6 camera pixels (6.7 um pixel pitch). This oversampling, in conjunction with the off-axis interference frequency set by DG (60 lpmm), was sufficient to extract the sample’s complex electric-field via standard digital off-axis holography techniques [46, 47, 51]. The sample was mounted on an automated piezo-stage (Thorlabs, MAX312D) to allow sub-micron axial translations during acquisition, which was necessary for volumetric reconstructions.
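The field-extraction step can be illustrated with a minimal simulation (our sketch, not the authors’ code; all object and filter parameters are arbitrary): a synthetic off-axis hologram is demodulated at the carrier frequency and low-pass filtered to recover the complex field, as in standard off-axis digital holography:

```python
import numpy as np

N = 256
x = np.arange(N)
X, Y = np.meshgrid(x, x)

# Synthetic pure-phase object (a weak Gaussian phase bump).
phase = 0.5 * np.exp(-((X - N / 2) ** 2 + (Y - N / 2) ** 2) / 400.0)
obj = np.exp(1j * phase)

# Off-axis hologram: object interfered with a tilted plane-wave reference.
k_c = 64                                     # carrier frequency, in DFT bins
ref = np.exp(2j * np.pi * k_c * X / N)
holo = np.abs(obj + ref) ** 2                # what the camera records

# Demodulate at the carrier, then low-pass to isolate the baseband term.
demod = holo * np.exp(2j * np.pi * k_c * X / N)
D = np.fft.fftshift(np.fft.fft2(demod))
u = np.arange(N) - N // 2
U, V = np.meshgrid(u, u)
D[U ** 2 + V ** 2 > 30 ** 2] = 0             # keep |k| <= 30 bins
field = np.fft.ifft2(np.fft.ifftshift(D))    # recovered complex field

# The recovered phase matches the object's phase in the central region.
err = np.abs(np.angle(field * np.conj(obj)))
assert err[64:192, 64:192].max() < 0.02
```

The low-pass radius here plays the role of the physical pinhole/mask in the diffraction-phase setup: it selects the interference term that carries the object field.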

#### 3.2 Image acquisition

Custom acquisition software was written in MATLAB. Three-beam interference, with an unblocked 0th order, was used to generate periodic illumination patterns for 3D fluorescent and QP SI. The sample was axially scanned during image acquisition at increments of 200 nm, satisfying the Nyquist requirement set by our QP and fluorescent axial diffraction limits. A total of 60 axial slices were taken for a single volume. There was no physical change in the optical system between 3D fluorescent and QP SI imaging – however, the patterned illumination procedures for the two modalities were different and so the fluorescent/QP acquisitions were not simultaneous.

For 3D fluorescent SI imaging, the maximum allowed spatial frequency was used for the illumination pattern, such that the component ± 1 orders were at the edge of the illuminating objective’s back focal plane. For each imaged z-plane, acquisitions were taken for five phase-shifts, spaced $2\pi /5$, per rotation of the illumination pattern, with two rotations, spaced $\pi /2$ radians apart. In total, this corresponded to 600 raw acquisitions to reconstruct a single volume. Camera integration time was set to 120 ms per acquisition for sufficient fluorescent SNR.

For 3D QP SI imaging, the spatial frequency magnitude of the periodic illumination pattern was incremented 10 times through the domain $[0,\text{NA}/\lambda ]$, per imaged z-plane. For each spatial frequency, acquisitions were taken for five phase-shifts, spaced $2\pi /5$, per rotation of the illumination pattern, with six total rotations, spaced $\pi /3$ radians apart. In total, this corresponded to 18000 raw acquisitions to reconstruct a single sub-diffraction-resolved volume. Camera integration time was set to 15 ms per acquisition to average out the high-frequency temporally-fluctuating “flickers” inherent to our SLM device.
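The acquisition budgets for both modalities, and the axial Nyquist criterion for the 200 nm z-step, follow from simple bookkeeping (our arithmetic check; the variable names are ours):

```python
# Acquisition bookkeeping for one reconstructed volume (numbers from the text).
z_slices = 60                          # axial positions per volume

# 3D fluorescent SI: 5 phase shifts x 2 pattern rotations per z-plane.
fluor_acqs = 5 * 2 * z_slices
assert fluor_acqs == 600

# 3D QP SI: 5 phase shifts x 6 rotations x 10 spatial-frequency magnitudes.
qp_acqs = 5 * 6 * 10 * z_slices
assert qp_acqs == 18000

# Axial Nyquist check: the 200 nm z-step must be below half the finest
# axial period (using the cell-imaging axial limits, in nm).
d_ax_QP, d_ax_F, z_step = 658.0, 735.0, 200.0
assert z_step < min(d_ax_QP, d_ax_F) / 2
```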

For both fluorescent and QP imaging, phase-shifting of the spatial illumination pattern allowed solving for the sub-diffraction resolution spatial frequency components via linear system inversion. These components were then translated back to their appropriate regions in frequency space [29, 38] before being combined (this process is rigorously described in the Appendix). Standard Wiener deconvolution methods were applied to account for the non-uniform weighting of the final computed TF.
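The phase-shifting inversion can be sketched as follows (our illustration with stand-in scalar data; in practice each $C_m$ is an image and the same inversion is applied pixel-wise): for a pattern carrying harmonics $m=-2\dots 2$, five acquisitions at phases spaced $2\pi /5$ form an invertible (DFT-like) linear system for the shifted frequency components:

```python
import numpy as np

# Stand-in true components C_m, m = -2..2 (five harmonics of the pattern).
rng = np.random.default_rng(1)
C_true = rng.standard_normal(5) + 1j * rng.standard_normal(5)

m = np.arange(-2, 3)                       # harmonic orders
phis = 2 * np.pi * np.arange(5) / 5        # five pattern phase shifts

# Each acquisition mixes the components: I_n = sum_m C_m * exp(i*m*phi_n).
A = np.exp(1j * np.outer(phis, m))         # 5 x 5 mixing (DFT-like) matrix
I_meas = A @ C_true                        # simulated measurements

# Linear-system inversion recovers each component exactly.
C_est = np.linalg.solve(A, I_meas)
assert np.allclose(C_est, C_true)
```

After this per-pixel inversion, each recovered component is translated to its proper region of frequency space and the overlaps are merged, with Wiener weighting compensating the non-uniform TF coverage.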

#### 3.3 Sample preparation

Microsphere Phantom Preparation: We prepared calibration samples of 400 nm and 520 nm diameter polystyrene microspheres (Bangs Laboratories). To attach the beads to a surface, 10 uL of the microsphere dilutions (2 uL stock solution / 500 uL ethanol) were placed onto #1.5 coverslips and allowed to dry. 1X phosphate-buffered saline (PBS) was placed over the regions of interest and served as the appropriate immersion fluid for our 1.0 NA Nikon Physiology objective lens.

Cell Preparation: A549 lung cancer cells were cultured in Dulbecco’s Modified Eagle Medium (DMEM) supplemented with 10% fetal bovine serum and 1 µL/mL pen-strep. Cells were plated at low density onto #1.5 coverslips and allowed to attach to the substrate overnight. Cells were fixed using 4% paraformaldehyde in PBS. Alexa Fluor 488 phalloidin (Life Technologies) was used to stain filamentous actin following the manufacturer’s suggested protocol. Following staining, an adhesive spacer and coverslip were placed on top of the sample to ensure a uniform PBS layer above the imaging field of view. Oil of refractive index $n=1.51$ was placed over the region of interest and served as the appropriate immersion fluid for our 40X 1.3 NA Zeiss objective lens.

## 4. Results

#### 4.1 2D lateral resolution enhancement of microspheres with SI-QPM

SI’s capability to enable 2D lateral QP sub-diffraction resolution was previously demonstrated using low imaging NA [29–32]. Here, we extend these results to demonstrate SI-enabled sub-diffraction resolution QP imaging at high NA. We first demonstrate SI’s capabilities for lateral resolution enhancement by comparing conventional widefield (WF) and SI visualizations of calibration microspheres with physical dimensions smaller than the imaging resolution. As mentioned previously, for imaging experiments involving calibration microspheres, we used an imaging objective of NA = 1.0. Along with broadband, single-mode illumination at $\lambda =488\pm 12.5\text{nm}$, this NA yields a lateral diffraction limit for QP imaging of ${d}_{\parallel ,QP}=\lambda /\text{NA}\approx 488\text{nm}$ and an axial diffraction limit for tomography of ${d}_{\perp ,QP}=\lambda /\left[n\left(1-\mathrm{cos}\theta \right)\right]\approx 1.1\text{um}$.

We prepared a monolayer sample of 400 nm diameter polystyrene ($n=1.60$ at $\lambda =488\text{nm}$) microspheres as our sample of choice with sub-diffraction resolution features (specific sample preparation methods described previously). In Fig. 4 below, we compare the QP visualization capabilities between SI and conventional widefield (WF) QP imaging. As is clear in Fig. 4(b),4(d), the 400 nm diameter microspheres are beyond the resolution allowed by widefield QP imaging, and cannot be individually visualized. However, Fig. 4(a),4(c) shows that SI drastically enhances QP imaging resolution, and enables clear visualization of individual microspheres.

#### 4.2 3D axial resolution enhancement of microspheres with SI-QPM

We next demonstrate that SI enables 3D QP visualization and emphasize how the added axial resolution may enhance the lateral visualization of structures even within the system’s diffraction limit. For this, we prepared a monolayer sample of 520 nm polystyrene microspheres that we imaged with our objective lens of NA = 1.0 (our lateral and axial diffraction limits remain ${d}_{\parallel ,QP}=488\text{nm}$ and ${d}_{\perp ,QP}\approx 1.1\text{um}$, respectively).

Figures 5(a) and 5(b) compare the SI-enhanced and conventional widefield (WF) QP images, respectively, of a central x-y slice through an imaging volume of this sample, demonstrating superior visualization with SI enhancement. Figures 5(c) and 5(d) show depth-slices through the location marked by the dashed yellow line in Figs. 5(a) and 5(b), respectively, and demonstrate that SI-enhancement can provide depth sectioning of the microspheres to a resolution of 1.2 um, which matches well with our expected axial resolution. No such depth localization is apparent with the conventional WF depth-slice, where QP signal from the microspheres propagates through all depth slices (conventional 2D QP imaging, like other similar digital holographic techniques, is often performed in conjunction with autofocusing schemes due to this phenomenon [52]). We note in Fig. 5(c) that, although the beads are well localized, a haze of QP signal (indicated by yellow arrows) is present – this is an artifact of the “missing-cone” problem in ODT, arising from incomplete frequency coverage, and is typically dealt with in post-processing [53]. To demonstrate how SI increases the frequency content of the imaged volume, we show in Figs. 5(e),5(f) the axial profiles of radially-averaged 3D Fourier transforms of the SI-enhanced and WF imaging volumes, respectively. The Fourier transform of the WF imaging volume clearly shows that the imaged spatial frequencies lie on the spherical shell (Ewald cap) associated with coherent imaging. In contrast, the Fourier transform of the SI-enhanced volume depicts the distinct butterfly shape associated with ODT [22, 40], which allows optical sectioning and enables 3D QP. As expected, Fig. 5(e) also shows twice the lateral frequency support of Fig. 5(f). Figures 5(g) and 5(h) show zooms of the regions in Figs. 5(a), 5(b) outlined in yellow to emphasize the improvement in visualization capability of SI-enhanced QP over WF.
Though the 520 nm microspheres are within the system’s diffraction limit and are visible in the conventional WF QP image zoom in regions of sparse microsphere density (indicated with green arrows in Fig. 5(h)), microsphere edges show significant defocus as well as susceptibility to diffraction artifacts. In regions of high microsphere density (indicated with yellow arrows in Fig. 5(h)), these defocus and diffraction artifacts can effectively hinder clear visualization of individual microspheres. In the SI QP image zoom, such defocus and diffraction artifacts are effectively sectioned out, resulting in clear and sharp visualization of all individual microspheres.
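The radially averaged spectra of Figs. 5(e),5(f) can be computed as follows (a hedged sketch; this is our implementation and the function and variable names are ours, not the authors’): average the 3D Fourier magnitude over the lateral azimuthal angle, leaving a (axial, lateral-radius) map.

```python
import numpy as np

def radial_average_spectrum(vol):
    """Average |FFT3(vol)| over the lateral azimuthal angle, returning a
    (k_z, lateral |k|) map of the volume's frequency support."""
    F = np.abs(np.fft.fftshift(np.fft.fftn(vol)))
    nz, ny, nx = F.shape
    y = np.arange(ny) - ny // 2
    x = np.arange(nx) - nx // 2
    X, Y = np.meshgrid(x, y)
    r = np.round(np.sqrt(X**2 + Y**2)).astype(int)   # lateral radius bins
    nr = r.max() + 1
    counts = np.bincount(r.ravel(), minlength=nr)
    out = np.empty((nz, nr))
    for iz in range(nz):                             # average within each bin
        out[iz] = np.bincount(r.ravel(), weights=F[iz].ravel(),
                              minlength=nr) / counts
    return out

# Smoke test on a random volume (stand-in for a reconstructed QP volume).
spec = radial_average_spectrum(
    np.random.default_rng(2).standard_normal((16, 32, 32)))
```

Applied to a WF reconstruction this map should trace the Ewald cap, while an SI-enhanced reconstruction should fill out the butterfly-shaped ODT support.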

#### 4.3 Multimodal 3D sub-diffraction resolution biological visualization

SI 3D sub-diffraction cellular resolution has been demonstrated for fluorescent imaging [34, 54] – here, however, we experimentally demonstrate, for the first time to our knowledge, SI being used for 3D sub-diffraction imaging of both QP and fluorescence in a single, multimodal optical system. As described previously, for this system, we used an imaging objective with NA of 1.3 to image through immersion media with refractive index $n=1.51$. Our excitation light remained the original broadband, single-mode illumination at ${\lambda}_{exc}=488\pm 12.5\text{nm}$. This yields a QP lateral diffraction limit of ${d}_{\parallel ,QP}={\lambda}_{exc}/\text{NA}\approx 375\text{nm}$ and a QP tomographic axial diffraction limit of ${d}_{\perp ,QP}={\lambda}_{exc}/\left[n\left(1-\mathrm{cos}\theta \right)\right]\approx 658\text{nm}$. Our fluorescent filter was designed to pass fluorescent emission at ${\lambda}_{f}=545\pm 20\text{nm}$. Thus, our expected fluorescent lateral and axial diffraction limits are ${d}_{\parallel ,F}={\lambda}_{f}/2\text{NA}\approx 210\text{nm}$ and ${d}_{\perp ,F}={\lambda}_{f}/\left[n\left(1-\mathrm{cos}\theta \right)\right]\approx 735\text{nm}$, respectively.

To demonstrate imaging performance in a biological sample, we fluorescently labelled A549 cells with AlexaFluor-488 phalloidin for F-actin visualization, and imaged QP and fluorescence (Fig. 6). While QP imaging offers endogenous mass-based contrast, it lacks sensitivity to specific organelles and cellular components. Therefore, coupling QP imaging with fluorescence allows for the simultaneous evaluation of mass and other descriptors of specific cytological components, and could be further used to delineate organelle boundaries for the determination of refractive index. For both modalities, SI imaging offered dramatic visualization enhancements when compared to conventional WF counterparts. We begin by comparing imaging performance between SI and WF QPM for an individual A549 cell (Figs. 6(a) and 6(b)). We zoom into a region above the nucleus to visualize a cluster of mass localizations. Molecular labelling would be required to truly ascertain the identity of these high phase-delay structures. However, given their small size and high phase-delay, we hypothesize that these are small lipid-based vesicles, which are known to have a high-refractive-index lipid bilayer that leads to a relatively high phase delay for such small objects [55]. A previous work utilized total internal reflection fluorescence (TIRF) microscopy to monitor the secretion of ATP-containing vesicles from the same cell line, and reported these vesicles to surround the perinuclear region [56]. As seen in Figs. 6(a) and 6(b), the visible high phase-delay vesicles indeed surround the apical portion of the cell where the nucleus likely resides. From the SI and WF zooms in Figs. 6(c) and 6(d), respectively, we see that these QP structures are beyond the diffraction limit – the factor-of-2 resolution enhancement enabled by SI, however, allows clear visualization of the individual localizations. Adjacent localizations were quantitatively measured to have a QP peak-to-peak distance of ~230 nm. Furthermore, Figs. 6(c)-6(j) demonstrate that axial translation of the sample results in severe diffraction artifacts from the out-of-focus localizations in the conventional WF visualization (indicated by yellow arrows in Figs. 6(h) and 6(j)). Depending on the specific features in the sample, these artifacts may even be indistinguishable from in-focus QP signal. With SI enhancement, however, the localizations effectively disappear from out-of-focus QP images. This phenomenon is attributable to the 3D sectioning ability that SI enables in QP imaging by filling out axial frequency space.

The F-actin visualization was also drastically improved when comparing fluorescent SI super-resolution to diffraction-limited WF imaging (Figs. 6(k) and 6(l)). As can be seen, SI was required to visualize individual actin units. Previous work has demonstrated that 3-beam fluorescent SI results in 3D resolution gain as well as out-of-focus rejection via filling of the missing cone [34]. We experimentally confirm this by showing a zoom region (outlined in yellow in Figs. 6(k) and 6(l)) undergoing defocus (Figs. 6(m)-6(t)). Increased axial sectioning is apparent when comparing the defocused zooms of SI to conventional WF. Defocused signal is abundant in the WF zooms and occludes clear visualization of the important high-frequency content associated with the F-actin – conversely, the SI-enhanced zooms demonstrate clear optical sectioning, and the imaged actin morphology shows distinct changes as the cell was axially scanned through the image focus. From the SI fluorescence image volume, the widths of individual actin filaments were measured to be ~180 nm between signal troughs. Often, no resolvable width could be measured in the corresponding WF fluorescence volume, due to the significant defocused fluorescent signal.

Importantly, we see that the QP signal highlights biological structures associated with high optical-path-length while the fluorescent signal offers background-free 3D visualization of molecularly tagged components. Thus, the QP and fluorescent imaging channels allow complementary 3D visualizations. From Figs. 6(a),6(k), we note that the biological information probed by the individual QP and fluorescent channels was inaccessible from the fluorescent and QP channels, respectively.

## 5. Discussion and conclusion

From a theoretical perspective, this work (1) introduced the framework for SI 3D sub-diffraction QP imaging, (2) drew parallels between SI-QPM and oblique illumination microscopy, and (3) equated SI-QPM to a multiplexed form of the more established ODT. However, more fundamentally, this work conveys that SI constitutes a sub-diffraction resolution technique compatible with both fluorescence and QP.

This multimodal compatibility has direct implications that can benefit microscopy for biological research. QP imaging provides a technology with which to noninvasively analyze endogenous cellular biophysical and biochemical parameters. It has shown promise in studying whole-cell spectroscopy, morphology, mass, stiffness, and refractive-index/optical-path-length distributions [19–23]. Furthermore, QP is effectively free and fast – no major sample preparation procedures are necessary to obtain strong imaging signal, and camera integration times rarely pose a problem for longitudinal biological studies. However, QP imaging inherently has no molecular specificity, and the biological parameters extracted from QP cannot be localized to specific cellular components. This prevents analysis of localized and structure-specific QP, which can hinder studies exploring morphology, mechanics, mass, spectra, or density endogenous to individual sub-cellular components. Toward this end, due to its ability for molecular-specific contrast, fluorescence microscopy directly complements QP imaging. Fluorescence microscopy is the dominant imaging choice when studying interactions and dynamics of specific sub-cellular components, as is necessary in studies of gene expression, protein localization, intracellular transport, organelle dynamics, diffusion kinetics, etc. [13–18]. A single optical system that incorporates QP and fluorescent imaging can allow registered, multimodal visualization of specific, localized regions of the sample and perhaps enable molecular-specific quantitative analysis. Indeed, past works have recognized this potential and introduced optical systems that efficiently combine both modalities [26–28]. Unfortunately, though such systems represent important tools for biological analysis, they remain diffraction-limited, and a generalized scheme to achieve robust sub-diffraction resolution imaging with coherent and fluorescent contrast has remained elusive.

SI represents a solution to this problem. Both coherent and fluorescent imaging have individually shown that a factor-of-2 resolution gain dramatically enhances biological visualization. Figure 6 demonstrates this with QP imaging by significantly improving visualization of high phase-delay structures surrounding the nuclear region of the cell. ODT also exploits this factor-of-2 resolution gain when visualizing 3D refractive-index maps at sub-diffraction resolutions [22]. Though not specifically considered here, SI has also been exploited for sub-diffraction imaging of coherent brightfield and darkfield scatter [39, 57], suggesting that SI’s multimodal compatibility extends to coherent imaging modalities apart from QP. In fluorescence imaging, previous applications utilizing SI-enabled resolution-doubling have included super-resolved visualization of microtubule dynamics associated with the polymerization of α-tubulin in Drosophila melanogaster S2 cells [54]. Mitochondria dynamics in HeLa cells were also imaged, and SI-enabled super-resolution allowed visualization of the mitochondrial cristae, usually seen only with electron microscopy. Though not considered here, resolution gains beyond a factor of 2 may be achieved by utilizing fluorophore agents associated with strong non-linear behavior. Such resolution gains would simply require a greater number of translations of the SI pattern, and would not require any changes to either the physical microscope (Fig. 3) or to the fundamental framework of multimodal coherent/fluorescent imaging.

We envision that the biological insights enabled by such resolution enhancements, currently extracted from typically separate coherent and fluorescent imaging systems, can be combined within the SI framework. Such a synergy may be important to comprehensively study biological components with distinct molecular and biophysical/biochemical functions. For example, the cell cytoskeleton, a network of F-actin, microtubules, and other intermediate filaments connecting most cellular structures, directly affects the mechanical properties of the cell. Previous studies have explored the distributions of mechanical stresses and displacements within the cell in response to applied loads and have modelled the cytoskeleton as a network of discrete, stressed elements [58, 59]. The cytoskeleton, however, also affects various important signal transduction pathways (STPs) that control how information (from either mechanical or molecular stimuli) is passed from the cell membrane to structures within the cytoplasm or nucleus. Important cytoskeletal components that enable this include molecules from the glycolytic enzyme, protein kinase, lipid kinase, hydrolase, and GTPase families [60]. Multimodal QP/fluorescent sub-diffraction imaging can offer a unique ability to probe molecular and mechanical interactions in the cytoskeleton during STP events by enabling visualization of molecular-specific mechanical forces as well as whole-cell mechanical responses to signal transduction events. Applications of this can be important to explore how mechanical or mechanotransduction events affect gene expression and protein synthesis, as well as whole-cell functions such as cell growth, differentiation, locomotion, cytokinesis, and apoptosis.

## Appendix

In this section, we rigorously adapt the existing mathematical framework for 3D super-resolution fluorescent imaging via SI to 3D sub-diffraction resolution QP imaging. Though we highlight distinct differences between the two frameworks, we strive to maintain similar notation and derivation strategies as those presented in the original work by Gustafsson *et al* [34]. We hope that such similarities may encourage readers to draw comparisons between the 3D mathematical frameworks for SI-enabled sub-diffraction QP and super-resolution fluorescent reconstructions. We start by briefly reviewing how spatial frequencies propagate through a coherent imaging system.

## Coherent propagation of spatial frequencies through system aperture

For widefield coherent imaging, the illumination beam is typically considered to be a monochromatic plane-wave of wave-vector ${k}_{\text{illum}}=\left(0,0,{k}_{\lambda}\right)$ incident with a flat wavefront on a diffracting sample (shown below in Fig. 7(a)). Here, ${k}_{\lambda}=2\pi /\lambda $ is the wave-vector magnitude, and $\lambda $ is the illumination wavelength. The interaction of the sample with the illumination beam results in a total diffraction that can be decomposed into plane-wave components, each with a differently oriented wave-vector. Because $\lambda $ is maintained across all plane-wave components, these differently oriented wave-vectors share the common wave-vector magnitude of ${k}_{\lambda}$. Thus, in 3D Fourier space, the set of all diffracted wave-vectors trace a sphere (i.e., Ewald sphere) of radius ${k}_{\lambda}$ centered at the origin.

We denote ${k}_{\text{s}}$ as a wave-vector from the set of plane-wave components accepted through the numerical aperture of the imaging objective. The set of all possible wave-vectors ${k}_{\text{s}}$ traces a spherical cap on the Ewald sphere (illustrated below by a circular arc in Fig. 7(b)), and describes the 3D spatial frequencies of the component plane-waves that are allowed to propagate to the imaging detector. In the weak-scattering approximation, valid for largely transparent samples, ${k}_{\text{s}}\approx {k}_{\text{obj}}+{k}_{\text{illum}}$, where ${k}_{\text{obj}}$ is a 3D spatial frequency inherent in the sample. Thus, we see that ${k}_{\text{obj}}$ is simply shifted from ${k}_{\text{s}}$ by the constant illumination wave-vector, ${k}_{\text{obj}}\approx {k}_{\text{s}}-{k}_{\text{illum}}$, for the set of all detected ${k}_{\text{s}}$. In Fourier space, the set of all detected ${k}_{\text{obj}}$ thus traces a spherical cap coincident with the origin (shown below in Fig. 7(c) in the case of widefield imaging), and denotes the region of the sample’s 3D spatial-frequency spectrum that can be imaged. Thus, this region is considered the coherent system’s transfer function (TF). The aim of sub-diffraction resolution imaging is to capture more sample spatial frequencies than those directly encompassed by this TF.
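The geometry above – detected wave-vectors ${k}_{\text{s}}$ on an Ewald cap, shifted by ${k}_{\text{illum}}$ to give a TF cap through the origin – can be sketched numerically. The wavelength and acceptance half-angle below are illustrative assumptions, not the experimental values:

```python
import numpy as np

# Sketch: detected wave-vectors k_s lie on an Ewald cap of radius k_lam;
# subtracting k_illum = (0, 0, k_lam) yields the TF cap through the origin.
lam = 0.488                          # um; k_lam = 2*pi/lam as defined in the text
k_lam = 2 * np.pi / lam
theta = np.deg2rad(60.0)             # assumed acceptance half-angle of the objective

k_xy = np.linspace(-k_lam * np.sin(theta), k_lam * np.sin(theta), 101)
kz_s = np.sqrt(k_lam**2 - k_xy**2)   # Ewald cap: axial component of detected k_s
kz_obj = kz_s - k_lam                # TF cap: k_obj = k_s - k_illum
```

At $k_{x,y}=0$ the shifted cap passes through the origin, and its edges dip to negative $k_z$, matching the cap sketched in Fig. 7(c).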

## 3D image formation with axial scanning of sample

The image formation process in a coherent system imaging complex electric-fields can be described with the standard properties of linearity and translation-invariance. Namely, the electric-field of the system’s image is a blurred version of the electric-field diffracting through the sample. Mathematically, this can be described by:

$$y\left(r\right)=h\left(r\right)\otimes \left[s\left(r\right)\cdot i\left(r\right)\right]\qquad (1)$$

where $r=\left(x,y,z\right)$ is the 3D spatial coordinate vector, $s\left(r\right)$ is the sample’s complex transmittance function, $i\left(r\right)$ is the illumination electric-field through the sample, $y\left(r\right)$ is the acquired image’s complex electric-field (obtained in this manuscript via off-axis holography), $h\left(r\right)$ is the imaging system’s coherent point-spread-function (PSF), and $\otimes $ is the convolution operator. We note here that $h\left(r\right)$ describes the system’s PSF for the spatial frequencies present in the *diffraction propagating from the sample that is able to arrive at the imaging detector*, not simply the inherent spatial frequencies in the sample. Thus, for the purposes of this derivation, we label the Fourier transform of $h\left(r\right)$ as the system’s propagating transfer function (pTF), such that $h\left(r\right)\stackrel{\text{FFT}}{\iff}{H}_{p}\left(k\right)$, where $k=({k}_{x},{k}_{y},{k}_{z})$ is the 3D spatial frequency vector and ${H}_{p}\left(k\right)$ is the system pTF that encompasses the region of 3D Fourier space that describes the spatial frequencies present in the diffracted waves from the sample. Mathematically, the pTF is exactly the set of all possible detected wave-vectors ${k}_{s}$ (shown above in Fig. 7(b)).
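As a concrete illustration of this linear, translation-invariant model, the blurring can be evaluated with the convolution theorem. Below is a toy 1-D sketch (sample, pupil, and grid size are illustrative assumptions):

```python
import numpy as np

# Toy 1-D illustration of y(r) = h(r) (x) [s(r) * i(r)] via the convolution
# theorem (circular convolution on a periodic grid).
N = 256
x = np.arange(N)
s = np.exp(1j * 0.1 * np.sin(2 * np.pi * x / 32))        # weak phase sample
i = np.ones(N, dtype=complex)                            # flat (widefield) field
Hp = (np.abs(np.fft.fftfreq(N)) < 0.05).astype(complex)  # toy pupil (pTF)
h = np.fft.ifft(Hp)                                      # coherent PSF

# Convolution theorem: Y(k) = Hp(k) * FFT(s * i), so y = IFFT(Hp * FFT(s * i)).
y = np.fft.ifft(Hp * np.fft.fft(s * i))
```

The image spectrum is exactly the sample's diffracted spectrum low-pass filtered by the pupil, consistent with the pTF picture above.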

In the specific case where the illumination of the sample is caused by multi-beam interference (as is the case in SI), the 3D illumination electric-field can generally be written as a sum of discrete components separable into axial and lateral harmonic functions:

$$i\left(r\right)={\displaystyle \sum _{m}}{c}_{m}\,{i}_{m}\left(z\right)\cdot {j}_{m}\left({r}_{x,y}\right)\qquad (2)$$

where the axial harmonic functions ${i}_{m}\left(z\right)$ depend on *only* the sample’s reference coordinates. Conversely, if the sample was axially *scanned* during image acquisition, the axial harmonic function would instead depend on the *difference* between the sample’s and image’s coordinate frames, as first noted by Gustafsson *et al.* [31]. Thus, in the case of axial scanning of the sample, the integral form of Eq. (3) is slightly modified:

Because the above mathematical treatment fundamentally relies only on modelling the imaging process as a linear and translation-invariant process (a condition satisfied by coherently imaging complex electric-fields), the final imaging expression Eq. (5) is identical to that presented in Eq. (6) in Gustafsson *et al* [31], where linearity and translation-invariance requirements were fulfilled by imaging intensity through a fluorescent (incoherent) system. We now apply Eq. (5) to specifically describe coherent 3D imaging under widefield and SI conditions.

## 3D image formation with orthogonal widefield illumination and axial scanning of sample

In the case of coherent widefield illumination, which consists of illuminating the sample with a flat wavefront, the illumination beam can be described simply as an axially propagating plane-wave, ${i}_{WF}\left(r\right)=\mathrm{exp}\left(j{k}_{\lambda}z\right)$. Comparing this to the expression in Eq. (2), we clearly see that ${i}_{WF}\left(r\right)$ has only the $m=0$ component and can be separated into axial and lateral harmonic functions ${i}_{WF}\left(r\right)={i}_{0}\left(z\right)\cdot {j}_{0}\left({r}_{x,y}\right)$ where ${i}_{0}\left(z\right)=\mathrm{exp}\left(j{k}_{\lambda}z\right)$ and ${j}_{0}\left({r}_{x,y}\right)=1$, respectively, and coefficient ${c}_{0}=1$. Fourier transforming ${i}_{0}\left(z\right)$ and ${j}_{0}\left({r}_{x,y}\right)$ results in ${I}_{0}\left({k}_{z}\right)=\delta \left({k}_{z}-{k}_{\lambda}\right)$ and ${J}_{0}\left({k}_{x,y}\right)=\delta \left({k}_{x,y}\right)$. After substitution back into Eq. (5), we see that the 3D image formed under widefield illumination with axial scanning of the sample has the spectrum ${Y}_{0}\left(k\right)={H}_{0}\left(k\right)\cdot S\left(k\right)$, where ${H}_{0}\left(k\right)=\left[{H}_{p}\otimes {I}_{0}\right]\left(k\right)={H}_{p}\left(k-{k}_{\lambda}\right)$ and ${k}_{\lambda}=\left(0,0,{k}_{\lambda}\right)$.

We note now that ${H}_{p}\left(k-{k}_{\lambda}\right)$ encompasses exactly the same region of Fourier space as the system TF, illustrated above in Fig. 7(c). Thus, ${Y}_{0}\left(k\right)$ is exactly $S\left(k\right)$ low-pass filtered with the system TF. Though this result is expected, it demonstrates the validity of Eq. (5) in describing the coherent image formation process. We now use Eq. (5) to mathematically describe the image formation process under structured illumination.

## 3D image formation with 3-beam sinusoidal illumination and axial scanning of sample

In this manuscript, we achieve structured illumination of the sample with three mutually coherent tilted plane-waves. Thus, the 3D illumination electric-field can be mathematically expressed as:

$${i}_{SI}\left(r\right)={\displaystyle \sum _{m=0}^{2}}\mathrm{exp}\left[j\left({k}_{m}\cdot r+{\varphi}_{m}\right)\right]\qquad (6)$$

In the specific case where 3-beam interference is achieved by interfering only the 0th and ±1st orders diffracting from a sinusoidal patterned structured element with 2D spatial frequency vector ${k}_{T}$, we can impose the additional constraints ${k}_{0}=\left(0,0,{k}_{\lambda}\right)$, ${k}_{1}=\left({k}_{T},{k}_{T,z}\right)$, ${k}_{2}=\left(-{k}_{T},{k}_{T,z}\right)$, ${\varphi}_{0}=0$, and ${\varphi}_{1}=-{\varphi}_{2}=\varphi $, where ${k}_{T}$ and ${k}_{T,z}$ are the projections of ${k}_{1}$ onto the 2D lateral ${k}_{x,y}$ and 1D axial ${k}_{z}$ coordinate spaces, respectively. Because $\lambda $ is maintained throughout the imaging process, ${k}_{0},{k}_{1},{k}_{2}$ share the same wave-vector magnitude, i.e., ${\left|{k}_{0}\right|}^{2}={\left|{k}_{1}\right|}^{2}={\left|{k}_{2}\right|}^{2}={\left|{k}_{T}\right|}^{2}+{\left|{k}_{T,z}\right|}^{2}={k}_{\lambda}^{2}$. Thus, we can rewrite Eq. (6) as:

$${i}_{SI}\left(r\right)=\mathrm{exp}\left(j{k}_{\lambda}z\right)+{e}^{j\varphi}\mathrm{exp}\left(j{k}_{T,z}z\right)\mathrm{exp}\left(j{k}_{T}\cdot {r}_{x,y}\right)+{e}^{-j\varphi}\mathrm{exp}\left(j{k}_{T,z}z\right)\mathrm{exp}\left(-j{k}_{T}\cdot {r}_{x,y}\right)\qquad (7)$$

In Fig. 8(a)-8(c) and 8(d)-8(f) below, we graphically illustrate the relationship between ${I}_{m}\left({k}_{z}\right)$, ${J}_{m}\left({k}_{x,y}\right)$, and ${H}_{m}\left(k\right)$ and the corresponding ${Y}_{m}\left(k\right)$ terms, respectively, for components $m=0,1,2$. As is clear in the case of $m=0$, the axial/lateral Fourier pair $\left\{{I}_{0}\left({k}_{z}\right);{J}_{0}\left({k}_{x,y}\right)\right\}$ is exactly equivalent to the Fourier representation of widefield illumination – thus, the sample’s spatial frequency information contained in ${Y}_{0}\left(k\right)$ is mathematically equivalent to what is imaged via widefield illumination. However, in the case of $m=1,2$, we see that the combination of axially translated ${H}_{m}\left(k\right)$ (from convolution of ${H}_{p}\left(k\right)$ with ${I}_{m}\left({k}_{z}\right)$) and laterally translated $S\left(k\right)$ (from convolution of $S\left(k\right)$ with ${J}_{m}\left({k}_{x,y}\right)$) results in ${Y}_{m}\left(k\right)$ imaging a region of the sample’s spatial frequency spectrum that is axially and laterally offset from the region imaged with widefield imaging. To reconstruct a high-resolution image where these image regions of the sample’s spatial frequency spectrum are appropriately synthesized, the component images ${Y}_{m}\left(k\right)$ for $m=0,1,2$ need to be separated and then appropriately Fourier shifted.
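The 3-beam field under the constraints above can be sketched numerically; a minimal example with illustrative parameter values (the pattern frequency, wavelength, and grid are assumptions, not the experimental ones):

```python
import numpy as np

# Sketch of the 3-beam SI field: an axial beam plus two tilted beams whose
# lateral/axial components satisfy |k1| = |k2| = k_lam.
lam = 0.488                          # um (illustrative)
k_lam = 2 * np.pi / lam
k_T = 0.6 * k_lam                    # assumed lateral pattern frequency
k_Tz = np.sqrt(k_lam**2 - k_T**2)    # axial component enforcing |k1| = k_lam
phi = np.pi / 4                      # pattern translation phase

X, Z = np.meshgrid(np.linspace(0, 4 * lam, 64), np.linspace(0, 4 * lam, 64))
i_SI = (np.exp(1j * k_lam * Z)                        # k0 = (0, 0, k_lam)
        + np.exp(1j * (k_T * X + k_Tz * Z + phi))     # k1 = ( k_T, k_Tz)
        + np.exp(1j * (-k_T * X + k_Tz * Z - phi)))   # k2 = (-k_T, k_Tz)
```

Translating the physical pattern changes only `phi`, which is exactly the degree of freedom exploited for component separation below.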

Fourier transforming Eq. (4) shows the Fourier spectrum of a single raw image to be a linear superposition of the Fourier spectra of the individual component images:

$$Y\left(k;\varphi \right)={Y}_{0}\left(k\right)+{e}^{j\varphi}{Y}_{1}\left(k\right)+{e}^{-j\varphi}{Y}_{2}\left(k\right)\qquad (8)$$

Acquiring raw images at *N* distinct values of $\varphi$ thus yields a system of *N* equations. Standard linear inversion solvers can then be used to solve for the unknown ${Y}_{0}\left(k\right)$, ${Y}_{1}\left(k\right)$, and ${Y}_{2}\left(k\right)$ components. Varying $\varphi $ is accomplished by simple translation of the sinusoidal patterned structured element. Conventional optical diffraction tomography also acquires exactly these components via sequential oblique illuminations (i.e., illuminating individually with the plane-wave components composing the SI, as expressed in Eq. (6): $\mathrm{exp}\left(j{k}_{0}\cdot r\right)$, $\mathrm{exp}\left(j{k}_{1}\cdot r\right)$, and $\mathrm{exp}\left(j{k}_{2}\cdot r\right)$, respectively).
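The linear inversion can be sketched as follows. This is a minimal version under the assumed mixing model (coefficients $1$, ${e}^{j\varphi}$, ${e}^{-j\varphi}$, following the phase constraints above); the function and variable names are ours, not the paper's:

```python
import numpy as np

def separate_components(raw_spectra, phis):
    """Invert N >= 3 phase-shifted raw spectra into [Y0, Y1, Y2]."""
    phis = np.asarray(phis, dtype=float)
    A = np.stack([np.ones_like(phis, dtype=complex),
                  np.exp(1j * phis),
                  np.exp(-1j * phis)], axis=1)                   # N x 3 mixing matrix
    flat = np.stack([np.asarray(r).ravel() for r in raw_spectra])  # N x voxels
    comps, *_ = np.linalg.lstsq(A, flat, rcond=None)             # least-squares inversion
    shape = np.asarray(raw_spectra[0]).shape
    return [c.reshape(shape) for c in comps]
```

With three equally spaced phases (e.g. $\varphi = 0, 2\pi/3, 4\pi/3$) the mixing matrix is well-conditioned and the three components are recovered exactly up to noise.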

Once the component terms are known, ${Y}_{1}\left(k\right)$ and ${Y}_{2}\left(k\right)$ can be digitally Fourier shifted by $-{k}_{T,3D}$ and ${k}_{T,3D}$ respectively, such that ${Y}_{1}\left(k+{k}_{T,3D}\right)={H}_{1}\left(k+{k}_{T,3D}\right)\cdot S\left(k\right)$ and ${Y}_{2}\left(k-{k}_{T,3D}\right)={H}_{2}\left(k-{k}_{T,3D}\right)\cdot S\left(k\right)$. As shown below in Fig. 8(g)-8(i), such shifts reposition the imaged content in ${Y}_{1}\left(k\right)$ and ${Y}_{2}\left(k\right)$ to their true positions in Fourier space. Simple addition of these repositioned terms yields:

$${Y}_{1}\left(k+{k}_{T,3D}\right)+{Y}_{2}\left(k-{k}_{T,3D}\right)=\left[{H}_{1}\left(k+{k}_{T,3D}\right)+{H}_{2}\left(k-{k}_{T,3D}\right)\right]\cdot S\left(k\right)\qquad (9)$$

This procedure, in turn, is repeated for incremented rotations of ${k}_{T}$ (i.e., rotating the structured element) to isotropically extend the resolution enhancements. Wiener deconvolution techniques can compensate for uneven weighting due to overlap of individual components in the final summations.
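The repositioning and Wiener-weighted synthesis steps can be sketched as follows. This is an illustrative implementation under our own simplifications (integer-bin shifts via `np.roll`; sub-bin shifts would instead use a phase ramp in the conjugate domain; the regularization constant is arbitrary):

```python
import numpy as np

def fourier_shift(Y, shift_bins):
    """Shift spectrum Y by an integer number of frequency bins per axis."""
    return np.roll(Y, shift=shift_bins, axis=tuple(range(Y.ndim)))

def wiener_combine(shifted_Ys, shifted_Hs, w=0.05):
    """Wiener-style synthesis of repositioned components into one estimate of
    S(k), down-weighting regions where the shifted passbands H_m overlap."""
    num = sum(np.conj(H) * Y for Y, H in zip(shifted_Ys, shifted_Hs))
    den = sum(np.abs(H) ** 2 for H in shifted_Hs) + w ** 2
    return num / den
```

In regions covered by several components, the denominator grows with the summed passband weight, compensating the uneven coverage noted above.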

## Funding

National Science Foundation (CBET-1403905)

## Acknowledgments

We would like to acknowledge members of the Izatt lab for useful discussions and technical input. Additional datasets are presented in the preprint version of this work: arXiv:1702.03582.

## References and links

**1. **J. W. Goodman, *Introduction to Fourier Optics* (Roberts and Company Publishers, 2005).

**2. **M. Born and E. Wolf, *Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light* (CUP Archive, 2000).

**3. **L. Schermelleh, R. Heintzmann, and H. Leonhardt, “A guide to super-resolution fluorescence microscopy,” J. Cell Biol. **190**(2), 165–175 (2010). [CrossRef] [PubMed]

**4. **D. Mendlovic, *Optical Superresolution* (Springer, 2012), Vol. 91.

**5. **M. Kim, Y. Choi, C. Fang-Yen, Y. Sung, R. R. Dasari, M. S. Feld, and W. Choi, “High-speed synthetic aperture microscopy for live cell imaging,” Opt. Lett. **36**(2), 148–150 (2011). [CrossRef] [PubMed]

**6. **P. C. Sun and E. N. Leith, “Superresolution by spatial-temporal encoding methods,” Appl. Opt. **31**(23), 4857–4862 (1992). [CrossRef] [PubMed]

**7. **V. Mico, Z. Zalevsky, P. García-Martínez, and J. García, “Synthetic aperture superresolution with multiple off-axis holograms,” J. Opt. Soc. Am. A **23**(12), 3162–3170 (2006). [CrossRef] [PubMed]

**8. **Y. Cotte, F. Toy, P. Jourdain, N. Pavillon, D. Boss, P. Magistretti, P. Marquet, and C. Depeursinge, “Marker-free phase nanoscopy,” Nat. Photonics **7**(2), 113–117 (2013). [CrossRef]

**9. **G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics **7**(9), 739–745 (2013). [CrossRef] [PubMed]

**10. **L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express **5**(7), 2376–2389 (2014). [CrossRef] [PubMed]

**11. **E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science **313**(5793), 1642–1645 (2006). [CrossRef] [PubMed]

**12. **B. Hein, K. I. Willig, and S. W. Hell, “Stimulated emission depletion (STED) nanoscopy of a fluorescent protein-labeled organelle inside a living cell,” Proc. Natl. Acad. Sci. U.S.A. **105**(38), 14271–14276 (2008). [CrossRef] [PubMed]

**13. **P. Kner, B. B. Chhun, E. R. Griffis, L. Winoto, and M. G. Gustafsson, “Super-resolution video microscopy of live cells by structured illumination,” Nat. Methods **6**(5), 339–342 (2009). [CrossRef] [PubMed]

**14. **R. Fiolka, L. Shao, E. H. Rego, M. W. Davidson, and M. G. Gustafsson, “Time-lapse two-color 3D imaging of live cells with doubled resolution using structured illumination,” Proc. Natl. Acad. Sci. U.S.A. **109**(14), 5311–5315 (2012). [CrossRef] [PubMed]

**15. **D. Li, L. Shao, B.-C. Chen, X. Zhang, M. Zhang, B. Moses, D. E. Milkie, J. R. Beach, J. A. Hammer 3rd, M. Pasham, T. Kirchhausen, M. A. Baird, M. W. Davidson, P. Xu, and E. Betzig, “Extended-resolution structured illumination imaging of endocytic and cytoskeletal dynamics,” Science **349**(6251), aab3500 (2015). [CrossRef] [PubMed]

**16. **J. G. Flannery, S. Zolotukhin, M. I. Vaquero, M. M. LaVail, N. Muzyczka, and W. W. Hauswirth, “Efficient photoreceptor-targeted gene expression in vivo by recombinant adeno-associated virus,” Proc. Natl. Acad. Sci. U.S.A. **94**(13), 6916–6921 (1997). [CrossRef] [PubMed]

**17. **O. Seksek, J. Biwersi, and A. S. Verkman, “Translational diffusion of macromolecule-sized solutes in cytoplasm and nucleus,” J. Cell Biol. **138**(1), 131–142 (1997). [CrossRef] [PubMed]

**18. **M. Okuda, K. Li, M. R. Beard, L. A. Showalter, F. Scholle, S. M. Lemon, and S. A. Weinman, “Mitochondrial injury, oxidative stress, and antioxidant gene expression are induced by hepatitis C virus core protein,” Gastroenterology **122**(2), 366–375 (2002). [CrossRef] [PubMed]

**19. **F. E. Robles and A. Wax, “Separating the scattering and absorption coefficients using the real and imaginary parts of the refractive index with low-coherence interferometry,” Opt. Lett. **35**(17), 2843–2845 (2010). [CrossRef] [PubMed]

**20. **R. Wang, Z. Wang, L. Millet, M. U. Gillette, A. J. Levine, and G. Popescu, “Dispersion-relation phase spectroscopy of intracellular transport,” Opt. Express **19**(21), 20571–20579 (2011). [CrossRef] [PubMed]

**21. **W. J. Eldridge, A. Sheinfeld, M. T. Rinehart, and A. Wax, “Imaging deformation of adherent cells due to shear stress using quantitative phase imaging,” Opt. Lett. **41**(2), 352–355 (2016). [CrossRef] [PubMed]

**22. **Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express **17**(1), 266–277 (2009). [CrossRef] [PubMed]

**23. **G. Popescu, Y. Park, N. Lue, C. Best-Popescu, L. Deflores, R. R. Dasari, M. S. Feld, and K. Badizadegan, “Optical imaging of cell mass and growth dynamics,” Am. J. Physiol. Cell Physiol. **295**(2), C538–C544 (2008). [CrossRef] [PubMed]

**24. **F. E. Robles, C. Wilson, G. Grant, and A. Wax, “Molecular imaging true-colour spectroscopic optical coherence tomography,” Nat. Photonics **5**(12), 744–747 (2011). [CrossRef] [PubMed]

**25. **V. Backman, M. B. Wallace, L. T. Perelman, J. T. Arendt, R. Gurjar, M. G. Müller, Q. Zhang, G. Zonios, E. Kline, J. A. McGilligan, S. Shapshay, T. Valdez, K. Badizadegan, J. M. Crawford, M. Fitzmaurice, S. Kabani, H. S. Levin, M. Seiler, R. R. Dasari, I. Itzkan, J. Van Dam, and M. S. Feld, “Detection of preinvasive cancer cells,” Nature **406**(6791), 35–36 (2000). [CrossRef] [PubMed]

**26. **Y. Park, G. Popescu, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Diffraction phase and fluorescence microscopy,” Opt. Express **14**(18), 8263–8268 (2006). [CrossRef] [PubMed]

**27. **S. Chowdhury, W. J. Eldridge, A. Wax, and J. A. Izatt, “Spatial frequency-domain multiplexed microscopy for simultaneous, single-camera, one-shot, fluorescent, and quantitative-phase imaging,” Opt. Lett. **40**(21), 4839–4842 (2015). [CrossRef] [PubMed]

**28. **M. Habaza, B. Gilboa, Y. Roichman, and N. T. Shaked, “Tomographic phase microscopy with 180° rotation of live cells in suspension by holographic optical tweezers,” Opt. Lett. **40**(8), 1881–1884 (2015). [CrossRef] [PubMed]

**29. **S. Chowdhury and J. Izatt, “Structured illumination diffraction phase microscopy for broadband, subdiffraction resolution, quantitative phase imaging,” Opt. Lett. **39**(4), 1015–1018 (2014). [CrossRef] [PubMed]

**30. **P. Gao, G. Pedrini, and W. Osten, “Structured illumination for resolution enhancement and autofocusing in digital holographic microscopy,” Opt. Lett. **38**(8), 1328–1330 (2013). [CrossRef] [PubMed]

**31. **J. Zheng, P. Gao, B. Yao, T. Ye, M. Lei, J. Min, D. Dan, Y. Yang, and S. Yan, “Digital holographic microscopy with phase-shift-free structured illumination,” Photonics Research **2**(3), 87–91 (2014). [CrossRef]

**32. **S. Chowdhury and J. Izatt, “Structured illumination quantitative phase microscopy for enhanced resolution amplitude and phase imaging,” Biomed. Opt. Express **4**(10), 1795–1805 (2013). [CrossRef] [PubMed]

**33. **V. Lauer, “New approach to optical diffraction tomography yielding a vector equation of diffraction tomography and a novel tomographic microscope,” J. Microsc. **205**(2), 165–176 (2002). [CrossRef] [PubMed]

**34. **M. G. Gustafsson, L. Shao, P. M. Carlton, C. J. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. **94**(12), 4957–4970 (2008). [CrossRef] [PubMed]

**35. **S. S. Kou and C. J. Sheppard, “Imaging in digital holographic microscopy,” Opt. Express **15**(21), 13640–13648 (2007). [CrossRef] [PubMed]

**36. **M. Debailleul, V. Georges, B. Simon, R. Morin, and O. Haeberlé, “High-resolution three-dimensional tomographic diffractive microscopy of transparent inorganic and biological samples,” Opt. Lett. **34**(1), 79–81 (2009). [CrossRef] [PubMed]

**37. **M. G. Gustafsson, “Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution,” Proc. Natl. Acad. Sci. U.S.A. **102**(37), 13081–13086 (2005). [CrossRef] [PubMed]

**38. **M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. **198**(2), 82–87 (2000). [CrossRef] [PubMed]

**39. **S. Chowdhury, A.-H. Dhalla, and J. Izatt, “Structured oblique illumination microscopy for enhanced resolution imaging of non-fluorescent, coherently scattering samples,” Biomed. Opt. Express **3**(8), 1841–1854 (2012). [CrossRef] [PubMed]

**40. **R. Fiolka, K. Wicker, R. Heintzmann, and A. Stemmer, “Simplified approach to diffraction tomography in optical microscopy,” Opt. Express **17**(15), 12407–12417 (2009). [CrossRef] [PubMed]

**41. **R. Horstmeyer, R. Heintzmann, G. Popescu, L. Waller, and C. Yang, “Standardizing the resolution claims for coherent microscopy,” Nat. Photonics **10**(2), 68–71 (2016). [CrossRef]

**42. **S. Van Aert, D. Van Dyck, and A. J. den Dekker, “Resolution of coherent and incoherent imaging systems reconsidered - Classical criteria and a statistical alternative,” Opt. Express **14**(9), 3830–3839 (2006). [CrossRef] [PubMed]

**43. **V. Nayyar, “Rayleigh two-point resolution by a semi-transparent and φ-phase annular optical system operating in partially coherent light,” Opt. Commun. **13**(3), 254–258 (1975). [CrossRef]

**44. **R. Barakat, “Application of apodization to increase two-point resolution by the Sparrow criterion. I. Coherent illumination,” JOSA **52**(3), 276–283 (1962). [CrossRef]

**45. **R. Heintzmann and G. Ficz, “Breaking the resolution limit in light microscopy,” Brief. Funct. Genomics Proteomics **5**(4), 289–301 (2006). [CrossRef] [PubMed]

**46. **B. Bhaduri, C. Edwards, H. Pham, R. Zhou, T. H. Nguyen, L. L. Goddard, and G. Popescu, “Diffraction phase microscopy: principles and applications in materials and life sciences,” Adv. Opt. Photonics **6**(1), 57–119 (2014). [CrossRef]

**47. **T. Slabý, P. Kolman, Z. Dostál, M. Antoš, M. Lošťák, and R. Chmelík, “Off-axis setup taking full advantage of incoherent illumination in coherence-controlled holographic microscope,” Opt. Express **21**(12), 14747–14762 (2013). [CrossRef] [PubMed]

**48. **P. Marquet, B. Rappaz, P. J. Magistretti, E. Cuche, Y. Emery, T. Colomb, and C. Depeursinge, “Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy,” Opt. Lett. **30**(5), 468–470 (2005). [CrossRef] [PubMed]

**49. **N. T. Shaked, L. L. Satterwhite, N. Bursac, and A. Wax, “Whole-cell-analysis of live cardiomyocytes using wide-field interferometric phase microscopy,” Biomed. Opt. Express **1**(2), 706–719 (2010). [CrossRef] [PubMed]

**50. **P. Girshovitz and N. T. Shaked, “Fast phase processing in off-axis holography using multiplexing with complex encoding and live-cell fluctuation map calculation in real-time,” Opt. Express **23**(7), 8773–8787 (2015). [CrossRef] [PubMed]

**51. **B. E. A. Saleh and M. C. Teich, *Fundamentals of Photonics* (Wiley, 1991), Vol. 22.

**52. **P. Langehanenberg, B. Kemper, D. Dirksen, and G. von Bally, “Autofocusing in digital holographic phase contrast microscopy on pure phase objects for live cell imaging,” Appl. Opt. **47**(19), D176–D182 (2008). [CrossRef] [PubMed]

**53. **J. Lim, K. Lee, K. H. Jin, S. Shin, S. Lee, Y. Park, and J. C. Ye, “Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography,” Opt. Express **23**(13), 16933–16948 (2015). [CrossRef] [PubMed]

**54. **L. Shao, P. Kner, E. H. Rego, and M. G. Gustafsson, “Super-resolution 3D microscopy of live whole cells using structured illumination,” Nat. Methods **8**(12), 1044–1046 (2011). [CrossRef] [PubMed]

**55. **Y. L. Jin, J. Y. Chen, L. Xu, and P. N. Wang, “Refractive index measurement for biomaterial samples by total internal reflection,” Phys. Med. Biol. **51**(20), N371–N379 (2006). [CrossRef] [PubMed]

**56. **I. Akopova, S. Tatur, M. Grygorczyk, R. Luchowski, I. Gryczynski, Z. Gryczynski, J. Borejdo, and R. Grygorczyk, “Imaging exocytosis of ATP-containing vesicles with TIRF microscopy in lung epithelial A549 cells,” Purinergic Signal. **8**(1), 59–70 (2012). [CrossRef] [PubMed]

**57. **P. von Olshausen and A. Rohrbach, “Coherent total internal reflection dark-field microscopy: label-free imaging beyond the diffraction limit,” Opt. Lett. **38**(20), 4066–4069 (2013). [CrossRef] [PubMed]

**58. **S. Hu, J. Chen, B. Fabry, Y. Numaguchi, A. Gouldstone, D. E. Ingber, J. J. Fredberg, J. P. Butler, and N. Wang, “Intracellular stress tomography reveals stress focusing and structural anisotropy in cytoskeleton of living cells,” Am. J. Physiol. Cell Physiol. **285**(5), C1082–C1090 (2003). [CrossRef] [PubMed]

**59. **N. Wang, K. Naruse, D. Stamenović, J. J. Fredberg, S. M. Mijailovich, I. M. Tolić-Nørrelykke, T. Polte, R. Mannix, and D. E. Ingber, “Mechanical behavior in living cells consistent with the tensegrity model,” Proc. Natl. Acad. Sci. U.S.A. **98**(14), 7765–7770 (2001). [CrossRef] [PubMed]

**60. **P. A. Janmey, “The cytoskeleton and cell signaling: component localization and mechanical coupling,” Physiol. Rev. **78**(3), 763–781 (1998). [PubMed]