Optica Publishing Group

Multiplexed wavefront sensing with a thin diffuser

Open Access

Abstract

In astronomy or biological imaging, refractive index inhomogeneities of, e.g., atmosphere or tissues, induce optical aberrations that degrade the desired information hidden behind the medium. A standard approach consists of measuring these aberrations with a wavefront sensor (e.g., Shack–Hartmann) located in the pupil plane, and compensating for them either digitally or by adaptive optics with a wavefront shaper. However, in its usual implementation this strategy can only extract aberrations within a single isoplanatic patch, i.e., a region where the aberrations remain correlated. This limitation severely reduces the effective field-of-view in which the correction can be performed. Here, we propose a wavefront sensing method capable of measuring, in a single shot, various pupil aberrations corresponding to multiple isoplanatic patches. The method, based on a thin diffuser (i.e., a random phase mask), exploits the dissimilarity between different speckle regions to multiplex several wavefronts incoming from various incidence angles. We present proof-of-concept experiments carried out in widefield fluorescence microscopy. A digital deconvolution procedure in each isoplanatic patch yields accurate aberration correction within an extended field-of-view. This approach is of interest for adaptive optics applications as well as diffractive optical tomography.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION

Image degradation caused by aberrations can be a critical issue when imaging objects of interest located behind a complex medium, e.g., a turbulent atmosphere or biological tissues, in which the distribution of the refractive index is both heterogeneous and dynamic. To correct aberrations and increase the image contrast and resolution, the most common technique is to use pupil adaptive optics (AO), in which the aberration is first characterized in the pupil plane of the optical system, and then corrected either digitally or by a wavefront shaper [1–4].

Aberration characterization methods can be classified into either direct or indirect wavefront sensing techniques. Indirect techniques estimate aberrations by optimizing, over time, a feedback image or signal with certain metrics (e.g., image contrast, sharpness, or intensity [5–7], nonlinear signals [8–10], etc.). These techniques, also referred to as “sensorless,” do not require any dedicated optical device to estimate aberrations, thus potentially reducing cost and hardware complexity. However, the convergence of the optimization process can be unstable and/or may lead to local minima. More importantly, these techniques require multiple measurements, which limits their application to static, or slowly varying, aberrating media, thus excluding most astronomical and ophthalmologic applications.

The other class of approaches, direct wavefront sensing, relies on a deterministic measurement of aberrations, typically using point sources called guide-stars (GSs). The most common wavefront sensor (WFS) is indisputably the Shack–Hartmann [11]. This WFS generally samples the pupil plane with a microlens array, where each microlens directly yields the local phase gradient of the wavefront in the corresponding subset of camera pixels, which defines a macropixel. A significant advantage of this method is that it only requires a single image acquisition to measure a wavefront, thus allowing higher temporal resolution. Direct wavefront sensing is preferred, and often crucial, whenever dynamic aberrations are involved, like those caused by the atmospheric turbulence in astronomy or the lachrymal film in ophthalmology [4,12].

However, aberrations only remain correlated within a limited region called the isoplanatic patch, whose typical size depends on the position and properties of the aberrating medium. A pupil aberration measurement typically assumes a 2D projection of the 3D refractive index of this medium along the direction of the GS. Wavefront measurements obtained with a single GS can therefore only be used to correct the aberration within a limited field-of-view (FoV) in the case of volumetric aberrating media. Aberration correction is then only valid in the angular vicinity of the GS, but decorrelates away from it. Extensive efforts have been made to address this critical issue using direct or indirect wavefront sensing methods. Conjugate AO can increase the isoplanatic patch size by conjugating the WFS (and the corrector) to the main aberrating layer [13–16]. This approach is therefore highly efficient to widen the FoV in situations where aberrations originate from a single, well-defined planar layer. However, it requires a much larger number of sensing/shaping modes and it is not optimal in the most common scenarios where aberrations originate from a non-planar interface, from several layers, or from a volume. Ideally, a tomographic characterization of the aberrating medium is needed to retrieve high resolution over a large FoV or volume. This concept is at the heart of multi-conjugate AO, which uses several WFSs (${\rm typ.}\; {\gt} {6}$) to probe aberrations along several GS directions in order to estimate volumetric aberrations and correct them in several planes [17,18]. While increasingly used in astronomy, this approach is too complex, cumbersome, and expensive to be routinely applied in microscopy and ophthalmology.

Some approaches operating in the pupil plane rather than in a conjugated plane have also emerged. An interesting method sequentially applies various masks in the pupil plane to retrieve the local phase gradient in multiple isoplanatic patches [19–21]. Alternatively, the beam or GS can be scanned to sequentially measure pupil aberrations in different locations [22]. Both approaches however require a time-consuming scan, which degrades temporal resolution. Shack–Hartmann WFSs could in principle provide parallelized multi-GS, multi-angle measurements in a single shot, but their angular dynamic range is limited by aliasing problems and their ability to measure multiple, distributed GSs is strongly constrained (see Supplement 1, S1).

While some of these emerging methods increasingly allow aberration characterization along multiple angles, they can hardly address dynamic modifications. Conversely, fast methods are essentially limited to a single GS. In microscopy, the speed constraint is partially relieved because aberrations in tissues typically evolve over time scales of a few minutes. However, the development of fast, multiplexed wavefront sensing methods remains essential when imaging moving organisms, or deep inside tissues, where wavefronts are also affected by rapidly evolving, multiple scattering [23]. It is also crucial for the volumetric imaging (e.g., light sheet, light field, multiplane imaging, etc.) of large samples since aberrations need to be characterized for a high number of isoplanatic volumes, which is time-consuming with indirect or sequential methods [24–27]. In most fields requiring aberration correction, a simultaneous parallel measurement of multiple pupil aberrations would provide a broad FoV. Such approaches could also put single-shot tomographic microscopy within reach, while drastically simplifying the instrumentation.

In this context, recent advances in light manipulation through scattering media appear decisive. The unique signature associated with random speckle patterns makes them particularly suited to the unambiguous retrieval of multiplexed information. Notably, the use of diffusers as encoding masks has recently shown its potential in areas including compressive ghost imaging [28], single-exposure 2D or 3D imaging [29–31], 3D super-resolution microscopy [32], and hyperspectral imaging (using either diffusers [33–35] or coded apertures [36]). Furthermore, the analysis of speckle pattern distortions has recently emerged as an accurate way to perform high-resolution wavefront sensing both in the X-ray [37,38] and visible light domains [39–41].

This paper addresses the single-shot multiplexed measurement of multiple pupil aberrations, i.e., the parallel measurement of several wavefronts coming from different propagation directions. To solve the assignment ambiguity problem inherent to multiple wavefronts and periodic mask patterns (e.g., Shack–Hartmann), we propose to use a thin diffuser (i.e., a random phase mask) as a Hartmann mask. The wavefronts related to several GSs are thus encoded into an incoherent sum of distorted speckle patterns. By exploiting both the dissimilarity between different speckle regions and the large memory effect of a thin diffuser [42,43], we show that each speckle pattern can be unambiguously ascribed to its GS, allowing the associated phase gradient maps to be extracted from speckle pattern distortions. We experimentally validate this concept in microscopy with the multiplexed acquisition of wavefronts originating from several fluorescent GSs. This demonstrates the efficacy of this single-shot method for the characterization of pupil aberrations in multiple isoplanatic patches. Finally, we show in a proof-of-concept experiment that this multiplexed WFS can feed a deconvolution procedure, to digitally correct aberrations in several isoplanatic patches and obtain high-resolution images in a large FoV.

2. PRINCIPLE

Figure 1(a) illustrates the general concept of multiplexed wavefront sensing in the context of fluorescence microscopy. Fluorescent GSs are imaged by a microscope objective, through an aberrating medium. Because the GSs are located in different isoplanatic patches, the medium induces spatially varying pupil aberrations. In the pupil plane, each GS thus generates an aberrated wavefront carrying a global tilt offset set by the lateral position of the GS. A thin-diffuser-based WFS, conjugated to the pupil plane, then allows these aberrated wavefronts, arriving with different propagation directions, to be measured from a single image acquisition.


Fig. 1. Principle of angularly multiplexed wavefront sensing. (a) Schematic description of the concept, here applied to fluorescence microscopy. $N = {3}$ fluorescent guide stars are imaged through an aberrating layer by a microscope objective. This leads to $N = {3}$ tilted and aberrated wavefronts in its pupil plane. A wavefront sensor (WFS) based on a thin diffuser enables (g) the multiplexed acquisition of these wavefronts. (b) Basic working principle of the thin diffuser-based WFS: a plane wave illuminating a 1° holographic diffuser generates a reference speckle pattern $R$ on a camera sensor located at a distance $d$. For a distorted wavefront, measurement of the speckle grain displacements ${\boldsymbol u}$ gives access to the phase gradient (${\nabla _ \bot}\varphi \simeq {k_0}{\boldsymbol u}/d$). (c), (d) The reference speckle pattern $R$ and the multiplexed speckle pattern $M$ measured by the gray-level camera contain information on the three wavefronts. $M$ is the incoherent sum of three distorted speckle patterns related to each wavefront. The pattern measured within a chosen region, the macropixel ${M_A}$ [gray square in (d)], can be described as the sum of three patterns of the reference [green, blue, and red boxes in (c)] each shifted by ${{\boldsymbol u}_j}$, with $j = {1},{2},{3}$, the index of each wavefront. (e) Owing to the orthogonality between different speckle regions, the cross-correlation of ${M_A}({\boldsymbol r})$ with the full reference speckle pattern $R$ reveals $N = {3}$ peaks. The peak position gives access to the local speckle grain displacements related to each wavefront, and so, to (f) the phase gradient maps of the wavefronts. (g) A 2D integration step finally allows independent wavefront reconstructions.


Before detailing the multiplexed wavefront sensing concept, let us first recall the basics of diffuser-based WFSs in the case of a single wavefront acquisition. As initially proposed and validated in the X-ray range [37,38], thin diffusers can indeed be effectively used as Hartmann masks to perform broadband wavefront sensing with a high accuracy and resolution, a concept that was later extended to the visible range [41,44,45]. The WFS is composed of a thin diffuser used as a Hartmann-type mask located at a short distance $d$ from a camera sensor [see Fig. 1(b)]. A “reference” speckle pattern $R({\boldsymbol r})$ is first acquired by illuminating the WFS with a plane wave. Due to the large angular memory effect of thin diffusers [42,43], applying a tilt to the wavefront does not modify or decorrelate the speckle pattern but simply translates it. For a distorted wavefront, speckle grains will be locally shifted on the detector according to the local wavefront gradient, as shown in Fig. 1(b): a speckle grain at position ${\boldsymbol r}$ is locally displaced by a vector ${\boldsymbol u}({\boldsymbol r})$, and the resulting speckle pattern $S({\boldsymbol r})$ is distorted as compared to the reference $R({\boldsymbol r})$ according to [39,45]

$$S({\boldsymbol r} ) = I({\boldsymbol r} )R[{{\boldsymbol r} - {\boldsymbol u}({\boldsymbol r} )} ],$$
where $I({\boldsymbol r})$ is the normalized intensity map. The phase gradient map ${\nabla _ \bot}\varphi$ can then be estimated by measuring the overall speckle grain displacement map ${\boldsymbol u}$ [41]:
$${\nabla _ \bot}\varphi \simeq {k_0}{\boldsymbol u}/d,$$
with ${k_0}$ the wavenumber.
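The relation of Eqs. (1) and (2) can be sketched numerically. In this illustrative example (a synthetic speckle pattern stands in for $R$, and the values of $d$, the pixel pitch, and the wavelength are assumptions, not the paper's exact parameters), a rigid speckle translation is recovered from the cross-correlation peak and converted into a phase gradient:

```python
import numpy as np

def make_speckle(shape, seed=0, cutoff=0.15):
    """Synthetic speckle-like intensity: low-pass filtered complex Gaussian
    noise (an illustrative stand-in for the reference pattern R)."""
    rng = np.random.default_rng(seed)
    field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    keep = (fy**2 + fx**2) < cutoff**2      # grain size ~ 1/cutoff pixels
    return np.abs(np.fft.ifft2(np.fft.fft2(field) * keep))**2

def estimate_shift(R, S):
    """Speckle displacement u (in pixels) from the cross-correlation peak."""
    FR = np.fft.fft2(R - R.mean())
    FS = np.fft.fft2(S - S.mean())
    c = np.fft.ifft2(FS * np.conj(FR)).real
    p = np.unravel_index(np.argmax(c), c.shape)
    # unwrap circular indices into signed shifts
    return tuple(q if q < n // 2 else q - n for q, n in zip(p, c.shape))

# A tilted wavefront translates the speckle (memory effect): S(r) = R(r - u)
R = make_speckle((256, 256))
u_true = (3, -5)                            # displacement in pixels
S = np.roll(R, u_true, axis=(0, 1))

u = estimate_shift(R, S)
# Eq. (2): grad(phi) ~ k0 * u / d  (assumed geometry and wavelength)
d, pix, k0 = 3e-3, 6.5e-6, 2 * np.pi / 560e-9
grad_phi = tuple(k0 * ui * pix / d for ui in u)
```

The same matched-filter logic, applied per macropixel, yields the local displacement map ${\boldsymbol u}({\boldsymbol r})$ rather than a single global shift.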

Equation (2) shows that thin-diffuser-based WFS and microlens-array-based Shack–Hartmann WFS have similar behaviors, with speckle grains playing a role similar to that of focal spots in each subset of camera pixels, or macropixel (typ. ${7} \times {7}$ camera pixels [41]), and the refractive diffuser surface behaving as random, aberrating microlenses [29]. Interestingly, the use of a thin diffuser fundamentally provides a major advantage in the context of wavefront multiplexing. While the intensity pattern generated by microlens arrays in a Shack–Hartmann is periodic, and thus prone to ambiguity between identical, unrecognizable spots, a diffuser generates a superposition of unique patterns that can be identified unambiguously. The ambiguity problem strongly limits the phase dynamic range and the number of wavefronts that can be multiplexed with a Shack–Hartmann (see Supplement 1, S1). In contrast, random speckle grains provide a unique signature. Two uncorrelated, random speckle patterns are indeed statistically orthogonal with respect to the zero-mean cross-correlation product. This inherent property of speckles alleviates the ambiguity issue and makes it possible to retrieve multiple wavefronts locally encoded in the form of multiple, orthogonal speckle patterns [32].
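This statistical orthogonality is easy to check numerically. In the sketch below (synthetic speckle patterns, not experimental data), the normalized zero-mean cross-correlation of two independent patterns stays near zero at every lag, while the autocorrelation peaks at unity:

```python
import numpy as np

def make_speckle(shape, seed, cutoff=0.15):
    """Synthetic speckle: low-pass filtered complex Gaussian noise."""
    rng = np.random.default_rng(seed)
    field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    keep = (fy**2 + fx**2) < cutoff**2
    return np.abs(np.fft.ifft2(np.fft.fft2(field) * keep))**2

def peak_xcorr(a, b):
    """Maximum of the normalized zero-mean circular cross-correlation."""
    a, b = a - a.mean(), b - b.mean()
    c = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return c.max() / np.sqrt((a**2).sum() * (b**2).sum())

R1 = make_speckle((256, 256), seed=1)
R2 = make_speckle((256, 256), seed=2)

same = peak_xcorr(R1, R1)   # autocorrelation: peak of exactly 1 at zero lag
other = peak_xcorr(R1, R2)  # independent speckles: near-zero at all lags
```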

To illustrate this concept, Figs. 1(c) and 1(d) respectively show a reference speckle pattern $R$ and a multiplexed speckle pattern $M$ encoding three wavefronts. Note that although the three speckle patterns associated with each wavefront are represented using three colors for clarity, the concept clearly applies to a single wavelength, monochrome detector, and to more than $N = {3}$ wavefronts. Since the $N$ wavefronts originate from different incoherent point sources, the pattern $M$ can be described as the sum of $N$ reference speckle patterns $R$, each shifted and deformed by $N$ different non-rigid transformations ${{\boldsymbol u}_j}({\boldsymbol r})$:

$$M({\boldsymbol r} ) = \mathop \sum \limits_{j = 1}^N {S_j}({\boldsymbol r} ) = \mathop \sum \limits_{j = 1}^N {I_j}({\boldsymbol r} )R[{{\boldsymbol r} - {{\boldsymbol u}_j}({\boldsymbol r} )} ],$$
where $j$ stands for the index of the wavefront among the $N$ multiplexed ones ($N = {3}$ in Fig. 1). This superposition is illustrated for a chosen subset of $M({\boldsymbol r})$, the macropixel ${M_A}({\boldsymbol r})$ (here, e.g., ${12} \times {12}$ pixels) shown in Figs. 1(c) and 1(d): the intensity in the macropixel ${M_A}({\boldsymbol r})$ is the incoherent sum of three different subregions of the reference speckle $R$ having undergone different lateral shifts, ${{\boldsymbol u}_1}$, ${{\boldsymbol u}_2}$, and ${{\boldsymbol u}_3}$ [see zoomed macropixel in Figs. 1(c) and 1(d)]. Since these subregions correspond to different regions of the speckle pattern created by different parts of the diffuser Hartmann mask, they are statistically orthogonal. The cross-correlation ${M_A}({\boldsymbol r}) \star R$ therefore makes it possible to extract, without ambiguity, the speckle grain displacement vector maps ${{\boldsymbol u}_j}({\boldsymbol r})$ associated with each wavefront through an estimate (centroid) of the position of the peaks, and the normalized intensities ${I_j}$ from the maximum value of each peak. This process is illustrated in Fig. 1(e), where three correlation peaks yielding ${{\boldsymbol u}_1}$, ${{\boldsymbol u}_2}$, and ${{\boldsymbol u}_3}$ are clearly visible. The local phase gradient at the pupil coordinate ${\boldsymbol r}$ can then be estimated using Eq. (2). Using a digital image correlation (DIC) algorithm [46], the same process can be reproduced in all macropixels ${M_A}({{\boldsymbol {r^\prime}}})$ of the full multiplexed speckle pattern $M$ in order to extract the phase gradient vector maps associated with each wavefront [see Fig. 1(f)]. Finally, a 2D integration of these phase gradient maps can be used to retrieve the $N$ wavefronts [see Fig. 1(g)].
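A toy version of this macropixel analysis can be sketched as follows. Assuming rigid (spatially constant) shifts ${\boldsymbol u}_j$ and a synthetic reference speckle (an idealization of the non-rigid transformations in Eq. (3)), cross-correlating a single macropixel of $M$ with the full reference reveals one peak per multiplexed wavefront:

```python
import numpy as np

def make_speckle(shape, seed=0, cutoff=0.15):
    """Synthetic speckle: low-pass filtered complex Gaussian noise."""
    rng = np.random.default_rng(seed)
    field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    keep = (fy**2 + fx**2) < cutoff**2
    return np.abs(np.fft.ifft2(np.fft.fft2(field) * keep))**2

R = make_speckle((256, 256))
shifts = [(4, 7), (-9, 2), (0, -12)]        # local displacements u_j, N = 3
M = sum(np.roll(R, s, axis=(0, 1)) for s in shifts)  # Eq. (3), rigid case

# Cross-correlate ONE macropixel of M with the FULL reference R
y0, x0, w = 100, 100, 32                    # macropixel position and size
win = M[y0:y0 + w, x0:x0 + w]
MA = np.zeros_like(M)
MA[y0:y0 + w, x0:x0 + w] = win - win.mean()
C = np.fft.ifft2(np.fft.fft2(MA) * np.conj(np.fft.fft2(R - R.mean()))).real

# The N strongest, well-separated peaks recover the N local shifts
found = []
for _ in range(len(shifts)):
    p = np.unravel_index(np.argmax(C), C.shape)
    found.append(tuple(q if q < n // 2 else q - n for q, n in zip(p, C.shape)))
    ys = np.arange(p[0] - 6, p[0] + 7) % C.shape[0]   # suppress this peak's
    xs = np.arange(p[1] - 6, p[1] + 7) % C.shape[1]   # neighborhood
    C[np.ix_(ys, xs)] = -np.inf
```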

It should be noted that this method implicitly requires a sparsity assumption about GS density, which is clearly visible in Fig. 1(e): the position of the correlation peaks, related to the maximum phase gradients of wavefronts (delimited here by dashed circles), can only be estimated if the correlation peaks do not overlap (see Supplement 1, S2). However, this constraint is common to all WFSs, and is not critical to the targeted application since wavefronts coming from GSs located in different isoplanatic patches are generally well separated angularly. Furthermore, as discussed in Section 3.B, the validity of this assumption can be predicted by calculating the “global” cross-correlation $M \star R$.

This approach can in principle be applied to any number of GSs and wavefronts. However, the contrast of the multiplexed speckle pattern $M({\boldsymbol r})$ resulting from the superposition of $N$ patterns ($C \propto 1/\sqrt N$) decreases for large $N$ values. This reduces the signal-to-noise ratio of the cross-correlation map ${M_A}({\boldsymbol r}) \star R$, the accuracy of the determination of speckle grain displacement ${{\boldsymbol u}_j}$, and thus the accuracy of wavefront reconstruction. A first solution to maintain a high reconstruction fidelity while multiplexing a large number $N$ of wavefronts would be to use larger macropixels ${M_A}$ (e.g., ${45} \times {45}$ pixels, Supplement 1, S3). However, this method drastically degrades the spatial resolution of the reconstructed wavefront. To alleviate resolution degradation, we propose an iterative DIC process to converge towards each speckle pattern ${S_k}$ [see Eq. (4)]: after a first DIC step (which provides a first estimation of the intensities ${I_j}$ and displacement maps ${{\boldsymbol u}_j}$), the distorted speckle pattern ${S_k}$ can be isolated from the multiplexed speckle pattern $M$ by subtracting the contributions of all other GSs $j \ne k$:

$${S_k}({\boldsymbol r} ) = M({\boldsymbol r} ) - \mathop \sum \limits_{j \ne k}^N {I_j}({\boldsymbol r} )R[{{\boldsymbol r} - {{\boldsymbol u}_j}({\boldsymbol r} )} ].$$

This process restores the contrast of speckle pattern ${S_k}$ as compared to $M$ and thus makes it possible, through a second DIC step, to recover a more accurate estimate of the intensities ${I_k}$ and displacement maps ${{\boldsymbol u}_k}$. Notably, since the subtracted patterns are prone to uncertainties, the contrast of the pattern ${S_k}$ is restored but remains noisy before the algorithm converges. For this reason, this process can be repeated $T$ times to increase the reconstruction accuracy and can be seen as a gradient descent algorithm that iteratively minimizes the quantity ${| {M({\boldsymbol r}) - \sum _{j = 1}^N {I_j}({\boldsymbol r})R[{{\boldsymbol r} - {{\boldsymbol u}_j}({\boldsymbol r})}]} |^2}$. We provide the scientific community with access to the source codes (see Code 1, Ref. [47]). Furthermore, a detailed description of the corresponding algorithm, as well as numerical simulation results demonstrating its relevance for high-resolution wavefront sensing, are provided in Supplement 1, S3. Briefly, these simulations show that the number $T$ of iterations that are necessary to retrieve $N$ wavefronts increases with $N$ (typ. ${T} = {2}$ for $N = {5}$ wavefronts). Supplement 1, S3 also shows that more than $N = {16}$ wavefronts can be retrieved using this iterative DIC approach, while reducing the RMS error by an order of magnitude compared to direct DIC (see Fig. S6). Importantly, it also shows that high resolution is preserved by this method since a small phase pixel size (typ. ${7} \times {7}$ pixels) can be reached. When using a 4.2M pixel camera, this provides typ. 16 multiplexed wavefronts, each with 85 K phase and intensity pixels. Finally, we investigate the influence of background noise on the reconstruction in Supplement 1, S4.
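The subtraction step of Eq. (4) can be sketched as follows. In this idealized rigid-shift case (synthetic speckle, exact first-pass estimates of ${\boldsymbol u}_j$ with unit intensities, unlike the noisy estimates of a real first DIC pass), removing the other contributions restores the single-pattern contrast from its multiplexed value $C/\sqrt N$:

```python
import numpy as np

def make_speckle(shape, seed=0, cutoff=0.15):
    """Synthetic speckle: low-pass filtered complex Gaussian noise."""
    rng = np.random.default_rng(seed)
    field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    keep = (fy**2 + fx**2) < cutoff**2
    return np.abs(np.fft.ifft2(np.fft.fft2(field) * keep))**2

def contrast(img):
    """Speckle contrast C = std / mean."""
    return img.std() / img.mean()

R = make_speckle((256, 256))
shifts = [(4, 7), (-9, 2), (0, -12)]
M = sum(np.roll(R, s, axis=(0, 1)) for s in shifts)   # multiplexed pattern

# Eq. (4): subtract the contributions estimated for GSs j != k
# (here the exact rigid shifts, standing in for a first DIC estimate)
k = 0
S_k = M - sum(np.roll(R, shifts[j], axis=(0, 1))
              for j in range(len(shifts)) if j != k)
```

In the paper's algorithm this subtraction uses the estimated $I_j$ and ${\boldsymbol u}_j$ maps and is iterated $T$ times; here the exact values make a single pass sufficient.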

3. RESULTS

A. Description of the Setup

To validate this concept experimentally, we built a widefield fluorescence microscope based on a commercial inverted microscope (Olympus IX-71). 1 µm diameter fluorescent beads randomly distributed on a glass slide are used as a sample (Orange 540/560, Thermo Fisher). An aberrating layer (1° Holographic Diffusers, Edmund Optics) is positioned 150 µm away from the microbead sample to induce spatially varying pupil aberrations [see Fig. 1(a)]. In order to excite the fluorescence of multiple beads chosen within the FoV, a 532 nm laser beam is shaped using a phase-only spatial light modulator (SLM, X13138-01, Hamamatsu) conjugated to the back focal plane of the microscope objective (LUCPlanFLN, ${\rm NA} = {0.45}$, ${\times}{20}$, Olympus). The computer-generated hologram displayed on this SLM is calculated using a Gerchberg–Saxton algorithm [48] so as to illuminate the chosen beads and use them as GSs. The 532 nm excitation light is rejected by a dichroic mirror (NFD01-532, Semrock) and a notch filter ($\lambda = {533}\;{\pm}\;{2}\;{\rm nm}$, Thorlabs), so that only the emitted fluorescence is collected (see Supplement 1, S5).

The WFS is composed of a thin diffuser (1° scattering angle holographic diffuser, Edmund Optics) and an sCMOS camera (Zyla 5.5, Andor). The diffuser-camera distance is set to $d = {3}\;{\rm mm}$ (see Ref. [41]); however, placing the diffuser this close to the sensor was not mechanically possible, so a ${1} \times$ magnification relay lens [not shown in Fig. 1(a)] is used to image the diffuser at a distance $d$ from the camera. To properly measure wavefront distortions coming from various GSs, the multiplexed WFS is conjugated with the back pupil plane of the microscope objective. The reference speckle pattern is acquired in a first step using a simple collimated beam generated from a multimode fiber (core diameter 10 µm, Thorlabs) and a long focal length lens (${f^\prime} = {400}\;{\rm mm}$). This reference speckle pattern is shown in Fig. 2(a).


Fig. 2. Experimental validation: single-shot measurements of three multiplexed wavefronts. (a) Acquired reference speckle pattern $R$ (for a plane wave illumination) and multiplexed speckle pattern $M$. Inset: zoom showing a reduced contrast for the multiplexed case due to the incoherent summation of speckle patterns. (b) Autocorrelation of the reference speckle pattern $R$ and cross-correlation of the multiplexed speckle pattern $M$ with the reference $R$. The three peaks in the cross-correlation reveal the number of multiplexed wavefronts and their global tip/tilt ${\langle \alpha \rangle _j}$, which can be determined by measuring the peak shifts (${\langle{\boldsymbol u}\rangle_j} = {\langle \alpha \rangle _j}d$) from the center. (c) Local cross-correlation allows reconstruction of the three multiplexed wavefronts. Here, the tip/tilt and defocus of each wavefront have been subtracted. (d) Comparison with the “classical” sequential method and (e) Zernike decomposition (first 25 modes without piston, tip/tilt, and defocus) for the three wavefronts. The excellent agreement between both measurements validates the multiplexing method.


B. Multiplexed Wavefront Sensing Validation

To demonstrate the possibilities of the method, we first excited simultaneously $N = {3}$ GSs located in different isoplanatic patches, i.e., separated by more than 120 µm in the sample plane (see Supplement 1, S5 for the characterization of the sample isoplanatic patch size). On the WFS, their fluorescence yields a superposition of three speckle patterns, as shown in the bottom of Fig. 2(a). The cross-correlation $M \star R$ between the reference and multiplexed speckles is shown in Fig. 2(b), clearly revealing the number and location of the three excited GSs. Here, the position of each correlation peak indeed indicates the average propagation direction (or global tip/tilt: ${\langle \alpha \rangle _j} = {\langle{\boldsymbol u}\rangle_j}/d$) corresponding to each GS. The absence of overlap between peaks ensures that the sparsity hypothesis is valid, and that the wavefronts can be reconstructed independently. To this end, the iterative DIC algorithm ($T = {3}$ iterations) is used to recover the speckle grain displacement maps related to each wavefront. The wavefronts obtained after integration of the phase gradient maps are shown in Fig. 2(c).

These multiplexed measurements were validated against individual, sequential measurements with single GSs, following the procedure quantitatively validated in Ref. [41]. The multiplexed [Fig. 2(c)] and sequential [Fig. 2(d)] aberration measurements appear in excellent agreement. To quantify this comparison, we performed a Zernike decomposition on both acquisitions. In Fig. 2(e), we compare the first 25 lowest order modes. Note that tip/tilt ($Z_1^{- 1}$ and $Z_1^1$) and defocus ($Z_2^0$) are omitted here for clarity because they dominate other modes by more than one order of magnitude. As can be seen in Fig. 2(e), the agreement between sequential and multiplexed measurements is excellent (${\rm RMSE} \lt {0.03}\lambda$), showing that several wavefronts generated by multiple GSs undergoing different aberrations can indeed be measured simultaneously in the pupil. In Supplement 1, S6, we present another experiment where $N = {5}$ GSs are multiplexed. We also discuss the experimental gain brought by the iterative DIC approach compared to the direct approach (RMSE reduction by a factor of 1.6 to 10.8). Finally, we present in Supplement 1, S8 a performance assessment of the method on a biological sample (fixed cultured cells) that does not contain any bright, isolated guide stars.


Fig. 3. Aberration deconvolution using multiplexed WFS (proof-of-concept experiment). (a) Non-aberrated image of fluorescent beads used as ground truth. (b) Image acquired in the presence of an aberrating medium. The five insets represent the aberrations measured locally, from which the global defocus $Z_2^0$ has been subtracted to highlight spatially varying aberrations. (c) Aberration correction using a single wavefront measurement (from GS #2, indicated with a solid circle). The aberration is corrected properly within the isoplanatic patch (indicated by the dashed circle), while the correction fails in the other region. Here, the defocus has been set to account for axial misalignment between the WFS and the imaging path. (d) Aberration correction with five wavefronts acquired in a single shot. Each of the multiplexed wavefronts is used to correct a certain region within the corresponding isoplanatic patch and a proper image stitch enables correction of the aberrations over a larger FoV. A zoomed image is also provided for the four cases.


C. Correction beyond the Isoplanatic Patch

Having validated the ability to characterize multiple wavefronts simultaneously, we propose a proof-of-concept experiment showing that the measured wavefronts can be used to correct spatially varying pupil aberrations. For this purpose, a second sCMOS camera (Panda 4.2, PCO) conjugated to the sample plane by a 4-f system images the fluorescent sample (see Supplement 1, S5). Figures 3(a) and 3(b) show the images of the sample without and with an aberrating medium, respectively.

When imaging an incoherent object described by its brightness $f({\boldsymbol s})$, the image $g({\boldsymbol r})$ obtained in the presence of aberrations can be written as [49]

$$g({\boldsymbol r} ) = \int \kappa ({{\boldsymbol r} - {\boldsymbol s},{\boldsymbol s}} )f({\boldsymbol s} ){\rm d}{\boldsymbol s},$$
where ${\boldsymbol r}$ and ${\boldsymbol s}$ are spatial coordinates and $\kappa ({{\boldsymbol q},{\boldsymbol s}})$ describes the point-spread function (PSF) for a point source at a position ${\boldsymbol s}$ in the FoV. The image is therefore the incoherent sum of contributions affected by aberrations that vary spatially (or angularly).

In most approaches, however, the PSF is assumed to be stationary, i.e., spatially invariant. Under this approximation, $\kappa$ does not depend on the observation direction, and can therefore be simply written as $\kappa ({{\boldsymbol q},{\boldsymbol s}}) \approx \kappa ({\boldsymbol q})$ all across the field of view. The aberrated image described by Eq. (5) then becomes a simple convolution product:

$$g({\boldsymbol r} ) = \int \kappa ({{\boldsymbol r} - {\boldsymbol s}} )f({\boldsymbol s} ){\rm d}{\boldsymbol s} = ({\kappa \circledast f} )({\boldsymbol r} ).$$

If the aberrated wavefront $P({{\boldsymbol {q^\prime}}})$ is measured in the pupil plane using a given GS, the stationary PSF $\kappa ({\boldsymbol q})$ can be estimated using a Fourier transform: $\kappa ({\boldsymbol q}) = {| {F\{{P({{\boldsymbol {q^\prime}}})} \}} |^2}$, where ${| {F\{\cdot \}} |^2}$ is the square modulus of the Fourier transform. As shown by Eq. (6), a simple deconvolution of $g({\boldsymbol r})$ by $\kappa ({\boldsymbol q})$, using a Richardson–Lucy algorithm (MATLAB image processing Toolbox, 15 iterations), then yields a corrected image $f({\boldsymbol r})$. This is rigorously accurate in the direction of the GS used to measure $P({{\boldsymbol {q^\prime}}})$, and the correction remains acceptable only in its vicinity (within the isoplanatic patch). This is clearly visible in Fig. 3(c), where this deconvolution process was applied using a distorted wavefront measured with a single GS (GS#2). Here, the main aberration, i.e., the global defocus, is well compensated for in the entire FoV (see Supplement 1, S6). However, spatially varying aberrations [see inset in Fig. 3(b)] are only compensated for close to GS#2, within the isoplanatic patch, and the resolution quickly degrades away from it. A similar defocus-related general improvement of the image is observed using any of the five aberration corrections, but local improvements are only specific to the considered GS and its immediate vicinity.
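This PSF-estimation-plus-deconvolution pipeline can be sketched in a toy numerical example. The pupil (circular mask), the quadratic defocus-like phase, and the bead positions below are illustrative assumptions, and a plain NumPy Richardson–Lucy loop stands in for the MATLAB toolbox routine used in the paper:

```python
import numpy as np

def psf_from_pupil(phase, mask):
    """kappa(q) = |F{P(q')}|^2, with P the (masked) aberrated pupil field."""
    P = mask * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(P))))**2
    return psf / psf.sum()

def richardson_lucy(g, psf, n_iter=15):
    """Minimal FFT-based Richardson-Lucy deconvolution (circular boundaries)."""
    K = np.fft.fft2(np.fft.ifftshift(psf))      # PSF centered at the origin
    f = np.full_like(g, g.mean())               # flat positive initial guess
    for _ in range(n_iter):
        conv = np.fft.ifft2(np.fft.fft2(f) * K).real
        ratio = g / np.maximum(conv, 1e-12)
        f = f * np.fft.ifft2(np.fft.fft2(ratio) * np.conj(K)).real
    return f

# Toy scene: a defocus-like aberration blurs two point-like beads
n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r2 = (x**2 + y**2) / (n // 4)**2
mask = r2 < 1.0                                 # assumed circular pupil
psf = psf_from_pupil(3.0 * r2, mask)            # quadratic (defocus) phase

f_true = np.zeros((n, n))
f_true[40, 40] = f_true[90, 70] = 1.0           # two beads, Eq. (6) object
g = np.fft.ifft2(np.fft.fft2(f_true) * np.fft.fft2(np.fft.ifftshift(psf))).real

f_hat = richardson_lucy(g, psf, n_iter=15)      # sharpened estimate of f
```

The deconvolution re-concentrates the blurred bead energy at the original positions, which is the single-patch correction illustrated in Fig. 3(c).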

In this context, the simultaneous measurement of wavefronts coming from multiple GSs described above can provide a better estimation of the angle-dependent PSF $\kappa ({{\boldsymbol q},{\boldsymbol s}})$, particularly in cases where aberration decorrelation over time demands simultaneous measurements across the FoV. In the proof-of-concept experiment shown in Fig. 3(d), the wavefronts coming from $N = {5}$ GSs, located in different isoplanatic patches, were simultaneously measured to estimate ${P_j}({{\boldsymbol {q^\prime}}})$ [$j = {1}$ to 5, corresponding wavefronts shown in Fig. 3(b)] and deduce the associated PSF ${\kappa _j}({\boldsymbol q})$ in each isoplanatic patch. The aberrated image can then be divided into $N = {5}$ regions delineated by indicator functions ${\iota _j}({\boldsymbol r}) \in \{{0,1} \}$, which are equal to 1 inside the $j$th region and 0 elsewhere [49]:

$$g({\boldsymbol r} ) \approx \mathop \sum \limits_{j = 1}^{N = 5} {\iota _j}({\boldsymbol r} )({{\kappa _j} \circledast f} )({\boldsymbol r} ).$$

The image shown in Fig. 3(d) was corrected using this piecewise approximation, i.e., by performing a deconvolution in each isoplanatic patch with the associated PSF ${\kappa _j}$. The improvement over the single, stationary WF correction $\kappa$ [Fig. 3(c)] is significant in the entire FoV, both in terms of resolution and contrast, providing image improvements beyond the isoplanatic patch (see also Supplement 1, S7 and S8). Since the $N$ measurements are performed simultaneously, the method is of particular interest for applications involving time-dependent aberrations.
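The piecewise correction of Eq. (7) amounts to deconvolving the full image once per patch and stitching the results with the indicator masks. A minimal sketch (Python, not the paper's MATLAB; a simple regularized inverse filter stands in for the Richardson–Lucy step to keep the example short, and the masks are assumed binary and non-overlapping):

```python
import numpy as np

def wiener_deconv(g, kappa, eps=1e-3):
    """Regularized inverse filter: a simple stand-in for the per-patch deconvolution."""
    K = np.fft.fft2(np.fft.ifftshift(kappa))           # kappa assumed centered
    F = np.fft.fft2(g) * np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft2(F))

def piecewise_correct(g, kappas, masks):
    """Invert Eq. (7) patch by patch: deconvolve g with each kappa_j,
    then keep the result only inside the j-th indicator region iota_j."""
    f = np.zeros_like(g, dtype=float)
    for kappa_j, iota_j in zip(kappas, masks):
        f += iota_j * wiener_deconv(g, kappa_j)
    return f
```

In practice, each `kappa_j` comes from one of the simultaneously measured wavefronts, so a single speckle acquisition feeds all five per-patch deconvolutions.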

4. CONCLUSION AND DISCUSSION

We proposed and demonstrated the use of a thin diffuser to simultaneously acquire multiple aberrated wavefronts by recording a single speckle pattern image. This direct, multi-angle wavefront sensing approach provides deterministic and quantitative measurements. It exploits the large memory effect of thin diffusers as well as the statistical orthogonality of speckle patterns to solve the aliasing problem intrinsic to periodic Hartmann masks. The proposed DIC-based algorithm can successfully reconstruct several (five or more; see Supplement 1, S3) angularly multiplexed wavefronts with high precision (typ. ${\rm RMSE} \lt \lambda /{30}$) and high resolution (85 K phase and intensity pixels), even for large angular distances between wavefronts. When conjugated to the pupil plane of an imaging system, this multi-angle WFS can thus sense, in a single shot, the aberrations of GSs located in different isoplanatic patches of the FoV. We illustrated the potential of this method for the digital correction of aberrations: several PSFs can be estimated simultaneously, and used to accurately deconvolve an aberrated image in multiple isoplanatic patches in order to recover a high-resolution image in the entire FoV. Here, the correction is performed considering a discrete set of GSs and aberrated wavefronts, in the corresponding patches, but a better image correction could be obtained by interpolating between patches, and precisely estimating the PSF in each point of the FoV [49].

While a diffuser-based WFS promises cost-efficiency for future applications, the associated speckle patterns have the drawback of spreading the energy over many pixels. This is non-ideal when a large number of wavefronts are multiplexed, since the effective contrast drops and reconstruction errors increase [see Fig. S6(a) in Supplement 1], particularly at low light levels where the image dynamic range is much reduced. However, the recent pace of progress in the optimization of masks and reconstruction algorithms makes multiplexing approaches increasingly appealing. The use of an optimized phase mask able to generate orthogonal patterns with energy concentrated on a few pixels, such as a random contour [50,51] or an aperiodic foci array [52,53], offers an interesting perspective to improve signal-to-noise ratios.

It is important to note that our current implementation relies on the use of GSs, which, in certain scenarios, presents a significant disadvantage compared to emerging techniques that circumvent this need, especially those based on iterative pupil sampling methods [19] or extended-source Shack–Hartmann [54,55]. In this context, it would be interesting to investigate if the development of a dedicated algorithm and mask could further lead to an implementation compatible with extended sources [53].

Currently, the calculation times for high-resolution, multiplexed wavefront sensing (typ. 38 s for ${135} \times {135}$ phase pixels, $N = {4}$, $T = {2}$) led us to use post-processing aberration correction through deconvolution. Real-time AO correction, however, demands an optimization of this calculation speed. A direct ($T = {1}$ iteration) DIC can faithfully reconstruct aberrations when a coarse phase sampling is sufficient (see Supplement 1, S3). A single wavefront ($N = {1}$) with ${20} \times {20}$ phase pixels can be retrieved within 1.4 s on a workstation with an Intel Xeon E5-2630 v4 CPU @ 2.2 GHz and 64 GB of RAM. Since the retrieval of multiple wavefronts relies on independent calculations, it can be parallelized, and increasing the number of wavefronts has a moderate impact on the calculation time ($N = {16}$ takes 2.6 s). Our MATLAB code provided in the supplementary material (Code 1, Ref. [47]) was not particularly optimized for speed, and we believe that these times can be shortened, e.g., by using faster DIC and 2D integration functions [54,55], to provide AO correction of tens of Zernike modes at up to typ. 10 Hz.
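Because the per-GS reconstructions are independent, they map naturally onto a worker pool. A minimal Python sketch of that pattern (the paper's code is MATLAB; `retrieve_one` is a hypothetical placeholder for the real DIC-plus-2D-integration job, not the authors' implementation):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def retrieve_one(args):
    """Placeholder for the per-guide-star job (DIC displacement search plus
    2D integration in the real pipeline); each job is independent of the others."""
    M, R, tilt = args
    # Dummy work standing in for the reconstruction of wavefront j:
    return np.roll(M, (-tilt[0], -tilt[1]), axis=(0, 1)) - R

M, R = np.ones((128, 128)), np.zeros((128, 128))
tilts = [(5, -8), (-12, 3), (9, 14), (0, 0)]       # coarse tip/tilt of each GS
with ThreadPoolExecutor() as pool:                  # heavy NumPy/FFT work releases
    wavefronts = list(pool.map(retrieve_one,        # the GIL; processes also work
                               [(M, R, t) for t in tilts]))
```

This is why, in the timings above, going from $N = 1$ to $N = 16$ wavefronts only increases the runtime moderately rather than sixteen-fold.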

When integrated into a full AO system (i.e., including hardware wavefront compensation), a single multi-angle WFS could provide instantaneous tomographic-like characterization of a large aberrating volume while significantly reducing the complexity of multi-conjugate AO systems. Given the complexity associated with using multiple compensators, an interesting compensation strategy entails using a single compensator in the pupil to correct the average aberrations in the FoV, followed by a deconvolution process in each patch [20]. Another promising strategy involves the use of a multi-angle (or “multi-pupil”) compensator in the pupil [7] to correct several (2D or 3D) isoplanatic patches in real-time.

In microscopy, multi-angle wavefront sensing combined with the appropriate optical labeling has the potential to benefit a wide range of applications. For instance, its single-shot nature could be highly beneficial for digitally correcting aberrations in dynamic organisms (e.g., swimming zebrafish or C. elegans) across a large FoV. Furthermore, in AO multiphoton microscopy [56], integrating a multiplexed WFS in a de-scanned pathway, to probe endogenous GSs in different isoplanatic patches, could significantly enhance the correction of large volumes or multiple distant regions [57], an advancement that is extremely promising for functional imaging.

Beyond microscopy, the main fields where this parallel approach should prove most valuable are arguably those in which AO has become indispensable: astronomy and ophthalmology. In both cases, aberrations vary rapidly and isoplanatic patches are relatively small. While we demonstrated multi-angle wavefront sensing using fluorescent GSs, the method can also be applied to other types of GSs [58,59], provided that they are mutually incoherent.

Besides a wide range of applications in adaptive optics, we envision that multi-angle wavefront sensing should open new possibilities in optical diffraction tomography, where the 3D refractive index map of the sample is usually obtained by sequentially measuring several wavefronts under multiple illumination angles [60,61]. A multi-angle WFS should allow these measurements to be multiplexed, greatly increasing the temporal resolution and even enabling single-shot tomography, either in the visible or in the X-ray domain [62].

Funding

DIM ELICIT-Region Ile de France (3D-DiPSI); Société d’Accélération du Transfert de Technologies (ERGANEO-PROJECT 520); Agence Nationale de la Recherche (ANR-20-CE42-0006, MaxPhase, ANR-21-CE42-0008 ELISE).

Acknowledgment

The authors thank Baptiste Blochet for stimulating discussions, and Sacha Reichman for his valuable help in preparing the biological samples.

Disclosures

The authors declare the following competing financial interests: P.B., M.G., T.W., and G.T. have filed a patent application related to angularly multiplexed wavefront sensing (US20220099499A1).

Data availability

Data underlying the results presented in this paper are available in Dataset 1, Ref. [63]. We also make the source code freely available to the scientific community (see Code 1, Ref. [47]).

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. F. Roddier, Adaptive Optics in Astronomy (Cambridge University, 1999).

2. M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light Sci. Appl. 3, e165 (2014). [CrossRef]  

3. N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14, 374–380 (2017). [CrossRef]  

4. K. M. Hampson, R. Turcotte, D. T. Miller, et al., “Adaptive optics for high-resolution imaging,” Nat. Rev. Methods Primers 1, 68 (2021). [CrossRef]  

5. M. J. Booth, “Wave front sensor-less adaptive optics: a model-based approach using sphere packings,” Opt. Express 14, 1339–1352 (2006). [CrossRef]  

6. M. Žurauskas, I. M. Dobbie, R. M. Parton, et al., “IsoSense: frequency enhanced sensorless adaptive optics through structured illumination,” Optica 6, 370–379 (2019). [CrossRef]  

7. J. H. Park, L. Kong, Y. Zhou, et al., “Large-field-of-view imaging by multi-pupil adaptive optics,” Nat. Methods 14, 581–583 (2017). [CrossRef]  

8. M. A. May, N. Barré, K. K. Kummer, et al., “Fast holographic scattering compensation for deep tissue biological imaging,” Nat. Commun. 12, 4340 (2021). [CrossRef]  

9. I. N. Papadopoulos, J. S. Jouhanneau, J. F. A. Poulet, et al., “Scattering compensation by focus scanning holographic aberration probing (F-SHARP),” Nat. Photonics 11, 116–123 (2017). [CrossRef]  

10. Z. Qin, Z. She, C. Chen, et al., “Deep tissue multi-photon imaging using adaptive optics with direct focus sensing and shaping,” Nat. Biotechnol. 40, 1663–1671 (2022). [CrossRef]  

11. B. C. Platt and R. Shack, “History of Shack-Hartmann wavefront sensing,” J. Refract. Surg. 17, S573–S577 (2001). [CrossRef]  

12. R. W. Wilson, “SLODAR: measuring optical turbulence altitude with a Shack-Hartmann wavefront sensor,” Mon. Not. R. Astron. Soc. 337, 103–108 (2002). [CrossRef]  

13. J. H. Park, W. Sun, and M. Cui, “High-resolution in vivo imaging of mouse brain through the intact skull,” Proc. Natl. Acad. Sci. USA 112, 9236–9241 (2015). [CrossRef]  

14. J. Li, D. R. Beaulieu, H. Paudel, et al., “Conjugate adaptive optics in widefield microscopy with an extended-source wavefront sensor,” Optica 2, 682–688 (2015). [CrossRef]  

15. J. Mertz, H. Paudel, and T. G. Bifano, “Field of view advantage of conjugate adaptive optics in microscopy applications,” Appl. Opt. 54, 3498–3506 (2015). [CrossRef]  

16. J. Scholler, K. Groux, K. Grieve, et al., “Adaptive-glasses time-domain FFOCT for wide-field high-resolution retinal imaging with increased SNR,” Opt. Lett. 45, 5901–5904 (2020). [CrossRef]  

17. R. D. Simmonds and M. J. Booth, “Modelling of multi-conjugate adaptive optics for spatially variant aberrations in microscopy,” J. Opt. 15, 094010 (2013). [CrossRef]  

18. E. Marchetti, R. Brast, B. Delabre, et al., “On-sky testing of the multi-conjugate adaptive optics demonstrator,” Messenger 129, 8–13 (2007).

19. D. Ancora, T. Furieri, S. Bonora, et al., “Spinning pupil aberration measurement for anisoplanatic deconvolution,” Opt. Lett. 46, 2884–2887 (2021). [CrossRef]  

20. T. Furieri, D. Ancora, G. Calisesi, et al., “Aberration measurement and correction on a large field of view in fluorescence microscopy,” Biomed. Opt. Express 13, 262–273 (2022). [CrossRef]  

21. J. Chung, G. W. Martinez, K. C. Lencioni, et al., “Computational aberration compensation by coded-aperture-based correction of aberration obtained from optical Fourier coding and blur estimation,” Optica 6, 647–661 (2019). [CrossRef]  

22. X. Wei and L. Thibos, “Design and validation of a scanning Shack Hartmann aberrometer for measurements of the eye over a wide field of view,” Opt. Express 18, 1134–1143 (2010). [CrossRef]  

23. T. Wu, Y. Zhang, B. Blochet, et al., “Single-shot digital optical fluorescence phase conjugation through forward multiply scattering samples,” arXiv, arXiv:2304.01759 (2023). [CrossRef]  

24. R. Lin, E. T. Kipreos, J. Zhu, et al., “Subcellular three-dimensional imaging deep through multicellular thick samples by structured illumination microscopy and adaptive optics,” Nat. Commun. 12, 3148 (2021). [CrossRef]  

25. K. Wang, D. E. Milkie, A. Saxena, et al., “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11, 625–628 (2014). [CrossRef]  

26. K. Wang, W. Sun, C. T. Richie, et al., “Direct wavefront sensing for high-resolution in vivo imaging in scattering tissue,” Nat. Commun. 6, 7276 (2015). [CrossRef]  

27. R. Liu, Z. Li, J. S. Marvin, et al., “Direct wavefront sensing enables functional imaging of infragranular axons and spines,” Nat. Methods 16, 615–618 (2019). [CrossRef]  

28. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95, 131110 (2009). [CrossRef]  

29. N. Antipa, G. Kuo, R. Heckel, et al., “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5, 1–9 (2018). [CrossRef]  

30. N. Antipa, P. Oare, E. Bostan, et al., “Video from stills: lensless imaging with rolling shutter,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2019), pp. 1–8.

31. F. Linda Liu, G. Kuo, N. Antipa, et al., “Fourier DiffuserScope: single-shot 3D Fourier light field microscopy with a diffuser,” Opt. Express 28, 28969–28986 (2020). [CrossRef]  

32. M. Pascucci, S. Ganesan, A. Tripathi, et al., “Compressive three-dimensional super-resolution microscopy with speckle-saturated fluorescence excitation,” Nat. Commun. 10, 1327 (2019). [CrossRef]  

33. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4, 1209–1213 (2017). [CrossRef]  

34. H. Cao, “Perspective on speckle spectrometers,” J. Opt. 19, 060402 (2017). [CrossRef]  

35. B. Redding, S. F. Liew, R. Sarma, et al., “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7, 746–751 (2013). [CrossRef]  

36. G. R. Arce, D. J. Brady, L. Carin, et al., “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process. Mag. 31, 105–115 (2014). [CrossRef]  

37. S. Bérujon, E. Ziegler, R. Cerbino, et al., “Two-dimensional x-ray beam phase sensing,” Phys. Rev. Lett. 108, 158102 (2012). [CrossRef]  

38. M. C. Zdora, P. Thibault, T. Zhou, et al., “X-ray phase-contrast imaging and metrology through unified modulated pattern analysis,” Phys. Rev. Lett. 118, 203903 (2017). [CrossRef]  

39. C. Wang, Q. Fu, X. Dun, et al., “Quantitative phase and intensity microscopy using snapshot white light wavefront sensing,” Sci. Rep. 9, 13795 (2019). [CrossRef]  

40. C. Wang, X. Dun, Q. Fu, et al., “Ultra-high resolution coded wavefront sensor,” Opt. Express 25, 13736–13746 (2017). [CrossRef]  

41. P. Berto, H. Rigneault, and M. Guillon, “Wavefront sensing with a thin diffuser,” Opt. Lett. 42, 5117–5120 (2017). [CrossRef]  

42. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988). [CrossRef]  

43. S. Feng, C. Kane, P. A. Lee, et al., “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988). [CrossRef]  

44. V. Brasiliense, J. F. Audibert, T. Wu, et al., “Local surface chemistry dynamically monitored by quantitative phase microscopy,” Small Methods 6, 2100737 (2022). [CrossRef]  

45. T. Wu, M. Guillon, C. Gentner, et al., “3D nanoparticle superlocalization with a thin diffuser,” Opt. Lett. 47, 3079–3082 (2022). [CrossRef]  

46. B. Pan, K. Qian, H. Xie, et al., “Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review,” Meas. Sci. Technol. 20, 062001 (2009). [CrossRef]  

47. T. Wu, M. Guillon, G. Tessier, et al., “Multiplexed wavefront sensing with a thin diffuser: Codes,” figshare (2023), https://doi.org/10.6084/m9.figshare.23699640.

48. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik (Stuttgart) 35, 237–246 (1972).

49. É. Thiébaut, L. Denis, F. Soulez, et al., “Spatially variant PSF modeling and image deblurring,” Proc. SPIE 9909, 99097N (2016). [CrossRef]  

50. J. K. Adams, D. Yan, J. Wu, et al., “In vivo lensless microscopy via a phase mask generating diffraction patterns with high-contrast contours,” Nat. Biomed. Eng. 6, 617–628 (2022). [CrossRef]  

51. V. Boominathan, J. T. Robinson, L. Waller, et al., “Recent advances in lensless imaging,” Optica 9, 1–16 (2022). [CrossRef]  

52. G. Kuo, F. Linda Liu, I. Grossrubatscher, et al., “On-chip fluorescence microscopy with a random microlens diffuser,” Opt. Express 28, 8384 (2020). [CrossRef]  

53. K. Yanny, N. Antipa, W. Liberti, et al., “Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy,” Light Sci. Appl. 9, 171 (2020). [CrossRef]  

54. A. Hubert, F. Harms, R. Juvénal, et al., “Adaptive optics light-sheet microscopy based on direct wavefront sensing without any guide star,” Opt. Lett. 44, 2514–2517 (2019). [CrossRef]  

55. A. Hubert, G. Farkouh, F. Harms, et al., “Enhanced neuroimaging with a calcium sensor in ex-vivo Drosophila melanogaster brains using closed-loop adaptive optics light-sheet fluorescence microscopy,” J. Biomed. Opt. 28, 066501 (2023). [CrossRef]  

56. P. Yao, R. Liu, T. Broggini, et al., “Construction and use of an adaptive optics two-photon microscope with direct wavefront sensing,” Nat. Protoc. 18, 3732–3766 (2023). [CrossRef]  

57. J. N. Stirman, I. T. Smith, M. W. Kudenov, et al., “Wide field-of-view, multi-region, two-photon imaging of neuronal activity in the mammalian brain,” Nat. Biotechnol. 34, 857–862 (2016). [CrossRef]  

58. R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015). [CrossRef]  

59. J. Bertolotti and O. Katz, “Imaging in complex media,” Nat. Phys. 18, 1008–1017 (2022). [CrossRef]  

60. Y. K. Park, C. Depeursinge, and G. Popescu, “Quantitative phase imaging in biomedicine,” Nat. Photonics 12, 578–589 (2018). [CrossRef]  

61. B. Simon, M. Debailleul, M. Houkal, et al., “Tomographic diffractive microscopy with isotropic resolution,” Optica 4, 460–463 (2017). [CrossRef]  

62. L. Quénot, H. Rougé-Labriet, S. Bohic, et al., “Implicit tracking approach for X-ray phase-contrast imaging with a random mask and a conventional system,” Optica 8, 1412–1415 (2021). [CrossRef]  

63. T. Wu, M. Guillon, G. Tessier, et al., “Multiplexed wavefront sensing with a thin diffuser: data,” figshare (2023), https://doi.org/10.6084/m9.figshare.23692359.

Supplementary Material (3)

Name       Description
Code 1       MATLAB codes for multiplexed wavefront sensing
Dataset 1       Dataset for validating multiplexed wavefront sensing with a thin diffuser: (1) speckle images in which 3 or 5 wavefronts are multiplexed; (2) a reference speckle pattern for reconstruction; and (3) sequential measurements for comparison.
Supplement 1       Supplemental document


Figures (3)

Fig. 1. Principle of angularly multiplexed wavefront sensing. (a) Schematic description of the concept, here applied to fluorescence microscopy. $N = {3}$ fluorescent guide stars are imaged through an aberrating layer by a microscope objective, leading to $N = {3}$ tilted and aberrated wavefronts in its pupil plane. A wavefront sensor (WFS) based on a thin diffuser allows (g) multiplexed acquisition of these wavefronts. (b) Basic working principle of the thin-diffuser-based WFS: a plane wave illuminating a 1° holographic diffuser generates a reference speckle pattern $R$ on a camera sensor located at a distance $d$. For a distorted wavefront, measurement of the speckle grain displacements ${\boldsymbol u}$ gives access to the phase gradient (${\nabla _ \bot}\varphi \simeq {k_0}{\boldsymbol u}/d$). (c), (d) The reference speckle pattern $R$ and the multiplexed speckle pattern $M$ measured by the gray-level camera contain information on the three wavefronts. $M$ is the incoherent sum of three distorted speckle patterns, one per wavefront. The pattern measured within a chosen region, the macropixel ${M_A}$ [gray square in (d)], can be described as the sum of three patterns of the reference [green, blue, and red boxes in (c)], each shifted by ${{\boldsymbol u}_j}$, with $j = {1},{2},{3}$ the index of each wavefront. (e) Owing to the orthogonality between different speckle regions, the cross-correlation of ${M_A}({\boldsymbol r})$ with the full reference speckle pattern $R$ reveals $N = {3}$ peaks. The peak positions give access to the local speckle grain displacements related to each wavefront and, thereby, to (f) the phase gradient maps of the wavefronts. (g) A 2D integration step finally allows independent wavefront reconstructions.
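The correlation step described in Fig. 1(e) can be reproduced numerically. The sketch below (Python; the smoothed random field, macropixel size, and peak-suppression window are illustrative assumptions, not the paper's experimental parameters) builds a multiplexed pattern from three shifted copies of a reference and recovers the three displacements from the correlation peaks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference "speckle" R: a smoothed random field stands in for the pattern
# produced by the diffuser under plane-wave illumination.
R = rng.random((256, 256))
for _ in range(2):
    R = (R + np.roll(R, 1, 0) + np.roll(R, 1, 1)) / 3
R -= R.mean()                                    # zero-mean for a clean correlation

# Multiplexed pattern M: incoherent sum of N = 3 globally shifted copies of R,
# one per guide star (shifts play the role of the displacements u_j).
shifts = [(5, -8), (-12, 3), (9, 14)]
M = sum(np.roll(R, s, axis=(0, 1)) for s in shifts)

# Cross-correlate a macropixel M_A with the full reference: each wavefront
# contributes one peak located at its local displacement [cf. Fig. 1(e)].
mac = np.zeros_like(M)
mac[96:160, 96:160] = M[96:160, 96:160]          # 64x64 macropixel
C = np.real(np.fft.ifft2(np.fft.fft2(mac) * np.conj(np.fft.fft2(R))))

# Extract the three strongest peaks, suppressing a small neighborhood around
# each one (circular distances, since the FFT correlation wraps around).
yy, xx = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
peaks = []
for _ in range(3):
    p = np.unravel_index(np.argmax(C), C.shape)
    peaks.append((int((p[0] + 128) % 256 - 128), int((p[1] + 128) % 256 - 128)))
    near = (np.minimum((yy - p[0]) % 256, (p[0] - yy) % 256) < 4) & \
           (np.minimum((xx - p[1]) % 256, (p[1] - xx) % 256) < 4)
    C[near] = -np.inf
```

With these parameters the programmed shifts are recovered; in the real sensor, such per-macropixel displacements are converted to local phase gradients via ${\nabla _ \bot}\varphi \simeq {k_0}{\boldsymbol u}/d$ and then integrated in 2D.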
Fig. 2. Experimental validation: single-shot measurement of three multiplexed wavefronts. (a) Acquired reference speckle pattern $R$ (for plane wave illumination) and multiplexed speckle pattern $M$. Inset: zoom showing a reduced contrast in the multiplexed case due to the incoherent summation of speckle patterns. (b) Autocorrelation of the reference speckle pattern $R$ and cross-correlation of the multiplexed speckle pattern $M$ with the reference $R$. The three peaks in the cross-correlation reveal the number of multiplexed wavefronts and their global tip/tilt ${\langle \alpha \rangle _j}$, determined by measuring the peak shifts (${\langle{\boldsymbol u}\rangle_j} = {\langle \alpha \rangle _j}d$) from the center. (c) Local cross-correlation allows reconstruction of the three multiplexed wavefronts. Here, the tip/tilt and defocus of each wavefront have been subtracted. (d) Comparison with the “classical” sequential method and (e) Zernike decomposition (first 25 modes without piston, tip/tilt, and defocus) for the three wavefronts. The excellent agreement between both measurements validates the multiplexing method.
Fig. 3. Aberration deconvolution using multiplexed WFS (proof-of-concept experiment). (a) Non-aberrated image of fluorescent beads used as ground truth. (b) Image acquired in the presence of an aberrating medium. The five insets represent the aberrations measured locally, from which the global defocus $Z_2^0$ has been subtracted to highlight spatially varying aberrations. (c) Aberration correction using a single wavefront measurement (from GS #2, indicated with a solid circle). The aberration is corrected properly within the isoplanatic patch (dashed circle), while the correction fails in the other regions. Here, the defocus has been set to account for axial misalignment between the WFS and the imaging path. (d) Aberration correction with five wavefronts acquired in a single shot. Each of the multiplexed wavefronts is used to correct a region within the corresponding isoplanatic patch, and stitching the corrected regions yields an aberration correction over a larger FoV. A zoomed image is provided for each of the four cases.

Equations (7)

$$S({\boldsymbol r}) = I({\boldsymbol r})\, R[{\boldsymbol r} - {\boldsymbol u}({\boldsymbol r})],$$
$$\nabla_\bot \varphi \simeq {k_0}\,{\boldsymbol u}/d,$$
$$M({\boldsymbol r}) = \sum\limits_{j = 1}^{N} S_j({\boldsymbol r}) = \sum\limits_{j = 1}^{N} I_j({\boldsymbol r})\, R[{\boldsymbol r} - {\boldsymbol u}_j({\boldsymbol r})],$$
$$S_k({\boldsymbol r}) = M({\boldsymbol r}) - \sum\limits_{j \ne k}^{N} I_j({\boldsymbol r})\, R[{\boldsymbol r} - {\boldsymbol u}_j({\boldsymbol r})],$$
$$g({\boldsymbol r}) = \int \kappa({\boldsymbol r} - {\boldsymbol s}, {\boldsymbol s})\, f({\boldsymbol s})\, \mathrm{d}{\boldsymbol s},$$
$$g({\boldsymbol r}) = \int \kappa({\boldsymbol r} - {\boldsymbol s})\, f({\boldsymbol s})\, \mathrm{d}{\boldsymbol s} = (\kappa \circledast f)({\boldsymbol r}),$$
$$g({\boldsymbol r}) \approx \sum\limits_{j = 1}^{N = 5} \iota_j({\boldsymbol r})\, (\kappa_j \circledast f)({\boldsymbol r}).$$