
Spinning pupil aberration measurement for anisoplanatic deconvolution

Open Access

Abstract

The aberrations in an optical microscope are commonly measured and corrected at one location in the field of view, within the so-called isoplanatic patch. Full-field correction is desirable for high-resolution imaging of large specimens. Here we present, to the best of our knowledge, a novel wavefront detector, based on pupil sampling with subapertures, that measures the aberrated wavefront phase at each position of the specimen. Based on this measurement, we propose a region-wise deconvolution that provides an anisoplanatic reconstruction of the sample image. Our results indicate that the measurement and correction of the aberrations can be performed in a wide-field fluorescence microscope over its entire field of view.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

The measurement of wavefront aberrations is of crucial interest in optical microscopy, which aims to image samples at the highest possible resolution. The wavefront measurement can be carried out with several methods, for example with wavefront sensors (e.g., Shack–Hartmann wavefront sensors), by acquiring the system’s point spread function (PSF) at different depths [1], or indirectly by pupil segmentation with a digital micromirror device (DMD) or a spatial light modulator (SLM) [2,3]. In fluorescence microscopy, the PSF can be directly measured by imaging a point-like source, such as a fluorescent nanobead, smaller than the diffraction limit of the optical system. When good knowledge of the PSF is available, deconvolution is effectively used to provide a high-resolution reconstruction of the object [4]. However, the isoplanatic patch, i.e., the area over which the aberration can be considered constant, is often smaller than the field of view. This happens when the aberrations arise on different planes in the volume between the sample and the objective [5]. In this case, the PSF is not isoplanatic and changes across the imaged regions. Wavefront sensors measure the aberrations with high accuracy at only one point. Consequently, reconstructing the wavefront aberration corresponding to different regions of the sample would require the presence of many bright point sources in the field of view. This technique is successfully used in astronomy [6] and in some cases in ophthalmic imaging [7]. Similarly, the direct measurement of the PSF in different locations, when possible, is affected by noise and requires the localization of fluorescent emitters that are sparse across the different regions of the sample. Therefore, the presence of an anisoplanatic PSF makes the use of deconvolution algorithms problematic [8,9], since incorrect knowledge of the PSF creates artifacts in the reconstruction.

In this Letter we first introduce a new method for wavefront measurement that reconstructs the wavefront aberration and the PSF corresponding to all points of the field of view. Then, we present a deconvolution method that, starting from the measured PSFs, reconstructs the sample anisoplanatically in its different regions. Using this method, we show the high-resolution, artifact-free reconstruction of a sample slide in a widefield fluorescence microscope.

Our wavefront detection method is based on a spinning subpupil aberration measurement (SPAM). The measurement starts with a motorized device that moves a subaperture across the microscope’s pupil. For each subaperture position [Fig. 1(a)], an image of the sample is acquired with the camera of the widefield microscope (Visualization 1). By measuring the relative shift of each acquired image, the wavefront gradient at each pupil location is obtained. Figure 1(b) shows this process for an object point ${{\rm{OP}}_1}$. When imaged through a certain subaperture, its image ${{\rm{IP}}_1}$ is shifted by a quantity $\Delta {{\rm{x}}_1}$ from the nonaberrated position. Since we do not have access to the nonaberrated image, we choose as the reference the image acquired with a subaperture placed in the center of the pupil. The displacement $\Delta {{\rm{x}}_1}$ is thus proportional to the wavefront gradient at the corresponding pupil position. The shift is measured using the sum of squared differences algorithm [10]. The wavefront reconstruction is then equivalent to the Shack–Hartmann procedure (see Supplement 1). It is worth noting that, for the same subaperture position, different fields on the object (${{\rm{OP}}_2}$) can suffer from different wavefront aberrations, potentially leading to a different image shift $\Delta {{\rm{x}}_2}$. Thanks to this principle, SPAM can measure the wavefront at each position of the field of view. Our wavefront measurement system consists of a spinning wheel that scans a 1 mm aperture across the 10 mm diameter pupil. This configuration was chosen as a good tradeoff between wavefront sampling and device compactness and allows the measurement of aberrations up to the 4th order of Zernike polynomials (OSA/ANSI $j = 14$). The scan covers four radial distances, with the angular positions of the apertures designed to avoid overlaps [Fig. 1(a)]. In total, 18 points of the wavefront are sampled and, consequently, 18 images are acquired. The camera automatically adjusts the exposure time in proportion to the ratio R of the area of the whole pupil to that of the subapertures (${\rm{R}} = {{100}}$). The spinning wheel is also equipped with an aperture, as large as the pupil of the microscope, which is used for image acquisition. For a typical fluorescent sample slide, the exposure time with the fully open pupil is 10 ms, and it is 1 s for the subapertures. The total time required for the wavefront measurement is 18 s. It is worth noting that the subapertures could be realized with a larger diameter for increased light collection during the scanning process, but at the expense of wavefront measurement accuracy. The images are acquired with a ${{10}} \times$ microscope objective (Mitutoyo, Plan Apo, LWD, ${\rm{NA}} = {0.28}$) in a custom-made widefield upright microscope, equipped with a 200 mm tube lens (Nikon) and an sCMOS camera (Hamamatsu Flash 4.0, $2048 \times 2048$ pixels).
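To make the shift-to-gradient step concrete, the following minimal sketch (our illustration, not the authors' code; the function names, brute-force search range, and calibration parameters are assumptions) estimates the displacement of a subaperture image relative to the central-aperture reference by minimizing the sum of squared differences, and converts the shifts into local wavefront slopes in the same way as a Shack–Hartmann sensor.

```python
import numpy as np

def ssd_shift(reference, image, max_shift=20):
    """Integer-pixel shift (dy, dx) that minimizes the sum of squared differences."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(image, (dy, dx), axis=(0, 1))
            ssd = np.sum((shifted - reference) ** 2)
            if ssd < best:
                best, best_shift = ssd, (dy, dx)
    return best_shift

def shifts_to_slopes(shifts_px, pixel_size, focal_length):
    """Convert image-plane shifts (pixels) into local wavefront slopes."""
    shifts = np.asarray(shifts_px, dtype=float) * pixel_size  # shift in the image plane [m]
    return shifts / focal_length                              # dimensionless slope

# Usage (hypothetical values): ref is the central-subaperture image, imgs the 18
# subaperture images; the slopes then feed the Zernike fit described in Supplement 1.
# slopes = shifts_to_slopes([ssd_shift(ref, im) for im in imgs],
#                           pixel_size=6.5e-6, focal_length=0.2)
```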


Fig. 1. (a) Pupil of the optical system (black circle) and position of the subapertures (red circles) in the pupil (see Visualization 1). (b) Schematic diagram of the optical imaging system with an example of image shift relative to two fields on the object (${{\rm{OP}}_1}$ and ${{\rm{OP}}_2}$). (c) Example of the image distortion and shift due to the presence of a wavefront aberration.


Figure 2 shows the image of a microscope slide (FluoCells, Thermofisher, Prepared Slide #3, 16 µm mouse kidney section labeled with Alexa Fluor 488). An aberrating phase plate, made of two 170 µm cover glasses bonded with UV glue, is placed in front of the sample, inclined by 45° to induce a large aberration. A 480 nm light-emitting diode illuminates the sample, and the fluorescence is detected through a ${{520}}\;{{\pm}}\;{{15}}\;{\rm{nm}}$ filter placed between the SPAM module and the tube lens. SPAM then reconstructs the region-dependent wavefront phases [Fig. 2(b)] in $N \times N$ regions (here $N = 9$). The corresponding PSFs are calculated as the squared modulus of the 2D Fourier transform of the complex pupil function associated with the wavefront phases [Fig. 2(c)]. We observe that the aberrations contain mainly astigmatism and coma.
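As a reference for this step, here is a minimal sketch (our own, with illustrative grid size and padding; it assumes the reconstructed phase is expressed in radians on a square grid covering a circular pupil) that computes a PSF as the squared modulus of the 2D Fourier transform of the complex pupil function.

```python
import numpy as np

def psf_from_phase(phase, pad_factor=4):
    """Incoherent PSF from a square pupil-phase map (radians) over a circular aperture."""
    n = phase.shape[0]
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    pupil = (x**2 + y**2 <= 1.0).astype(float)          # circular aperture mask
    field = pupil * np.exp(1j * phase)                   # complex pupil function
    field = np.pad(field, (pad_factor - 1) * n // 2)     # zero-pad to sample the PSF finely
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()                               # normalize to unit energy
```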


Fig. 2. (a) Acquired image of a mouse kidney slide, aberrated by the presence of a glass wedge. (b) Reconstructed wavefronts in ${{5}} \times {{5}}$ subregions and (c) corresponding PSFs. In the actual reconstructions, we mapped the wavefront on a ${{9}} \times {{9}}$ grid, whereas here we show a ${{5}} \times {{5}}$ grid for visualization purposes only. Scale bars are 200 µm in (a) and 10 µm in (c).


Since we have access to the PSF at the different locations of the sample, we developed a region-wise deconvolution, inspired by an approach proposed in astronomy [11]. We implement the deconvolution using a Richardson–Lucy (RL) iterative method [12]. A simple independent deconvolution of each of the $N \times N$ regions of the image would produce artifacts at the boundary of each region. These artifacts typically have a size comparable to that of the PSF. Moreover, the PSF would change abruptly from one tile to the next, possibly leading to a discontinuous reconstruction. Assuming that the PSF changes continuously over the image, we use the following approach to avoid the formation of stitching artifacts. We divide the camera image into overlapping tiles, defining a window size of $W$ pixels [here $W = {{410}}\;{\rm{px}}$] and a step, or stride, of $s = {{82}}\;{\rm{px}}$. The stride was set to be comparable with the size of the PSF images and to divide the image into an integer number of tiles. This choice produces a map of $M \times M$ tiles ($M = {{21}}$). Adopting a notation similar to that used in the field of convolutional neural networks, we rearrange the image data into channels (each tile is a channel in this notation), forming a matrix with dimensions $C \times W \times W$. The tiled view of the original image, $T$, has a total of $C = {M^2} = {{441}}$ tiles [Fig. 3(a)]. To match the problem size, we expand the PSF map from $N \times N$ to the same tiled dimension of the image, $M \times M$, by oversampling it via bicubic interpolation. The oversampled PSFs are also rearranged, forming a kernel of dimensions $C \times W^\prime \times W^\prime$, where ${W^\prime}$ indicates the size of each PSF; in our case study, ${W^\prime} = {{93}}\;{\rm{px}}$. In this formalism, each channel of the image is blurred by the corresponding channel of the PSF map, according to

$$T ({c,x,y}) = D ({c,x^\prime ,y^\prime} )*_{({x^\prime ,y^\prime})}{\rm{PSF}}({c,x^\prime ,y^\prime}),$$
where $D$ is the deconvolved tiled image, and the convolution operator acts only on the last two dimensions. With this expanded view of the image $T$ and the PSF, we can formulate the iterative RL deconvolution as
$${D^{i + 1}} = {D^i}\left({\frac{T}{{D^i} *_{({x^\prime ,y^\prime})} {\rm{PSF}}} *_{({x^\prime ,y^\prime})} \widetilde{{\rm{PSF}}}}\right),$$
where ${D^{i + 1}}$ is the deconvolved dataset after $i + 1$ steps and $\widetilde {{\rm{PSF}}}({c,x^\prime ,y^\prime}) = {\rm{PSF}}({c, - {x^\prime}, - y^\prime})$. This formulation closely resembles the action of two convolutional layers (with a division in between) applied depth-wise along the channel direction. For this reason, we implemented the code using convolution functions from the graphics processing unit (GPU)-accelerated PyTorch package, commonly used in machine learning [13]. After the RL deconvolution, we are left with a dataset $D$ (Fig. 3), and, to reconstruct the final image, we untile this dataset. From each tile, we discard an external frame (32 px wide) to remove the region where the RL algorithm produces artifacts. Then, we place each tile back into its corresponding location, averaging the overlapping regions by counting the number of times each pixel was included in the tiled composition. Repeating this procedure over the whole image plane leaves us with the recomposed deconvolved image [Fig. 3(c)]. We compare the results of a standard, isoplanatic (ISO) deconvolution and the proposed region-wise, anisoplanatic (ANI) deconvolution in Fig. 4. For this comparison, we set the same number of iterations (200) for all reconstructions presented in this Letter. In the case of ISO, we use the PSF retrieved at the center of the field of view to deconvolve the whole image. Consequently, the results of ANI and ISO are essentially identical in the image center (data not shown). Both deconvolutions restore an image with better contrast than the acquired one: comparing a detail of the original image [Fig. 4(c)] with the deconvolutions [Figs. 4(e) and 4(f)], a substantial deblurring is observable. However, in the peripheral regions of the field of view, ISO produces artifacts in the image, as typically happens when performing deconvolution with an inaccurate PSF. This is noticeable by comparing the ISO reconstruction [Fig. 4(e)] to the image of the sample acquired without the aberrating glass [Fig. 4(d)]. We observe that the ISO reconstructed image is systematically affected by artifacts: as an example, Fig. 4(e) shows several structures appearing in the proximity of the bright locations that are not present in the biological sample [Fig. 4(d)]. The arrows indicate some locations where this type of artifact is particularly marked. Conversely, the ANI reconstruction provides improved contrast and deblurring without generating structures that are not present in the original specimen. Our deconvolution is inspired by modern concepts in parallel data processing and directly benefits from the performance gains offered by GPUs. Finally, the combination of anisoplanatic deconvolution and adaptive optics [14] could be the key to achieving perfect imaging over extended areas of observation.
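The depth-wise structure of Eq. (2) maps naturally onto grouped 2D convolutions. The sketch below is our own illustration of this idea (not the published code): the tile and stride sizes follow the values quoted in the text, while the initialization, the small epsilon guard, and the helper names are assumptions.

```python
import torch
import torch.nn.functional as F

def tile_image(image, W=410, s=82):
    """Rearrange a 2D image tensor into overlapping W x W tiles with stride s -> (C, W, W)."""
    tiles = image.unfold(0, W, s).unfold(1, W, s)   # (M, M, W, W)
    return tiles.reshape(-1, W, W)

def rl_depthwise(tiles, psfs, n_iter=200, eps=1e-8):
    """Channel-wise Richardson-Lucy update of Eq. (2): one PSF per tile.

    tiles: (C, W, W) tiled image T; psfs: (C, Wp, Wp) per-tile PSFs, each summing
    to one, with odd Wp so that padding="same" keeps the tiles centered.
    """
    C = tiles.shape[0]
    T = tiles.unsqueeze(0)                           # (1, C, W, W)
    k = psfs.unsqueeze(1)                            # (C, 1, Wp, Wp) depth-wise kernels
    k_flip = torch.flip(k, dims=(-2, -1))            # PSF mirrored in x' and y'
    D = T.clone()                                    # initial estimate: the observed tiles
    for _ in range(n_iter):
        # F.conv2d is a cross-correlation, so correlating with the mirrored kernel
        # implements the channel-wise convolution D * PSF of Eq. (1).
        blurred = F.conv2d(D, k_flip, padding="same", groups=C)
        ratio = T / (blurred + eps)                  # T / (D * PSF)
        # Back-projection: convolution with the mirrored PSF = correlation with the PSF.
        D = D * F.conv2d(ratio, k, padding="same", groups=C)
    return D.squeeze(0)                              # (C, W, W) deconvolved tiles
```

Recomposing the image then follows the untiling described above: crop a 32 px frame from each deconvolved tile, add the tiles back at their positions, and divide by the per-pixel count of contributing tiles.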

Fig. 3. Schematic of the ANI deconvolution problem. A tiled view ${{T}}$ of the image ${{t}}$ (on the left side) is deconvolved via iterative RL steps with its corresponding PSF channel. This produces a deblurred tiled image ${{D}}$ (central panel), which is then recomposed to form the final deconvolved image on the right side.



Fig. 4. (a) Acquired and (b) ANI deconvolved image of a mouse kidney microscope slide. Magnified detail [corresponding to the squared inset in panels (a), (b)] of the image acquired in the presence of (c) the glass aberration, (d) ground truth acquired without the aberrating phase plate, (e) deconvolved with ISO, and (f) deconvolved with ANI. Further magnified detail [corresponding to the squared inset in panels (e), (f)] of the reconstruction with (g) ISO, (h) the ground truth, and (i) ANI. Some artifacts created by ISO deconvolution are indicated with the green and light blue arrows. Scale bars are 200 µm in (a), (b) and 50 µm in (c)–(f).


In conclusion, we have proposed a novel method to measure wavefront aberrations by sampling the pupil of an optical imaging system, in particular of a widefield microscope. The method allows reconstructing the wavefront at the different points of the field of view. The SPAM module can be inserted in the detection path of a widefield microscope and does not require external wavefront sensors. It provides PSF reconstruction by imaging the sample under the microscope, without the need for beads or calibration targets. In combination with the wavefront and PSF measurement, we have developed a multiregion deconvolution method that takes the nonisoplanatic PSF into account and carries out a region-dependent iterative deblurring. The aberration measurement, along with the proposed deconvolution, has proven effective in reconstructing fluorescence imaging samples at high resolution, avoiding the artifacts otherwise present in a conventional isoplanatic deconvolution.

Funding

Air Force Research Laboratory (12789919); Office of Naval Research Global (12789919); H2020 Marie Skłodowska-Curie Actions (799230); Laserlab-Europe (871124).

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. R. K. Tyson, Introduction to Adaptive Optics (SPIE, 2000).

2. D. Milkie, E. Betzig, and N. Ji, Opt. Lett. 36, 4206 (2011).

3. B. Vohnsen, A. C. Martins, S. Qaysi, and N. Sharmin, Appl. Opt. 57, E199 (2018).

4. J. Mertz, H. Paudel, and T. G. Bifano, Appl. Opt. 54, 3498 (2015).

5. M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging (CRC Press, 2020).

6. R. Ragazzoni, E. Marchetti, and G. Valente, Nature 403, 54 (2000).

7. J. Thaung, P. Knutsson, Z. Popovic, and M. Owner-Petersen, Opt. Express 17, 4454 (2009).

8. L. Denis, E. Thiébaut, F. Soulez, J. M. Becker, and R. Mourya, Int. J. Comput. Vis. 115, 253 (2015).

9. R. Turcotte, E. Sutu, C. C. Schmidt, N. J. Emptage, and M. J. Booth, Biomed. Opt. Express 11, 4759 (2020).

10. M. B. Hisham, S. N. Yaakob, R. A. A. Raof, A. B. A. Nazren, and N. M. W. Embedded, in IEEE Student Conference on Research and Development (SCOReD) (IEEE, 2015), p. 100.

11. R. C. Flicker and F. J. Rigaut, J. Opt. Soc. Am. A 22, 504 (2005).

12. W. H. Richardson, J. Opt. Soc. Am. 62, 55 (1972).

13. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, in Conference on Neural Information Processing Systems (NeurIPS) (2019).

14. P. Pozzi, C. Smith, E. Carroll, D. Wilding, O. Soloviev, M. Booth, G. Vdovin, and M. Verhaegen, Opt. Express 28, 14222 (2020).

Supplementary Material

Supplement 1: Supplementary material.
Visualization 1: Spinning pupil aberration measurement.
