Optica Publishing Group

Astigmatism-based active focus stabilisation with universal objective lens compatibility, extended operating range and nanometer precision

Open Access

Abstract

Focus stabilisation is vital for long-term fluorescence imaging, particularly in the case of high-resolution imaging techniques. Current stabilisation solutions either rely on fiducial markers that can be perturbative, or on beam reflection monitoring that is limited to high-numerical aperture objective lenses, making multimodal and large-scale imaging challenging. We introduce a beam-based method that relies on astigmatism, which offers advantages in terms of precision and the range over which focus stabilisation is effective. This approach is shown to be compatible with a wide range of objective lenses (10x-100x), typically achieving <10 nm precision with >10 μm operating range. Notably, our technique is largely unaffected by pointing stability errors, which in combination with implementation through a standalone Raspberry Pi architecture, offers a versatile focus stabilisation unit that can be added onto most existing microscope setups.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Corrections

10 April 2024: A typographical correction was made to an author name.

1. Introduction

Optical microscopy typically relies on imaging of samples positioned at the focal plane of the microscope objective lens. Real-time maintenance of the in-focus imaging plane is crucial due to potential axial drift caused by external factors. These include mechanical instabilities and thermal influences [1] that are particularly problematic for long-term imaging, where instabilities can degrade the imaging quality [2]. With the development of super-resolution microscopy techniques that achieve nanoscale imaging [3–7], it is vital to implement active focus stabilisation, especially in the case of high-throughput automated microscopy [8–12].

Numerous focus stabilisation approaches have been developed for applications in optical microscopy [13,14]. Building on these, several commercial systems have been designed that can maintain focus during imaging [15–18]. However, their limited compatibility with custom-built microscopes, and their significant cost, has hampered their adoption. This has resulted in a variety of custom approaches that can be classified as: (i) marker-based [19–21] or (ii) beam-based methods (Fig. 1(a)) [22–27]. In the marker-based approach, the fluorescence or scattering from bright particles is collected through a dedicated channel distinct from the imaging channel [28,29]. By monitoring these fiducial markers, their drift can be corrected. Recently, this approach has enabled fast active 3D-focus stabilisation by integrating GPU processing [30]. However, the introduction of fiducial markers (i) is not always possible depending on the sample; (ii) has the potential to influence the sample of interest; and (iii) adds an additional experimental step. By using a beam-based approach instead, these issues can be avoided.


Fig. 1. Axial drift estimation using astigmatism. (a) Schematic and characteristics of common focus stabilisation techniques including PiFocus presented in this work. (b) Selected slices from a z-scan of the astigmatic PSF (scale bar = 40 pixels). (c) Variation of beam widths of $\sigma _{x}$ (blue line) and $\sigma _{y}$ (red line) as a function of z-position. (d) Calibration curve of focal drift estimation using $\sigma _{x} - \sigma _{y}$. (e) Time-lapse acquisition at z = 0 $\mu$m for estimating axial precision.


With the beam-based approach, it is common to monitor the reflection of an infrared laser from the coverslip-sample interface. Among beam-based methods, the triangular geometry or total internal reflection (TIR) has been widely adopted, and methods that incorporate open hardware and comprehensive documentation have been developed [31]. Nevertheless, current approaches share limitations that can be summarised as: (i) relatively low sampling rates (pgFocus: 30 Hz [32]; openFrame: 4 Hz [33]); (ii) compatibility restricted to high numerical aperture (NA) objective lenses; (iii) electronic components requiring soldering and custom circuit boards and (iv) the need to place the focus stabilisation module close to the objective lens, as the performance is limited by laser pointing stability [34]. Alternative beam-based methods have recently been introduced, e.g. parallel beam splitting by a double-hole mask [35]. However, these non-TIR approaches still suffer from low precision in axial drift estimation and stabilisation [36–40]. An ideal focus stabilisation system that would improve on existing iterations would be: (i) standalone, (ii) open-source, (iii) low cost, (iv) fiducial-free and (v) would offer large range and nanoscale precision with active stabilisation.

A promising solution involves applying astigmatism for focus stabilisation [41,42] using a reflected beam from the coverslip-sample interface. Its widespread application may have been impeded by the interference patterns generated from reflective surfaces due to the use of coherent laser sources [43]. However, this issue has recently been resolved in the scattering microscopy field [44,45] and in a recent focus stabilisation method [33].

Here we present PiFocus, a versatile focus stabilisation method implemented on Raspberry Pi, based on monitoring the astigmatic reflection from the coverslip-sample interface. We demonstrate that this method enables <10 nm axial drift control with extended operating range (>10 $\mu$m). Excitingly, it becomes possible to tune the operating range and precision such that $\sim$20 nm precision can be achieved even with low-magnification (10x), low-NA objective lenses. Ultimately, PiFocus is implemented as a low-cost (<£2,000) standalone system that can be added on to most microscopes, with advantages such as ease of application, wide objective lens compatibility and versatility.

2. Results

2.1 Astigmatic-based drift estimation with nanometre precision

Our proposed focus stabilisation approach relies on determining the axial position of the sample by inducing astigmatism in the laser beam back-reflected from the coverslip-sample interface. PiFocus was initially implemented using a 100x 1.35-NA silicone-immersion objective lens and astigmatism was induced with a cylindrical lens (f = 500 mm) placed in front of the camera (Fig. 1(a)). The point-spread-function (PSF) is symmetric when the sample is in focus and becomes elliptical when defocused (Fig. 1(b)). By measuring the beam widths in x and y (Fig. 1(c)), here referred to as $\sigma _{x}$ and $\sigma _{y}$, the axial sample position and the direction of its displacement can be found based on a calibration scan (Fig. 1(d), Visualization 1). The calibration scan is performed by moving the sample in distinct z steps and, for each step, determining the difference between $\sigma _{x}$ and $\sigma _{y}$. This calibration is not sample-dependent and only needs to be performed once for a specific objective lens and cylindrical lens combination. Note, however, that variations in coverslip thickness may necessitate objective lens collar correction depending on the sample being imaged. Collar correction does not affect sensitivity, but would influence absolute axial positioning.
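
The width measurement described above can be sketched in a few lines of Python: project the camera frame onto each axis and fit the two 1D profiles with Gaussians. This is a minimal illustration using a synthetic spot, not the PiFocus code itself; `gauss`, `beam_widths` and the test image are our own stand-ins.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma, offset):
    """1D Gaussian with a constant background."""
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

def beam_widths(frame):
    """Project a 2D spot onto x and y and fit each 1D profile with a
    Gaussian, returning (sigma_x, sigma_y) in pixels."""
    widths = []
    for axis in (0, 1):  # axis=0 averages rows -> x profile; axis=1 -> y profile
        profile = frame.mean(axis=axis)
        coords = np.arange(profile.size)
        p0 = [profile.max() - profile.min(), np.argmax(profile),
              profile.size / 10, profile.min()]
        popt, _ = curve_fit(gauss, coords, profile, p0=p0)
        widths.append(abs(popt[2]))
    return tuple(widths)

# Synthetic elliptical (defocused) spot standing in for a camera frame
yy, xx = np.mgrid[0:128, 0:128]
spot = np.exp(-(xx - 64.0) ** 2 / (2 * 8 ** 2) - (yy - 64.0) ** 2 / (2 * 5 ** 2))
sigma_x, sigma_y = beam_widths(spot)
```

On this synthetic spot the fit recovers widths close to the 8 px and 5 px used to generate it, and the sign of $\sigma_x - \sigma_y$ indicates the direction of defocus.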

We used a multimode infrared laser (808 nm) to ensure minimal overlap with common fluorescence channels. The multimode laser provided a broadband beam (coherence length of 32 $\mu$m and bandwidth 9 nm), which avoids the interference pattern that would otherwise be created due to secondary reflections in the beam path [33,44]. Interference fringes from reflections due to the use of a coherent single-mode laser source and their effect on the calibration curve can be seen in Supplement 1, Fig. S1 and Visualization 2. The output of the multimode laser diode was delivered through a single-mode fibre to provide a Gaussian beam that could be shaped for axial drift estimation. Some experiments (TIR, 40x and 10x) used a 638 nm multimode laser instead for simpler alignment, but the results were identical (Supplement 1, Fig. S2).

To determine the precision of this approach, a time-lapse (100 Hz) acquisition was collected at a specific axial position (z = 0 $\mu$m). The duration of the time-lapse was typically a few seconds to minimise drift. The uncertainty of $\sigma _{x} - \sigma _{y}$ over the time-lapse was then converted to nanometres using the calibration (Fig. 1(d)) to estimate the precision of the approach (Fig. 1(e)), which was found to be $\sigma _{z}$ = 2.0 nm for these conditions. Note that this precision is valid at z = 0; we observe an approximately two-fold increase in this value towards the limits of the defined operating range, as is common for astigmatic localisation microscopy [46].
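
Converting the time-lapse spread of $\sigma_x - \sigma_y$ into an axial precision only requires the local slope of the calibration curve. The sketch below uses simulated measurements and an assumed slope, standing in for the measured values in Fig. 1(d):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated time-lapse of sigma_x - sigma_y at z = 0, in pixels
delta_sigma = rng.normal(loc=0.0, scale=0.15, size=1000)

# Assumed local slope of the calibration curve around z = 0 (px per micron)
slope_px_per_um = 77.1

# Precision: standard deviation in pixels divided by the slope, in nm
sigma_z_nm = 1e3 * delta_sigma.std() / slope_px_per_um
```

The steeper the calibration (i.e. the stronger the astigmatism), the smaller the resulting $\sigma_z$ for the same pixel-level noise.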

2.2 Tuning the focus stabilisation operating range

We next investigated the ability of PiFocus to improve the axial range over which focus stabilisation remains operational, by evaluating different cylindrical lenses and configurations (Fig. 2(a)). Calibration curves and time-lapse datasets were acquired to determine the operating range and precision for each condition (Fig. 2(b)). For the 100x 1.35-NA silicone-immersion objective lens, it was possible to increase the operating range from 2.8 $\mu$m to 20.6 $\mu$m by changing the focal length of the cylindrical lens from 1000 mm to 300 mm. However, this came at the cost of sensitivity, and thus precision, which changed from 1.5 nm to 12.7 nm. This means that the performance of the system can be matched to the intended application by tuning the extent of astigmatism.


Fig. 2. Extending the operating range. (a) Different lens configurations for PiFocus. (b) Calibration using different cylindrical lenses (CL) with 100x 1.35-NA silicone-immersion lens. (c) Calibration curve for two orthogonal cylindrical lenses.


An alternative strategy to further extend the operating range was explored by employing a combination of two cylindrical lenses (Config. B) rather than a CL-TL configuration (Config. A). Results revealed that while this approach greatly increased the operating range to >100 $\mu$m, it introduced non-linear behaviour in the resulting calibration curve. As the separation between the focal planes is increased, the non-linearity increases, which needs to be taken into account for scanning applications.

2.3 Compatibility with varying NA objective lenses

Conventional TIR-based focus stabilisation, like most beam-based methods, is largely limited to high-NA objective lenses, which are needed to provide sufficient sensitivity and signal. However, the ability to switch between objective lenses of varying magnifications and NAs while maintaining focus is important to successfully perform multimodal experiments. Furthermore, certain applications require high axial stability but can only be performed with low-NA objective lenses. We therefore characterised the performance of PiFocus for a wide variety of commonly used objective lenses. We performed axial drift estimation experiments for each objective lens and demonstrated >10 $\mu$m operating range with <21 nm precision for all lenses (Fig. 3(a)). The results are summarised in the table shown in Fig. 3(c). These results demonstrate that PiFocus offers a universal solution to focus stabilisation, which is critical for microscopes that employ multiple objective lenses with varying NA. The table also includes different cylindrical lens combinations, demonstrating the tunability of astigmatism-based focus stabilisation.


Fig. 3. PiFocus offers universal lens compatibility and is superior to TIR for low NA. (a) Calibrations for different objective lenses with varying magnification, NA and immersion media using CL500. (b) Calibration curve for TIR-based stabilisation performed with 60x 1.49-NA oil-immersion (orange) and 10x 0.25-NA (green) objective lenses. (c) Summary table of all results including comparison between PiFocus and TIR. (M) Magnification, (I.M) Immersion medium, (WD) Working distance, (OR) operating range.


2.4 Comparison with TIR-based focus stabilisation

To demonstrate the universality of PiFocus, we compared its performance to the commonly used TIR-based approach. We implemented TIR stabilisation on the same microscope and determined operating range and precision. The calibrations for the 60x 1.49-NA oil-immersion and 10x 0.25-NA objective lenses are shown in Fig. 3(b). The limitations of the TIR-based approach become evident when considering that the sensitivity drops approximately 30-fold upon changing to the low-magnification objective lens, resulting in a precision of 322 nm and an operating range of 50 $\mu$m (Fig. 3(c), Visualization 3). Conversely, PiFocus achieves a superior 20.5 nm precision with an operating range of 65 $\mu$m. This can be explained by considering that estimation of the main parameters $\sigma _{x}$ and $\sigma _{y}$ is insensitive to laser pointing stability. As the beam travels $\sim$2 m to reach the detector, imperfect pointing stability leads to large long-term variations in $x_{c}$ due to the instability and vibration of optical components. This also explains why the performance of our TIR implementation is worse than astigmatism (Fig. 3(b)) and other similar TIR approaches, and demonstrates the advantage of being able to implement astigmatism-based focus stabilisation far away from the sample. Theoretical assessment of the calibration curve for the TIR-based approach [14] indicates that the sensitivity scales linearly with the magnification and NA of the objective lens being employed. Based on this theory, the sensitivity ratio of the calibration curves for the 60x 1.49-NA oil-immersion and 10x 0.25-NA objective lenses is 36, which agrees well with the experimental value of 33 from our TIR calibration results (Fig. 3(c)). The anticipated sensitivity values for the calibration curves of the 100x 1.35-NA silicone-, 60x 1.49-NA oil-, 60x 1.27-NA water-immersion, 40x 0.60-NA and 10x 0.25-NA air objective lenses are 77.10, 51.57, 43.96, 18.27 and 1.43 $px/\mu m$, respectively.
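
The quoted theoretical ratio follows directly from the linear scaling of sensitivity with magnification and NA; as a quick check (a sketch of the scaling argument, not the authors' code):

```python
def sensitivity_ratio(mag_a, na_a, mag_b, na_b):
    """Ratio of TIR calibration sensitivities, assuming sensitivity is
    proportional to magnification x NA [14]."""
    return (mag_a * na_a) / (mag_b * na_b)

# 60x/1.49-NA versus 10x/0.25-NA, as compared in the text
ratio = sensitivity_ratio(60, 1.49, 10, 0.25)  # 89.4 / 2.5 = 35.76, i.e. ~36
```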

2.5 PiFocus stabilisation

We implemented PiFocus on a Raspberry Pi 4B architecture (Fig. 4(a)) and custom-written Python scripts were used for image acquisition, calibration and focus stabilisation. The Raspberry Pi interfaced with a piezo z-stage that enabled feedback control of z-position. While in operation, PiFocus continuously acquires images and performs real-time axial drift estimation. Proportional feedback control is then applied to correct the focal position using the piezo stage. To achieve real-time processing at reasonable sampling rates, it was necessary to project images in the x and y dimensions respectively and use 1D Gaussian fitting [42]. With this approach, PiFocus was able to maintain the astigmatic beam shape at >100 Hz. By utilising a bypass voltage signal (Fig. 4(a)) from MicroManager to the Raspberry Pi through an Arduino Uno, PiFocus can perform z-scanning during focus stabilisation.
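
A minimal sketch of one iteration of the proportional control loop described above is shown below; `measure_delta_sigma`, `FakeStage` and the gain and slope values are hypothetical stand-ins for the camera fit and the piezo driver, not the PiFocus scripts themselves.

```python
class FakeStage:
    """Stand-in for the piezo stage, tracking z in microns."""
    def __init__(self, z0=0.2):  # start 200 nm out of focus
        self.z = z0
    def move_relative_z(self, dz):
        self.z += dz

def stabilise_step(measure_delta_sigma, stage, slope_px_per_um,
                   setpoint=0.0, k_p=0.5):
    """One iteration of proportional focus control.

    slope_px_per_um: local calibration slope, in px of
    (sigma_x - sigma_y) per micron of defocus.
    """
    error_px = measure_delta_sigma() - setpoint
    error_um = error_px / slope_px_per_um   # pixel error -> microns
    stage.move_relative_z(-k_p * error_um)  # proportional correction

slope = 77.1  # px/um, illustrative
stage = FakeStage()
# Ideal sensor model: delta-sigma tracks the stage position exactly
for _ in range(20):
    stabilise_step(lambda: slope * stage.z, stage, slope)
```

With this ideal sensor model, repeated iterations converge the simulated stage back towards focus, mirroring the behaviour of the feedback loop; in practice the gain must be chosen with the stage response and noise in mind.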


Fig. 4. Active focus stabilisation using PiFocus. (a) Schematic of PiFocus hardware implementation. (b) Axial sample drift monitored by PiFocus. (c) Axial drift with stabilisation ON (red) and OFF (black) monitored by imaging fluorescent beads using astigmatic 3D localisation.


Finally, we demonstrated focus stabilisation in a realistic imaging experiment. PiFocus was activated while imaging fluorescent nanoparticles using a fluorescence imaging path that employed astigmatic 3D localisation microscopy (4.4 nm precision, Fig. S3). The astigmatic reference beam, with a precision of 1.9 nm, was also captured (Fig. 4(b)). We compared the axial drift of the fluorescent particles with PiFocus on and off (Fig. 4(c)). Without PiFocus, significant drift (>600 nm over 40 min) was observed. Conversely, with PiFocus activated, the drift was kept below 5 nm, which is at the limit of the axial precision of the fluorescence measurements. These results confirm that our drift estimation experiments represent the axial response of the imaging system. Estimating drift using fiducial markers is ideal for quantifying the ability of a focus stabilisation approach to correct for axial drift: as opposed to using only the control signal itself, the fluorescence channel provides a realistic estimate of the drift that would manifest in any cellular imaging experiment.

3. Discussion

We have devised and characterised an active focus stabilisation system, based on real-time monitoring of an astigmatic laser beam reflected from the coverslip-sample interface. Depending on the selected operating range and objective lens, the system achieves sub-10 nm precision in axial drift estimation and correction. This performance is similar to previous TIR-based solutions, although there are a few important differences: (i) We observe good performance ($\sim$20 nm precision) even for low-NA objective lenses, as astigmatism does not require a large NA to achieve high sensitivity. (ii) Our approach is not affected by long-term pointing stability variations present in TIR methods, as we use the width rather than the position of the beam. This means that there is no need to keep the focus stabilisation module close to the objective lens, as is usually the case for TIR-methods [34] and we demonstrate excellent stability even over extended distances ($\sim$2 m from the fibre output to the beam-monitoring camera). (iii) While the operating regime in TIR can be tuned, our approach achieves a significantly higher precision for a given range. (iv) Our implementation is carried out on a Pi architecture that would also work for a TIR approach and can be independently added on to most existing microscopes.

There are, however, limitations to our approach. The precision achieved is limited by vibrations, background and camera noise, and astigmatism inherently requires the use of a camera. It is also necessary to fit a Gaussian function to the data, which is slower than certain centroid methods used to find the beam position. However, we achieve >100 Hz update rates on a relatively slow Raspberry Pi platform, which clearly offers scope for faster CPUs or GPU-acceleration if higher update rates are required. We explored alternative analysis methods based on image moments that were slightly faster; however, we found that these were much more susceptible to background fluctuations and thus discarded them.

We demonstrated the tunability of the operating range for focus stabilisation. By using cylindrical lenses with different focal lengths, we were able to investigate the fundamental trade-off between calibration sensitivity and operating range. As the operating range increases and the separation between the two focal planes grows, the differing effective NAs give rise to non-linear behaviour that ultimately affects precision. For deployment on a super-resolution microscope requiring nanometre precision, opting for mild astigmatism with an operating range of $\sim$10 $\mu$m is ideal. Conversely, for imaging larger organisms such as zebrafish, a larger separation between the focal planes can be employed to extend the operating range beyond 100 $\mu$m, albeit with reduced precision exceeding 40 nm and a potentially non-linear calibration. It should be noted that the operating range and/or precision can be further improved by using tunable or translating lenses.

There are similarities between our approach and the recently published openFrame solution [33]. The main difference is that openFrame applies very mild astigmatism near the laser launch position, which is suitable for large operating ranges. Ultimately, openFrame achieves a precision of 35 nm and a refresh rate of 4 Hz, whereas our approach achieves a precision down to 2 nm and a refresh rate of 100 Hz, which is more suitable for high-resolution applications.

Our devised solution was implemented on a Raspberry Pi system, providing the capability of serving as an independent focus stabilisation unit on most microscopes, which can be activated with a physical button or via software. The entire system can be implemented for approximately £1,800 (Supplement 1, Table S1-2), not including a piezo stage. If an objective piezo scanner is required, it can be sourced for about £2,000. The use of a Raspberry Pi is also advantageous as it avoids the difficulty associated with designing and fabricating a bespoke circuit board, as currently required for solutions such as pgFocus [31]. While we chose a Raspberry Pi platform here, the astigmatism-based focus stabilisation used in PiFocus could also be integrated with MicroManager, similar to the solution provided for openFrame.

4. Methods

4.1 Optical setup

A diagram of the optical setup is shown in Supplement 1, Fig. S4. The focus stabilisation system used a 300 mW, 808 nm multimode laser diode (OFL311, OdicForce Lasers) with 9 nm bandwidth. The output beam of this laser was collimated with a triplet lens assembly (OFL177, OdicForce) and the beam was then coupled into a 780-970 nm single-mode fibre (P1-780A-FC-1, Thorlabs) using a 10x 0.25 NA objective lens (Olympus), providing an output power of 2.6 mW. Some experiments (TIR, 40x and 10x) also used a 700 mW, 638 nm multimode laser diode (OFL195-U, OdicForce), although the results were identical for the two lasers (Fig. S2). The output of this fibre was collimated using a short focal length lens (f = 40 mm, AC254-040-B, Thorlabs). An adjustable iris (ID25SS/M, Thorlabs) was used to set the beam size. The collimated beam, with a 5 mm diameter, then passed through a 50/50 cube beam splitter (CCM1-BS014/M, Thorlabs). A polarising beam splitter could be used instead to increase signal and to minimise reflections. The beam entered the imaging path by being reflected from the NIR shortpass dichroic (#69-196, Edmund Optics) towards the back aperture of the microscope objective lens. The laser beam was then focused by the imaging objective lens onto the top of the #1.5 glass coverslip. The back-reflected light from the coverslip-sample interface returned to the beam splitter and was partially reflected onto the camera detector. The reflection was filtered by a NIR longpass filter (FELH0800, Thorlabs) and passed through a cylindrical lens (with varying focal lengths), which was positioned after the beam splitter, 40 mm before the tube lens (f = 200 mm, TTL200-B, Thorlabs). The camera was placed at the centre of the two focal points created by the cylindrical lens and the tube lens. Two cameras were explored: a low-cost ($\sim$£60) high-speed 10-bit Raspberry Pi camera (Arducam OV9782) and a medium-cost ($\sim$£300) astronomy camera based on the IMX290 sensor (ZWO ASI290MM). Both cameras performed similarly in terms of speed and precision, although the ZWO camera was more versatile and offered higher bit depth. In an alternative configuration, the tube lens was replaced by a second orthogonal cylindrical lens to attempt to minimise non-linear behaviour in the calibration curves.

The focus stabilisation unit was integrated into a custom-built fluorescence microscope capable of performing epi-, total internal reflection (TIR) [47], and highly inclined and laminated optical sheet (HiLo) [48] excitation for imaging. A three-axis nanopositioning piezo stage (SLC-1780-D-S, SmarAct), equipped with integrated sensors for closed-loop positioning, was used to control the sample position [34].

4.2 Fluorescence setup

For fluorescence excitation, a 4.5 mW 639 nm collimated laser module (PL252, Thorlabs) was used, which was passed through an excitation filter and focused onto the back focal plane of the objective lens using an achromatic lens (AC254-200-A-ML, Thorlabs). With a mirror and the lens on a translational stage, we were able to switch from widefield to HiLo and TIR excitation by translating the laser beam toward the periphery of the back focal plane of the objective lens. The excitation laser beam was then delivered to the objective lens using a quadband dichroic mirror (Di03-R405/488/561/635-t3-25x36, Semrock). The light emitted by the fluorescent sample was collected by the objective lens and passed through the dichroic mirror; an emission filter (FELH0650, Thorlabs) and a bandpass filter (FF01-609/181-25, Semrock) were used to block the NIR beam. A tube lens (TTL200-A, f = 200 mm, Thorlabs) was finally used to create an image of the focal plane on a CMOS camera (ASI485MC, ZWO).

4.3 Calibration and precision measurements

For calibration, a z-image stack was acquired by scanning the stage in the axial direction and capturing an image for each step to form the calibration stack. The z-stack scan range and step size were chosen to ensure that the entire range of astigmatism was covered, encompassing the full range of potential focal plane deviations. In each slice, the average intensity profiles in x and y (the 2D image projected onto 1D curves) were calculated. These profiles were then fitted with a Gaussian function to determine $\sigma _{x}$ and $\sigma _{y}$, establishing a calibration curve. This curve maps the relationship between the axial position of the sample and the corresponding beam shape, which is crucial for the focus stabilisation process. A calibration was only performed once for each objective lens/cylindrical lens combination. The calibration curve does not change significantly over time, unless adjustments are made with the correction collar. Even then, the sensitivity, and thus relative movements, remains unchanged; for absolute positioning, however, a new calibration would have to be carried out.
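
The mapping from beam shape to axial position can be sketched as a polynomial fit of z against $\sigma_x - \sigma_y$, which can then be evaluated on live measurements. The values below are synthetic, with a mild cubic term standing in for the non-linearity discussed in Section 2.2:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Synthetic calibration scan: z steps and a mildly non-linear response
z_um = np.linspace(-1.0, 1.0, 21)
delta_sigma = 77.1 * z_um + 0.02 * z_um ** 3   # sigma_x - sigma_y, in px

# Fit z as a function of delta_sigma, so a live measurement maps
# directly to an axial position
coeffs = P.polyfit(delta_sigma, z_um, deg=3)

def z_from_measurement(ds):
    """Estimate the axial position (um) from a measured sigma_x - sigma_y."""
    return P.polyval(ds, coeffs)

# A measurement taken at z = 0.5 um should map back to ~0.5 um
z_est = z_from_measurement(77.1 * 0.5 + 0.02 * 0.5 ** 3)
```

Fitting the inverse relation directly (z as a function of the measurement) avoids having to invert the polynomial at run time.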

To determine the precision of the z-position estimation, a time-lapse image stack was acquired with 200 Hz frame rate for 100 seconds. The precision was then calculated as the standard deviation of $\sigma _{y} - \sigma _{x}$. Data acquisition and instrument control were performed using a custom-written Python script as we show in Code 1, Ref. [49]. Data analysis and processing were performed with scripts written in Python. All scripts are provided on Zenodo (Ref. [49]).

4.4 Fiducial benchmark

To evaluate the ability of the focus stabilisation unit to maintain focus, we monitored the position of a fluorescent nanoparticle in 3D over time. We used 200 nm FluoSpheres (F8807, Thermo Fisher) as the fiducial marker [50]. To acquire astigmatic single-molecule images of the fluorescent beads, we introduced weak astigmatism by placing a long focal length cylindrical lens (f = 1000 mm, LJ1516RM-A, Thorlabs) in front of the imaging camera. We acquired a 3 $\mu$m z-stack to generate a calibration curve (Supplement 1, Fig. S3) for the beads. After activating PiFocus, we started a long time-lapse acquisition (1 hr) of the selected bead with 40 ms exposure time. We then turned the stabilisation system off and captured another long time-lapse acquisition (1 hr). The image acquisition was performed using MicroManager 2.0. The calibration stack and time-lapses were analysed using PeakFit [51] to extract $\sigma _{y} - \sigma _{x}$ for z-position and precision estimations, as done for the focus stabilisation.

4.5 Raspberry Pi implementation

The focus stabilisation system was implemented on a Raspberry Pi 4 Model B (OKdo 4GB starter kit). The Arducam was connected through the Raspberry Pi camera flex cable and the ZWO and the piezo stage driver were connected to the USB 3.0 port of the Raspberry Pi board. Focus stabilisation was achieved through Python scripts as we show in Code 1, Ref. [49]. A polynomial, corresponding to the calibration for the given objective lens, is used to estimate the current and target positions. For each frame, fitting is performed as described previously to determine $\sigma _{y} - \sigma _{x}$. The offset from the target position is then calculated based on the calibration and proportional control is applied to maintain focus. In our case we had direct USB control over the piezo stage. For more conventional solutions, the piezo could be controlled using a DAC voltage signal, that can be achieved using a 16-bit DAC (AD5693, Adafruit) and we have included code for this. To enable z-scanning during focus stabilisation, we used an Arduino Uno that interfaced directly with Micromanager. The Arduino outputs a bypass voltage through the AD5693 16-bit DAC that is read by a 16-bit ADC (ADS1115, Adafruit) on the Raspberry Pi. When a change in the bypass voltage is detected, the piezo stage is moved to the new target position and the $\sigma _{y} - \sigma _{x}$ setpoint is modified based on the calibration curve to maintain stabilisation during scanning.
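
The bypass logic can be sketched as follows; `read_bypass_voltage`, `FakeStage` and the voltage-to-position scaling are hypothetical stand-ins for the ADS1115 read-out and the piezo driver, and the linear calibration is illustrative:

```python
import numpy as np

VOLTS_PER_UM = 0.1             # assumed DAC scaling: 0.1 V per micron
calib = np.array([77.1, 0.0])  # delta_sigma (px) = 77.1 * z + 0, illustrative

def on_bypass_change(read_bypass_voltage, stage, state, tol_v=0.005):
    """Poll the bypass line; on a change, move the stage to the requested
    position and shift the delta-sigma setpoint via the calibration, so
    stabilisation continues during the scan."""
    volts = read_bypass_voltage()
    if abs(volts - state["last_v"]) > tol_v:
        z_target = volts / VOLTS_PER_UM          # requested position, microns
        stage.move_absolute_z(z_target)
        state["setpoint"] = np.polyval(calib, z_target)
        state["last_v"] = volts
    return state

class FakeStage:
    """Stand-in for the piezo stage."""
    def __init__(self):
        self.z = 0.0
    def move_absolute_z(self, z):
        self.z = z

stage = FakeStage()
state = {"last_v": 0.0, "setpoint": 0.0}
# Simulate MicroManager requesting a move to 5 um (0.5 V on the bypass line)
state = on_bypass_change(lambda: 0.5, stage, state)
```

The tolerance guards against ADC noise retriggering moves; in the real system the setpoint would come from the measured calibration polynomial rather than this illustrative linear one.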

Funding

Royal Society (RGS\R2\202446); Academy of Medical Sciences (SBF006\1138); Wolfson Foundation; Engineering and Physical Sciences Research Council (EP/T517860/1).

Acknowledgments

The authors would like to thank summer student Robert Elliott, who set up the preliminary camera implementation on Raspberry Pi. We would also like to thank Karl Bellvé for helpful discussions on hardware and Michelle Peckham for providing helpful comments on our results. This work was funded by a University of Leeds University Academic Fellowship, a Royal Society Research Grant (RGS$\backslash$R2$\backslash$202446), as well as an AMS Springboard Award (SBF006$\backslash$1138), awarded to A.P., and by an EPSRC Bragg Centre Doctoral Training Studentship (EP/T517860/1) awarded to A.R. We would also like to acknowledge a Funding for Places award from The Wolfson Foundation, which has funded the Wolfson Imaging Facility at the University of Leeds.

Author Contributions. A.P. conceptualised, supervised and administered the project. A.R., T.C., A.T.A.A. and A.P. developed the calibration methodology, conducted experiments and analysed data. A.R. designed and implemented the optical setup. A.R. and A.P. carried out the nanoparticle fluorescence measurements. A.R., T.C. and A.P. developed Python code for analysis and focus stabilisation. A.R. and A.P. wrote the manuscript. All authors reviewed and edited the manuscript.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are openly available in Ref. [49].

Supplemental document

See Supplement 1 for supporting content.

References

1. B. Neumann, A. Dämon, D. Hogenkamp, et al., “A laser-autofocus for automatic microscopy and metrology,” Sens. Actuators 17(1-2), 267–272 (1989). [CrossRef]  

2. S. A. Jones, S.-H. Shim, J. He, et al., “Fast, three-dimensional super-resolution imaging of live cells,” Nat. Methods 8(6), 499–505 (2011). [CrossRef]  

3. T. Orré, A. Joly, Z. Karatas, et al., “Molecular motion and tridimensional nanoscale localization of kindlin control integrin activation in focal adhesions,” Nat. Commun. 12(1), 3104 (2021). [CrossRef]  

4. P. Jouchet, C. Cabriel, N. Bourg, et al., “Nanometric axial localization of single fluorescent molecules with modulated excitation,” Nat. Photonics 15(4), 297–304 (2021). [CrossRef]  

5. M. Bates, J. Keller-Findeisen, A. Przybylski, et al., “Optimal precision and accuracy in 4Pi-STORM using dynamic spline PSF models,” Nat. Methods 19(5), 603–612 (2022). [CrossRef]  

6. E. W. Sanders, A. R. Carr, E. Bruggeman, et al., “resPAINT: Accelerating volumetric super-resolution localisation microscopy by active control of probe emission,” Angew. Chem. Int. Ed. 61(42), e202206919 (2022). [CrossRef]  

7. S. C. M. Reinhardt, L. A. Masullo, I. Baudrexel, et al., “Ångström-resolution fluorescence microscopy,” Nature 617(7962), 711–716 (2023). [CrossRef]  

8. M. Klevanski, F. Herrmannsdoerfer, S. Sass, et al., “Automated highly multiplexed super-resolution imaging of protein nano-architecture in cells and tissues,” Nat. Commun. 11(1), 1552 (2020). [CrossRef]  

9. R. Diekmann, M. Kahnwald, A. Schoenit, et al., “Optimizing imaging speed and excitation intensity for single-molecule localization microscopy,” Nat. Methods 17(9), 909–912 (2020). [CrossRef]  

10. A. Millett-Sikking, “Any immersion remote refocus (AIRR) microscopy,” Zenodo (2022). https://zenodo.org/doi/10.5281/zenodo.7425649.

11. A. E. S. Barentine, Y. Lin, E. M. Courvan, et al., “An integrated platform for high-throughput nanoscopy,” Nat. Biotechnol. 41(11), 1549–1556 (2023). [CrossRef]  

12. R. M. Power, A. Tschanz, T. Zimmermann, et al., “Automated 3D multi-color single-molecule localization microscopy,” bioRxiv, bioRxiv:2023.10.23.563122 (2023).

13. G. Reinheimer, “Anordnung zum selbsttätigen fokussieren auf in optischen geräten zu betrachtende objekte,” German Patent (1971).

14. Q. Li, “Autofocus system for microscope,” Opt. Eng. 41(6), 1289 (2002). [CrossRef]  

15. Nikon, “Perfect Focus System,” https://www.nikonusa.com/fileuploads/pdfs/Instruments/TE2000_PFS.pdf.

16. Zeiss, “Definite Focus,” https://www.imdik.pan.pl/images/Us%C5%82ugi_dokumenty/Przydatne_informacje/2d_Definite_Focus_information.pdf.

17. Leica Microsystems, “Leica Adaptive Focus Control,” https://www.leica-microsystems.com/products/light-microscopes/p/leica-dmi6000-with-adaptive-focus-control/.

18. Olympus, “TruFocus: Z Drift Compensator,” https://www.olympus-lifescience.com/en/microscopes/inverted/ix83/trufocus/.

19. W. Hwang, S. Bae, and S. Hohng, “Autofocusing system based on optical astigmatism analysis of single-molecule images,” Opt. Express 20(28), 29353 (2012). [CrossRef]  

20. P. Bon, N. Bourg, S. Lécart, et al., “Three-dimensional nanometre localization of nanoparticles to enhance super-resolution microscopy,” Nat. Commun. 6(1), 7764 (2015). [CrossRef]  

21. R. Schmidt, T. Weihs, C. A. Wurm, et al., “MINFLUX nanometer-scale 3D imaging and microsecond-range tracking on a common fluorescence microscope,” Nat. Commun. 12(1), 1478 (2021). [CrossRef]  

22. Y. Liron, Y. Paran, N. G. Zatorsky, et al., “Laser autofocusing system for high-resolution cell biological imaging,” J. Microsc. 221(2), 145–151 (2006). [CrossRef]  

23. P. Kner, J. Sedat, D. Agard, et al., “High-resolution wide-field microscopy with adaptive optics for spherical aberration correction and motionless focusing,” J. Microsc. 237(2), 136–147 (2010). [CrossRef]  

24. A. Pertsinidis, Y. Zhang, and S. Chu, “Subnanometre single-molecule localization, registration and distance measurements,” Nature 466(7306), 647–651 (2010). [CrossRef]  

25. C.-S. Liu, Y.-C. Lin, and P.-H. Hu, “Design and characterization of precise laser-based autofocusing microscope with reduced geometrical fluctuations,” Microsyst. Technol. 19(11), 1717–1724 (2013). [CrossRef]  

26. B. X. Cao, P. Hoang, S. Ahn, et al., “Real-time detection of focal position of workpiece surface during laser processing using diffractive beam samplers,” Opt. Lasers Eng. 86, 92–97 (2016). [CrossRef]  

27. C. Niederauer, M. Seynen, J. Zomerdijk, et al., “The K2: Open-source simultaneous triple-color TIRF microscope for live-cell and single-molecule imaging,” HardwareX 13, e00404 (2023). [CrossRef]  

28. B. Hajj, J. Wisniewski, M. El Beheiry, et al., “Whole-cell, multicolor superresolution imaging using volumetric multifocus microscopy,” Proc. Natl. Acad. Sci. 111(49), 17480–17485 (2014). [CrossRef]  

29. N. Saliba, G. Gagliano, and A.-K. Gustavsson, “Whole-cell multi-target single-molecule super-resolution imaging in 3D with microfluidics and a single-objective tilted light sheet,” bioRxiv, bioRxiv:2023.09.27.559876 (2023).

30. S. Coelho, J. Baek, J. Walsh, et al., “3D active stabilization for single-molecule imaging,” Nat. Protoc. 16(1), 497–515 (2021). [CrossRef]  

31. K. Bellve, C. Standley, L. Lifshitz, et al., “Design and Implementation of 3D Focus Stabilization for Fluorescence Microscopy,” Biophys. J. 106(2), 606a (2014). [CrossRef]  

32. K. Bellve, C. Standley, L. Lifshitz, et al., “pgFocus: wiki page,” https://big.umassmed.edu/wiki/index.php/PgFocus.

33. J. Lightley, S. Kumar, M. Q. Lim, et al., “openFrame: A modular, sustainable, open microscopy platform with single-shot, dual-axis optical autofocus module providing high precision and long range of operation,” J. Microsc. 292(2), 64–77 (2023). [CrossRef]  

34. J. S. H. Danial, J. Y. L. Lam, Y. Wu, et al., “Constructing a cost-efficient, high-throughput and high-quality single-molecule localization microscope for super-resolution imaging,” Nat. Protoc. 17(11), 2570–2619 (2022). [CrossRef]  

35. B. Van Den Berg, R. Van Den Eynde, B. Amouroux, et al., “A modular approach to active focus stabilization for fluorescence microscopy,” bioRxiv, bioRxiv:2020.09.22.308197 (2020).

36. B. Cao, P. Hoang, S. Ahn, et al., “In-Situ real-time focus detection during laser processing using double-hole masks and advanced image sensor software,” Sensors 17(7), 1540 (2017). [CrossRef]  

37. M. Bathe-Peters, P. Annibale, and M. J. Lohse, “All-optical microscope autofocus based on an electrically tunable lens and a totally internally reflected IR laser,” Opt. Express 26(3), 2359 (2018). [CrossRef]  

38. S.-Y. Chen, R. Heintzmann, and C. Cremer, “Sample drift estimation method based on speckle patterns formed by backscattered laser light,” Biomed. Opt. Express 10(12), 6462 (2019). [CrossRef]  

39. L. Silvestri, M. C. Müllenbroich, I. Costantini, et al., “Universal autofocus for quantitative volumetric microscopy of whole mouse brains,” Nat. Methods 18(8), 953–958 (2021). [CrossRef]  

40. H. Ma and Y. Liu, “Embedded nanometer position tracking based on enhanced phasor analysis,” Opt. Lett. 46(16), 3825 (2021). [CrossRef]  

41. D. K. Cohen, W. H. Gee, M. Ludeke, et al., “Automatic focus control: the astigmatic lens approach,” Appl. Opt. 23(4), 565 (1984). [CrossRef]  

42. W.-Y. Hsu, C.-S. Lee, P.-J. Chen, et al., “Development of the fast astigmatic auto-focus microscope system,” Meas. Sci. Technol. 20(4), 045902 (2009). [CrossRef]  

43. J. Lightley, F. Görlitz, S. Kumar, et al., “Robust deep learning optical autofocus system applied to automated multiwell plate single molecule localization microscopy,” J. Microsc. 288(2), 130–141 (2022). [CrossRef]  

44. J. Ortega Arroyo, D. Cole, and P. Kukura, “Interferometric scattering microscopy and its combination with single-molecule fluorescence imaging,” Nat. Protoc. 11(4), 617–633 (2016). [CrossRef]  

45. S. Lin, Y. He, D. Feng, et al., “Optical fingerprint of flat substrate surface and marker-free lateral displacement detection with angstrom-level precision,” Phys. Rev. Lett. 129(21), 213201 (2022). [CrossRef]  

46. R. Chowdhury, A. Sau, J. Chao, et al., “Tuning axial and lateral localization precision in 3D super-resolution microscopy with variable astigmatism,” Opt. Lett. 47(21), 5727 (2022). [CrossRef]  

47. D. Axelrod, “Cell-substrate contacts illuminated by total internal reflection fluorescence,” J. Cell Biol. 89(1), 141–145 (1981). [CrossRef]  

48. M. Tokunaga, N. Imamoto, and K. Sakata-Sogawa, “Highly inclined thin illumination enables clear single-molecule imaging in cells,” Nat. Methods 5(2), 159–161 (2008). [CrossRef]  

49. A. Rahmani, C. Tabitha, and P. Aleks, “PiFocus: acquisition, analysis and hardware control,” Zenodo (2023). https://zenodo.org/doi/10.5281/zenodo.10726262.

50. A. Balinovic, D. Albrecht, and U. Endesfelder, “Spectrally red-shifted fluorescent fiducial markers for optimal drift correction in localization microscopy,” J. Phys. D: Appl. Phys. 52(20), 204002 (2019). [CrossRef]  

51. T. J. Etheridge, A. M. Carr, and A. D. Herbert, “GDSC SMLM: Single-molecule localisation microscopy software for ImageJ,” Wellcome Open Res. 7, 241 (2022). [CrossRef]  

Supplementary Material (5)

Name	Description
Code 1	Python codes
Supplement 1	Supplementary figures and video descriptions
Visualization 1	Calibration and time-lapse for the 100x 1.35 silicone-immersion objective lens (10 fps playback speed, Fire LUT).
Visualization 2	A calibration scan showing the interference pattern created by a coherent laser diode (10 fps playback speed, 60x 1.27 water-immersion objective lens, Fire LUT).
Visualization 3	TIR scan for 60x 1.49 oil-immersion and 10x 0.25 air objective lenses with 500 nm steps (10 fps playback speed, Fire LUT).


Figures (4)

Fig. 1. Axial drift estimation using astigmatism. (a) Schematic and characteristics of common focus stabilisation techniques including PiFocus presented in this work. (b) Selected slices from a z-scan of the astigmatic PSF (scale bar = 40 pixels). (c) Variation of beam widths $\sigma _{x}$ (blue line) and $\sigma _{y}$ (red line) as a function of z-position. (d) Calibration curve for focal drift estimation using $\sigma _{x} - \sigma _{y}$. (e) Time-lapse acquisition at z = 0 $\mu$m for estimating axial precision.

Fig. 2. Extending the operating range. (a) Different lens configurations for PiFocus. (b) Calibration using different cylindrical lenses (CL) with a 100x 1.35-NA silicone-immersion lens. (c) Calibration curve for two orthogonal cylindrical lenses.

Fig. 3. PiFocus offers universal lens compatibility and is superior to TIR for low NA. (a) Calibrations for different objective lenses with varying magnification, NA and immersion media using CL500. (b) Calibration curve for TIR-based stabilisation performed with 60x 1.49-NA oil-immersion (orange) and 10x 0.25-NA (green) objective lenses. (c) Summary table of all results including comparison between PiFocus and TIR. (M) Magnification, (I.M) Immersion medium, (WD) Working distance, (OR) Operating range.

Fig. 4. Active focus stabilisation using PiFocus. (a) Schematic of PiFocus hardware implementation. (b) Axial sample drift monitored by PiFocus. (c) Axial drift with stabilisation ON (red) and OFF (black) monitored by imaging fluorescent beads using astigmatic 3D localisation.