
Characterizing the 3-D field distortions in a low numerical aperture fluorescence zooming microscope

Open Access

Abstract

In this article, we characterize the lateral field distortions in a low numerical aperture, large field-of-view (FOV) fluorescence imaging system. To this end, we study a commercial fluorescence MACROscope setup, which is a zooming microscope. The versatility of this system lies in its ability to image over a range of zooms, so that sample preparations can be examined in three dimensions at the cellular, organ and whole-body levels. Yet, we found that the imaging system’s optics are optimized only for high magnifications, where the observed FOV is small. When we studied the point-spread function (PSF) by using fluorescent polystyrene beads as “guide stars”, we noticed that the PSF is spatially varying due to field distortions. This variation is laterally symmetrical, and the distortions increase with the distance from the center of the FOV. In this communication, we investigate the idea of using the field at the back focal plane of an optical system to characterize distortions. As this field is unknown, we develop a theoretical framework to retrieve the amplitude and phase of the field at the back focal (pupil) plane from empirical bead images. The retrieved amplitude allows us to understand and characterize the underlying cause of these distortions. We also propose a few approaches to either avoid the distortions before acquisition or correct them at the optical design level.

© 2012 Optical Society of America

1. Introduction

In this article, we characterize the lateral field distortions of a low numerical aperture (NA), large field-of-view (FOV) fluorescence imaging system. We used a commercial fluorescence zooming microscope (also known as a MACROscope), in which the FOV changes with the optical zoom, as it is well suited to illustrate the applicability of our characterization procedure. This setup from Leica™ (Fig. 1(a)) is a macro documentation system combined with fluorescence techniques for the visualization of sample preparations over a range of zooms [1].


Fig. 1 (a) Schematic of a simple wide-field fluorescence MACROscope (Reproduced from [1]); Best of two worlds: maximum intensity projection along the optical axis of a Convallaria majalis sample taken using a Leica™ ZAPO16, fit with a confocal scanning head, at (b) a minimum zoom setting with lateral pixel size of 1.09 μm and (c) a sub-region of the sample at the maximum zoom setting with lateral pixel size of 0.89 μm (Courtesy of INRA). The scale bars are 100 μm in length.


As in a microscope, the emitted fluorescence from the sample is collected by an objective lens, but fine/coarse focusing is obtained by using a macro lens. The apochromatic macro lens is combined with the objective lens to image large fields (about 20 mm diagonal) and to provide larger working distances (about 97 mm). Existing wide-field microscopes offer high resolution but a limited FOV, while stereomicroscopes offer a larger FOV but compromise on resolution. The MACROscope offers a larger FOV and good lateral resolution (for NA between 0.12–0.50, a lateral resolution of 1.65–0.39 μm, respectively). Variations of this setup are also available from other commercial vendors, such as the Axio Zoom V16 from Carl Zeiss or the AZC2 from Nikon. The principal difference between these commercial adaptations is the range of zooms at which they operate.

A sample of the plant Convallaria majalis is used to highlight the MACROscope’s imaging capabilities under two different settings: minimum and maximum zoom. The three-dimensional (3-D) image volumes are shown in Figs. 1(b) and 1(c) as maximum intensity projections (MIP) along the optical axis.

1.1. Context

Most of the commercial MACROscopes are guaranteed to be telecentric. In telecentric images,

  • the apparent size of the object does not vary with distance from the camera,
  • the apparent shape of objects does not vary with distance from the center of FOV.

However, when observing the Convallaria majalis specimen in Fig. 1(b), we noticed that the specimen expands or contracts as the focal plane is shifted. The cell walls ‘appear’ tilted and thicker than expected. This effect is best illustrated in the observed image of a haemocytometer (Fig. 2).


Fig. 2 Haemocytometer grid used for illustrating and measuring the distortion in the field. The square area indicated in red is of size 1 mm², in green 0.0625 mm², in yellow 0.04 mm², and the smallest, in blue, 0.0025 mm². Reproduced from Wikimedia Commons.


In such a slide, the grid dimensions are calibrated, so the image distortions can be quantified. For example, the square area indicated in red is of size 1 mm², in green 0.0625 mm², in yellow 0.04 mm², and the smallest, in blue, 0.0025 mm². This slide was illuminated from below and imaged from above by a Photometrics CoolSNAP HQ2 cooled CCD camera (6.45 μm × 6.45 μm pixels). The radial pixel size for the 12.7× zoom is 390 nm, while the axial slice width was fixed at 50 μm to capture the entire volume. Figure 3 shows a single lateral focal plane and the projection along the y-direction for the transmitted image volume. The dimensions of the displayed volume are 343 μm × 343 μm × 6200 μm.


Fig. 3 The focal plane of the observed transmitted volume of the Haemocytometer (top) and the maximum intensity projection along the y-direction (bottom). The object is imaged using a 2x/air PlanApo objective fit to a Leica™ MacroFluo™ APOZ16. The zoom for this acquisition was set at 12.7x, the lateral sampling at 390nm, the slice thickness at 50 μm and the scale bar length is 50 μm. The total size of the displayed volume is 343×343×6200 μm.


From the acquired data (see Media 1), we noticed the following:

  • Two focal planes symmetric about the sharpest focus have their peripheral grid lines either stretched or contracted laterally with respect to the optical center. This is equivalent to a magnification change with focus.
  • The point that coincides exactly with the optical axis remains fixed, acting as a pivot, while all other points in the image plane are scaled relative to it. This lateral relative scaling was measured to be up to 344 nm for a 1 μm axial displacement.
  • These distortions are significant for low zooms only.

1.2. Motivation and outline

The fundamental motivation for studying and characterizing distortions in any optical system is to identify their cause, understand the limits of system usage, and take the necessary precautions to avoid them or actions to correct them. The questions that we wish to answer as a result of this study are:

  • Can the optical system’s back aperture be an indicator of the cause of these distortions?
  • Can an analysis of the physics behind these distortions help in correcting them?

This article is organized as follows. In Sect. 2, we briefly introduce the basics of the scalar diffraction model, and explain the roles that the pupil phase and amplitude play in defining the impulse response of the system, or the point-spread function (PSF). As the field intensity at the back aperture gives information about the changes in the light path through the optical system, its estimation might validate our hypothesis on the cause of these distortions. We therefore estimate the amplitude and phase of this field from the observed fluorescence intensities [2] by adopting a Bayesian framework. We also show that the Gerchberg-Saxton (GS) algorithm [3] can be derived as a special case of this framework. Finally, in Sect. 3, the algorithm is used to estimate the amplitude and the phase from empirically obtained fluorescence point intensities. Based on these results, we discuss the implications for the distortion process.

1.3. Notations

Scalar variables used in this article are denoted by lowercase letters (x), vectors by boldface lowercase letters (x), and matrices by boldface uppercase letters (X). As the images are discrete, their spatial support is Ωs = {(x, y, z) : 0 ≤ x ≤ Nx − 1, 0 ≤ y ≤ Ny − 1, 0 ≤ z ≤ Nz − 1}. By 𝒪(Ωs) = {o : Ωs ⊂ ℕ³ ↦ ℝ}, we refer to the set of possible observable objects, and we assign the function h : Ωs ↦ ℝ to the microscope PSF. The observed intensities are denoted by i(x) : x ∈ Ωs (bounded and positive), and a 3-D convolution between two functions is denoted by ‘∗’. When the same symbol is used as a superscript over a given function, as in h∗(x), it represents the Hermitian adjoint of h(x). For a complex function hA : Ωs ↦ ℂ, |hA(x)| and ∠hA(x) denote its magnitude and phase, while ℜ(hA(x)) and ℑ(hA(x)) denote its real and imaginary components. By Pr(·), we denote a probability density function.

The objective lenses of a microscope are defined by their magnification (M), numerical aperture (NA), and the medium in between the lens and the cover slip. For example, a lens of 5x magnification, 0.5 NA, and air as medium between the lens and cover slip is written as ‘5x/0.5 air’.

As mentioned earlier, we present 3-D images by their 2-D maximum intensity projection (MIP) along the optical axis in the 2-D XY plane or along the y-direction in the 2-D XZ plane.

2. Sensing the back aperture field

If we wish to extract the best from our imaging system, it is necessary to understand the conditions under which it performs optimally. Nearly diffraction-limited (or aberration-free) performance can be obtained when the sources of these distortions are isolated. Once they are isolated, the objective is to restore telecentricity in the images by using post-acquisition computational methods.

2.1. PSF and role of the phase

The effective NA of the combined objective-zoom system is usually ≪ 0.7, and we work under near paraxial conditions. The effect of polarization is neglected, and the incoherent PSF can be modeled by using the scalar diffraction model. From the Kirchhoff-Fraunhofer approximation [4], we can write the near-focus amplitude PSF, hA(x, y, z), in terms of the inverse Fourier transform of the two-dimensional (2-D) exit pupil function, P(kx, ky, z;NA), at each defocus z as

$$h_A(x,y,z) = \mathcal{F}^{-1}_{2D}\left\{P(k_x,k_y,z;\mathrm{NA})\right\},\tag{1}$$
where (x, y, z) ∈ Ωs and (kx, ky, kz) ∈ Ωf are the coordinates in the spatial and pupil domains, and NA is the effective numerical aperture of the optical system. If ni is the refractive index of the objective immersion medium and λex the excitation wavelength, the pupil function (including defocus and aberrations) can be written as [5]
$$P(k_x,k_y,z;\mathrm{NA},\varphi_a)=\begin{cases}\exp\!\left(jz\left((k_0 n_i)^2-(k_x^2+k_y^2)\right)^{\frac{1}{2}}+j\varphi_a\right), & \text{if } \dfrac{(k_x^2+k_y^2)^{\frac{1}{2}}}{k_0}<\mathrm{NA},\\[4pt] 0, & \text{otherwise,}\end{cases}\tag{2}$$
where k0 = 2π/λex is the angular wavenumber (the phase change in radians per unit distance), and φa is the phase due to aberrations. As the medium between the lens and the specimen is air, ni = 1.0. In Eq. (2), the amplitude of the pupil function is assumed to be constant. The magnitude PSF, h(x), can be written in terms of the excitation amplitude PSF, hA(x; λex), as
$$h(\mathbf{x}) = \left|h_A(\mathbf{x};\lambda_{\mathrm{ex}})\right|^2.\tag{3}$$
From Eqs. (1)–(3), we can state that the intensity distribution of a point source in image space is the squared modulus of the inverse Fourier transform of the overall complex field distribution of the wavefront in the back aperture of the optical system.
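
To make the model concrete, the following is a minimal numerical sketch of Eqs. (1)–(3): a band-limited pupil with a defocus phase is inverse Fourier transformed, and its squared modulus gives the intensity PSF at a given defocus plane. The grid size, pixel pitch, wavelength and NA below are illustrative values of our own, not the MACROscope’s calibration.

```python
import numpy as np

def pupil(n, dx, wavelength, na, z, n_i=1.0, phi_a=0.0):
    """Exit pupil P(kx, ky, z; NA) of Eq. (2): band limit, defocus phase, aberration phase."""
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)           # angular spatial frequencies (rad/m)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k0 = 2.0 * np.pi / wavelength                       # angular wavenumber k0 = 2*pi/lambda
    kr2 = kx**2 + ky**2
    band = np.sqrt(kr2) / k0 < na                       # (kx^2 + ky^2)^(1/2) / k0 < NA
    kz = np.sqrt(np.maximum((k0 * n_i)**2 - kr2, 0.0))  # axial frequency inside the band
    return band * np.exp(1j * (z * kz + phi_a))

def intensity_psf(n=256, dx=0.4e-6, wavelength=515e-9, na=0.2, z=0.0):
    """h(x) = |F2D^{-1}{P}|^2 at defocus z, Eqs. (1) and (3)."""
    h_a = np.fft.ifft2(pupil(n, dx, wavelength, na, z))  # amplitude PSF h_A (Eq. (1))
    return np.fft.fftshift(np.abs(h_a)**2)               # centered intensity PSF (Eq. (3))

psf_in_focus = intensity_psf(z=0.0)
psf_defocused = intensity_psf(z=15e-6)                   # a 15 um defocus plane
```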

We observed that smooth variations in the amplitude of the pupil in Eq. (2) do not strongly affect the final PSF, while phase variations, such as defocus or aberrations, can produce an entirely different PSF. We use this as the basis for distortion characterization. We thus rephrase the question raised in Sect. 1.2: ‘Can the amplitude or the phase of the field at the back aperture of the optical system be an indication of the source of the distortions?’.

2.2. A Bayesian perspective

Although the phase information is not directly measurable in an incoherent imaging setup, it can be retrieved by choosing the PSF model that best fits the given bead image. This problem is also known as sensor-less wavefront sensing.

As the problem is both nonlinear and non-convex, successful phase estimation requires some prior knowledge about the field at the back aperture that we wish to estimate. It was shown in [6] that the out-of-focus highlights (OOFHs) contain ‘partial’ information about the pupil, or the back focal plane. Several works, such as [7–9], have studied wavefront reconstruction for adaptive optics (AO) control. AO methods are based on the idea of compensating phase aberrations by adding a deformable mirror or a phase-modulating element to the optical system. A review of recent trends in AO is given in [10]. The amplitude and phase of the pupil function can also be measured by using a fiber-optic interferometer, as was done in [11].

Wavefront sensing could also be accomplished computationally, for example by using the GS algorithm [3]. Wavefront sensing by phase retrieval [12] is the process of estimating the amplitude and the phase of a pupil function from the observed 3-D intensities of an imaged point source. In the expression for h(x) in Eq. (3), the only unknown is the phase φa due to the aberrations, so phase retrieval amounts to estimating the aberrated phase from the observed intensities. This problem is normally under-determined. However, as the phase to be estimated does not change with defocus, it can be estimated if images of a point source at multiple defocus positions are available. The only requirement is that these sections are sufficiently far from the focus: as the distance from the central focal plane grows to infinity, the intensity approaches that at the back pupil plane. In practice, however, the measurement of defocused beads becomes increasingly difficult for larger defocusing because of the decaying fluorescence intensities. We remark that the distortions that we observe are mainly amplitude aberrations; that is, they do not generate variations in the optical path difference (or the phase) of the light, and so they are not aberrations in the strict sense.

If we consider Poissonian photon counting statistics [13], the observed bead image can be written as:

$$\gamma\, i(\mathbf{x}) = \mathcal{P}\!\left\{\gamma\left|h_A(\mathbf{x})\right|^2 + b(\mathbf{x})\right\},\quad \mathbf{x}\in\Omega_s,\tag{4}$$
where 𝒫(·) denotes a voxel-wise noise function modeled as a Poisson process, b(x) is a uniformly distributed intensity that models the low-frequency background signal [13], 1/γ is the photon conversion factor, and γ i(x) is the observed photon count. In the above expression, we have assumed that the fluorescent bead is sufficiently small (below the resolution limit) to be considered a point source. The background, b(x), can be either estimated or calculated from a single dark image of the CCD or from an out-of-focus section [13]. To estimate the complex amplitude PSF, hA(x), from the intensity image, i(x), we use Bayesian inference. From Bayes’ theorem, the posterior probability is
$$\Pr(h_A \mid i) = \frac{\Pr(i \mid h_A)\,\Pr(h_A)}{\Pr(i)},\tag{5}$$
where Pr(hA) is a probability density function (p.d.f), the prior from which |hA| is assumed to be generated. Pr(i|hA) is the likelihood function for the PSF and it specifies the probability of obtaining an image i(x) from a diffraction-limited point source:
$$\Pr(i \mid h_A) = \prod_{\mathbf{x}\in\Omega_s} \frac{\left[\left(|h_A|^2 + b\right)(\mathbf{x})\right]^{\,i(\mathbf{x})}\, \exp\!\left(-\left(|h_A|^2 + b\right)(\mathbf{x})\right)}{i(\mathbf{x})!}.\tag{6}$$
An estimate of the near-focus amplitude distribution, ĥA, can be obtained as the maximum a posteriori (MAP) estimate, or equivalently by minimizing the negative logarithm of the posterior:
$$\hat{h}_A(\mathbf{x}) = \underset{h_A(\mathbf{x})}{\operatorname{argmax}}\ \Pr(h_A \mid i) = \underset{h_A(\mathbf{x})}{\operatorname{argmin}}\ -\log\!\left[\Pr(h_A \mid i)\right], \quad \text{s.t. } k_{\mathrm{MAX}} < \frac{2\pi}{\lambda_{\mathrm{ex}}}\,\mathrm{NA},\tag{7}$$
where kMAX is the maximum frequency permissible by the imaging system pupil. As Pr(i) does not depend on hA(x), it can be considered as a normalization constant, and it shall hereafter be excluded from all the estimation procedures. The minimization of the negative logarithm of Pr(hA|i) in Eq. (5) can be rewritten as the minimization of the following joint energy functional:
$$\mathcal{J}(h_A) = -\log\!\left[\Pr(h_A \mid i)\right] = \underbrace{\mathcal{J}_{\mathrm{obs}}(h_A)}_{\text{Image energy}} + \underbrace{\mathcal{J}_{\mathrm{reg}}(h_A)}_{\text{Prior energy}}.\tag{8}$$
In Eq. (8),
  • 𝒥obs(hA) is the data-fidelity term; it corresponds to the negative logarithm of the likelihood Pr(i|hA) given by the noise distribution. Its role is to pull the solution towards the observed data: we make a decision about the underlying scene based on this cost function, and it specifies the penalty paid by the system for producing an incorrect estimate of the scene.
  • 𝒥reg(hA) corresponds to the penalty term −log[Pr(hA)] that enforces smoothness of the solution.

For the GS algorithm, there is no intrinsic smoothness term on the solution. To compare our approach with the GS algorithm, we therefore drop the prior energy term in Eq. (8) (by assuming a uniform prior distribution). The amplitude PSF can then be estimated by the maximum likelihood (ML) algorithm:

$$\hat{h}_A(\mathbf{x}) = \underset{h_A(\mathbf{x})}{\operatorname{argmin}}\ \mathcal{J}_{\mathrm{obs}}(h_A) = \underset{h_A(\mathbf{x})}{\operatorname{argmin}}\ -\log\!\left[\Pr(i \mid h_A)\right] = \underset{h_A(\mathbf{x})}{\operatorname{argmin}}\ \sum_{\mathbf{x}\in\Omega_s}\left(|h_A(\mathbf{x})|^2 - i(\mathbf{x})\log\!\left(|h_A(\mathbf{x})|^2 + b(\mathbf{x})\right)\right), \quad \text{s.t. } k_{\mathrm{MAX}} < \frac{2\pi}{\lambda_{\mathrm{ex}}}\,\mathrm{NA}.\tag{9}$$
As there is no closed-form solution to the problem in Eq. (9), we use the following fixed-point iterative algorithm:
$$\hat{h}_A^{(n+1)}(\mathbf{x}) = \hat{h}_A^{(n)}(\mathbf{x}) - \frac{\tau}{2}\,\nabla\mathcal{J}_{\mathrm{obs}}(h_A).\tag{10}$$
In Eq. (10), τ ∈ [0.5, 0.99] is a scaling factor. The cost function 𝒥obs(hA) is real, and ∇(·) is the complex gradient operation on it so that
$$\nabla\mathcal{J}_{\mathrm{obs}}(h_A) = \frac{\partial \mathcal{J}_{\mathrm{obs}}(h_A)}{\partial\,\Re(h_A(\mathbf{x}))} + j\,\frac{\partial \mathcal{J}_{\mathrm{obs}}(h_A)}{\partial\,\Im(h_A(\mathbf{x}))} = 2\left(h_A(\mathbf{x}) - \frac{i(\mathbf{x})}{|h_A(\mathbf{x})|^2 + b(\mathbf{x})}\cdot h_A(\mathbf{x})\right), \quad \forall\,\mathbf{x}\in\Omega_s.\tag{11}$$
The division here is element-wise (Hadamard), and ‘·’ denotes element-wise (Hadamard) multiplication. From Eqs. (10) and (11), we get the fixed-point iterative algorithm for the near-focus amplitude PSF as
$$\hat{h}_A^{(n+1)}(\mathbf{x}) = (1-\tau)\,\hat{h}_A^{(n)}(\mathbf{x}) + \tau\left(\frac{i(\mathbf{x})}{\big|\hat{h}_A^{(n)}(\mathbf{x})\big|^2 + b(\mathbf{x})}\cdot \hat{h}_A^{(n)}(\mathbf{x})\right), \quad \forall\,\mathbf{x}\in\Omega_s.\tag{12}$$
It is important to note that although the given observation is real, the final estimate ĥA(x) is complex. In practice, the optimization process in Eq. (12) respects certain constraints.
  • Relaxation constraint on the pupil function: an upper limit can be placed on the field extent at the back aperture of the optical system, based on the effective NA. Thus, the initial pupil function, P̂(0)(kx, ky, z = 0), is chosen to be a unit disc with a maximum radius of kMAX and zero phase (cf. Eq. (2)). This is inverse Fourier transformed to get ĥA(0)(x) (cf. Eq. (1)). For successive estimates, this bandwidth constraint in the pupil domain is maintained.
  • Loose support on the magnitude of the coherent PSF hA(x): we assume that part of this magnitude is zero, i.e., that the PSF is confined to a region Ωh, so that |hA(x)| ≥ ε, ∀x ∈ Ωh, where ε is a small value close to zero. For the lateral plane, we define the maximum permissible radius as 5 × 0.61λex/NA [14]. The idea of using a constraint on the PSF support is to fit the model only to those regions of the observation where the fluorescence signal is strong; it also removes spurious background noise in the process.
Generally, the MAP estimate incorporates the prior knowledge simultaneously, but in the above case we add prior information about the solution sequentially. In reality, we find ourselves in a situation where only partial knowledge, or partial certainty, about the solution is available. We call such constraints ‘partial knowledge’ because they represent our knowledge about states of nature in a form that is not necessarily a probability distribution. The idea of representing partial knowledge by convex sets, as in this article, is not new. More recently, in [13], we have shown that such constraints on the solution space can be introduced elegantly in the form of a prior probability measure.

For the fixed-point iterative algorithm, the step size, τ, was chosen to be 0.6 in all our experiments. The iterations are continued until either the mean-squared error (MSE) between the phase estimates for two successive iterations is below a pre-defined threshold ε or a pre-defined maximum number of iterations is reached by the algorithm.
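
As an illustration of how the pieces above fit together, the sketch below implements the fixed-point update of Eq. (12) for a single defocus section, together with the band-limit and loose-support constraints just described. The cycling over the M defocus sections and the pupil averaging of Algorithm 1 are omitted, and all function and variable names are our own, not those of the released Matlab™ code; the synthetic usage at the end reuses the pupil() and intensity_psf() helpers of the earlier sketch.

```python
import numpy as np

def fixed_point_phase_retrieval(i_obs, b, pupil_mask, support_mask,
                                tau=0.6, max_iter=32, tol=1e-6):
    """Estimate the complex amplitude PSF h_A from one observed intensity section.

    i_obs, pupil_mask and support_mask are assumed to share the same (unshifted)
    FFT layout, i.e. the PSF peak sits near the array corner.
    """
    # Initialization: unit-disc pupil of radius k_MAX with zero phase (relaxation constraint).
    h_a = np.fft.ifft2(pupil_mask.astype(complex))
    prev_phase = np.angle(h_a)
    for _ in range(max_iter):
        # Fixed-point update of Eq. (12).
        h_a = (1.0 - tau) * h_a + tau * (i_obs / (np.abs(h_a)**2 + b)) * h_a
        # Loose support: keep the estimate inside the permissible lateral region Omega_h.
        h_a = h_a * support_mask
        # Band limit: project back onto the pupil disc of radius k_MAX.
        h_a = np.fft.ifft2(np.fft.fft2(h_a) * pupil_mask)
        # Stop when the phase estimate no longer changes appreciably (MSE criterion).
        phase = np.angle(h_a)
        if np.mean((phase - prev_phase)**2) < tol:
            break
        prev_phase = phase
    return h_a

# Synthetic usage, tying in the observation model of Eq. (4): a noisy bead image is
# simulated from the in-focus PSF of the earlier sketch and fed back to the solver.
n, dx, wl, na, gamma, bg = 256, 0.4e-6, 515e-9, 0.2, 0.1, 5.0
pupil_mask = np.abs(pupil(n, dx, wl, na, z=0.0)) > 0            # band limit k < k_MAX
r = np.arange(n) - n // 2
radius = np.sqrt(r[:, None]**2 + r[None, :]**2) * dx            # lateral radius in metres
support_mask = np.fft.ifftshift(radius < 5 * 0.61 * wl / na)    # loose lateral support
rng = np.random.default_rng(0)
psf = np.fft.ifftshift(intensity_psf(n, dx, wl, na, z=0.0))     # back to unshifted layout
signal = 1e4 * psf / psf.max()                                  # |h_A|^2 scaled to a photon budget
i_obs = rng.poisson(gamma * signal + bg) / gamma                # gamma*i ~ Poisson(gamma*|h_A|^2 + b)
h_a_hat = fixed_point_phase_retrieval(i_obs, b=bg / gamma,
                                      pupil_mask=pupil_mask, support_mask=support_mask)
```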


Algorithm 1: Proposed Algorithm.

2.3. Gerchberg-Saxton algorithm as a special case

The GS algorithm [3] is a technique to estimate the field at the back focal plane of the objective by iterating forward and inverse Fourier transforms of the observation. The fixed-point iterative algorithm and the GS algorithm initialize the pupil function in the same manner. After initialization, a suitable curvature is added to the phase of the complex pupil function to obtain the defocus-adjusted complex pupil function, P(kx, ky, z), at every defocus z [15]. This is inverse Fourier transformed (cf. Eq. (1)) to get the corresponding amplitude PSF, hA(x), at the different defocus planes. The magnitude of hA(x) is then replaced by the corresponding measured intensities (after background subtraction) at the different defocus planes. A Fourier transform of this modified hA(x) gives the new estimate of the defocus-adjusted complex exit pupil function, P̂(kx, ky, z), at the different defocus positions z. The resulting defocus-adjusted complex pupil functions are readjusted back to zero defocus and averaged to get a new estimate of the complex exit pupil function P̂(kx, ky, z = 0). This process is repeated until the MSE criterion or the maximum number of iterations is reached. Some constraints introduced during the iterations can aid the convergence of the algorithm. The progress of the GS algorithm is the same as that of the fixed-point algorithm except for Step 6 in Algorithm 1. We see that when τ = 1 in Eq. (12), the factor i(x)/(|ĥA(n)(x)|² + b(x)) performs, at each iteration, the assigning operation of Step 6. This ratio also has a physical significance: it has the role of replacing the incorrect amplitude of hA(x) by the correct, experimentally obtained magnitude i(x).
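
The following sketch spells out one such GS cycle over M defocus sections, following the description above. As in the usual GS convention, the square root of the background-subtracted measured intensity is imposed as the magnitude of hA at each plane; the helper names and the assumed FFT layout are ours, and the defocus phase maps z·kz are assumed to be precomputed on the same frequency grid as the pupil estimate.

```python
import numpy as np

def gs_cycle(pupil_est, measured_sections, defocus_phases, pupil_mask):
    """One GS iteration over M defocus sections; returns the updated pupil at z = 0."""
    updates = []
    for i_meas, defocus in zip(measured_sections, defocus_phases):
        p_z = pupil_est * np.exp(1j * defocus)                # add the defocus curvature
        h_a = np.fft.ifft2(p_z)                               # amplitude PSF at this plane (Eq. (1))
        h_a = np.sqrt(np.clip(i_meas, 0.0, None)) * np.exp(1j * np.angle(h_a))  # impose measured magnitude
        p_z_new = np.fft.fft2(h_a) * np.exp(-1j * defocus)    # back to the pupil, remove the defocus
        updates.append(p_z_new)
    return np.mean(updates, axis=0) * pupil_mask              # average and re-apply the band limit
```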

3. Experiments

In the previous section, we derived an iterative algorithm for phase retrieval from intensity data. In order to answer our original question in Sect. 2.1, we apply the proposed iterative algorithm to observed images of microbeads. For the imaging experiments, we chose InSpeck™ Green (505/515) fluorescent microspheres of size 2.5 μm from Molecular Probes®. This size was chosen because the axial resolution of the system at certain zooms is worse than this size; in addition, for lenses with low NA, the diversity defocus planes are not significantly affected by the bead size. We diluted 1 μl of this suspension in 20 μl of distilled water and dried a drop onto a coverslip. The dried beads were then imaged using a Leica™ MacroFluo™ fit with a 5× planapochromatic HR objective and 16 zoom positions. The maximum NA of the optical system is 0.5. Of the several datasets acquired and processed, we chose two objective and zoom settings: for the 2×/air PlanApo objective, the zoom position is 4.6× (radial sampling 421 nm and axial sampling 3000 nm), while for the 5×/air PlanApo objective, the zoom position is 1.6× (radial sampling 998.3 nm and axial sampling 1000 nm). As the beads are distributed randomly, we chose five locations in the lateral plane for cropping the bead images. These locations are shown in Fig. 4, with the positions of the PSFs marked. Note that in Fig. 4 the PSFs are shown by their MIPs along the y-direction, but their positions are shown in the lateral plane. For analysis, we chose only the beads from positions close to the periphery that exhibited apparent intensity chopping.


Fig. 4 Empirical PSFs are shown at the different positions (denoted by a cross) in the lateral field of the lens. Here, the PSFs are shown as the MIP along the y-direction.


3.1. NA for convex relaxation

Since the MACROscope works under a variable zoom, the NA of the optical mount is also variable. The setup on which we performed our experiments was a prototype that was not pre-calibrated by the vendor, so the effective NA is not directly available for every zoom. However, if we consider the light as a cone, the apex of the cone is at the central focal intensity plane and the base of the cone is the observed diffraction ring at a defocus plane a distance H away from the center; D is the diameter of the largest concentric ring of the base (cf. Fig. 5). For example, at a zoom of 9.2× with the 2× objective, the maximum radius of the diffraction ring pattern at a distance of H = 61 μm from the center was measured to be 32.46 μm. The measured radius D/2 is related to the semi-cone angle α and the defocus distance H by tan(α) = (D/2)/H = 32.46/61 = 0.53, so the maximum subtended semi-cone angle is α = arctan(0.53) = 0.49 radians. Since ni = 1, the effective NA is 0.47, which is close to the manufacturer-specified NA of 0.5. For another set of images taken at a zoom of 1.6×, D was measured to be 32.70 μm for H = 71 μm, giving an NA of 0.22. We use such ‘loosely’ calculated NA values to limit the frequency bandwidth in the pupil plane of Eq. (1).


Fig. 5 The schematic to measure the maximum object spread for constraining the iterative algorithm.

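The two NA estimates quoted above follow directly from the cone geometry of Fig. 5 with ni = 1; the short helper below simply reproduces that arithmetic (the function name is ours).

```python
import math

def effective_na(ring_radius_um, defocus_um, n_i=1.0):
    alpha = math.atan(ring_radius_um / defocus_um)   # semi-cone angle: tan(alpha) = (D/2)/H
    return n_i * math.sin(alpha)                     # NA = n_i * sin(alpha)

print(round(effective_na(32.46, 61.0), 2))       # 0.47 at zoom 9.2x with the 2x objective
print(round(effective_na(32.70 / 2, 71.0), 2))   # 0.22 at zoom 1.6x (D = 32.70 um, H = 71 um)
```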

3.2. Results

As the problem is under-determined, to introduce diversity, four defocus sections (M = 4 in Algorithm 1) were chosen symmetrically above and below the central focal plane of the 1.6× image. These sections lie at a distance of about 2–5 times the Rayleigh length from the focal plane. The bead was cropped from the periphery of the field, and these individual sections approximate the OOFHs mentioned earlier. One of the defocus sections is shown in Fig. 6(a). The retrieved unwrapped pupil phase, φ̂a, is shown in Fig. 6(b), after 32 cycles of the fixed-point algorithm with τ = 0.6. We allowed the algorithm to continue, although the solution for the electric field amplitude converged within 12–15 cycles. To reduce noisy estimates, it was suggested in [15] to filter the estimate obtained from the GS algorithm by a Gaussian filter at each iteration. To avoid such an ad hoc method, we propose, as future work, to regularize the field amplitude by a total variation functional [13] at each cycle of the fixed-point iterative algorithm. For reproducibility of the experiments, the complete source code in Matlab™ and the data are provided here: http://bioimageanalysis.org/praveen/code.zip.


Fig. 6 (a) The first section of the observed intensity, at z = −57 μm (the scale bar is 10 μm), and (b) the retrieved unwrapped pupil phase, φ̂a. The bead image was cropped from the right periphery of the intensity image in Fig. 4 at a zoom of 1.6× (radial sampling of 998.3 nm and axial sampling of 1000 nm). τ = 0.6, the maximum number of iterations is 32, and the phase scale is between [−π, +π] radians.


3.3. Discussion

From the estimated phase in Fig. 6(b), we see that the pupil function of the optical system is partially chopped. The chopping of the pupil is such that it resembles a ‘cat’s eye’ [6]. This could be the result of two limiting apertures (set by the sizes of the lenses in the objective and the zoom) creating a vignetting effect in the peripheral regions of the field. The schematic in Fig. 7 illustrates such a distortion: the axial aperture is the complete circle, while the oblique aperture is vignetted. Our reconstruction of the cat’s eye in Fig. 6(b) is not sharp. This can be explained as follows: every lens creates a limiting pupil due to its physical dimensions, and the imaging of these pupils through the optical system onto the back aperture plane produces diffuse circles when they lie far from the conjugate back aperture planes.


Fig. 7 A schematic showing the effect of two limiting apertures (here zoom and objective lenses) at the back focal plane of the optical system. Here the on-axis and off-axis positions are shown.


The retrieval of the phase allows us to understand the physics behind the field distortions. Although some of these setups are claimed to be telecentric, we found that the system is not telecentric over the entire zoom range but only at high zooms. In addition, based on the amplitude of the retrieved electric field, we can also see the extent of overlap of the apertures. For example, from the defocus intensities in Fig. 8, the algorithm was able to retrieve the back aperture amplitude shown in Fig. 9(b). Although the effective NA for this particular acquisition was calculated to be 0.17, the amplitude was also retrieved with minor variations in the assumed effective NA. We found that the estimation of the phase requires the exact NA value; the estimated amplitude, on the other hand, could still validate our hypothesis of vignetting in spite of an erroneous input NA (Fig. 9(a) and 9(c)). Even with an erroneous NA, the chopping could still be quantified to be between 84–89%.


Fig. 8 Diversity sections, i(x), taken at four symmetrical positions with defocus at (a) z = −36 μm, (b) z = −15 μm, (c) z = +15 μm and (d) z = +36 μm. The objective is a 2x/air PlanApo and the zoom is set at 4.6x. The slice width is fixed at 3 μm and the effective NA was calculated to be 0.17.



Fig. 9 Retrieved back focal pupil amplitude, |P̂(kx, ky, z = 0)|, from the defocus sections in Fig. 8. The algorithm was run with variations in the effective NA (a) 0.05, (b) 0.17, (c) 0.20.

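A quantification of the pupil chopping such as the one quoted above could, for instance, be approximated from a retrieved back-aperture amplitude like Fig. 9(b) by comparing the area where |P̂| remains significant with the area of the full NA-limited disc. The helper below is our own rough sketch with an arbitrary threshold, not the procedure used in the paper; the chopped fraction would then be one minus the returned value.

```python
import numpy as np

def transmitted_fraction(pupil_amplitude, pupil_disc_mask, threshold=0.1):
    """Fraction of the nominal NA-limited pupil disc where |P| is still significant."""
    significant = np.abs(pupil_amplitude) > threshold * np.abs(pupil_amplitude).max()
    return np.count_nonzero(significant & pupil_disc_mask) / np.count_nonzero(pupil_disc_mask)
```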

The approach presented here can be thought of as adding a virtual wavefront or interferometric pupil plane sensor, and using it to retrieve the field information. By using fluorescence beads as ‘guide stars’, we can validate the causes of our distortions and quantify them. Although we have demonstrated our algorithm and experiments on a MACROscope, the field distortions studied here are present in any optical system working under low NA and low magnification conditions.

There are many ways to overcome the field distortions. Some of them are listed here:

  1. During acquisition, the FOV can be reduced so that only regions with minimal distortions are imaged. The complete image field can then be reconstructed by mosaicing together two overlapping images taken in sequence.
  2. In a zooming lens, the magnification change with defocus is proportional to the square root of the axial distance between the two lens groups [17]. For manufacturers of zooming microscopes, a possible way to minimize the distortions is to reduce the change in the back focus of the tube lens system.
  3. In [16], the author discusses a method to correct distortions in a telecentric zoom system. The distortion there, as in our case, is characterized by magnification changes with working distance. It is claimed that by adjusting the first or the last optical component of the lens adjacent to the telecentric image or object space, the distortions can be minimized.

In order to avoid the radial distortions that can in turn be produced by such translations, the imaged object is moved in addition to the lens components. However, it is not clear what the defocus contribution of such an optical element translation to the final output image would be; it is likely that the imaging plane needs to be moved as well, in addition to the optical elements.

Each of the above three methods has its advantages and difficulties. In the first case, the zoom system cannot be used at its full capacity, while the other two are suggestions for redesigning the setup at the optical level. Our future work is therefore aimed at correcting these field distortions computationally, after the images have been acquired.

Acknowledgments

This research was supported by the ANR DIAMOND project (http://www-syscom.univ-mlv.fr/ANRDIAMOND). The authors gratefully acknowledge Dr. Philippe Herbomel (Institut Pasteur, France) and Dr. Didier Hentsch (Imaging Center, IGBMC, Strasbourg) for making their MACROscope setups available to us. We also thank Dr. Gilbert Engler (INRA Sophia Antipolis, France) and Dr. Peter Kner (University of Georgia, Athens, GA, USA) for the interesting discussions.

References and links

1. P. Sendrowski and C. Kress, “Arrangement for analyzing microscopic and macroscopic preparations,” WO 2009/04711 (2009). PCT/EP2008/062749.

2. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]   [PubMed]  

3. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

4. M. Born and E. Wolf, Principles of Optics (Cambridge University Press, 1999).

5. P. A. Stokseth, “Properties of a defocused optical system,” J. Opt. Soc. Am. 59, 1314–1321 (1969). [CrossRef]

6. P. Pankajakshan, Z. Kam, A. Dieterlen, G. Engler, L. Blanc-Féraud, J. Zerubia, and J.-C. Olivo-Marin, “Point-spread function model for fluorescence macroscopy imaging,” in Proc. of Asilomar Conference on Signals, Systems and Computers, (2010), 1364–1368.

7. L. Sherman, J. Y. Ye, O. Albert, and T. B. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206, 65–71 (2002). [CrossRef]   [PubMed]  

8. M. J. Booth, M. A. Neil, R. Juškaitis, and T. Wilson, “Adaptive aberration correction in a confocal microscope,” Proc. Natl. Acad. Sci. USA 99, 5788–5792 (2002). [CrossRef]   [PubMed]  

9. Z. Kam, P. Kner, D. Agard, and J. W. Sedat, “Modelling the application of adaptive optics to wide-field microscope live imaging,” J. Microsc. 226, 33–42 (2007). [CrossRef]   [PubMed]  

10. M. J. Booth, “Adaptive optics in microscopy,” Philos. Transact. A Math. Phys. Eng. Sci. 365, 2829–2843 (2007). [CrossRef]   [PubMed]  

11. R. Juškaitis and T. Wilson, “The measurement of the amplitude point spread function of microscope objective lenses,” J. Microsc. 189, 8–11 (1998). [CrossRef]  

12. P. Pankajakshan, A. Dieterlen, G. Engler, Z. Kam, L. Blanc-Feraud, J. Zerubia, and J.-C. Olivo-Marin, “Wavefront sensing for aberration modeling in fluorescence macroscopy,” in Proc. IEEE International Symposium on Biomedical Imaging (ISBI), IEEE (IEEE, Chicago, USA, 2011).

13. P. Pankajakshan, “Blind Deconvolution for Confocal Laser Scanning Microscopy,” Ph.D. thesis, Université de Nice Sophia-Antipolis (2009).

14. T. J. Holmes, D. Biggs, and A. Abu-Tarif, “Blind Deconvolution,” in Handbook of Biological Confocal Microscopy, 3rd ed, J. B. Pawley, ed. (Springer, New York, 2006), Chap. 24, pp. 468–487. [CrossRef]  

15. B. M. Hanser, M. G. Gustafsson, D. A. Agard, and J. W. Sedat, “Phase retrieval for high-numerical-aperture optical systems,” Opt. Lett. 28, 801–803 (2003). [CrossRef]   [PubMed]  

16. J. E. Webb, “Distortion tuning of quasi-telecentric lens,” US Patent 7646543 (2010).

17. J. Winterot and T. Kaufhold, “Optical arrangement and method for the imaging of depth-structured objects,” US Patent 7564620 (2009).

Supplementary Material (1)

Media 1: AVI (23072 KB)     
