Optica Publishing Group

High-speed autofocusing of a cell using diffraction patterns

Open Access

Abstract

This paper proposes a new autofocusing method for observing cells under transmission illumination. The focusing method uses a quick and simple focus estimation technique termed “depth from diffraction,” which is based on a diffraction pattern in a defocused image of a biological specimen. Since this method can estimate the focal position of the specimen from only a single defocused image, it can easily realize high-speed autofocusing. To demonstrate the method, it was applied to continuous focus tracking of a swimming paramecium, in combination with two-dimensional position tracking. Three-dimensional tracking of the paramecium for 70 s was successfully demonstrated.

©2006 Optical Society of America

1. Introduction

Automated microscopic measurement of biological specimens is becoming increasingly important in medicine and the life sciences. A critical step in such measurement is autofocusing. Since the depth of focus of microscopes is usually very shallow, on the order of several micrometers, small shifts in the depth direction cause the image of the specimen to easily become out of focus. Thus, autofocusing is essential to keep the object in focus for precise observation. Furthermore, major applications of such automated measurement, such as image cytometry, require high throughput because the number of target specimens tends to be enormous. Therefore, high-speed operation is also important.

Many microscope focusing methods based on the spatial frequency of the acquired image have been proposed [1, 2, 3]. The best focal position providing the highest amount of detail can be estimated from a focus curve, formed by sampling the image to obtain a focus score and plotting it against focal position in the depth direction. The best focal position is then found by searching for the peak in the focus curve.
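As a concrete sketch of this focus-curve approach: the variance score below is only one of the many spatial-frequency-based focus measures compared in [1–3], and the function names and sampling interface are our own illustration, not the method of any particular reference.

```python
import numpy as np

def focus_score(img):
    """Image variance as a simple focus measure: sharper images
    have more contrast and therefore higher variance."""
    img = np.asarray(img, dtype=float)
    return float(((img - img.mean()) ** 2).mean())

def best_focus(images_by_z):
    """Pick the focal position whose image maximizes the focus score.

    images_by_z -- iterable of (z_position, image) pairs sampled along
    the optical axis; this sampling is what makes the focus-curve
    approach slow, since every sample needs a separate acquisition.
    """
    return max(images_by_z, key=lambda pair: focus_score(pair[1]))[0]
```

Each sample requires moving the stage, acquiring, and scoring an image, which is why 20 samples at 40 ms each add up to the 0.8 s quoted below.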

However, this sampling takes a considerable amount of time because many images at many focal positions must be individually acquired and processed. Assuming that acquisition and processing of each image takes 40 ms [3] and that 20 samples of the focus score are necessary to estimate the focus curve, the entire autofocus process takes at least 0.8 s.

This paper proposes a quick autofocusing method for microbiological specimens, such as micro-organisms and cells. The proposed focusing method uses a quick focus estimation that we call “depth from diffraction”, based on a diffraction pattern in the defocused image of a biological specimen. Since this method can estimate the focal position of the specimen from only a single defocused image, it can easily realize high-speed autofocusing. The performance of the proposed method was demonstrated by applying it to three-dimensional tracking of a swimming paramecium.

Since this method uses a diffraction pattern, and this pattern depends on the coherency of the illumination, not every illumination mode is compatible with it. In this paper we used Köhler illumination, and the method worked well. It may also work with differential interference contrast (DIC) microscopy, because diffraction patterns were observed in our preliminary experiments. However, it may not work in dark-field, reflection, or fluorescence modes.

2. Depth-from-diffraction method

Consider a cell observed with a microscope under Köhler illumination [4]. When the target cell is in focus, a clear image of the cell is observed. If we adjust the focus slightly to defocus the image, bright and dark diffraction fringes can be observed near the periphery of the cell. For example, Fig. 1 shows these fringes in images of paramecium and yeast cells.

We concentrate on two useful characteristics of these fringes: firstly, the interval of the fringes depends on the distance between the focal plane and the target cell; and secondly, the sequence of bright and dark fringes depends on the positional relationship between the focal plane and the cell. These characteristics suggest that the depth of the target cell could be estimated from a single defocused image of the cell containing such diffraction fringes. A similar phenomenon known as Becke lines is used in optical mineralogy [5, 6] to determine the refractive index of a transparent mineral.

Based on this idea, we propose a new depth estimation method using the diffraction fringes of a cell; we call this method “depth-from-diffraction.” Since this method can estimate the depth from only a single defocused image, the depth estimation could be performed extremely rapidly. Once the depth is estimated, focusing can be easily realized by moving the focal plane to the estimated depth, or moving the specimen onto the focal plane.

Fig. 1. Diffraction images of yeast and paramecium cells at three focal positions. The yeast cell has a spherical body about 5 μm in diameter. Paramecium is a motile cell with an ellipsoidal body whose longitudinal length is from 100 to 200 μm. The paramecium cell was held in a micro-capillary to keep it in the field of view. In the case of the yeast cell, an intensity variance was observed in the interior of the cell. No clear inner fringe was observed.

Fig. 2. Block diagram of the experimental set-up.

3. Experiments

3.1. Experimental set-up

To confirm the effectiveness of the proposed method, we developed the system shown in Fig. 2. The system consisted of an optical microscope (BX50WI, OLYMPUS), a computer vision system (I-CPV), an XYZ automated stage, and a personal computer (PC) running a real-time OS (ART-Linux, Moving Eye) for control. We also developed the image-processing program running on the I-CPV system and the control program running on the PC.

A bright-field image of a target paramecium under Köhler illumination was magnified with a 20× objective lens (N.A. 0.46, UMPlanFl, OLYMPUS) and projected onto the vision system. As described above, the diffraction pattern of the paramecium depends on its depth position.

Table 1. Specifications of the XYZ automated stage

Note that the pattern also depends on the diameter of the aperture stop of the illumination optical system, because the range of incident angles of the illumination light is determined by this diameter. In our experiments, we adjusted the stop diameter so that the diffraction pattern of the cell was clearly visible to the vision system.

To realize high-speed focusing, the vision system must be capable of high-speed image acquisition and processing. Therefore, the vision system we adopted was the so-called I-CPV system developed by Hamamatsu Photonics K. K. [7]. This was composed of a column parallel vision (CPV) system [8] and an image intensifier. The CPV system can capture and process an 8-bit gray-scale image with 128 × 128 pixels at 1-kHz frame rate.

Wild-type Paramecium caudatum cells were adopted as specimens; they were cultured at 25 °C in a soy flour solution. Cells were collected together with the solution, filtered through a nylon mesh to remove debris, washed with still (non-carbonated) mineral water, and infused into a chamber. The chamber was a small tank with dimensions 15 mm × 15 mm × 150 μm, made of a glass slide forming the base and pieces of cut cover glasses forming the side walls, bonded together with optical cement.

The chamber was fixed on the XYZ stage so that its position could be controlled in three dimensions by the PC. The stage’s specifications are shown in Table 1. The step response time of all axes was around 50 ms. All axes were actuated by AC servomotors (HF-KP053(B), Mitsubishi Electric) driven by AC servo amplifiers (MR-J3-10A, Mitsubishi Electric).

3.2. Extraction of diffraction pattern features

An image processing algorithm was developed to estimate a specimen’s depth from the diffraction pattern. The area of a bright fringe was measured as a feature indicating the fringe interval, because it was easier to extract this area than the interval itself. Thus, the areas of the outer and inner bright fringes were used as image features (indexes). Details of the image processing are shown in Fig. 3. This image processing can be executed within 1 ms by the I-CPV.
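The actual algorithm is the one in Fig. 3, executed on the I-CPV hardware, but the idea of measuring bright-fringe areas can be sketched in ordinary array code. The following is a minimal illustration only: it counts bright pixels in thin ring-shaped bands just outside and just inside the edge of a segmented, roughly circular cell; the band width and brightness threshold are hypothetical values, not those of the real system.

```python
import numpy as np

def fringe_indexes(img, cell_mask, band=5, thresh=180):
    """Measure the areas (pixel counts) of the bright fringes just
    outside and just inside the edge of a segmented cell.

    img       -- 2-D uint8 gray-scale image (e.g. 128 x 128)
    cell_mask -- boolean mask of the (roughly circular) cell body
    band      -- width in pixels of the ring-shaped search bands
    thresh    -- brightness threshold for a "bright fringe" pixel
    """
    ys, xs = np.nonzero(cell_mask)
    cy, cx = ys.mean(), xs.mean()              # cell centroid
    # Effective cell radius from the mask area (assumes a round cell).
    r_cell = np.sqrt(cell_mask.sum() / np.pi)

    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - cy, xx - cx)             # radial distance map

    bright = img >= thresh
    ring_outer = (r > r_cell) & (r <= r_cell + band)
    ring_inner = (r < r_cell) & (r >= r_cell - band)

    i_outer = int(np.count_nonzero(bright & ring_outer))
    i_inner = int(np.count_nonzero(bright & ring_inner))
    return i_outer, i_inner
```

The ring-shaped masks here play the same role as the mask mentioned in Sec. 3.3: a fringe that grows beyond the band is no longer fully counted.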

This algorithm was developed for a single isolated cell. However, the depth-from-diffraction method itself can be applied to two or more cells, even when they are not isolated, since their diffraction patterns can still be observed. The problem is how to extract the image features from the cells’ diffraction pattern: both the image features and the diffraction pattern depend on the cell type, distribution, and shape, and on the optical setup, such as the numerical aperture.

3.3. Profile measurement of the diffraction pattern features

Profiles of the calculated indexes were measured by varying the specimen’s position along the optical axis of the microscope (Z direction). The specimen was a paramecium cell held on the bottom slide glass of the chamber, with its longitudinal axis parallel to the slide glass. The chamber position was controlled with the XYZ stage. We define the Z coordinate to be parallel to the optical axis of the microscope, with its origin at the focal plane; z > 0 means the specimen is closer to the objective lens than the focal plane. The indexes were measured while the specimen was scanned from z = −50 μm to z = 50 μm in 2 s. The sampling rate was 1000 samples/s, so a total of 2000 samples were measured.

Fig. 3. Block diagram of the image processing used to extract features of the bright fringes in the Becke pattern. All thresholds were determined manually using three images of the specimen (one focused and two defocused in opposite directions) captured under the experimental conditions.

Fig. 4. (a) Profile of bright fringe and (b) profile of estimated Z position of the object.

Figure 4(a) shows the measured profile of the indexes. For z > 0, the index of the outer fringe has a certain value, while that of the inner fringe is almost zero. In contrast, for z < 0, the index of the inner fringe is larger than that of the outer fringe. In the region 0 ≤ z ≤ 15 μm, the index of the outer fringe is almost linearly dependent on z, whereas in the region −15 μm ≤ z ≤ 0, the index of the inner fringe is almost linearly dependent on z. Outside these regions, there is no linear relationship. This is because, as the specimen moves away from the focal plane, the bright fringe grows larger and exceeds the size of the ring-shaped mask used in the image processing.

The target’s z coordinate was estimated using the measured indexes of the bright fringes. Assuming that z linearly depends on the index of the outer fringe when z ≥ 0 and the index of the inner fringe when z < 0, z could be estimated using

$$\hat{z} = \begin{cases} a_1 i_{\mathrm{outer}} + a_0 & (z \ge 0) \\ b_1 i_{\mathrm{inner}} + b_0 & (z < 0) \end{cases} \qquad (1)$$

where ẑ is the estimated z, i_outer and i_inner are the indexes of the outer and inner fringes, and a_1, a_0, b_1, and b_0 are coefficients calculated by fitting the measured profiles of the indexes using a least-squares method. The sets of coefficients (a_1, a_0) and (b_1, b_0) were calculated for 0 ≤ z ≤ 8.0 μm and −4.0 μm < z < 0, respectively.

To estimate z using Eq. (1), the sign of z should be estimated first to determine which equation should be applied. According to the measured profile of the indexes, the index of the inner fringe was almost 0 when z was positive. Thus, the sign of z was estimated from the index of the inner fringe.
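Putting the sign decision and Eq. (1) together, the estimator can be sketched as follows. The `inner_floor` threshold for deciding that the inner-fringe index is “almost zero” is a hypothetical parameter, as are the coefficient values used in the example.

```python
def estimate_z(i_outer, i_inner, a1, a0, b1, b0, inner_floor=5):
    """Estimate depth z from the fringe indexes using Eq. (1).

    The sign of z is decided first: the inner-fringe index is almost
    zero when z > 0, so a small i_inner indicates z >= 0.  The
    coefficients (a1, a0) and (b1, b0) come from least-squares fits
    of the measured index profiles.
    """
    if i_inner <= inner_floor:        # z >= 0: use the outer fringe
        return a1 * i_outer + a0
    else:                             # z < 0: use the inner fringe
        return b1 * i_inner + b0
```

With illustrative coefficients a1 = 0.1, a0 = 0, b1 = −0.1, b0 = 0, an outer-fringe area of 100 pixels and no inner fringe would give ẑ = 10, while an inner-fringe area of 50 pixels would give ẑ = −5.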

The profile of the estimated z is shown in Fig. 4(b). This result indicates that estimation with the linear assumption is valid for −10 μm < z < 15 μm.

If the distance between the specimen and the focal plane is less than 50 μm, the sign of the specimen’s z position can still be estimated. When the specimen’s z position exceeds this range, its position cannot be estimated using the depth-from-diffraction method, and a scanning-type focusing method must be adopted instead.

This result indicates that the proposed method works well when the specimen is near the focal plane. Thus the method is adequate for quick local focus adjustment near the focal plane. However, the method is not suitable for global focusing. In this case, the focus-curve method described in the introduction should be adopted.

Note that the depth estimation took only 1 ms using our system, which is much shorter than the response time of typical focusing mechanisms. Thus, the achievable focusing speed is practically limited by the response speed of the focusing mechanism. In our experiment, the focusing took about 50 ms, which was the step response time of the XYZ stage.

3.4. Dynamic focusing applied to three-dimensional tracking of a swimming paramecium

Dynamic focusing using the depth-from-diffraction method was applied to three-dimensional tracking of a swimming paramecium. This experiment was carried out to demonstrate the advantages of the proposed focusing method.

The purpose of this experiment was to stay focused on a freely swimming paramecium, and also to keep the specimen within the field of view by controlling the chamber position using the XYZ stage.

The Z-axis was controlled to keep the target on the focal plane. The desired position on the Z-axis, z_d, was determined so as to track the specimen on the focal plane using the specimen’s estimated z position. In this experiment, z_d was given by

$$z_d = z - k\,(i_{\mathrm{outer}} - i_{\mathrm{inner}}) \qquad (2)$$

where z is the current position on the Z-axis, i_outer and i_inner are, respectively, the indexes of the outer and inner fringes, k(i_outer − i_inner) is the specimen’s estimated z position, and k is a gain parameter determined in an ad hoc way. Eq. (2) is based on the rough assumption that the specimen’s position depends linearly on the difference between the two indexes; this ad hoc assumption worked well for the purpose of focus tracking.
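A minimal sketch of this control law follows, assuming the minus-sign convention of Eq. (2) and purely illustrative gains. The toy simulation models the index difference as proportional to the defocus, simply to show that the set-point update drives the defocus toward zero when the gain is roughly matched.

```python
def desired_stage_z(z, i_outer, i_inner, k):
    """Eq. (2): Z-stage set-point.  k * (i_outer - i_inner) is the
    specimen's estimated offset from the focal plane; moving the
    stage by minus that offset re-centers the specimen.  k is an
    ad hoc gain."""
    return z - k * (i_outer - i_inner)

def simulate_focus_lock(defocus_um, k=0.4, true_gain=2.0, steps=15):
    """Toy closed loop (hypothetical numbers): the fringe-index
    difference is modeled as true_gain * defocus, and moving the
    stage moves the specimen with it.  Returns the remaining
    defocus after `steps` control cycles."""
    z = 0.0
    for _ in range(steps):
        diff = true_gain * defocus_um          # modeled i_outer - i_inner
        # Only the index difference matters, so pass it as i_outer.
        z_d = desired_stage_z(z, diff, 0.0, k)
        defocus_um += (z_d - z)                # stage motion moves specimen
        z = z_d
    return defocus_um
```

With k·true_gain = 0.8 the residual defocus shrinks by a factor of 5 per cycle, so an initial 10 μm offset is negligible after a few steps; in the real system the cycle time is set by the 50 ms stage response, not by the 1 ms estimation.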

The X- and Y-axes were controlled to keep the target in the center of the field of view. The target position in the field was measured from the centroid of the captured image [9].

Three-dimensional tracking of a swimming paramecium for 70 s was successfully demonstrated. Figure 5(a) shows a sequence of images captured by a monitoring CCD camera as the paramecium turned from facing the bottom of the chamber to facing the top while being tracked to keep it on the focal plane. The variation of the focal position can be seen by observing a dust particle indicated by the circles A, B, and C in Fig. 5. In circle A, the image of the dust particle is blurred. In circle B, it is near focus, as the paramecium swam in the direction of the optical axis. In circle C, it is blurred again.

Figure 5(b) shows a computer-generated animation based on the captured trajectory of the paramecium. The position of the paramecium was calculated from both the position of the XYZ stage and the target position in the image captured by the I-CPV.

Fig. 5. (a) Sequence of photographs acquired by the CCD camera. The maximum speed of the focus position movement during this sequence was 346 μm/s. (b) Computer-generated movie based on the acquired trajectory. Two small movies are superimposed: one is a movie of the target paramecium captured by a CCD camera, placed at the top left of the movie, and the other is a computer-generated animation from the same viewpoint, placed at the bottom right of the movie. [Media 1]

Fig. 6. Relationship between focal plane position and features of the acquired image. Two schematics are shown to explain the diffraction pattern generation qualitatively: (a) the object is in front of the focal plane and a bright fringe appears inside of the cell or (b) the object is behind the focal plane and a bright fringe appears outside of the cell.

These results indicate that the depth-from-diffraction method realized high-speed focusing that could track a swimming paramecium while keeping it in focus.

4. Discussion

The mechanism of the diffraction pattern generation can be explained qualitatively by modeling the cell as a spherical shell. Figure 6 shows a schematic diagram illustrating the diffraction pattern generation. It is assumed that a cell is illuminated by planar monochromatic light from the left, and that the incident light is refracted mainly by the cell membrane. The membrane acts as a lens for the light rays that pass through the membrane but not through the inside of the cell. The cell affects the rays in two ways. One set of rays passes through the side of the cell (and not the center); these are deflected inward. A second set of rays passes through one side of the cell, the cell’s center, and then the other side; these are deflected only a small amount. Thus, when the focal plane is behind the cell, we can observe a bright fringe inside and a dark fringe outside. Conversely, when the cell is behind the focal plane, we can observe a virtual image with a bright fringe outside and a dark fringe inside.

Although the above discussion is based on paraxial rays, Fresnel diffraction theory [10] should be adopted for a more rigorous treatment of the phenomenon. Assume a thin object illuminated by a monochromatic planar lightwave from the left, as shown in Fig. 7. Its phase and amplitude transmittance function is g(ξ,η). The complex field across the z = 0 plane placed just behind the object is represented by U(x,y,z). The object is observed using a microscope objective lens placed behind the object. Here, to simplify the problem, we assume that the magnification of the objective lens is 1 and that the lens perfectly projects the inverted intensity distribution at the focal plane onto the image plane. That is, we neglect aberrations and the diffraction of the objective lens. Then, the intensity distribution I(x,y) at the image plane is given by

Fig. 7. Arrangement used in calculating theoretical intensity distribution.

$$I(x,y) = \left|U(x,y,0)\right|^2 = \frac{I_0}{\lambda^2 z^2} \left| \iint g(\xi,\eta)\, \exp\!\left[\frac{j\pi}{\lambda z}\left\{(x-\xi)^2 + (y-\eta)^2\right\}\right] d\xi\, d\eta \right|^2 \qquad (3)$$

where I 0 is a constant intensity depending on the illumination light intensity. Note that this equation is valid also for negative z, that is, when the object is placed behind the focal plane. When the diffraction pattern I(x,y) depends uniquely on z around z = 0, the depth of the object can be estimated by extracting the features of the diffraction pattern, as described above.

For example, diffraction patterns of two different objects, an amplitude knife-edge and a phase knife-edge, are considered [6, 10]. An amplitude knife-edge is a semi-infinite opaque plane bounded by a sharp straight edge; and a phase knife-edge is a semi-infinite but transparent plane also bounded by a sharp straight edge. Placing the semi-infinite plane at ξ < 0, the phase and amplitude transmittance function g(ξ,η) can be denoted as

$$g = \begin{cases} 1 & (\xi \ge 0) \\ 0 & (\xi < 0) \end{cases} \qquad \text{(amplitude knife edge)} \qquad (4)$$

and

$$g = \begin{cases} 1 & (\xi \ge 0) \\ e^{j\phi} & (\xi < 0) \end{cases} \qquad \text{(phase knife edge)} \qquad (5)$$

where ϕ is the phase shift due to the phase knife-edge, whose refractive index is different from that of the surrounding medium.

The intensity profiles were calculated using Eq. (3), and the results are shown in Fig. 8. The horizontal axes show a scaled spatial parameter $P = \sqrt{2/(\lambda z)}\,x$. Thus, the actual diffraction pattern is obtained by scaling the shown profile by a factor of $\sqrt{\lambda z/2}$. Since this factor depends on z, we can estimate the absolute value of z from the distance between intensity peaks.

The diffraction pattern of the phase knife-edge for z < 0 is the reversed profile of that for z > 0. Thus, the sign of z can be estimated when the object is a phase knife-edge. However, the diffraction patterns of the amplitude knife-edge for z > 0 and z < 0 are identical, which means that the sign of z cannot be estimated in this case. These results show that the proposed method cannot be applied to all types of microscopic specimens. For example, typical micro electro mechanical systems (MEMS) on a common silicon substrate are opaque, and they cannot be focused using this method.
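These knife-edge properties can be reproduced numerically by evaluating Eq. (3) reduced to one dimension. In the sketch below, the wavelength, grid sizes, and window are illustrative choices only, and the constant prefactor is set so the intensities are of order one.

```python
import numpy as np

def knife_edge_pattern(z, phase=None, wavelength=0.5e-6, span=60e-6,
                       n=2000, m=201, x_max=20e-6):
    """One-dimensional numerical evaluation of the Fresnel integral in
    Eq. (3) for a knife edge at xi = 0.

    phase=None -> amplitude knife edge (opaque for xi < 0)
    phase=phi  -> phase knife edge, transmittance exp(j*phi) for xi < 0
    Returns (x, I) with the image-plane coordinate and intensity.
    """
    xi = np.linspace(-span, span, n)            # object-plane coordinate
    dxi = xi[1] - xi[0]
    g = np.ones(n, dtype=complex)
    if phase is None:
        g[xi < 0] = 0.0
    else:
        g[xi < 0] = np.exp(1j * phase)
    g = g * np.hanning(n)                       # soft aperture to suppress
                                                # ringing from the finite window
    x = np.linspace(-x_max, x_max, m)           # image-plane coordinate
    kern = np.exp(1j * np.pi / (wavelength * z)
                  * (x[:, None] - xi[None, :]) ** 2)
    U = (kern * g).sum(axis=1) * dxi            # Fresnel integral, Eq. (3)
    return x, np.abs(U) ** 2 / (wavelength * abs(z))
```

Evaluating this for ±z reproduces the behavior in Fig. 8: the amplitude-edge intensity is identical for z and −z, while the phase-edge pattern is mirrored, which is precisely what makes the sign of z recoverable for transparent specimens.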

Fig. 8. Theoretical intensity profiles of the amplitude knife-edge and phase knife-edge [6]: (a) amplitude knife-edge (z > 0), (b) amplitude knife-edge (z < 0), (c) phase knife-edge (z > 0), and (d) phase knife-edge (z < 0).

When the object is a phase knife-edge, z can be estimated from the diffraction pattern. This indicates that we can estimate the positional relationship between the focal plane and the object from a single defocused image whenever different patterns are formed depending on z. This result suggests that the depth-from-diffraction method could be applied not only to spherical cells, as described in the first paragraph of this section, but also to flat transparent specimens, such as amoebae and squamous cells. However, the cell types that are suitable for use in this method remain the subject of a future study.

As described above, the diffraction pattern is theoretically scaled by a factor of $\sqrt{\lambda z/2}$. However, the profiles of the indexes shown in Fig. 4 seem to depend linearly on z, not on √z. The likely reason is the coarse pixels of the imager: when z is small, the distance between the bright and dark fringes is less than one pixel, so the imager is insensitive to changes in the fringe spacing. This should straighten the profile near z = 0.

5. Conclusion

In this paper, a new autofocusing method for observing cells was proposed. The proposed focusing method uses a quick and simple focus estimation technique termed “depth from diffraction,” which is based on a diffraction pattern in a defocused image of a biological specimen under Köhler illumination. Since this method can estimate the depth from only a single defocused image, the depth estimation could be performed in 1 ms using a high-speed image processing system. Although this method requires specimens with certain optical characteristics, many cells seem to satisfy the requirement. This method worked well when the specimen was near the focal plane. Thus, it is suitable for quick local focus adjustment near the focal plane. Focus tracking of cells while scanning the field of view is considered to be a good application.

References and links

1. J. H. Price and D. A. Gough, “Comparison of Phase-Contrast and Fluorescence Digital Autofocus for Scanning Microscopy,” Cytometry, 16, 283–297 (1994). [CrossRef]  

2. M. Subbarao and J.-K. Tyan, “Selecting the Optimal Focus Measure for Autofocusing and Depth-From-Focus,” IEEE Trans. Pattern Anal. Mach. Intell., 20, 864–870 (1998). [CrossRef]

3. J.-M. Geusebroek, F. Cornelissen, A. W. M. Smeulders, and H. Geerts, “Robust Autofocusing in Microscopy,” Cytometry, 39, 1–9 (2000). [CrossRef] [PubMed]

4. M. Born and E. Wolf, Principles of Optics, 7th Edition (Cambridge University Press, Cambridge, 2002).

5. W. D. Nesse, Introduction to Optical Mineralogy. (Oxford University Press, New York, 1991).

6. T. Tsuruta, Oyo-kogaku (Applied Optics) I (Baifukan, Tokyo, 1990) (in Japanese).

7. H. Toyoda, N. Mukohzaka, K. Nakamura, M. Takumi, S. Mizuno, and M. Ishikawa, “1ms column-parallel vision system coupled with an image intensifier; I-CPV,” in Proceedings of Symp. High Speed Photography and Photonics 2001, vol. 5-1, 2001, pp. 89–92 (in Japanese).

8. Y. Nakabo, M. Ishikawa, H. Toyoda, and S. Mizuno, “1ms column parallel vision system and its application to high-speed target tracking,” in Proceedings of the IEEE International Conference on Robotics & Automation (Institute of Electrical and Electronics Engineers, New York, 2000), pp. 650–655.

9. H. Oku, N. Ogawa, K. Hashimoto, and M. Ishikawa, “ Two-dimensional tracking of a motile microorganism allowing high-resolution observation with various imaging techniques,” Rev. Sci. Instrum., 76, 034301 (2005). [CrossRef]  

10. J. W. Goodman, Introduction to Fourier Optics. (McGraw-Hill, Inc., Boston, Massachusetts,1996).

Supplementary Material (1)

Media 1: MOV (2507 KB)     
