Abstract
This paper presents a passive autofocus algorithm applicable to interferometric microscopes. The proposed algorithm uses the number of slope variations in an image mask to locate the focal plane (based on focus-inflection points) and to identify the two neighboring planes at which fringes respectively appear and disappear. In experiments involving a Mirau objective lens, the proposed algorithm matched the autofocusing performance of conventional algorithms and significantly outperformed detection schemes based on the zero-order interference fringe in dealing with surface blemishes of all kinds, regardless of severity. The proposed algorithm also proved highly effective in cases without fringes.
© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Optical microscopy measurement systems are widely used in characterizing semiconductors [1], nanostructures [2], fluorescence devices [3], and biological entities [4, 5]. Autofocusing methods are classified as active [6–12] or passive [13–23]. Active autofocus systems direct a signal (e.g., laser or ultrasound) toward the sample and then detect the reflected signal. By comparing the reflected signal with a given reference, it is possible to derive the distance between the light source and the sample (the focal length) for use as a reference in adjusting the lens group. Passive autofocus systems use a camera to capture a large number of sample images at various focal lengths, whereupon an algorithm identifies those that capture the sharpest representation of the sample. Passive autofocus systems are generally more robust, cost-effective, and accurate than active systems; however, their response time is usually much slower.
Passive autofocus systems capture a stack of images, some of which lie within the range of focus and some of which lie outside it. An algorithm is used to automate the identification of one or more images that lie within that range of focus based on the maximum intensity, the accumulated sum of intensity values, or other metrics. The algorithm most commonly used for interferometric microscopes is based on the zero-order interference fringe [24] (i.e., the fringe formed by the superposition of multiple wavelengths), which uses the fringe with the maximum intensity to identify the focal point. Table 2 lists five common autofocus algorithms used to control the passive autofocus operation of normal microscopes: square gradient, image power, Brenner gradient, energy Laplace, and maximum pixel intensity [25]. In these five algorithms, various operators are used to calculate the intensity value of each image within a specified area (mask) and then derive the focal point corresponding to the peak position (highest accumulated sum). In recent years, researchers have developed other autofocusing algorithms that are more robust and/or accurate than those listed in Table 2. The use of phase congruency to find the focal plane makes the scheme in [19] highly robust to sensor noise under a range of illumination conditions, while providing a good balance between defocus sensitivity and effective range. The scheme in [20] maximizes the image score using six different image-scoring algorithms to deal with a wide range of excitation wavelengths. Its automated multi-axis alignment procedure also enhances the versatility of the system. Using as few as two intermediate images, the scheme in [21] is able to find the focal plane in phase-contrast (bright-field) microscopy or fluorescence microscopy images from pathology slides.
In the current study, we developed a passive autofocus algorithm based on the number of slope variations associated with the combined effects of the diffusivity of light reflected from the sample surface and the ideal interferometric fringes. Diffuse reflection from a smooth surface (scattering evenly in many directions) produces surface reflections with distinct variations in intensity. Ideal interferometric fringes from a perfectly smooth surface contain only one fringe of maximum intensity (the zero-order fringe) together with several other fringes of lower intensity. Although the variations in intensity on the smooth surface plane are high, the maximum-intensity fringe (i.e., the zero-order fringe) can effectively suppress those variations on the focal plane. By contrast, the weaker fringes (i.e., the appearance and disappearance of fringes) cannot effectively suppress the variations in intensity, which remain high. Thus, the variations in intensity on the smooth surface plane differ among the ideal appearance fringe, the ideal maximum fringe (i.e., the zero-order fringe), and the ideal disappearance fringe. The proposed algorithm uses the number of slope variations in an image mask to identify focus-inflection points in order to locate the image that corresponds to the focal plane. Implementing the proposed algorithm once produces one focal point. Implementing the algorithm multiple times produces multiple focal points, which can be used to compile 2D and 3D profiles for use in rebuilding regions affected by dirt or imperfections (hereafter referred to as blemishes) on the sample surface. Note that this is not possible using the zero-order interference fringe. In this study, blemishes are differentiated as uniform and non-uniform.
The even scattering of reflected light by uniform surface blemishes produces largely intact fringes with small variations in intensity (shown in Fig. 9(b)), such that the fringe with the highest intensity is located on the focal plane. The uneven scattering of reflected light by non-uniform surface blemishes produces non-intact fringes with large variations in intensity (shown in Fig. 13(b)), such that the fringe with the highest intensity is located on the defocal plane. The number of slope variations in an image mask can also be used to identify the two neighboring defocal planes in order to reduce the scope of the area that must be re-scanned (i.e., the distance between the first and last images) to enhance efficiency by reducing computational overhead. The advantage of the proposed algorithm is that focal points correspond to the lowest number of slope variations, and blemishes correspond to higher numbers of slope variations. Thus, the trend associated with the focal point differs from that of the blemish. However, in terms of the intensity of the zero-order interference fringe, the trends associated with the focal plane and the blemish are the same, which means algorithms based on the zero-order interference fringe cannot be used to differentiate smooth surface areas from blemishes.
In Section 2, we outline the proposed autofocus algorithm, in which the number of slope variations is used to identify inflection points associated with focal distance. In Section 3, we use simulations to show how the ideal theoretical fringe influences the variations in the intensity of light reflected from the sample surface. We also demonstrate that the numbers of slope variations differ among the ideal appearance fringe, the maximum fringe on the focal plane, and the disappearance fringe. We assess the robustness of the proposed algorithm by comparing the numbers of slope variations found in the simulations with those measured in the experiments. Section 4 outlines experiments used to assess the feasibility of using inflection points for focusing and compares the proposed scheme with existing systems. Conclusions are drawn in Section 5.
2. Implementation of proposed autofocus algorithm
Figure 1 outlines the experimental setup of an interferometric microscope, in which a Mirau objective is installed on a normal microscope to produce interference fringes. The light path is indicated by the following labels: Lens_1, Beam splitter_1, Lens_2, pixel C (on the step-height sample), and Lens_3. The proposed autofocus algorithm uses the number of slope variations in the mask to identify the focal plane at pixel $C(i_c, j_c)$. The step-height sample is held on a piezo scanning stage (PZT), which moves the sample a set distance of R μm away from the Mirau objective lens in order to produce interference fringes on the focal plane. As shown in Fig. 1, when the optical path between Beam splitter_2 and the reference mirror is equal to the optical path between Beam splitter_2 and pixel C on the sample, fringes are produced by the Mirau objective. The number of slope variations in the mask is indicated by $N_s(z)$, where z refers to a stack of M images labeled 0th, 1st, …, (M−1)th. As shown in Fig. 1 and Fig. 2(a), an image is first captured with the sample located close to the Mirau objective. This image is designated the 0th image, which is $(0 \times R)$ μm away from the lens along the z-axis. The scanning stage then moves the sample a fixed distance R μm from the lens and captures the 1st image in the stack, corresponding to the position located $R (= 1 \times R)$ μm from the lens. The 2nd image corresponds to the position located $2R (= 2 \times R)$ μm from the lens, and so on. After M cycles, the resulting M images are labeled 0th, 1st, 2nd, 3rd, …, (M−1)th. As shown in Fig. 2(a), the x and y coordinates are measured in pixels (within a single image), and the z coordinate is measured by the number of images in the stack. As shown in the masks $M_{-z}$, $M_z$, and $M_{+z}$ in Fig. 2(b), an ideal theoretical interferometric fringe from a white-light source would be smooth and vary only at the fringe peaks. In that image, the fringe with the maximum intensity (i.e., the zero-order fringe) lies within the green area and is shown in the corresponding $M_z$ image. One weak-intensity fringe lies within the red area on the left and is shown in the corresponding $M_{-z}$ image, whereas the other weak-intensity fringe, lying within the red area on the right, is shown in the corresponding $M_{+z}$ image. Figure 2(c) illustrates the large number of high-amplitude variations within a cross-section passing through pixel C (i.e., $(i_c, j_c)$) obtained from a step-height sample with a smooth surface. The high-amplitude variations in the cross-section observed in Fig. 2(c) were produced by diffuse reflections from the sample surface. The three areas marked in Fig. 2(c) indicate where images $M_{-z}$, $M_z$, and $M_{+z}$ were captured. The variations in the fringe presented in Fig. 2(d) are produced by the weaker fringe intensity in the corresponding $M_{-z}$ in Fig. 2(b) and by the variations in the stronger intensity of light reflected from the sample surface (produced by diffuse reflection) in the corresponding $M_{-z}$ in Fig. 2(c). Thus, Fig. 2(d) presents the captured $M_{-z}$ image, in which the cross-section of the fringe shows these variations. Similarly, the cross-section of the fringe in the $M_{+z}$ image (Fig. 2(f)) shows variations due to the weaker fringe intensity and the variations in the intensity of light reflected from the sample surface. By contrast, the maximum intensity of the fringe in the corresponding $M_z$ in Fig. 2(b) effectively suppresses variations in the intensity of the light reflected from the sample surface in the corresponding $M_z$ in Fig. 2(c).
Thus, the cross-section of the fringe in $M_z$ on the focal plane (Fig. 2(e)) is smoother than those in Fig. 2(d) and Fig. 2(f). Because of the short coherence length of the white-light source in Fig. 1, the interferometric fringes are sensitive to variations in the intensity of light reflected from the sample surface. The maximum fringe intensity suppresses these variations, such that $N_s(z)$ reaches its lowest value where the fringe intensity is highest, as shown in Fig. 2(e). The weaker the fringe intensity, the stronger the variations in the intensity of the light reflected from the sample surface. Therefore, $N_s(M_{-z})$ in Fig. 2(d) and $N_s(M_{+z})$ in Fig. 2(f) are always higher than $N_s(M_z)$ in Fig. 2(e).
Step 1: Determining the number of slope variations in the mask, ${N_s}(z )$
As shown in Fig. 2(d)-(f), pixel C $(i_c, j_c)$ was adopted as the center pixel in an $N_x \times N_y$ mask (rows × columns), where all of the pixels within the mask are described as follows:
The following equation is used to determine the position of slope variations in the mask:
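Equations (1)-(4) are not reproduced above; as a hedged, pure-Python sketch of the counting step, one plausible reading is that a "slope variation" is a sign change in the first difference of the 1-D intensity cross-section inside the mask (the function name below is ours, not the paper's):

```python
def count_slope_variations(intensity):
    """Count sign changes in the slope of a 1-D intensity cross-section.

    `intensity` is the list of pixel values inside the mask.  A slope
    variation is counted whenever the first difference changes sign;
    flat runs (zero difference) are skipped.
    """
    count = 0
    prev_sign = 0
    for a, b in zip(intensity, intensity[1:]):
        diff = b - a
        sign = (diff > 0) - (diff < 0)
        if sign != 0 and prev_sign != 0 and sign != prev_sign:
            count += 1
        if sign != 0:
            prev_sign = sign
    return count
```

Applying this to the mask cross-section of every image in the stack yields a curve such as $N_s(z)$.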
Step 2: Derive the values for ${f_1}(z )$ and ${f_2}(z )$ from ${N_s}(z )$ in the diagram using the discrete Fourier and inverse discrete Fourier transforms.
The discrete Fourier transform is written as follows:
The inverse discrete Fourier transform is written as follows:
Substituting Eqs. (5) and (6) into Eq. (8) gives curve $f_1(z)$. Similarly, substituting Eqs. (5) and (7) into Eq. (9) gives curve $f_2(z)$.
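Since Eqs. (5)-(9) are not reproduced above, the following pure-Python sketch shows one plausible reading: $f_1(z)$ and $f_2(z)$ are low-pass reconstructions of $N_s(z)$ obtained by zeroing high-frequency DFT coefficients before the inverse transform (the cutoff parameter `keep` is an illustrative assumption):

```python
import cmath

def dft(x):
    # Discrete Fourier transform, O(N^2) but dependency-free.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    # Inverse DFT; the input here is real-valued, so keep the real part.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def lowpass(signal, keep):
    """Zero every DFT coefficient whose folded frequency index
    exceeds `keep`, then reconstruct the smoothed curve."""
    X = dft(signal)
    N = len(X)
    for k in range(N):
        f = k if k <= N // 2 else k - N  # fold to [-N/2, N/2)
        if abs(f) > keep:
            X[k] = 0
    return idft(X)
```

With this reading, $f_1(z)$ and $f_2(z)$ would simply be `lowpass(ns, keep)` evaluated with two different cutoffs.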
In Eq. (10), $M_i^{\prime}$ indicates the focus-inflection points, $z^{\prime}$ indicates the labeled images (from the 1st to the $(M-2)$th), and three neighboring images ($z^{\prime}-1$, $z^{\prime}$, and $z^{\prime}+1$) are used to determine whether the value of $-1$ occurs in $f_1(z^{\prime})$. If the calculated result is $-1$, then $z^{\prime}$ is regarded as $M_i^{\prime}$. Equation (10) is used to obtain seven values for $M_i^{\prime}$ ($M_1^{\prime}, M_2^{\prime}, \ldots, M_7^{\prime}$), as indicated by the hollow circles in Fig. 3(c):
Pixel C on focal plane $M_4^{\prime}$ (i.e., $M_z$) can be identified from among the $M_i^{\prime}$ focus-inflection points using Eq. (11) for interferometric microscopes.
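Under one reading of Eqs. (10)-(11), an inflection candidate occurs where the slope of $f_1$ changes sign across three neighboring images, and the focal plane is the candidate with the lowest $N_s$ (arg min; the $V_b$ and $C_b$ terms of Eq. (11) are omitted in this hedged sketch):

```python
def focus_inflection_points(f1):
    """Return indices z' (1 .. M-2) where the slope of f1 changes sign
    across the three neighbours (z'-1, z', z'+1) -- i.e. where the
    product of the two one-sided differences is negative, a plausible
    reading of the '-1' test in Eq. (10)."""
    pts = []
    for z in range(1, len(f1) - 1):
        left = f1[z] - f1[z - 1]
        right = f1[z + 1] - f1[z]
        if left * right < 0:
            pts.append(z)
    return pts

def focal_plane(ns, pts):
    # Among the candidate planes, pick the one with the lowest number
    # of slope variations (arg min, as in the Mirau case; the no-fringe
    # case of Section 4.4 would use arg max instead).
    return min(pts, key=lambda z: ns[z])
```

Neighbors of the chosen candidate in `pts` then bound the reduced re-scanning range, as described in Section 4.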
3. Simulation results for slope variations in images from interferometric microscope
The experiment described below was conducted using a microscope (Olympus BX51M) with a 100 W halogen lamp light source, a 10X normal objective lens (RMS10X plan achromat objective, 0.25 NA, WD 10.6 mm), a 10X Mirau objective lens (Mirau interferometer, Nikon CF Plan, Japan), a CCD camera (DCU223C, 1024 × 768 resolution, color, pixel size 4.65 μm × 4.65 μm), and a piezo scanning stage (LPS710/M), as shown in Fig. 1. The performance of the proposed algorithm was evaluated by performing a series of proprietary MATLAB simulations on a notebook computer equipped with an Intel Core i5-6200U (2.31 GHz) and 4 GB of RAM.
Figure 4(a)-(c) present simulated fringes, and the corresponding four solid circles are pixels in rows 150, 450, 722, and 986. Each solid circle indicates the center of a 201×1 mask, indicated by the green area. In the green mask in Fig. 4, $M_p$ indicates the varied intensity without fringes on the surface of the step-height sample; $M_{-z}$ indicates the appearance of fringes; $M_z$ indicates the maximum-intensity fringe (i.e., the zero-order fringe); and $M_{+z}$ indicates the disappearance of fringes. Figure 4(a) corresponds to Fig. 2(c); Fig. 4(b) corresponds to Fig. 2(b); and Fig. 4(c) is the result of Fig. 4(b) multiplied by Fig. 4(a), corresponding to Fig. 2(d)-(f). Figures 5(a)-(d) present the experimental results obtained using the set-up in Fig. 1. Figure 5(a) presents the captured $M_p$ image without fringes, and the cross-section passing through pixel C (515, 1265) with the 201×1 mask, corresponding to $N_s(M_p)$ in Fig. 4(c). Figure 5(b) is the captured $M_{-z}$ image showing the appearance of fringes and the cross-section passing through pixel C (515, 1265) with the 201×1 mask, corresponding to $N_s(M_{-z})$ in Fig. 4(c). Figure 5(c) is the captured $M_z$ image with the maximum-intensity fringe and the cross-section passing through pixel C (515, 1265) with the 201×1 mask, corresponding to $N_s(M_z)$ in Fig. 4(c). Figure 5(d) is the captured $M_{+z}$ image showing the disappearance of fringes and the cross-section passing through pixel C (515, 1265) with the 201×1 mask, corresponding to $N_s(M_{+z})$ in Fig. 4(c).
The simulation depicted in Fig. 4(a) was conducted as follows. In experiments, the intensity value of the cross-section in Fig. 5(a) was approximately 140; thus, a straight line with an intensity value of 140 is regarded as $I_{in}$. The point spread function (PSF) of Gaussian blurring and additive white Gaussian noise (AWGN) are used to simulate variations in the intensity of the light reflected from the sample surface in Fig. 1. Gaussian blurring and additive white noise were implemented using three MATLAB functions, as follows:
- a. $PSF = fspecial('gaussian', 10, 2)$, where 10 refers to the size of the filter, 2 refers to the standard deviation, and $PSF$ refers to the Gaussian blurring kernel. The parameters 10 and 2 were set according to the experimental result in Fig. 5(a).
- b. ${I_{gaussianblurring}} = imfilter({I_{in}},\; PSF,\; 'conv',\; 'circular' )$, where ${I_{in}}$ is the input image (i.e., the straight line with an intensity value of 140), $'conv'$ indicates convolution, ‘$circular$’ is a boundary option, and ${I_{gaussianblurring}}$ refers to the output image obtained following the convolution of ${I_{in}}$ and PSF.
- c. $I_{gaussiannoise} = awgn(I_{gaussianblurring}, SNR)$, where $SNR$ indicates the signal-to-noise ratio (unit: dB) and $I_{gaussiannoise}$ is obtained by adding white Gaussian noise to $I_{gaussianblurring}$. The SNR parameter was set to 1 according to the cross-section of the experimental result in Fig. 5(a).
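The three MATLAB calls above can be mimicked without toolboxes. The pure-Python sketch below builds a normalized 1-D Gaussian kernel, convolves with circular boundary handling (as in `imfilter(..., 'conv', 'circular')`), and adds white Gaussian noise scaled to a target SNR in dB. Note this is a simplified stand-in that measures the signal power (closer to `awgn(..., 'measured')` than to MATLAB's default assumption of 0 dBW input power):

```python
import math
import random

def gaussian_kernel(size, sigma):
    # Normalized 1-D Gaussian kernel (sums to 1), mirroring fspecial.
    half = (size - 1) / 2.0
    k = [math.exp(-((i - half) ** 2) / (2 * sigma ** 2)) for i in range(size)]
    s = sum(k)
    return [v / s for v in k]

def conv_circular(signal, kernel):
    # Circular (wrap-around) convolution, as in the 'circular' option.
    n, m = len(signal), len(kernel)
    out = []
    for i in range(n):
        out.append(sum(signal[(i - j) % n] * kernel[j] for j in range(m)))
    return out

def add_awgn(signal, snr_db, seed=0):
    # Scale white Gaussian noise so measured signal/noise power = SNR (dB).
    rng = random.Random(seed)
    p_signal = sum(v * v for v in signal) / len(signal)
    sd = math.sqrt(p_signal / (10 ** (snr_db / 10.0)))
    return [v + rng.gauss(0, sd) for v in signal]
```

The simulated cross-section of Fig. 4(a) then corresponds to `add_awgn(conv_circular([140.0]*n, gaussian_kernel(10, 2)), 1)`.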
Figure 4(a) presents the simulated cross-section of the sample surface (without fringes), which is similar to the experimental result in Fig. 5(a). The simulated value of $N_s(M_p) = 124$ in Fig. 4(a) is of the same order as the measured $N_s(M_p) = 65$ in Fig. 5(a), indicating that the simulation captures the variations observed in the experiment.
Figure 4(b) presents a simulation of the ideal theoretical interferometric fringe of the white-light source. In Fig. 4(b), the intensity value is equal to 113 in pixel rows 1–300 and 1101–1400. A smooth fringe is produced by Eqs. (12) and (13) in pixel rows 301–1100. In Eq. (12) (from [26]), $I_{fringe1}$ is the superposition of the three wavelengths $\lambda_1 = 400\,nm$, $\lambda_2 = 550\,nm$, and $\lambda_3 = 632.8\,nm$. In Eq. (12), the parameter $x$ represents the pixel row and $row_{scale}$ is used to tune the fringe shape. In Eq. (13), the parameter $I_{scale}$ is used to tune the intensity scale of $I_{fringe2}$ in Fig. 4(b). The parameter $I_{scale}$ was set to 230 in order to make the simulated fringe $I_{fringe2}$ in Fig. 4(b) similar to the experimental fringe in Fig. 5(c).
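Equations (12)-(13) are not reproduced above, but the three-wavelength superposition they describe can be illustrated as follows (the equal weighting, the optical-path-difference scaling, and the helper name are assumptions). At zero optical path difference all three cosine terms align, producing the zero-order fringe of maximum intensity:

```python
import math

# Wavelengths from the text: 400 nm, 550 nm, 632.8 nm.
WAVELENGTHS_NM = (400.0, 550.0, 632.8)

def fringe_intensity(x, row_scale=1.0):
    """Equal-weight superposition of three two-beam interference terms.

    `x` is the pixel row measured from the zero-path-difference point;
    `row_scale` (the paper's row_scale) maps pixel rows to an optical
    path difference in nm -- an illustrative assumption here.
    """
    opd = x * row_scale
    return sum(1 + math.cos(2 * math.pi * opd / lam)
               for lam in WAVELENGTHS_NM)
```

Away from zero optical path difference, the three cosine terms fall out of step and the envelope decays, which is why only the zero-order fringe reaches the maximum intensity.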
4. Experimental results
The parameter settings of the experiments are listed in Table 1. In these experiments, three light conditions (normal light, strong light, and filtered light) were used to demonstrate the effect of the ideal smooth interferometric fringes and the variations in the intensity of the light reflected from the sample surface. In all of the step-height samples in Table 1, the high plane was marked as Surface A and the low plane was marked as Surface B. In the following subsections, we present the results of the proposed algorithm using various numbers of captured images, various movements of the scanning stage, and masks of various sizes. The scanning range of $M \times R$ μm must exceed the step height of the sample. The smaller the parameter R is, the higher the accuracy of the autofocusing algorithm. If a large value is used for R, the user should confirm that the fringe falls within the mask of pixel C in the captured images before applying the proposed algorithm. For example, the value of R was approximately 0.86 μm between Fig. 5(b) and Fig. 5(c), and also approximately 0.86 μm between Fig. 5(c) and Fig. 5(d). In Fig. 5, the shifting fringe remains within the mask of pixel C.
4.1.1 Case of normal light
The efficacy of the proposed algorithm was assessed using a standard step-height sample (1.8 μm, from VLSI Standards, Inc.) with a Mirau objective lens to produce the superposition of fringes at multiple wavelengths using a halogen lamp as a light source, as shown in Fig. 1. A total of 360 images (400×900 px) were captured. In each image, the mask over pixel C (9×9 px) was centered at (200, 450) for use in locating the focal point on Surface A. The distance between any two images (i.e., R) was 0.02485 μm. Figure 6(a) and Fig. 6(b) detail Steps 1-2, and Fig. 7(a) details Step 3. Figure 7(b) presents the 0th image with pixel C (200, 450) indicated by a dark solid circle and the 9×9 mask indicated by a square. The number of slope variations was calculated as follows: $N_s(z) = N_s(0) = 4$. Based on the eight focus-inflection points ($M_1^{\prime}, M_2^{\prime}, \ldots, M_8^{\prime}$) indicated in Fig. 7(a), the point at which pixel C was in focus corresponded to the 220th image (i.e., $M_6^{\prime}$). Two neighboring inflection positions were also identified in the 174th image ($M_5^{\prime}$) and the 256th image ($M_7^{\prime}$). As shown in Fig. 7(d), the 220th image ($M_6^{\prime}$) presented smooth fringes and the lowest $N_s(M_6^{\prime})$ because the maximum-intensity fringe reduced variations in the intensity of the light reflected off the sample surface. By contrast, these variations increased the value of $N_s(M_5^{\prime})$ corresponding to the appearance of weaker-intensity fringes, and also increased the value of $N_s(M_7^{\prime})$ corresponding to the disappearance of weaker-intensity fringes. In the cross-section of Fig. 7(d), the position of pixel C corresponds to the region with fringes of highest intensity, which is the same focal point identified by algorithms based on the zero-order interference fringe. Therefore, both algorithms find the same focal plane.
Note also that application of the neighboring ${M_{ - z}}$ and ${M_{ + z}}$ planes reduced the range of images required for re-scanning from the 0th-359th to the 174th-256th.
4.1.2 Focusing accuracy in case of normal light
As shown in Fig. 7(d), the focal point at C (200, 450) corresponded to the 220th image. The blue line in Fig. 8(a) indicates the 900 focal points generated by applying the proposed algorithm 900 times, whereas the red line indicates the results obtained by applying the algorithm based on the zero-order interference fringe (Subsection 4.1.1) to the same 360 images. Note that the two lines nearly overlap. Note also that the dark solid circle corresponds to focal point C (200, 450) in the 220th image, and it is marked in blue-green on the left axis "Height (images)" in Fig. 8(a). The distance between any two images (i.e., R) was 0.02485 μm; the height values are expressed in green on the right axis "Height (μm)" in Fig. 8(a). The blue line in Fig. 8(a) indicates that the step height between Surface A (columns 1-500) and Surface B (columns 750-900) obtained using the proposed algorithm was 1.74 μm, which is within 0.06 μm (= 1.8 − 1.74 μm) of the ground-truth value. By contrast, the result obtained based on the zero-order interference fringe was 1.73 μm, corresponding to an accuracy of 0.07 μm (= 1.8 − 1.73 μm).
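The conversion from focal-plane image indices to physical heights is simple arithmetic: the index difference multiplied by the stage step R. A minimal sketch using R = 0.02485 μm from this subsection (the image indices in the example are illustrative, not values from the paper):

```python
# Stage step between consecutive images, from Subsection 4.1.1.
R_UM = 0.02485

def step_height_um(z_surface_a, z_surface_b, r_um=R_UM):
    """Height difference (in micrometres) between two focal planes,
    given their image indices in the z stack."""
    return abs(z_surface_a - z_surface_b) * r_um
```

For instance, a 70-image separation between the focal planes of the two surfaces corresponds to 70 × 0.02485 ≈ 1.74 μm.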
The proposed algorithm proved to be more sensitive than the algorithm based on the zero-order interference fringe, particularly in dealing with images showing signs of dirt or sample defects. Point D at (200, 630) in Fig. 7(d) indicates an area with uniform surface blemishes. The proposed algorithm identified the focus-inflection point corresponding to the minimum $N_s(z)$ in the 348th image, indicated by the pink solid circles in Fig. 8(a) and Fig. 8(b). The distribution of $f_1(z)$ and $f_2(z)$ values associated with pixel D in Fig. 8(b) differs from that associated with pixel C in Fig. 7(a), indicating that the proposed algorithm was able to differentiate between smooth surfaces and uniform surface blemishes. We also ran the proposed algorithm using a modified 91×3 mask (in place of the original 9×9 mask), keeping the other parameters constant; the result is shown in Fig. 8(c). In Fig. 8(c), pixel C is in focus in the 219th image (i.e., $M_8^{\prime}$); similarly, pixel C is in focus in the 220th image in Fig. 7(a) and Fig. 7(d). In Fig. 7(a) and Fig. 8(c), the symmetric nature of $f_1(z)$ and $f_2(z)$ enables the smaller 9×9 mask to find the focal point of C. Parameter $V_b$ is used to tune the sensitivity of $f_2(z)$ in Eq. (11) to surface blemishes. Curve $f_2(z)$ in Fig. 7(a) is smallest in the 220th image (and in the 219th image in Fig. 8(c)). This differs in the case of surface blemishes (Fig. 8(b)), where curve $f_2(z)$ is greatest in the 218th image. Thus, the value of $V_b$ influences the result of Eq. (11). In this study, $V_b$ was set at 21, which is suitable for masks of various sizes.
When using algorithms based on the zero-order interference fringe, the intensity values associated with fringes were indistinguishable from those associated with uniform surface blemishes. This is because in Fig. 9(b), the fringe on the uniform blemishes is weaker but intact due to uniform scattering, and the maximum intensity of pixel D occurs in the 220th image. Thus, as indicated by the red line in Fig. 8(a), pixel D in the uniform blemishes was indistinguishable under the zero-order interference scheme.
Figure 10(a) presents the 3D profile obtained by applying the proposed algorithm 360,000 times (400×900). Figure 10(b) presents the 3D profile obtained using the algorithm based on zero-order interference fringe. Again, in Fig. 10(a) and Fig. 10(b), the proposed algorithm outperformed the zero-order interference scheme in cases of surfaces with uniform blemishes. The proposed algorithm took 1836 seconds to create the 3D profile while the other algorithm took 29 seconds. While the proposed algorithm was slower, both algorithms took less than 0.05 seconds to find one focal pixel, which is acceptable.
4.1.3 Parameter ${C_b}$ in case of normal light
In Eq. (11), the numerator ($f_1$) and denominator ($f_2$) are related to the parameter used to differentiate between the surface and blemishes ($C_b$). The distribution for the surface (shown in Fig. 7(a) and Fig. 8(c)) is normal, whereas the distribution for the blemishes (shown in Fig. 8(b)) is abnormal. Therefore, a wide range of values can be selected for $C_b$. Figure 11(a)-(c) present 2D profiles respectively derived using $C_b$ = 7, 25, and 57 for the same cross-section passing through pixels C and D (shown in Fig. 8(a) with $C_b$ = 15). Together, these four figures clearly illustrate the efficacy with which the proposed scheme distinguishes between the surface and blemishes over a wide range of $C_b$ values. As shown in Fig. 11(a), $C_b$ = 7 was too small, which resulted in errors on Surface A and Surface B. As shown in Fig. 11(c), $C_b$ = 57 was too large, which similarly resulted in large distortions on Surface A and Surface B. Generally speaking, parameter $C_b$ should be set according to the resolution of the camera and the resolution of the experimental set-up (for example, 10X Mirau). A fixed value of $C_b$ can be used for different samples. In this study, $C_b$ was set at 15.
4.2 Focusing accuracy in case of strong light
Here, we assess the performance of the proposed algorithm when applied to a sample with non-uniform blemishes using a stronger light source in the experimental set-up shown in Fig. 1 and compare it with the performance of an algorithm based on the zero-order interference fringe. A total of 360 images (400×900 px) were captured using the parameters listed in Subsection 4.1.1. A 9×9 px mask centered at pixel C (200, 600) was used to determine the height of Surface B. The distance between any two images was 0.02 μm. Figure 12(a) presents the distribution of focus-inflection points associated with the focal position. We determined that the focal position corresponded to the 150th image (i.e., $M_2^{\prime}$) with the lowest $N_s(M_2^{\prime})$, as shown in Fig. 12(d). The two neighboring focus-inflection points corresponded to the 56th image ($M_1^{\prime}$) and the 256th image ($M_3^{\prime}$), as respectively shown in Fig. 12(c) and Fig. 12(e). The stronger light source exaggerated the variations in the intensity of the light reflected from the sample surface. Thus, the values of $N_s(z)$ in Figs. 12(b) and 12(f) are higher than those in Figs. 7(b) and 7(f). Moreover, as shown in the cross-section in Fig. 12(d), the strong light source produced distorted fringes. The strong light source also made the intensity values in the region with the non-uniform blemishes easily distinguishable due to the non-uniform scattering (from the increased diffuse reflection), as shown in Fig. 13(a)-(c). Figure 13(b) is the cross-section passing through pixel D in the 150th image shown in Fig. 12(d). Similarly, Fig. 13(a) and Fig. 13(c) are the cross-sections passing through pixel D in the 0th image and the 359th image shown in Fig. 12(b) and Fig. 12(f), respectively. Compared to the uniform blemishes under the normal light source in Fig. 9(a) (without fringes), the non-uniform blemishes under the stronger light source in Fig. 13(a) exhibit greater variations in intensity.
Moreover, compared to Fig. 9(b), there are more variations in Fig. 13(b); the variations in the fringe occur at the maximum intensity on the defocal plane. Figure 14(a) presents the 2D profiles obtained using the proposed algorithm and the algorithm based on the zero-order interference fringe with 900 iterations. This figure shows that when using a normal light source, it is possible to derive an accurate 2D profile, but uniform blemishes are not rendered clearly, as indicated by the red line in Fig. 8(a). When using a strong light source, non-uniform blemishes are rendered clearly, but the 2D profile is easily distorted, as indicated by the red line in Fig. 14(a). By contrast, the proposed algorithm produces roughly the same results for both normal and strong light sources, as indicated by the clear rendering of blemishes in Fig. 8(a) and Fig. 14(a). Specifically, pixel D in Fig. 14(a) and Fig. 14(b) is located at (200, 785), corresponding to a non-uniform blemish on Surface B (focal plane = 325th image). Pixel D in Fig. 8(a) and Fig. 8(b) is located at (200, 630), corresponding to a uniform blemish on Surface A (focal plane = 348th image). Generally speaking, in Fig. 14(a), for the algorithm based on the zero-order interference fringe, the non-uniform blemishes on Surface B appear higher than Surface A due to the limitation on balancing the intensities between the surfaces and the blemishes. This limitation means that the images of blemish intensity become clearer as the input light grows stronger; however, too strong an input light will likely lead to overexposure of the images of Surface A (i.e., the high region) and those of Surface B (i.e., the low region). For the proposed algorithm, on the other hand, both the blemish on Surface B (325th image) and that on Surface A (348th image) are close to the upper limit of the 359th image. This is a clear demonstration that the proposed algorithm is able to deal effectively with surface blemishes under a variety of light sources.
4.3 Focusing accuracy in case of filtered light (632nm)
Here, we used a quartz step sample and a 632-nm filter (CWL = 632 nm, FWHM = 10 nm, minimum transmission ≥ 45%, blocking wavelength range 200-1200 nm) to reduce a light source of multiple wavelengths to a single wavelength (632 nm). The filter was placed between Lens_1 and Beam splitter_1 in Fig. 1. A total of 360 images (400×900 px) were captured. In each image, the mask over pixel C (91×3 px) was centered at (200, 450) for use in locating the focal point on Surface B. The distance between any two images was 0.02 μm. As shown in Fig. 15(a), the proposed algorithm identified two focus-inflection points at $z = M_1^{\prime}$ = 165th and $z = M_2^{\prime}$ = 348th. We randomly selected the 0th image (Fig. 15(b)), the 49th image (Fig. 15(c)), the 299th image (Fig. 15(e)), and the 359th image (Fig. 15(f)) to illustrate the functions of the proposed algorithm. The 632-nm filter (with FWHM of 10 nm) reduced variations in the intensity of the light reflected from the sample surface, with the result that the five fringes in Fig. 15(b)-(f) are much smoother than those in Fig. 7(b)-(f). The smoother fringes reduced the $N_s(z)$ values; however, the proposed algorithm was still able to locate the focal point at pixel C, as indicated in Fig. 15(d), where the intensity on the focal plane of the 165th image far exceeded the black line. In Figs. 15(b), (c), (e), and (f), the intensities away from the focal plane are below the black line, such that $N_s(z)$ could still be used to identify fringes of higher intensity.
4.4 Case with no fringes using a normal objective lens
In Subsections 4.1, 4.2, and 4.3, the sample surfaces were smooth and the Mirau objective was used. The next case involves no fringes, as a normal 10x objective lens replaced the Mirau objective in the set-up presented in Fig. 1. Moreover, an aluminum sample with a rough surface and a step height of approximately 100 μm was used to demonstrate that the focal points identified by the proposed algorithm were in good agreement with those identified using the autofocus algorithms listed in Table 2.
As shown in Fig. 16, we captured 700 images (400×400 px), labeled z = 0th-699th. In each image, the mask over pixel C (91×91 px) was centered at (200, 200) for use in locating the focal point on Surface A. The distance between any two images was 1 μm. Figure 16(a) presents the distribution of focus-inflection positions at pixel C obtained using the proposed algorithm. The focal plane corresponded to the 506th image (i.e., ${M_6}^{\prime}$) associated with the highest ${N_s}({{M_6}^{\prime}} )$, as shown in Fig. 16(d). Two neighboring focus-inflection points corresponded to the 435th (i.e., ${M_5}^{\prime}$) and 585th (i.e., ${M_7}^{\prime}$) images, as respectively shown in Fig. 16(c) and Fig. 16(e). In Figs. 16(c)-16(e), due to defocus blurring, ${N_s}({{M_5}^{\prime}} )$ and ${N_s}({{M_7}^{\prime}} )$ were smaller than ${N_s}({{M_6}^{\prime}} )$. Equations (1)-(10) were applied to create Fig. 16(a); however, in Eq. (11), the function arg min is replaced by arg max in order to find the maximum value of ${N_s}({{M_6}^{\prime}} )$, as shown in Table 2. Further details related to this case remain the subject of future work.
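The swap of arg min for arg max described above amounts to a simple selection rule. The sketch below shows only that rule, with `ns_values` standing in for the ${N_s}(z )$ scores produced by the preceding equations (which are not reproduced here):

```python
import numpy as np

def find_focal_index(ns_values, fringes=True):
    """Pick the focal-plane index from a stack of N_s(z) scores.

    With interference fringes (Mirau objective), the focal plane shows
    the LOWEST N_s; without fringes (normal objective), the HIGHEST.
    Swapping arg min for arg max covers the second case.
    """
    ns = np.asarray(ns_values, dtype=float)
    return int(np.argmin(ns)) if fringes else int(np.argmax(ns))

# fringe case: plane 1 is smoothest (lowest N_s) -> focal plane
print(find_focal_index([14, 3, 11]))
# no-fringe case: plane 2 has the most variations (highest N_s) -> focal plane
print(find_focal_index([2, 5, 9], fringes=False))
```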
For the sake of comparison, a number of well-known auto-focusing algorithms were applied to the 700 images with the same pixel C (200, 200) and the same 91×91 mask, the results of which are shown in Fig. 17(a) and Table 2, where I denotes the intensity value, $({i,j} )$ = C (200, 200), and $i = {i_1}\sim {i_2}$ and $j = {j_1}\sim {j_2}$ indicate the extent of the mask (see Eq. (1)). The Brenner gradient algorithm and square gradient algorithm determined that the focal point at C (200, 200) corresponded to the 506th image, which matches the result obtained using the proposed algorithm. The image power algorithm linked the focal point with the 500th image, whereas the energy Laplace algorithm linked it to the 513th image. The maximum pixel intensity algorithm was an outlier, linking the focal point to the 574th image, due presumably to the roughness of the sample surface. The algorithm based on zero-order interference fringe uses the same equation as the maximum pixel intensity algorithm; however, it does so in searching for pixels associated with the fringe of highest intensity. As can be seen in Fig. 17(a), the re-scanning range of the conventional algorithms again spanned the full 0th-699th images, owing to the lack of inflection points. Using the focus-inflection position (${M_6}^{\prime}$) as well as the two neighboring positions (${M_5}^{\prime}$ and ${M_7}^{\prime}$) reduced the re-scanning range of the proposed algorithm from the 0th-699th images to the 436th-586th images.
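For reference, the five conventional focus metrics compared above can be sketched as follows. These are straightforward textbook forms applied to the mask region; the exact normalizations in Table 2 may differ:

```python
import numpy as np

def square_gradient(m):
    """Sum of squared first differences along the rows."""
    d = np.diff(np.asarray(m, dtype=float), axis=1)
    return float((d ** 2).sum())

def brenner_gradient(m):
    """Sum of squared differences between pixels two columns apart."""
    f = np.asarray(m, dtype=float)
    return float(((f[:, 2:] - f[:, :-2]) ** 2).sum())

def image_power(m):
    """Sum of squared intensities."""
    return float((np.asarray(m, dtype=float) ** 2).sum())

def energy_laplace(m):
    """Sum of the squared 3x3 Laplacian over the valid interior."""
    f = np.asarray(m, dtype=float)
    lap = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
           - 4.0 * f[1:-1, 1:-1])
    return float((lap ** 2).sum())

def max_pixel_intensity(m):
    """Maximum intensity in the mask."""
    return float(np.asarray(m, dtype=float).max())

# each metric scores a sharp mask higher than a featureless one
flat = np.full((8, 8), 5.0)          # defocused: uniform intensity
edge = np.zeros((8, 8))              # focused: sharp step
edge[:, 4:] = 10.0
print(square_gradient(flat), square_gradient(edge))
```

In a full comparison, each metric is evaluated on the mask of every image in the stack, and the focal point is taken as the image with the peak (highest accumulated) score.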
For a further comparison, the same 700 images were processed again by the proposed algorithm and the well-known auto-focusing algorithms using a smaller 21×9 mask (in place of the original 91×91 mask). The proposed algorithm, square gradient algorithm, and Brenner gradient algorithm determined that the focal point at C (200, 200) corresponded to the 509th, 507th, and 507th images, respectively. The other algorithms failed due to the smaller 21×9 mask. Figure 17(b) presents the 3D profile obtained by applying the proposed algorithm with the smaller 21×9 mask 160,000 times (400×400). The proposed algorithm took 1907 seconds to create the 3D profile, i.e., less than 0.05 seconds per focal pixel, which is acceptable. These results demonstrate that the proposed algorithm is robust to differences in mask size.
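The 160,000-iteration profile construction amounts to sliding the small mask over every pixel and repeating the focal search per pixel. A schematic (and deliberately unoptimized) version might look like the following, with a generic squared-gradient focus score standing in for the paper's ${N_s}(z )$ pipeline:

```python
import numpy as np

def profile_3d(stack, mask_h=9, mask_w=21, step_um=1.0):
    """Build a height map from an image stack of shape (Z, H, W).

    For every pixel, a mask_h x mask_w window (clipped at the borders) is
    scored on each of the Z planes, and the best-scoring plane index is
    converted to a height. The score here is a simple squared gradient,
    a stand-in for the per-pixel focal search described in the text.
    """
    Z, H, W = stack.shape
    hh, hw = mask_h // 2, mask_w // 2
    height = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - hh), min(H, y + hh + 1)
            x0, x1 = max(0, x - hw), min(W, x + hw + 1)
            scores = [(np.diff(stack[z, y0:y1, x0:x1], axis=1) ** 2).sum()
                      for z in range(Z)]
            height[y, x] = int(np.argmax(scores)) * step_um
    return height

# toy stack: only plane 1 contains sharp structure, so every pixel
# should be assigned a height of 1 step
stack = np.zeros((3, 6, 6))
stack[1] = np.indices((6, 6)).sum(axis=0) % 2 * 10.0  # checkerboard
print(profile_3d(stack))
```

The per-pixel independence of this loop also explains why the total run time scales linearly with the number of pixels (160,000 here).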
4.5 Summary
In Subsection 4.1.1 (fringe case), the focal point at pixel C corresponding to the lowest ${N_s}({{M_6}^{\prime}} )$ matches the focal point identified by the algorithm based on zero-order interference fringe. The proposed algorithm can also be used to find the two neighboring inflection points associated with the appearance and disappearance of fringes, thereby making it possible to reduce the re-scanning range to within those bounds. In Subsection 4.1.2 (fringe case with uniform blemishes), the accuracy of the proposed algorithm (0.06 μm) was comparable to that of the algorithm based on zero-order interference fringe (0.07 μm). In Subsection 4.1.3 (fringe case with uniform blemishes), a wide range of values was set for parameter ${C_b}$ because ${N_s}(z )$ is higher for blemishes and lower for smooth focal planes. In other words, the distribution of ${f_1}(z )$ and ${f_2}(z )$ in the region of uniform blemishes (Fig. 8(b)) differs from that on the surface (Fig. 7(a)). This makes it possible to differentiate between the two using Eq. (11), as indicated by the reconstructed 2D and 3D profiles respectively shown in Fig. 8(a) and Fig. 10(a). Here again, the proposed algorithm is able to reduce the re-scanning range based on the two neighboring inflection points. The proposed algorithm performed well in cases involving non-uniform blemishes or an excessively strong light source (Subsection 4.2). In Subsection 4.3, it also performed well when a 632-nm filter (FWHM = 10 nm) was used to reduce variations in the intensity of the light reflected from the sample surface. Although the ${N_s}(z )$ values were reduced by the filter, ${N_s}(z )$ could still be used to identify fringes of higher intensity.
In Subsection 4.4 (case without fringes), the focal point at pixel C corresponding to the highest ${N_s}({{M_6}^{\prime}} )$ was in good agreement with the focal points identified using the autofocus algorithms listed in Table 2 (for the sample with a rough surface).
5. Conclusions
This study developed a novel image-based autofocus algorithm, which uses the number of slope variations (i.e., ${N_s}(z )$) to identify the focal plane. When applied to interferometric microscopes, the weaker intensity of the ideal fringes and variations in the intensity of the light reflected from the sample surface increased the ${N_s}(z )$ values. By contrast, the maximum intensity of the ideal fringes effectively reduced these variations, such that the lowest ${N_s}(z )$ occurred along the focal plane. The identification of focus-inflection points makes it possible to reduce the re-scanning range to enhance computational efficiency. In experiments, the proposed algorithm performed at least as well as existing autofocus algorithms, achieving an accuracy of 0.06 μm, which is comparable to that of the scheme based on zero-order interference fringe (0.07 μm). However, unlike the zero-order interference fringe algorithm, the proposed scheme is able to identify the focal point in regions with uniform surface blemishes, as well as strong (non-uniform) blemishes, under normal lighting conditions and excessively strong light sources. The proposed algorithm generally runs through a large number of iterations (e.g., 400-360,000), yielding a large number of focal points by which to plot 2D or 3D profiles that are robust and highly consistent. The proposed algorithm was also effective in cases of rough samples without fringes, the results of which were in good agreement with the focal points identified using the autofocus algorithms listed in Table 2.
Funding
NARLabs I-DREAM Grant for International Cooperation with imec; Ministry of Science and Technology, Taiwan (MOST 107-2221-E-492-024-MY3); Hsinchu Science Park Bureau, Ministry of Science and Technology, Taiwan (SIPA 110AT09B).
Disclosures
JING-FENG WENG*, GUO-HAO LU, CHUN-JEN WENG, YU-HSIN LIN, and CHAO-FENG LIU: Taiwan Instrument Research Institute, National Applied Research Laboratories.
ROBBIE VINCKE, HSIAO-CHUN TING, and TING-TING CHANG: imec Taiwan Co.
The authors, JING-FENG WENG*, GUO-HAO LU, CHUN-JEN WENG, YU-HSIN LIN, CHAO-FENG LIU, ROBBIE VINCKE, HSIAO-CHUN TING, and TING-TING CHANG, declare no conflicts of interest.
References
1. S. A. Lee, X. Ou, J. Eugene Lee, and C. Yang, “Chip-scale fluorescence microscope based on a silo-filter complementary metal-oxide semiconductor image sensor,” Opt. Lett. 38(11), 1817–1819 (2013). [CrossRef]
2. L. Huang and J.-X. Cheng, “Nonlinear Optical Microscopy of Single Nanostructures,” Annu. Rev. Mater. Res. 43(1), 213–236 (2013). [CrossRef]
3. S. Schaefer, S. A. Boehm, and K. J. Chau, “Automated, portable, low-cost bright-field and fluorescence microscope with autofocus and autoscanning capabilities,” Appl. Opt. 51(14), 2581–2588 (2012). [CrossRef]
4. Y.-D. Kim, M.-K. Ahn, and D.-G. Gweon, “Design and Fabrication of a Multi-modal Confocal Endo-Microscope for Biomedical Imaging,” J. Opt. Soc. Korea 15(3), 300–304 (2011). [CrossRef]
5. M. Anthonisen, Y. Zhang, M. Hussain Sangji, and P. Grütter, “Quantifying bio-filament morphology below the diffraction limit of an optical microscope using out-of-focus images,” Appl. Opt. 59(9), 2914–2923 (2020). [CrossRef]
6. G. E. Nevskaya and M. G. Tomilin, “Adaptive lenses based on liquid crystals,” J. Opt. Technol. 75(9), 563–573 (2008). [CrossRef]
7. J. Lee, J. Lee, and Y. H. Won, “Nonmechanical three-dimensional beam steering using electrowetting-based liquid lens and liquid prism,” Opt. Express 27(25), 36757–36766 (2019). [CrossRef]
8. Z. Ding, C. Wang, Z. Hu, Z. Cao, Z. Zhou, X. Chen, H. Chen, and W. Qiao, “Surface profiling of an aspherical liquid lens with a varied thickness membrane,” Opt. Express 25(4), 3122–3132 (2017). [CrossRef]
9. H.-M. Son, M. Y. Kim, and Y.-J. Lee, “Tunable-focus liquid lens system controlled by antagonistic winding-type SMA actuator,” Opt. Express 17(16), 14339–14350 (2009). [CrossRef]
10. E. Aytac-Kipergil, E. J. Alles, H. C. Pauw, J. Karia, S. Noimark, and A. E. Desjardins, “Versatile and scalable fabrication method for laser-generated focused ultrasound transducers,” Opt. Lett. 44(24), 6005–6008 (2019). [CrossRef]
11. K.-H. Kim, S.-Y. Lee, and S. Kim, “A mobile auto-focus actuator based on a rotary VCM with the zero holding current,” Opt. Express 17(7), 5891–5896 (2009). [CrossRef]
12. T. Chaigne, J. Gateau, O. Katz, E. Bossy, and S. Gigan, “Light focusing and two-dimensional imaging through scattering media using the photoacoustic transmission matrix with an ultrasound array,” Opt. Lett. 39(9), 2664–2667 (2014). [CrossRef]
13. C. Guo, Z. Ma, X. Guo, W. Li, X. Qi, and Q. Zhao, “Fast auto-focusing search algorithm for a high-speed and high-resolution camera based on the image histogram feature function,” Appl. Opt. 57(34), F44–F49 (2018). [CrossRef]
14. J. Cao, Y. Cheng, P. Wang, K. Zhang, Y. Xiao, K. Li, Y. Peng, and Q. Hao, “Autofocusing imaging system based on laser ranging and a retina-like sample,” Appl. Opt. 56(22), 6222–6229 (2017). [CrossRef]
15. Y. Fujishiro, T. Furukawa, and S. Maruo, “Simple autofocusing method by image processing using transmission images for large-scale two-photon lithography,” Opt. Express 28(8), 12342–12351 (2020). [CrossRef]
16. M. Tang, C. Liu, and X. P. Wang, “Autofocusing and image fusion for multi-focus plankton imaging by digital holographic microscopy,” Appl. Opt. 59(2), 333–345 (2020). [CrossRef]
17. Z. Yan, G. Chen, W. Xu, C. Yang, and Y. Lu, “Study of an image autofocus method based on power threshold function wavelet reconstruction and a quality evaluation algorithm,” Appl. Opt. 57(33), 9714–9721 (2018). [CrossRef]
18. M. S. Oh, H. J. Kong, T. H. Kim, and S. E. Jo, “Autofocus technique for three-dimensional imaging, direct-detection laser radar using Geiger-mode avalanche photodiode focal-plane array,” Opt. Lett. 35(24), 4214–4216 (2010). [CrossRef]
19. Y. Tian, “Autofocus using image phase congruency,” Opt. Express 19(1), 261–270 (2011). [CrossRef]
20. G. Saerens, L. Lang, C. Renaut, F. Timpu, V. Vogler-Neuling, C. Durand, M. Tchernycheva, I. Shtrom, A. Bouravleuv, R. Grange, and M. Timofeeva, “Image-based autofocusing system for nonlinear optical microscopy with broad spectral tuning,” Opt. Express 27(14), 19915–19930 (2019). [CrossRef]
21. S. Yazdanfar, K. B. Kenny, K. Tasimi, A. D. Corwin, E. L. Dixon, and R. J. Filkins, “Simple and robust image-based autofocusing for digital microscopy,” Opt. Express 16(12), 8670–8677 (2008). [CrossRef]
22. M. Lyu, C. Yuan, D. Li, and G. Situ, “Fast autofocusing in digital holography using the magnitude differential,” Appl. Opt. 56(13), F152–F157 (2017). [CrossRef]
23. P. Yang, S. Fang, X. Zhu, M. Komori, and A. Kubo, “Autofocus algorithm of interferogram based on object image and registration technology,” Appl. Opt. 52(36), 8723–8731 (2013). [CrossRef]
24. P. Sandoz and G. Tribillon, “Profilometry by Zero-order Interference Fringe Identification,” J. Mod. Opt. 40(9), 1691–1700 (1993). [CrossRef]
25. D. Vollath, “Automatic focusing by correlative methods,” J. Microsc. 147(3), 279–288 (1987). [CrossRef]
26. E. Hecht, Optics, 4th ed. (Addison-Wesley, 2002), Chap. 9.