Optica Publishing Group

Image definition assessment based on Tchebichef moments for micro-imaging

Open Access

Abstract

This paper proposes a Tchebichef moment (TM)-based image definition assessment (IDA) method that employs the difference in logarithmic spectra (DLS). To avoid the influence of the original image, the point spread function (PSF), the essential element, is extracted from the DLS to uniquely characterize the IDA function. The amplification of the PSF spot radius with the defocus amount in the micro-imaging system enhances the featural differences among the DLSs, thereby improving the sensitivity to the defocus amount. The DLS, with its obvious geometric feature variation, is described by a low-order TM, which improves the anti-noise performance. Simulation and experiment verified the superiority of the proposed method.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

As a non-contact, full-field micro-imaging tool, optical microscopes have been widely used in fields such as biology, medicine, and chemistry [1–3]. Auto-focusing is an important technique in micro-imaging [4–7], and image definition assessment (IDA) is critical to realizing a high focusing accuracy in the auto-focusing process. With developments in image sensors, computer control, and digital signal processing, IDA has become the main tool for assessing the defocus amount in the auto-focusing process [8,9]. A desirable IDA function should be sensitive to variations in the image definition and should satisfy the properties of monotonicity, unimodality, and unbiasedness.

Conventional IDA methods comprise frequency domain and spatial domain methods. The frequency domain methods assess the image definition using high frequency information, as in the Fourier, discrete cosine, and wavelet transform methods [10]. The spatial domain methods, including the sum modulus difference, energy of gradient, Robert, and Brenner methods [5,11–13], design IDA functions based on the edge details of an image [14,15] and thus also depend on high frequency information. Therefore, such approaches are highly susceptible to noise, resulting in low sensitivity and inferior anti-noise performance.

An image degradation model can be defined as in Eq. (1), where g(x, y) and f(x, y) denote the blurred image and original image, respectively, h(x, y) is the point spread function (PSF), and n(x, y) denotes the noise. The symbol ‘⊗’ represents the convolution operation. The PSF is the intrinsic variable in the defocus process; however, it is not described directly and uniquely by the traditional methods. Owing to the influence of the original image, the change in the PSF among the blurred images is weak and difficult to assess when the defocus amount is small, resulting in low sensitivity and coarse auto-focusing accuracy. Furthermore, emphasizing the high frequencies in an image can enhance the influence of noise.

$$g(x,y) = f(x,y) \otimes h(x,y) + n(x,y)$$
According to Eq. (1), if an image with a unique PSF is used as the input image in the IDA method, the sensitivity can be considerably improved. Generally, the shape of the PSF is correlated with the shape of the aperture stop, and the image features of the PSF change with the defocus amount.
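The degradation model of Eq. (1) can be sketched numerically. The snippet below is an illustrative sketch (not the authors' code), assuming the disk PSF of Eq. (10) and implementing the convolution circularly via the FFT; the function names are our own.

```python
import numpy as np

def disk_psf(radius, size):
    """Disk (pillbox) PSF of Eq. (10), normalized to unit sum, on a size x size grid."""
    y, x = np.mgrid[:size, :size] - size // 2
    h = (x**2 + y**2 <= radius**2).astype(float)
    return h / h.sum()

def degrade(f_img, radius, noise_sigma=0.0, rng=None):
    """Degradation model of Eq. (1): g = f (*) h + n, via circular FFT convolution."""
    h = disk_psf(radius, f_img.shape[0])  # assumes a square image
    g = np.real(np.fft.ifft2(np.fft.fft2(f_img) * np.fft.fft2(np.fft.ifftshift(h))))
    if noise_sigma > 0:
        rng = rng or np.random.default_rng(0)
        g = g + rng.normal(0.0, noise_sigma, f_img.shape)
    return g
```

Because the kernel is normalized to unit sum, the blurred image keeps the mean intensity of the original.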

Furthermore, image moments are powerful descriptors of image features. Image moments with low orders describe the low frequency information of an image and have an inherent anti-noise ability [16,17]. Therefore, using image moments to describe an image that presents unique PSF values corresponding to different defocus amounts can improve the sensitivity of the IDA method while demonstrating satisfactory anti-noise performance. As image feature descriptors, image moments have been widely used in the fields of image recognition [18,19], image reconstruction [20,21], pose estimation [22], blur estimation [23], image watermarking [24], image restoration [25], and so on. Compared with non-orthogonal moments and continuous orthogonal moments, discrete orthogonal moments, such as the Krawtchouk moment, Hahn moment, and Tchebichef moment (TM), do not require numerical approximation or coordinate transformation, thereby demonstrating superior precision and robustness. The Krawtchouk and Hahn moments effectively describe the local features of an image, whereas the TM is suitable for presenting the global features of an image [26]. Ju et al. used a TM with low orders to substitute an entire image as the input for neural network training, which considerably compressed the computation data and reduced the computation time [27]. Kumar et al. used a TM with low orders to evaluate blur parameters by combining them with machine learning and achieved satisfactory results [28].

In this paper, we propose an IDA method based on the TM. Section 2 describes the defocus model of micro-imaging and shows that the PSF varies with the defocus amount. Section 3 explains the use of the difference in logarithmic spectra (DLS) to describe the PSF at different defocus amounts; the assembly TM with low orders is chosen to describe the DLSs to improve the sensitivity and anti-noise performance. Sections 4 and 5 respectively elaborate upon the simulation and experiment performed to demonstrate the feasibility and superiority of the proposed method. The conclusions are provided in the last section.

2. Defocus analysis of micro-imaging

The principle of micro-imaging is shown in Fig. 1. Focused by the objective, a spot source on the object plane forms an image point on the sensor plane with concentrated energy, as represented by the yellow rays. A spot source forms a defocus spot when it deviates from the object plane along the optical axis, as shown by the red rays. Several points on and off the optical axis, as shown in Fig. 1, are used to distinguish the imaging positions.


Fig. 1. Principle of micro-imaging.


Figure 2 shows the geometric diagram of micro-imaging. The solid red lines trace the imaging of a point on the object plane, such as O1, and the dotted red lines trace the imaging of a point deviating from the object plane, such as O2. According to Gaussian optics and the geometric relationships, we have

$$\frac{{l_d^{\prime} - {l^{\prime}}}}{{{l_d}^{\prime}}} = \frac{R}{{D/2}}$$
$$\frac{1}{{l_d^{\prime}}} - \frac{1}{{ - (l - \Delta )}} = \frac{1}{{{f^{\prime}}}}$$
$$\beta = \frac{{{l^{\prime}}}}{l}$$
$$NA = \frac{D}{{2f}}$$
where f and f′ respectively denote the object and image focal lengths, which are equal in an air medium. The terms β, NA, D, Δ, and R denote the magnification, numerical aperture, clear aperture, defocus amount, and defocus spot radius (DSR), respectively. l′ and l denote the image and object distances of the focus point O1, respectively, and ld′ is the image distance of the defocus point O2. According to Eqs. (2)–(5), the relationship between R and Δ can be expressed as
$$R ={-} \frac{{fl\beta NA}}{{l - \Delta }} + NA(l\beta - f)$$


Fig. 2. Geometric diagram of micro-imaging.


By performing the derivation of R with respect to Δ, one can obtain

$$\frac{{\partial R}}{{\partial \Delta }} = \frac{{fl\beta NA}}{{{{(l - \Delta )}^2}}}$$
The right-hand side of Eq. (7) is always positive; that is, R increases with Δ. For a micro-imaging system, the object distance l is approximately equal to the focal length f, and the defocus amount Δ is always sufficiently small compared to the object distance l. Therefore, Eq. (7) can be approximated as
$$\frac{{\partial R}}{{\partial \Delta }} \approx \beta NA$$
Thus, R changes by βNA units when Δ changes by one unit, where β is the magnification and NA is the numerical aperture. Table 1 lists the β and NA values for different objectives. Evidently, the DSR amplifies the defocus amount.
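As a quick numerical check of Eq. (7), the sketch below evaluates ∂R/∂Δ with illustrative, assumed parameter magnitudes (f = l = 18 mm for a hypothetical 10×/0.25 objective): when Δ ≪ l, the slope reduces to βNA = 2.5, consistent with Eq. (8).

```python
def dsr_slope(delta, f, l, beta, na):
    """dR/dDelta of Eq. (7): sensitivity of the defocus spot radius to the defocus amount."""
    return f * l * beta * na / (l - delta) ** 2
```

For Δ = 0 the slope is exactly flβNA/l² = βNA (with f = l), and it barely changes for small Δ.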


Table 1. Parameters for different objectives

Ignoring the effect of noise, a series of blurred images gn(x, y) (n = 1, 2, 3…) are captured as the defocus amount increases. According to Eq. (1),

$${g_n}(x,y) = f(x,y) \otimes {h_n}(x,y)$$
where hn(x, y) denote the PSFs for different defocus amounts. The PSF for a defocus optical system can be simplified as a disk model [29]:
$$h(x,y) = \left\{ \begin{array}{cc} \frac{1}{{\pi {R^2}}} & \sqrt{{x^2} + {y^2}} \le R\\ 0 & other \end{array}\right.$$
where R is the DSR, which is proportional to the defocus amount. The Fourier transform of Eq. (10) is
$$H(u,v) = \frac{{{J_1}(\pi Rr)}}{{\pi Rr}}$$
Here, J1(·) is the first-order Bessel function of the first kind, and $r = \sqrt {{u^2} + {v^2}} $. The ideal pattern of H(u,v) is circularly symmetric, and the radius of the innermost annulus corresponds to the DSR [30]. Therefore, the PSF also amplifies the defocus amount in the micro-imaging process. This verifies that the features of an image presenting a unique PSF change significantly in the defocus process. Furthermore, although the radius of the innermost annulus in the logarithmic spectrum can be measured to estimate the defocus amount, such annulus extraction is sensitive to noise [31].
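Equation (11) can be evaluated directly with the first-kind Bessel function of order one (available as `scipy.special.j1`). The sketch below uses our own function name and takes the value at r = 0 as the removable-singularity limit J1(x)/x → 1/2; the first zero of J1 (at x ≈ 3.8317) locates the innermost dark annulus of H(u,v).

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order one

def disk_otf(u, v, R):
    """H(u, v) of Eq. (11): Fourier transform of the disk PSF with DSR R.

    The value at r = 0 is taken as the limit J1(x)/x -> 1/2.
    """
    r = np.hypot(u, v)
    x = np.pi * R * r
    x_safe = np.where(x == 0, 1.0, x)  # dummy value to avoid 0/0 at the origin
    return np.where(x == 0, 0.5, j1(x_safe) / x_safe)
```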

3. Proposed IDA method

In the proposed method, the DLSs are used as the input images to uniquely describe the PSFs at different defocus positions, which enhances the featural differences among the input images. The assembly TM is chosen to calculate the IDA scores because of its power as a feature descriptor and its anti-noise performance. The flow chart of the proposed method is shown in Fig. 3.


Fig. 3. Flow chart of the proposed method.


3.1 Analysis of input image

In the image degradation model defined in Eq. (9), the PSF is the intrinsic variable in the defocus process, and the image features change with the defocus amount. Based on the relationship between the PSF and DSR, the amplification of the DSR with the defocus amount can enhance the differences among the PSFs for different defocus amounts. Therefore, if the input image of an IDA function describes the PSF uniquely, the differences in the image features for different defocus amounts are enhanced, which is important for improving the sensitivity of the IDA method. However, owing to the influence of the original image, the featural differences among the PSFs for different defocus amounts are difficult to assess using the traditional methods, especially when the defocus amount is extremely small.

In the spatial domain, as defined in Eq. (9), it is difficult to extract the PSF from the convolution operation. Taking the Fourier transform of Eq. (9) and applying the logarithm yields

$$\log ({G_n}(u,v)) = \log (F(u,v)) + \log ({H_n}(u,v))$$
where Gn(u,v), F(u,v), and Hn(u,v) are the Fourier transforms of the series of defocus images gn(x, y), the original image f(x, y), and the series of PSFs hn(x, y), respectively. The subscript n denotes the index of the defocus image. This expression shows that the logarithmic spectrum of a defocus image is the sum of those of the original image and the PSF. Based on Eq. (12), the DLSs between the first defocus image and the other defocus images are
$$\left\{\begin{array}{c} {\log ({G_2}(u,v)) - \log ({G_1}(u,v)) = \log ({H_2}(u,v)) - \log ({H_1}(u,v))}\\ {\log ({G_3}(u,v)) - \log ({G_1}(u,v)) = \log ({H_3}(u,v)) - \log ({H_1}(u,v))}\\ \vdots \\ {\log ({G_n}(u,v)) - \log ({G_1}(u,v)) = \log ({H_n}(u,v)) - \log ({H_1}(u,v))} \end{array}\right.$$
It is obvious that the DLSs can uniquely describe the PSFs in the defocus process, independent of the original image. Therefore, using the DLSs as the input images can efficiently enhance the featural differences among the defocus images and improve the sensitivity of the proposed method.
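The DLS of Eq. (13) is a two-line computation. The sketch below (our own function names) adds a small eps inside the logarithm as a numerical guard against zero spectral amplitudes, an implementation detail the paper does not specify.

```python
import numpy as np

def log_spectrum(img, eps=1e-12):
    """Logarithmic amplitude spectrum log|FFT(img)|; eps guards against log(0)."""
    return np.log(np.abs(np.fft.fft2(img)) + eps)

def dls(g_n, g_1):
    """Difference of logarithmic spectra, Eq. (13).

    In the noise-free model the original-image term log|F(u, v)| cancels,
    leaving log|H_n| - log|H_1|, i.e. a description of the PSFs alone.
    """
    return log_spectrum(g_n) - log_spectrum(g_1)
```

A uniform intensity rescaling of the image, for example, shifts the whole log spectrum by a constant, which the DLS captures exactly.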

3.2 TM-based IDA function

Based on its advantage in describing image features, the TM is used to calculate the IDA scores for the input images. For a grey image of size M×N, the TM of order m + n can be expressed as [26]

$$\begin{array}{l} {T_{m,n}} = \frac{1}{{\rho (m,M)\rho (n,N)}}\sum\limits_{x = 0}^{M - 1} {\sum\limits_{y = 0}^{N - 1} {{t_m}(x)} } {t_n}(y)DLS(x,y)\\ (m = 0,1,2\ldots M - 1,n = 0,1,2\ldots N - 1) \end{array}$$
Here, DLS(x, y) denotes the intensity of DLS in the pixel (x, y), tm(x) and tn(y) are the discrete orthogonal polynomials, and ρ(m, M) and ρ(n, N) are the squared norms. These values can be conveniently calculated based on iteration strategies, as given in Eqs. (15) and (16).
$$\left\{ {\begin{array}{{c}} {{t_m}(x) = \frac{{(2m - 1)(2x - M + 1){t_{m - 1}}(x)}}{{mM}} - \frac{{(m - 1)[{M^2} - {{(m - 1)}^2}]{t_{m - 2}}(x)}}{{m{M^2}}}}\\ {{t_0}(x) = 1}\\ {{t_1}(x) = \frac{{2x + 1 - M}}{M}} \end{array}} \right.$$
$$\left\{ {\begin{array}{{c}} {\rho (m,M) = (\frac{{2m - 1}}{{2m + 1}})(1 - \frac{{{m^2}}}{{{M^2}}})\rho (m - 1,M)}\\ {\rho (0,M) = M} \end{array}} \right.$$
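The recurrences of Eqs. (15) and (16) translate directly into code. The sketch below (our own function names) computes the scaled polynomials tm(x) and the squared norms ρ(m, M); the discrete orthogonality of the polynomials, with ∑x tm(x)tn(x) = ρ(m, M)δmn, provides a built-in correctness check.

```python
import numpy as np

def tcheb_poly(order, M):
    """Scaled Tchebichef polynomials t_0..t_order over x = 0..M-1 (Eq. (15))."""
    t = np.zeros((order + 1, M))
    x = np.arange(M)
    t[0] = 1.0
    if order >= 1:
        t[1] = (2 * x + 1 - M) / M
    for m in range(2, order + 1):
        t[m] = ((2*m - 1) * (2*x - M + 1) * t[m-1] / (m * M)
                - (m - 1) * (M**2 - (m - 1)**2) * t[m-2] / (m * M**2))
    return t

def sq_norm(m, M):
    """Squared norm rho(m, M) via the recurrence of Eq. (16)."""
    rho = float(M)
    for k in range(1, m + 1):
        rho *= (2*k - 1) / (2*k + 1) * (1 - k**2 / M**2)
    return rho
```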
The matrix of TMs up to orders m − 1 and n − 1 is
$$T = \left[ \begin{array}{ccccc} {{T_{00}}} & {{T_{01}}} & {{T_{02}}} & \cdots & {{T_{0,n - 1}}}\\ {{T_{10}}} & {{T_{11}}} & {{T_{12}}} & \cdots & {{T_{1,n - 1}}}\\ {{T_{20}}} & {{T_{21}}} & {{T_{22}}} & \cdots & {{T_{2,n - 1}}}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ {{T_{m - 1,0}}} & {{T_{m - 1,1}}} & \cdots & \cdots & {{T_{m - 1,n - 1}}} \end{array} \right]$$
In the TM matrix, the moment T00 denotes the average of an image. The assembly TMs Th=[T01,T02,…T0,n−1] and Tv=[T10,T20,…Tm−1,0] are respectively used to express the horizontal and vertical features of an image [28]. Therefore, the assembly TM score is defined as follows to describe the features of the input images:
$$E = {E_h} + {E_v}$$
Here, $E_{h} = T_{01}^2 + T_{02}^2 + \ldots + T_{0,n-1}^2$, and $E_{v} = T_{10}^2 + T_{20}^2 + \ldots + T_{m-1,0}^2$.
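Combining Eqs. (14) and (18) gives the full IDA score. The sketch below (our own helper names) evaluates the first-row and first-column moments up to the 3 + 3 orders used later in Section 4; a constant image scores essentially zero, since all polynomials of order ≥ 1 are orthogonal to t0 = 1.

```python
import numpy as np

def tcheb_basis(order, M):
    """Scaled Tchebichef polynomials t_0..t_order (recurrence of Eq. (15))."""
    t = np.zeros((order + 1, M))
    x = np.arange(M)
    t[0] = 1.0
    if order >= 1:
        t[1] = (2 * x + 1 - M) / M
    for m in range(2, order + 1):
        t[m] = ((2*m - 1) * (2*x - M + 1) * t[m-1] / (m * M)
                - (m - 1) * (M**2 - (m - 1)**2) * t[m-2] / (m * M**2))
    return t

def assembly_tm_score(img, order=3):
    """IDA score E = Eh + Ev of Eq. (18) from first-row/first-column TMs (Eq. (14))."""
    M, N = img.shape
    tx, ty = tcheb_basis(order, M), tcheb_basis(order, N)
    # squared norms rho(m, K) via the recurrence of Eq. (16)
    rho = lambda m, K: K * np.prod([(2*k - 1) / (2*k + 1) * (1 - k**2 / K**2)
                                    for k in range(1, m + 1)])
    T = lambda m, n: (tx[m] @ img @ ty[n]) / (rho(m, M) * rho(n, N))
    e_h = sum(T(0, n)**2 for n in range(1, order + 1))  # horizontal features
    e_v = sum(T(m, 0)**2 for m in range(1, order + 1))  # vertical features
    return e_h + e_v
```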

3.3 Analysis of anti-noise performance

According to Eq. (1), noise is another factor in image degradation and is generally difficult to avoid. An IDA method with superior anti-noise performance enhances the environmental adaptability of the auto-focusing process. For a given TM, as shown in Eq. (14), the TM value is the degree of correlation between the image DLS(x, y) and the kernel of the TM [32]. The kernel of a TM is

$${\phi _{mn}} = {t_m}(x){t_n}(y)$$
Combining Eqs. (1), (14), and (19) yields
$$\begin{aligned} {T_{m,n}} &= \frac{1}{{\rho (m,M)\rho (n,N)}}\sum\limits_{x = 0}^{M - 1} {\sum\limits_{y = 0}^{N - 1} {{t_m}(x)} } {t_n}(y)g(x,y)\\ &= \frac{1}{{\rho (m,M)\rho (n,N)}}\sum\limits_{x = 0}^{M - 1} {\sum\limits_{y = 0}^{N - 1} {{\phi _{mn}}} } (f(x,y) \otimes h(x,y) + n(x,y))\\ &= \frac{1}{{\rho (m,M)\rho (n,N)}}(\sum\limits_{x = 0}^{M - 1} {\sum\limits_{y = 0}^{N - 1} {{\phi _{mn}}} } (f(x,y) \otimes h(x,y)) + \sum\limits_{x = 0}^{M - 1} {\sum\limits_{y = 0}^{N - 1} {{\phi _{mn}}} } n(x,y))\\ &\approx \frac{1}{{\rho (m,M)\rho (n,N)}}\sum\limits_{x = 0}^{M - 1} {\sum\limits_{y = 0}^{N - 1} {{\phi _{mn}}} } (f(x,y) \otimes h(x,y)) \end{aligned}$$
Because the kernel of a TM with low orders describes the low frequency information [33], it has a low degree of correlation with the noise n(x, y), which is concentrated in the high frequencies:
$$\frac{1}{{\rho (m,M)\rho (n,N)}}\sum\limits_{x = 0}^{M - 1} {\sum\limits_{y = 0}^{N - 1} {{\phi _{mn}}} } n(x,y) \approx 0$$
Therefore, a TM with low orders demonstrates satisfactory anti-noise performance. The designed IDA method, based on the assembly TM with low orders, can thus suppress the noise efficiently.
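The cancellation in Eq. (21) can be illustrated numerically. The sketch below (our own variable names, with an assumed noise level) projects both a smooth ramp and zero-mean Gaussian noise onto the low-order kernel φ10 = t1(x)t0(y): the smooth content yields a moment of order one, while the noise projection is orders of magnitude smaller.

```python
import numpy as np

M = 64
x = np.arange(M)
t0 = np.ones(M)
t1 = (2 * x + 1 - M) / M                        # t_1 from Eq. (15)
rho0 = float(M)                                  # rho(0, M) from Eq. (16)
rho1 = M * (1 - 1 / M**2) / 3                    # rho(1, M) from Eq. (16)
phi10 = np.outer(t1, t0)                         # low-order kernel phi_10 of Eq. (19)

ramp = np.outer(x / M, np.ones(M))               # smooth, low-frequency content
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.01, (M, M))            # zero-mean noise term n(x, y)

t_signal = np.sum(phi10 * ramp) / (rho1 * rho0)  # kernel correlates with low frequencies
t_noise = np.sum(phi10 * noise) / (rho1 * rho0)  # nearly zero, as in Eq. (21)
```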

4. Simulation analysis

Figure 4 shows the comparison between the geometric features of the spatial images and the DLSs for extremely small defocus amounts. β and NA of the micro-imaging system are set as 10× and 0.25, respectively. In the case of extremely small defocus amounts, the featural differences among the spatial images are imperceptible, whereas notable differences are observed among the DLSs.

Table 2 lists the normalized assessment values for different defocus amounts. The assembly TM is chosen from the first 3 + 3 orders of the TM by using Eq. (18). It is clear that using the DLSs instead of the spatial images as the input images considerably improves the sensitivity to the defocus amount in the proposed method.


Fig. 4. Comparison of geometric features between the (a) spatial images and (b) DLSs for different defocus amounts.



Table 2. Normalized values of the image evaluation function for the spatial images and DLSs shown in Fig. 4

To demonstrate the feasibility and superiority of the proposed method, the performances of the proposed method and conventional methods were compared using a simulation. The following conventional methods were considered: sum modulus difference method, energy of gradient method, Robert method, and Brenner method [5]. The expressions and notations of these methods are presented in Table 3.


Table 3. Different IDA methods

In the simulation, a cell image, as shown in Fig. 4, is blurred using the blur parameters to simulate different defocus amounts, producing a series of blurred images. β and NA of the micro-imaging system are set as 10× and 0.25, respectively; consequently, the depth of field (DoF) is approximately 10 µm. The defocus amount of the micro-imaging system is set as 20 µm, which is two times the DoF. By setting the step size of the defocus amount as 1 µm, 21 defocus images are obtained. Based on Eq. (13), the DLSs are used as the input images in the proposed method. Equation (18) is used to select the assembly TM from the first 3 + 3 orders of the TM. The simulation results are shown in Fig. 5; all the methods exhibit the properties of monotonicity, unimodality, and unbiasedness for noise-free images.


Fig. 5. IDA curves for different IDA methods.


In addition to the properties of monotonicity, unimodality, and unbiasedness, we use the sensitivity Ts and σ/µ to further evaluate the different IDA methods. The sensitivity Ts represents the difference among the IDA values when the system has an extremely small defocus amount near the focusing plane; a larger Ts indicates that the real focusing plane is easier to locate in the auto-focusing process. For normalized IDA values, Ts can be defined as

$${T_s} = \frac{{1 - f({x_f} + \varepsilon )}}{{f({x_f} + \varepsilon )}}$$
where xf is the position of the focusing plane, ɛ is the defocus amount near the focusing plane, and f (xf + ɛ) denotes the corresponding IDA value. σ/µ is used to evaluate the working range of a focus measure [34], where σ and µ denote the standard deviation and mean value, respectively. A larger σ/µ value corresponds to a larger working range, and σ/µ for the normalized IDA values can be defined as
$$\sigma /\mu = \frac{{\sqrt {\sum\limits_{i = 1}^n {{{(f({x_i}) - \frac{1}{n}\sum\limits_{i = 1}^n {f({x_i})} )}^2}} } }}{{\frac{1}{n}\sum\limits_{i = 1}^n {f({x_i})} }}$$
Here, n is the number of images, and f (xi) is the IDA value at the defocus position xi. Table 4 presents a comparison of the properties of the IDA methods shown in Fig. 5, with ɛ = 3 µm. The bold data in the table denote the best values. For the noise-free cases, the proposed method exhibits the best performance, implying that it is superior in terms of both the sensitivity and the working range.
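Both evaluation metrics are elementary to compute. The sketch below (our own function names) implements Eqs. (22) and (23) for a normalized IDA curve, i.e. one whose peak value at the focusing plane is 1; σ is the population standard deviation.

```python
import numpy as np

def sensitivity_ts(f_focus_plus_eps):
    """Ts of Eq. (22) for normalized IDA values, where f(x_f) = 1 at the focus."""
    return (1.0 - f_focus_plus_eps) / f_focus_plus_eps

def working_range(ida_values):
    """sigma/mu of Eq. (23): population standard deviation over the mean of the curve."""
    v = np.asarray(ida_values, dtype=float)
    return v.std() / v.mean()
```

For instance, a curve that has already dropped to 0.5 at offset ɛ gives Ts = 1, while a perfectly flat curve gives σ/µ = 0 (no working range).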


Table 4. Comparison of properties of the different IDA methods

Figure 6 shows the spatial image and DLS under the effects of Gaussian and speckle noise. Column 6(a) shows the noise-free images; columns 6(b) and 6(c) show the images with Gaussian noise; and columns 6(d) and 6(e) show the images with speckle noise. Because of the noise, the geometric features of the DLSs change considerably. Figure 7 shows the performance of the different methods under the effects of noise. As shown in Figs. 7(a) and 7(b), under Gaussian noise all the methods still exhibit the properties of monotonicity, unimodality, and unbiasedness, although the sensitivity and fluctuation change. Under speckle noise, as shown in Figs. 7(c) and 7(d), only the proposed method retains the properties of monotonicity, unimodality, and unbiasedness. Comparisons of the assessment parameters Ts and σ/µ are presented in Tables 5 and 6, respectively. The bold data in the tables represent the best values, and the symbol ‘/’ denotes that the method does not exhibit the properties of monotonicity, unimodality, or unbiasedness. Overall, the proposed method demonstrates an advantage in terms of anti-noise capacity, which is consistent with the theoretical analysis.


Fig. 6. Spatial image and DLS (a) without noise, with Gaussian noise having a normalized variance of (b) 0.0001 and (c) 0.0003, and with speckle noise having a normalized variance of (d) 0.001 and (e) 0.003.



Fig. 7. Analysis of the anti-noise performance with Gaussian noise having a normalized variance of (a) 0.0001 and (b) 0.0003 and speckle noise having a normalized variance of (c) 0.001 and (d) 0.003.



Table 5. Parameter Ts for different IDA methods with noise*


Table 6. Parameter σ/μ for different IDA methods with noise*

5. Experimental analysis

To verify the feasibility and superiority of the proposed method, a micro-imaging experiment was performed. The main devices of the imaging system included a microscope, camera, piezoelectric translation (PZT) stage, and resolution target; their parameters are presented in Table 7. The PZT was used to change the defocus amount, and the resolution target was chosen as the object for assessing the image definition. According to the microscope parameters, the system DoF was approximately 10 µm. First, the probable focusing position was estimated, and the start position of the defocus imaging was set at a deviation of 20 µm from this position. In the experiment, the movement range and step of the PZT were 40 µm and 2 µm, respectively, and 21 images were captured at different working distances. Figure 8 shows parts of the spatial images and DLSs. In Fig. 8(a), the interval of working distance between any adjacent images is 4 µm. Near the focusing plane, when the defocus amount is extremely small, the featural differences among the spatial images are unapparent, whereas those among the DLSs are notable. It is clear that an image presenting a unique PSF better describes the defocus amount, which is important for realizing a high sensitivity of the IDA method.


Fig. 8. Parts of (a) spatial images, for which the interval of working distance for any adjacent images is 4 µm, and the (b) corresponding DLSs.



Table 7. Parameters of the devices used in the experiment

Figure 9 presents the experimental results. Every method locates the best-definition image and demonstrates the properties of monotonicity, unimodality, and unbiasedness; however, the proposed method exhibits the best values of both Ts and σ/µ, as shown in Table 8. This makes it competitive in the auto-focusing process and is consistent with the theoretical and simulation findings.


Fig. 9. Performance comparison of different IDA methods.



Table 8. Comparison of properties of the IDA methods (ɛ = 32µm)

6. Conclusion and discussion

In this paper, we proposed an IDA method based on the assembly TM of the DLSs. The feasibility and superiority of the proposed method were verified through theoretical analysis, simulation, and experiment. As the unique variable of the image in the defocus process, the PSF is described efficiently using the DLS, thereby enhancing the differences in the image features for different defocus amounts. By using the DLSs and their assembly TMs, the proposed method not only satisfies the properties of monotonicity, unimodality, and unbiasedness but also exhibits superior sensitivity and anti-noise performance. By efficiently distilling and describing the intrinsic features of the defocus process, the proposed method presents a desirable IDA function and can achieve a higher accuracy and efficiency in the auto-focusing process.

Funding

National Natural Science Foundation of China (51775352, 61727814).

Disclosures

The authors declare no conflicts of interest.

References

1. J. Li, X. Lu, Q. Zhang, B. Li, J. Tian, and L. Zhong, “Dual-channel simultaneous spatial and temporal polarization phase-shifting interferometry,” Opt. Express 26(4), 4392–4400 (2018). [CrossRef]  

2. S. He, W. Xue, Z. Duan, Q. Sun, X. Li, H. Gan, J. Huang, and J. Y. Qu, “Multimodal nonlinear optical microscopy reveals critical role of kinesin-1 in cartilage development,” Biomed. Opt. Express 8(3), 1771–1782 (2017). [CrossRef]  

3. H. Liu, Z. Ye, X. Wang, L. Wei, and L. Xiao, “Molecular and living cell dynamic assays with optical microscopy imaging techniques,” Analyst 144(3), 859–871 (2019). [CrossRef]  

4. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018). [CrossRef]  

5. Z. Yan, G. Chen, W. Xu, C. Yang, and Y. Lu, “Study of an image autofocus method based on power threshold function wavelet reconstruction and a quality evaluation algorithm,” Appl. Opt. 57(33), 9714–9721 (2018). [CrossRef]  

6. C. S. Chen, C. M. Weng, C. J. Lin, and H. W. Liu, “The use of a novel auto-focus technology based on a GRNN for the measurement system for mesh membranes,” Microsyst. Technol. 23(2), 343–353 (2017). [CrossRef]  

7. B. J. Jung, H. J. Kong, and G. Jeon B, “Autofocusing method using fluorescence detection for precise two-photon nanofabrication,” Opt. Express 19(23), 22659 (2011). [CrossRef]  

8. S. Liu, M. Liu, and Z. Yang, “An image auto-focusing algorithm for industrial image measurement,” Eurasip. J. Adv. Sig. Pr. 2016(1), 70 (2016). [CrossRef]  

9. Y. Wang and Y. Chen, “Acousto-optic tunable filter chromatic aberration analysis and reduction with auto-focus system,” J. Mod. Opt. 65(12), 1450–1458 (2018). [CrossRef]  

10. N. B. Nill and B. Bouzas, “Objective image quality measure derived from digital image power spectra,” Opt. Eng. 31(4), 813–826 (1992). [CrossRef]  

11. M. E. Rudnaya, R. M. M. Mattheij, and J. M. L. Maubach, “Evaluating sharpness functions for automated scanning electron microscopy,” J. Microsc. 240(1), 38–49 (2010). [CrossRef]  

12. H. A. İlhan, M. Doğar, and A. M. Özcan, “Digital holographic microscopy and focusing methods based on image sharpness,” J. Microsc. 255(3), 138–149 (2014). [CrossRef]  

13. J. Kostencka, T. Kozacki, and K. Liżewski, “Autofocusing method for tilted image plane detection in digital holographic microscopy,” Opt. Commun. 297, 20–26 (2013). [CrossRef]  

14. R. Hassen, Z. Wang, and M. M. Salama, “Image sharpness assessment based on local phase coherence,” IEEE Trans. Image Process 22(7), 2798–2810 (2013). [CrossRef]  

15. L. S. S. Singh, A. K. Ahlawat, K. M. Singh, and T. R. Singh, “A review on image enhancement methods on different domains,” Int. J. Eng. Invent. 6, 49–55 (2017).

16. B. Xiao, L. Li, Y. Li, W. Li, and G. Wang, “Image analysis by fractional-order orthogonal moments,” Inf. Sci. 382-383, 135–149 (2017). [CrossRef]  

17. H. Zhu, “Image representation using separable two-dimensional continuous and discrete orthogonal moments,” Pattern. Recogn. 45(4), 1540–1558 (2012). [CrossRef]  

18. I. Batioua, R. Benouini, K. Zenkouar, S. Najah, H. E. Fadili, and H. Qjidaa, “3D Image Representation Using Separable Discrete Orthogonal Moments,” Procedia. Comp. Sci. 148, 389–398 (2019). [CrossRef]  

19. R. Benouini, I. Batioua, K. Zenkouar, A. Zahi, H. E. Fadili, and H. Qjidaa, “Fast and accurate computation of Racah moment invariants for image classification,” Pattern. Recogn. 91, 100–110 (2019). [CrossRef]  

20. M. Yamni, A. Daoui, O. E. Ogri, H. Karmouni, M. Sayyouri, and H. Qjidaa, “Influence of Krawtchouk and Charlier moment’s parameters on image reconstruction and classification,” Procedia. Comp. Sci. 148, 418–427 (2019). [CrossRef]  

21. R. Nayak and D. Patra, “Super resolution image reconstruction using weighted combined pseudo-Zernike moment invariants,” AEU-Int. J. Electron. Commun. 70(11), 1496–1505 (2016). [CrossRef]  

22. H. Cheng and S. M. Chung, “Action recognition from point cloud patches using discrete orthogonal moments,” Multimed. Tools Appl. 77(7), 8213–8236 (2018). [CrossRef]  

23. C. Lim, R. Paramesran, W. A. Jassim, Y. Yu, and K. N. Ngan, “Blind image quality assessment for Gaussian blur images using exact Zernike moments and gradient magnitude,” J. Franklin Inst. 353(17), 4715–4733 (2016). [CrossRef]  

24. L. Zhang, G. B. Qian, and W. W. Xiao, “Geometric invariant blind image watermarking by invariant Tchebichef moments,” Opt. Express 15(5), 2251–2261 (2007). [CrossRef]  

25. D. V. Uchaev and V. A. Malinnikov, “Chebyshev-based technique for automated restoration of digital copies of faded photographic prints,” J. Electron. Imaging. 26(1), 011024 (2017). [CrossRef]  

26. L. Li, W. Lin, X. Wang, G. Yang, K. Bahrami, and A. C. Kot, “No-reference image blur assessment based on discrete orthogonal moments,” IEEE Trans. Cybern. 46(1), 39–50 (2016). [CrossRef]  

27. G. Ju, X. Qi, H. Ma, and C. Yan, “Feature-based phase retrieval wavefront sensing approach using machine learning,” Opt. Express 26(24), 31767–31783 (2018). [CrossRef]  

28. A. Kumar, R. Paramesran, C. L. Lim, and S. C. Dass, “Tchebichef moment based restoration of Gaussian blurred images,” Appl. Opt. 55(32), 9006–9016 (2016). [CrossRef]  

29. R. Gajjar and T. Zaveri, “Defocus blur parameter estimation using polynomial expression and signature based methods,” 2017 4th Int. Conf. Signal. Proc. Integ. Network (SPIN) pp. 71–75.

30. J. P. Oliveira, M. A. T. Figueiredo, and J. M. Bioucas-Dias, “Parametric blur estimation for blind restoration of natural images: Linear motion and out-of-focus,” IEEE Trans. Image Process 23(1), 466–477 (2014). [CrossRef]  

31. M. Sakano, N. Suetake, and E. Uchino, “A robust point spread function estimation for out-of-focus blurred and noisy images based on a distribution of gradient vectors on the polar plane,” Opt. Rev. 14(5), 297–303 (2007). [CrossRef]  

32. P. T. Yap and P. Raveendran, “Image focus measure based on Chebyshev moments,” IEEE Proc-Vis Image Signal Process 151(2), 128–136 (2004). [CrossRef]  

33. K. Thung, R. Paramesran, and C. Lim, “Content-based image quality metric using similarity measure of moment vectors,” Pattern Recogn. 45(6), 2193–2204 (2012). [CrossRef]  

34. L. Guo, X. Cao, and L. Liu, “A Novel Autofocus Measure Based on Weighted Walsh-Hadamard Transform,” IEEE Access 7, 22107–22117 (2019). [CrossRef]  


