Optica Publishing Group

Fast digital refocusing and depth of field extended Fourier ptychography microscopy

Open Access

Abstract

Fourier ptychography microscopy (FPM), which shares its roots with the synthetic aperture technique and phase retrieval methods, is a recently developed computational super-resolution microscopy technique. By turning on light-emitting diode (LED) elements sequentially and acquiring the corresponding images, each of which contains different spatial frequencies, FPM achieves wide field-of-view (FOV), high-spatial-resolution imaging and phase recovery simultaneously. Conventional FPM assumes that the sample is sufficiently thin and strictly in focus. Nevertheless, even for a relatively thin sample, non-planar distribution characteristics and a non-ideal position/posture of the sample will cause all or part of the FOV to be defocused. In this paper, we propose a fast digital refocusing and depth-of-field (DOF) extended FPM strategy that takes advantage of the image lateral shift caused by sample defocusing under varied-angle illuminations. The lateral shift amount is proportional to the defocus distance and the tangent of the illumination angle. Instead of searching for the optimal defocus distance with an optimization strategy, which is time-consuming, the defocus distance of each subregion of the sample is obtained precisely and quickly by calculating the relative lateral shifts corresponding to different oblique illuminations. A digital refocusing strategy rooted in the angular spectrum (AS) method is then integrated into the FPM framework to achieve high-resolution and phase reconstruction for each part of the sample, which means the DOF of FPM can be effectively extended. The feasibility of the proposed method in fast digital refocusing and DOF extension is verified in experiments with a USAF chart and biological samples.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fourier ptychography microscopy (FPM) [1–9] is a recently developed computational microscopic imaging method that can simultaneously achieve a large field-of-view (FOV), high spatial resolution and phase imaging. By turning on light-emitting diodes (LEDs) sequentially or in a multiplexed fashion [10] and stitching the corresponding low-resolution (LR) images together in the Fourier domain, FPM can break through the frequency limit of the employed objective, which is determined by its numerical aperture (NA) and the illumination wavelength. Therefore, the space-bandwidth product (SBP) of the optical imaging system can be effectively increased without any precision mechanical scanning. Nevertheless, there is a critical assumption in the conventional FPM framework: the sample under test must be sufficiently thin and strictly in focus. In general, the thin-sample assumption is considered to be consistent with the first Born approximation condition ($kt\delta n \ll 1$), where k is the wave vector of the illumination source, t indicates the sample thickness and $\delta n$ is the refractive index difference between sample and medium. Therefore, the theoretical sample thickness in the FPM framework should be less than 0.5 µm for a typical biological sample in air [11]. However, even for a relatively thin sample, non-planar distribution characteristics and a non-ideal position/posture of the sample will cause all or part of the FOV to be out of focus. For example, if a thin sample is distributed on a three-dimensionally curved surface, at most only part of it can be in focus in any case. Besides, since precise pose adjustment is difficult to perform during sample placement, even a thin sample distributed on a plane may be fully or partially out of focus (e.g., when the sample plane is tilted). The above-mentioned defocusing situations result in an evident decrease in the quality of the reconstructed high-resolution (HR) complex image. Therefore, the depth of field (DOF) of a conventional FPM system is restricted to a very small range, which limits its scope of application.

To deal with these problems, a conventional approach is mechanical refocusing, which relies heavily on the accuracy and response time of the mechanical moving device and significantly increases the cost of system construction. In contrast, with the help of computational imaging techniques, some FPM-based reconstruction strategies have been proposed to realize digital refocusing and extend the DOF of the microscopic system. For instance, digital refocusing can be achieved by embedding an optimization search module in the standard FPM algorithm framework. By defining a convergence index and iteratively searching for its maximum value over different defocus distances in the FPM scheme, Zi. Bian et al. realized defocus distance correction and imaging DOF extension for thin samples [12]. Nevertheless, this optimization search module involves many repetitive calculations, so digital refocusing becomes a very time-consuming process, especially when the defocus distance differs from part to part. Besides, by implementing different 3D sample models in the FPM framework, such as the multi-slice [13] and 3D k-space models [14], tomography has been accomplished for thick samples. These strategies can also be utilized for thin samples with non-ideal distribution and placement; however, their reconstruction algorithms may be time-consuming.

To increase the digital refocusing efficiency, new methods have been proposed in terms of both system working modes and reconstruction algorithms. Symmetrical illumination can alleviate the lateral shift of the out-of-focus image to a certain extent and therefore expand the DOF [15]. Although a high imaging quality can be obtained at a defocus distance of 80 µm, symmetrical illumination only partially eliminates the effect of the image shift and may cause further blurring of the image. In terms of reconstruction algorithms, different from Zi. Bian et al.'s scheme of embedding an optimization search module within the standard FPM scheme, Claveau et al. realized DOF extension and digital refocusing by propagating the reconstructed HR complex image to different defocus distances and finding the optimal focus position with an image processing pipeline [16]. However, this strategy needs to reconstruct a stack of focal planes before the defocus distance of each cell can be obtained by the image processing pipeline; that is, obtaining the defocus distance is still an optimization procedure and cannot be done directly. Some methods proposed for digital holography are also effective in DOF extension [17]; they obtain the defocus distance with autofocusing metrics based on contrast evaluation. In general, many methods have been proposed to obtain the defocus distance and realize DOF extension, and some of them are rather fast, but most of them remain optimization strategies.

In this paper, we propose a fast digital refocusing and depth-of-field extended Fourier ptychography microscopy that takes advantage of the image lateral shift caused by sample defocusing under angle-varied illuminations. The lateral shift amount is proportional to the defocus distance and the tangent of the illumination angle. Instead of using a time-consuming optimization search module, the defocus distance of each subregion of the sample is obtained precisely and quickly by calculating the relative lateral shifts corresponding to different oblique illuminations. A digital refocusing strategy rooted in the angular spectrum (AS) method is then integrated into the FPM framework to achieve high-resolution (HR) and phase reconstruction for each subregion of the sample, which means the DOF of FPM is effectively extended. The feasibility of the proposed method in fast digital refocusing and DOF extension is verified in experiments with a USAF chart and biological samples. This paper is organized as follows. The principle of the standard FPM framework is presented in Section 2.1. The characteristics of the lateral shift of an out-of-focus sample under oblique illumination are discussed in Section 2.2, and the workflow of our method is presented in Section 2.3. In Section 3, simulations are performed to verify the ability of the proposed method to recover the pupil function under defocusing. In Section 4, the USAF chart and biological samples are used to demonstrate the effectiveness of our method in digital refocusing and DOF expansion. Conclusions are summarized in Section 5.

2. Principle

2.1 FPM principle and system setup

A conventional FPM system setup is shown in Fig. 1(a), in which an LED array is utilized for varied-angle illumination. FPM assumes that the sample is sufficiently thin and illuminated by a monochromatic plane wave. Illuminations from different angles result in different spectrum shifts in the Fourier domain. Unlike real-space ptychography, which acquires diffraction patterns in the Fourier domain [18,19], FPM records LR images directly in the spatial domain, which efficiently relaxes the dynamic-range requirement of the camera. By stitching LR images together in the Fourier domain, FPM can achieve large-FOV, high-resolution and phase-recovery imaging simultaneously. The LR image acquisition process can be described by Eq. (1).

$${I_n}(x,y) = {|{{\Im^{ - 1}}(\Im (t(x,y)) \cdot P(u,v))} |^2},$$
where $t(x,y) = s(x,y) \cdot {e^{i(x{k_{xn}} + y{k_{yn}})}}$ denotes the exit wave of the sample $s(x,y)$ under oblique illumination with wavevector $({k_{xn}},{k_{yn}})$. '$\Im$' and '${\Im^{-1}}$' indicate the Fourier and inverse Fourier transforms respectively. $P(u,v)$ is the pupil function of the objective, $(x,y)$ are the 2D coordinates in the spatial domain, $(u,v)$ are the corresponding spatial frequencies in the Fourier domain, and ${I_n}(x,y)$ is the intensity image acquired by the camera.
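As a concrete illustration, the forward model of Eq. (1) can be sketched in a few lines of Python with NumPy. This is a minimal simulation, not the authors' code; the function name, the use of normalised frequency units and the circular test pupil are our own assumptions.

```python
import numpy as np

def simulate_lr_image(s, kx, ky, pupil):
    """Simulate one low-resolution FPM intensity image following Eq. (1).

    s      : complex sample transmission function s(x, y)
    kx, ky : illumination wavevector components (radians per pixel, assumed)
    pupil  : pupil mask P(u, v), centred (fftshifted) in the Fourier domain
    """
    ny, nx = s.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Oblique plane-wave illumination tilts the exit wave: t = s * exp(i(x kx + y ky))
    t = s * np.exp(1j * (x * kx + y * ky))
    # Low-pass filter the exit wave by the objective pupil in the Fourier domain
    T = np.fft.fftshift(np.fft.fft2(t))
    lr_field = np.fft.ifft2(np.fft.ifftshift(T * pupil))
    # The camera records intensity only; the phase is lost
    return np.abs(lr_field) ** 2
```

Each choice of $({k_{xn}},{k_{yn}})$ shifts a different region of the sample spectrum into the pupil passband, which is what later allows the LR images to be stitched together in the Fourier domain.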


Fig. 1. System setup of FPM. (a) Imaging principle of FPM. (b) System setup in actual experiments.


As shown in Fig. 1(b), we built an FPM system to acquire LR images. We utilize an objective (OPTO, TC23004, magnification 2x, NA≈0.1, FOV 4.25 mm×3.55 mm, working distance 56 mm) instead of a conventional microscope for optical imaging, which simplifies system construction and offers a longer working distance. An LED array (CMN, P 2.5, 19×19) is used for varied-angle illumination; the illumination wavelength is 470 nm. A camera (FLIR, BFS-U3-200S6M-C, sensor size 1”, dynamic range 71.89 dB, pixel size 2.4 µm) is used for recording LR images. In our experiments, the distance between the LED array and the sample plane is set to 83 mm. Using an exposure time of 30 ms, we acquire a set of LR images as raw data.

In the conventional FPM strategy, several aberrations in the optical imaging system decrease the final reconstruction quality, including the spatially varying pupil aberration of the objective [20], lateral motion of the LR images [21], defocus (termed defocus aberration in this paper), intensity fluctuation of the LED array [22] and so on. Although the EPRY algorithm [23] can improve the reconstruction quality to some extent, the recovered pupil function is a coupled aberration that contains all of these, especially the coupling of the spatially varying pupil and defocus aberrations. Thus, the correct spatially varying pupil aberration of the objective is difficult to extract from the coupled pupil function recovered by the EPRY algorithm.

2.2 Model of image shift

If a sample is placed at an out-of-focus plane, there will be an image lateral shift between the bright-field (BF) LR images corresponding to varied-angle illuminations. Zheng et al. proposed a single-frame autofocusing hardware scheme for whole-slide imaging systems by searching for the non-shift position [24]. We found that this image lateral shift is proportional to the defocus distance and the tangent of the illumination angle; different defocus distances therefore result in different lateral shifts between BF images under varied-angle illuminations [25]. As shown in Fig. 2, a sample placed at an out-of-focus plane is illuminated by two monochromatic light sources a and b with different illumination angles, in which light source a is located on the optical axis and the angle between light sources a and b is $\theta$. The defocus distance between the focus and defocus planes is z. According to the Fresnel transfer function, there is no lateral shift between the focused and out-of-focus images if the sample is vertically illuminated [25,26]. Thus, the image corresponding to the vertical-illumination LED a is used as the reference, and the relation between the defocus distance z and the lateral shift can be described by Eq. (2), where $\delta {s_i}$ is the lateral shift between the reference and the ith LR image (i=2, 3…), and ${A_i} = \eta \cdot \tan {\theta _i}$ is a constant decided by the incident angle of the ith LR image; here $\eta$ is a constant determined by the optical system parameters, and its detailed derivation can be found in [25]. Figure 3 shows the lateral shifts between different LR images at a defocus distance of 200 µm and the corresponding lateral shift correction.

$$\delta {s_i} = \eta \cdot \tan {\theta _i} \cdot z = {A_i} \cdot z,(i = 2,3,4\ldots ),$$
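Inverting Eq. (2) to estimate z is then a one-line division per illumination angle. The sketch below (a hypothetical Python helper of our own; the sample values in the usage note are the calibrated coefficients reported later in Section 4.1) averages the estimates from several angles for robustness:

```python
def defocus_from_shifts(shifts, A):
    """Estimate the defocus distance z by inverting Eq. (2): z = delta_s_i / A_i.

    shifts : measured lateral shifts delta_s_i (pixels), one per oblique LED
    A      : pre-calibrated coefficients A_i = eta * tan(theta_i) (pixels/µm)
    Returns the average of the per-angle estimates of z (µm).
    """
    zs = [ds / a for ds, a in zip(shifts, A) if a != 0]
    return sum(zs) / len(zs)
```

For example, with the calibrated values ${A_{x2}} = 0.0264$ and ${A_{y4}} = -0.0248$, shifts of 2.64 and −2.48 pixels both point to a defocus distance of about 100 µm.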


Fig. 2. Imaging model for the lateral shift of the image of a defocused sample under oblique illumination.



Fig. 3. Examples of lateral shifts at a 200 µm defocus distance (see Visualization 1). (a) Image lateral shift with correction corresponding to the 16th LED. (b) Image lateral shift without correction corresponding to the 16th LED.


According to Eq. (2), calculating the defocus distance z is transformed into calculating the lateral image shift $\delta {s_i}$. To obtain an accurate lateral shift between different images, several image registration algorithms can be utilized, including the Mean Absolute Differences (MAD) method [27], the Sum of Absolute Differences (SAD) method [28], the phase correlation method [29] and so on. In our method, inspired by Li. Bian et al. [21], we utilize an image registration algorithm that can be expressed as,

$$(\Delta {x_i},\Delta {y_i}) = \mathop {\arg \min }\limits_{(\Delta {x_i},\Delta {y_i})} ({I_c}(x,y) - {I_i}(x,y)),(i = 2,3,4\ldots ),$$
where ${I_c}(x,y)$ and ${I_i}(x,y)$ are the intensities of the reference and the ith LR image respectively. The lateral shift $(\Delta {x_i},\Delta {y_i})$ along each axis relative to the reference position is found by minimizing the difference between the ith LR image and the reference image. For instance, as shown in Fig. 4(a), the reference and LR images are divided into several subregions. It is worth pointing out that a selected subregion is likely to exhibit different features at different degrees of defocus; thus, in the proposed method, the size of the subregion can be varied to match the size of the features. We calculate the lateral shift of each subregion between the two images, and a lateral shift map is then obtained, as shown in Fig. 4(b). Red triangles mark the center coordinates of each subregion, and the offsets between the blue dots and red triangles describe the lateral shifts between the LR and reference images. Figures 4(c1-c2) are enlarged views of the red-boxed and blue-boxed areas shown in Fig. 4(b). We found that, due to the non-planar distribution characteristics or the non-ideal position/posture of the biological sample, different subregions exhibit different lateral shifts between the two images. In particular, comparing Fig. 4(c1) and (c2), the shift between the reference and LR images in the red-boxed area is 3 pixels, while that in the blue-boxed area is −1 pixel. This verifies the non-planar distribution characteristics mentioned above.
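As an illustration of this registration step, the sketch below uses the phase correlation method [29] rather than the brute-force argmin of Eq. (3); for a pure integer-pixel translation both recover the same shift. The function name and the sign convention (shift of `img` relative to `ref`) are our own assumptions.

```python
import numpy as np

def register_shift(ref, img):
    """Integer-pixel shift of img relative to ref via phase correlation.

    The normalised cross-power spectrum of the two images has a single
    sharp peak whose location is the translation between them.
    """
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    R /= np.abs(R) + 1e-12                     # keep only the phase difference
    corr = np.fft.ifft2(R).real                # delta-like correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices in the upper half of each axis to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Running this per subregion yields exactly the kind of lateral shift map shown in Fig. 4(b).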


Fig. 4. Lateral shift mapping calculated in our method. (a) Examples of dividing subregions, where each subregion is set to 512 × 512. (b) Lateral shift mapping between LR and reference images. (c1) Enlarged subregion corresponding to the red-boxed area in (b). (c2) Enlarged subregion corresponding to the blue-boxed area in (b).


Furthermore, using a computer with an i5-8300H CPU and the MATLAB 2019b platform, without any parallel computing framework, we compared the processing time for calculating the defocus distance with the proposed method and with the optimization search module proposed by Zi. Bian et al. [12]. For an LR image with 256 × 256 pixels, the proposed method needs only 0.024 s to obtain the lateral shifts between the ith LR images (i=2,3…9) and the reference and to calculate the defocus distances. In contrast, with the defocus distance searching range set from −100 to 100 µm, which is a normal defocus range, and the searching step size set to 10 µm, the digital refocusing method in [12] needs 15.21 s on the same computational platform. Furthermore, as shown in Fig. 4(b), it takes only 0.8 s to obtain the lateral shift map of a 5472 × 3648 image, where each subregion contains 512 × 512 pixels. On the same computational platform, the searching method needs 55 s for a single subregion and more than one hour for the whole sample.

2.3 Digital refocusing scheme

Digital refocusing, an effective method for reconstructing images with different defocus distances, can be classified into two categories. First, digital refocusing can be cast as an optimization search module, as mentioned above [12]; however, this strategy suffers from many repetitive calculations when searching for the optimal defocus distance. Second, digital refocusing can be applied to the final reconstructed HR complex distribution of FPM. By propagating the reconstructed distribution over different defocus distances, a stack of image planes can be obtained; an image processing pipeline is then performed to find the focus planes of different cells, and the DOF is extended by combining these cells together [16]. Nevertheless, post-refocusing to several focal planes is still required before the image processing pipeline can be performed.

In our method, benefiting from the prior knowledge of the defocus distance calculated in Section 2.2, the AS method can be inserted into the iterations of the conventional FPM framework to achieve digital refocusing. Thus, the spatially varying pupil aberration and the defocus aberration can be recovered separately in this strategy. The complex distribution of the refocused image can be described as follows,

$${s_1}(x,y) = {\Im ^{ - 1}}\{ \Im ({s_0}(x,y)) \cdot H({k_x},{k_y},z)\} ,$$
where ${s_0}(x,y)$ is a known complex optical field in a given plane ${z_0}$, and ${s_1}(x,y)$ is the field in plane ${z_1}$ obtained by the AS method. $H({k_x},{k_y},z)$ is described in Eq. (5), where z is the defocus distance between planes ${z_0}$ and ${z_1}$.
$$H({k_x},{k_y},z) = \exp (j\frac{{2\pi }}{\lambda } \cdot z \cdot \sqrt {1 - {k_x}^2 - {k_y}^2} ).$$
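Eqs. (4)–(5) translate directly into a short propagation routine. The following is a generic AS propagator in Python; the function name, the frequency normalisation and the evanescent-wave cutoff are our assumptions, not the authors' code.

```python
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    """Propagate a complex field over a distance z with the AS method (Eqs. (4)-(5)).

    field      : 2D complex array s0(x, y) in the starting plane
    z          : propagation (defocus) distance, same units as wavelength and dx
    wavelength : illumination wavelength
    dx         : pixel pitch in the sample plane
    """
    ny, nx = field.shape
    # Dimensionless direction cosines kx, ky, as used in Eq. (5)
    kx = np.fft.fftfreq(nx, d=dx) * wavelength
    ky = np.fft.fftfreq(ny, d=dx) * wavelength
    KX, KY = np.meshgrid(kx, ky)
    arg = 1 - KX**2 - KY**2
    # Transfer function H(kx, ky, z) of Eq. (5)
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0)))
    H[arg < 0] = 0  # suppress evanescent components (assumption: hard cutoff)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because H has unit modulus for propagating components, applying the routine with z and then −z returns the original field, which is what makes the refocus/back-propagate cycle in the next subsection lossless.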

The framework of digital refocusing of the proposed method is shown in Fig. 5.


Fig. 5. Algorithm outline of digital refocusing in the proposed method.


At the beginning of the digital refocusing algorithm, we initialize the HR reconstruction ${s_1}(x,y)$ in the spatial domain at the focus plane. The amplitude A and phase $\varphi$ can be an all-ones matrix or initialized with the resized central LR image. The corresponding spectrum is denoted ${S_1}(u,v)$. We obtain the spectrum subregion ${O_i}(u,v)$ corresponding to the ith LR image according to the conventional FPM strategy. According to Eq. (4), we propagate ${O_i}(u,v)$, which corresponds to the focus plane, onto plane z and obtain the spatial distribution ${o_{iref}}(x,y) = {A_i}{e^{j\varphi }}$ at plane z. The amplitude ${A_i}$ is replaced with the square root of the intensity of the ith LR image acquired in the experiments while keeping the phase $\varphi$ unchanged. Finally, the updated complex distribution is propagated back to the focus plane, and the object and pupil functions are updated according to the EPRY algorithm.
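The amplitude-replacement step at the heart of this loop can be condensed into a few lines. The sketch below is our own simplified Python rendering (the function name is hypothetical, EPRY's object/pupil gradient update is omitted, and the AS transfer function H of Eq. (5) is assumed precomputed for the subregion's defocus distance z):

```python
import numpy as np

def refocused_update(S1, pupil, I_i, H):
    """One refocused amplitude-replacement step (a sketch of Fig. 5).

    S1    : current HR spectrum estimate, cropped to the i-th pupil subregion
    pupil : pupil function P(u, v) over that subregion
    I_i   : measured intensity of the i-th LR image
    H     : precomputed AS transfer function H(kx, ky, z) of Eq. (5)
    """
    O = S1 * pupil                                    # filter by the pupil
    o_z = np.fft.ifft2(O * H)                         # field at the defocus plane z
    o_z = np.sqrt(I_i) * np.exp(1j * np.angle(o_z))   # enforce measured amplitude, keep phase
    return np.fft.fft2(o_z) * np.conj(H)              # back-propagate to the focus plane
```

Since H has unit modulus for propagating waves, multiplying by its conjugate is exactly propagation by −z, so the constraint measured at plane z is consistently imposed on the focus-plane spectrum.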

The flow diagram of the proposed fast digital refocusing FPM is shown in Fig. 6. First, similar to Fig. 4, a lateral shift map between the ith and reference images is obtained according to Eq. (3). Second, a specific subregion is selected and the corresponding defocus distance ${z_i}$ is calculated with Eq. (2), where i indicates that the defocus distance is calculated from the reference and the ith LR image. These two steps are repeated until ${z_i}(i = 2,4,6,8)$ are obtained for the same subregion; to increase the accuracy, the average z of ${z_i}(i = 2,4,6,8)$ is used as the defocus distance of that subregion. Third, the digital refocusing algorithm shown in Fig. 5 is utilized to recover both the amplitude and phase of the sample in the specific subregion, and the pupil function of the objective can also be recovered. Finally, this flow is repeated to recover the object function of the other subregions, so that digital refocusing and DOF extension over the full FOV of the object can be realized.


Fig. 6. Flow diagram of the proposed fast digital refocusing FPM.


3. Simulations

To evaluate the ability of the proposed method to perform digital refocusing at different defocus distances and to recover the spatially varying pupil aberration, we carried out the following simulations. First, we performed a simulation to compare the ability of the EPRY algorithm and the proposed method to reconstruct the HR image at different defocus distances. Changing the defocus distance from 0 to 300 µm with a step size of 20 µm, we calculated the Structural Similarity (SSIM) between the ground truth amplitude and the HR amplitude reconstructed by each algorithm. As shown in Fig. 7, the blue and red curves are the results of the EPRY and the proposed algorithms respectively. The EPRY algorithm can reconstruct the HR images effectively within a small range of defocus distances, such as 0-80 µm. As the defocus distance increases further, for example beyond 150 µm, the reconstruction quality of the EPRY algorithm decreases rapidly because the algorithm fails to converge when the defocus distance is too large. In contrast, the proposed method maintains a high SSIM value as the defocus distance increases. Figure 8 shows the HR amplitudes and phases reconstructed by the two methods at defocus distances of 0, 100 and 200 µm respectively.


Fig. 7. SSIM values of the EPRY and the proposed algorithms. The blue and red curves correspond to the EPRY and the proposed method respectively.



Fig. 8. Reconstructed HR amplitude and phase corresponding to the proposed method and the EPRY method at defocus distances of 0, 100 and 200 µm respectively.


Second, a simulation was performed to demonstrate that the spatially varying pupil aberration can be recovered correctly without coupling to the defocus aberration, as shown in Fig. 9. In this simulation, 225 LR images were used in the iterations, and the defocus distance was set to 200 µm. As shown in Fig. 9(a1-a2), two pictures are used as the ground truth amplitude and phase of the object. Using the Ornstein-Zernike equations [30], we generated the spatially varying pupil aberration distribution shown in Fig. 9(a3); the corresponding 37 Zernike polynomial coefficients can be found in Table 1. The EPRY algorithm and the proposed algorithm were then used to recover the object and pupil function respectively. Figures 9(b1-b3) show the amplitude, phase and pupil function recovered by the EPRY algorithm, and Figs. 9(c1-c3) show those recovered by the proposed method. According to the results in Fig. 9, the spatially varying pupil aberration can be recovered effectively by inserting digital refocusing into the FPM optimization process, while the EPRY algorithm can recover neither the object nor the pupil function when the defocus distance is too large.


Fig. 9. Spatially varying pupil aberration recovery result in FPM. (a1-a2) Ground truth of amplitude and phase of object respectively. (a3) Spatially varying pupil aberration generated by Ornstein-Zernike equations in simulations. (b1-b2) Amplitude and phase of object recovered by the EPRY algorithm. (b3) Coupling pupil function recovered by the EPRY algorithm. (c1-c2) Amplitude and phase of object recovered by the proposed method. (c3) Pupil function recovered by the proposed method.



Table 1. Zernike polynomials of spatially varying pupil aberration surface

4. Experiments

4.1 System scale factor calibration

According to Eq. (2), the coefficient ${A_i}$ is a constant decided by the illumination angle, where i (i=2, 3, 4…) indicates the pair formed by the reference and the ith image. However, ${A_i}$ is unknown and should be pre-calibrated experimentally. Thus, we calibrate the coefficient ${A_i}$ by linear fitting with a set of prior defocus distances and the corresponding lateral shifts. In this step, the defocus distance is adjusted by a precision moving device with a step size of 20 µm. For each defocus distance, 25 images are acquired in total, and the optimization search module proposed in [12] is used to obtain the prior defocus distance. The 25 images at an unknown defocus distance are shown in Fig. 10(a), where 'R' is the reference image and the yellow-marked numbers denote LR images illuminated at different incident angles. Using the searching algorithm, the defocus distance corresponding to the red-boxed and blue-boxed subregions of these images is calculated as −95 µm. Then, similar to the strategy shown in Fig. 4, we calculate four lateral shift maps between the 2nd/4th/6th/8th images and the reference image respectively. For instance, with the knowledge of the defocus distance −95 µm, the lateral shifts $\delta {s_{xi}}(i = 2,4,6,8)$ and $\delta {s_{yi}}(i = 2,4,6,8)$ corresponding to the red-boxed and blue-boxed subregions are −3.7, 0, 3.2, 0 and 0, 3.2, 0, −3.2 respectively, where the subscripts x and y indicate the x and y axes. Repeating this procedure, we obtain a set of lateral shifts corresponding to different prior defocus distances, and a linear fit is then performed. Figure 10(b) shows the linear fitting results corresponding to the 2nd and 6th images, where the slope gives the coefficient ${A_i}$ of the corresponding image pair. The coefficients ${A_i}(i = 2,4,6,8)$ are calculated as ${A_{x2}} = 0.0264$, ${A_{y4}} = -0.0248$, ${A_{x6}} = -0.0260$ and ${A_{y8}} = 0.0250$, where the subscripts x and y indicate the x and y axes.
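The calibration itself reduces to a least-squares line fit of measured shifts against prior defocus distances. A minimal Python sketch with synthetic, noise-free data follows; the slope 0.0264 is chosen to mimic the reported ${A_{x2}}$, and real measurements would of course carry noise:

```python
import numpy as np

# Sketch of the A_i calibration by linear fitting (Section 4.1), on synthetic data.
z_prior = np.arange(-100.0, 101.0, 20.0)   # prior defocus distances (µm)
shifts_x2 = 0.0264 * z_prior               # lateral shifts of the 2nd image (pixels)

# Fit shift = A * z + b; the slope A is the calibration coefficient A_x2
A_x2, intercept = np.polyfit(z_prior, shifts_x2, 1)
```

With noisy measured shifts, the same `np.polyfit` call returns the least-squares slope, and the intercept serves as a sanity check that the shift vanishes at focus.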


Fig. 10. System scale factor calibration. (a) Spatial distribution of BF LR images corresponding to different LEDs. The yellow-marked images are used in the calibration, and the red-boxed and blue-boxed subregions are used for calculating the defocus distance. (b) Linear fitting results corresponding to BF images 2 and 6, where ${A_{x2}} = 0.0264$ and ${A_{x6}} = -0.0260$.


In principle, digital refocusing can be achieved with only one LR image. Nevertheless, as shown in Fig. 10(a), some BF LR images may contain both BF and dark-field (DF) subregions simultaneously as the illumination angle increases. The DF subregions obviously cannot be used for calculating the lateral shift, and the resulting calculation error would reduce the accuracy of refocusing. Fortunately, eight LR images (i=2, 3, …, 9) in total can be used for digital refocusing to obtain a more robust and accurate result. In fact, considering both computational efficiency and accuracy, we choose two pairs of spatially symmetrical LR images for fast digital refocusing in our method.

4.2 Digital refocusing with USAF chart

To demonstrate the feasibility of digital refocusing, an amplitude-only USAF chart is used. As shown in Fig. 11, the USAF chart is placed at the sample plane and moved to several random defocus planes by a moving device. For each defocus plane, a set of 225 LR images is acquired. The blue-boxed subregion shown in Fig. 11 is then selected to calculate the defocus distance according to the flow in Fig. 6; the results corresponding to the different defocus planes are −192, −101, 0, 95 and 194 µm respectively.


Fig. 11. High resolution reconstruction results of conventional FPM and the proposed method with different defocus distances.


Subsequently, with the knowledge of the defocus distance, the proposed method is used to realize digital refocusing and DOF extension, with conventional FPM as a comparison. From the results in Fig. 11, we found that when the sample is placed at the focus plane, both conventional FPM and the proposed method can recover the HR amplitude successfully. Nevertheless, as the defocus distance increases, conventional FPM cannot recover the amplitude effectively, which is consistent with the simulation result shown in Fig. 7. In contrast, the proposed method evidently improves the reconstruction quality compared with conventional FPM. These results directly prove the effectiveness of the proposed method in realizing digital refocusing and extending the FPM imaging DOF. Admittedly, the quality reconstructed by the proposed method for groups 8 and 9 at defocus distances of −192 and 194 µm is not as good as that at the focus plane, but it is still greatly improved compared with the chaotic results recovered by the conventional method at the same defocus planes.

4.3 Digital refocusing with biological sample

Generally, a biological sample is not strictly planar, and its non-planar distribution causes different defocus distances in different subregions. Conventional FPM assumes that the biological sample is placed at the focus plane, which misses much important information in different subregions when no digital refocusing scheme is used. In this section, a biological sample, Paramecium, is used to demonstrate the feasibility of the proposed method in digital refocusing. As shown in Fig. 12, the biological sample is placed randomly at the sample plane and a set of 225 LR images is acquired. The whole FOV is divided into several subregions, similar to Fig. 4, and the defocus distances of the different subregions are calculated according to Eq. (3) respectively; for instance, as shown in Fig. 12, the defocus distances of subregions 1-3 are 77, 69 and 71 µm respectively. Then, based on the knowledge of the defocus distances, the conventional FPM scheme and the proposed method are used to recover the complex distributions of the different subregions. The amplitudes and phases of the three subregions recovered by conventional FPM are shown in Fig. 12(a3-a4), (b3-b4) and (c3-c4); the corresponding results recovered by the proposed method are shown in Fig. 12(a1-a2), (b1-b2) and (c1-c2).


Fig. 12. Phase recovery results of the biological sample with the conventional FPM and the proposed method. (a1-a4) Reconstructed amplitude and phase with the proposed method and the conventional FPM at a 77 µm defocus distance. (b1-b4) Reconstructed amplitude and phase with the proposed method and the conventional FPM at a 69 µm defocus distance. (c1-c4) Reconstructed amplitude and phase with the proposed method and the conventional FPM at a 71 µm defocus distance.


Comparing the results recovered by the two methods, we found that the digital refocusing in the proposed method improves the reconstruction quality to a certain extent, especially for the phase. For instance, owing to out-of-focus imaging, the conventional FPM reconstructs the HR amplitude and phase on a defocused plane, which results in the blurred reconstruction shown in Fig. 12(a4). In contrast, as shown in Fig. 12(a2), the phase reconstructed with the proposed method shows rich sample details thanks to the combination of the FPM framework and digital refocusing. Figures 12(b) and (c) show similar results for the other sample subregions, which further proves the effectiveness of the proposed method.

5. Conclusion

In this paper, we proposed a fast digital refocusing and DOF-extended FPM by taking advantage of the image lateral shift caused by sample defocusing under varied-angle illuminations. In the optical imaging process, if a sample is placed at an out-of-focus plane, there will be lateral shifts between the images acquired under different illumination angles. Benefiting from the characteristic that the lateral shift is proportional to the defocus distance and the tangent of the illumination angle, we can transform the calculation of the defocus distance into the calculation of the image lateral shift, which means the defocus distance can be obtained mathematically. Then, digital refocusing is used to recover the HR amplitude and phase information of each subregion according to its defocus distance, and the DOF can be effectively extended. Furthermore, by embedding the pupil function recovery method into the iterations of digital refocusing, the proposed method can recover the spatially varying pupil aberration and the defocus aberration separately, even when the defocus distance increases to 200 µm, as shown in Fig. 7. Generally, the DOF of an FPM system can be increased from ±50 µm to more than ±200 µm.
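The digital refocusing step itself is the angular-spectrum propagation of Eqs. (4)-(5): transform the field, multiply by the transfer function H, and transform back. A minimal numpy sketch follows; the function name, the use of physical spatial frequencies (λ·f in place of the normalized k of Eq. (5)), and the suppression of evanescent components are our assumptions, not details taken from the paper.

```python
import numpy as np

def angular_spectrum_refocus(field, z_um, wavelength_um, pixel_um):
    """Propagate a complex field by distance z with the angular spectrum
    method: s1 = IFFT{ FFT{s0} * H }, where
    H = exp(j * (2*pi/lambda) * z * sqrt(1 - (lambda*fx)^2 - (lambda*fy)^2))."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pixel_um)   # spatial frequencies, cycles/um
    fy = np.fft.fftfreq(n, d=pixel_um)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength_um * FX) ** 2 - (wavelength_um * FY) ** 2
    kz = np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * 2.0 * np.pi / wavelength_um * z_um * kz)
    H[arg < 0] = 0.0                     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Since H(z)·H(−z) = 1 for all propagating components, refocusing by −z exactly undoes a propagation by +z, which is what lets a defocused subregion be brought back to its focal plane inside the FPM iterations.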

Although the experimental results have demonstrated the feasibility of the proposed method, some details can be improved in further work. First, the accuracy of the defocus distance estimate can be improved. In the proposed method, the defocus distance is obtained from the image shift between the reference and the other BF images. Although averaging the four defocus distances improves the accuracy to a certain extent, the accuracy of image registration remains an important factor in calculating the defocus distance. In further work, more BF images and other image registration methods can be utilized to improve the accuracy of the defocus distance. Furthermore, in our opinion, the result calculated by the proposed method can serve as an initial value for more advanced optimization methods; with this initial value, the search range of the optimization method is reduced, and so is the computational time. Second, not only EPRY but also other effective optimization methods can be embedded in the reconstruction scheme shown in Fig. 5 to handle system parameter deviations. For instance, the joint estimation procedure proposed in [31] can be embedded to correct LED positional misalignment, and other strategies can be embedded to eliminate misalignment-induced phase artefacts [32]. Moreover, optimization strategies such as the difference map (DM) and relaxed averaged alternating reflection (RAAR) [33] can also be used in our method to improve the optimization process.
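One simple registration refinement of the kind this paragraph anticipates is subpixel peak interpolation: after locating the integer correlation peak, a parabola fitted through the peak and its two neighbours along each axis gives a subpixel correction. The sketch below is a generic illustration under our own assumptions, not a method from the paper; the function name and the wrap-around neighbour handling are ours.

```python
import numpy as np

def parabolic_refine(corr, peak):
    """Refine an integer correlation peak to subpixel accuracy by
    fitting a 1D parabola through the peak and its two neighbours
    along each axis of the 2D correlation surface."""
    refined = []
    for axis in range(2):
        p = int(peak[axis])
        n = corr.shape[axis]

        def value_at(q):
            # correlation value with this axis moved to q (wrapped),
            # the other axis held at its integer peak index
            idx = list(peak)
            idx[axis] = q % n
            return corr[tuple(idx)]

        c_m, c_0, c_p = value_at(p - 1), value_at(p), value_at(p + 1)
        denom = c_m - 2.0 * c_0 + c_p
        delta = 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom
        refined.append(float(p) + delta)
    return refined
```

For a correlation surface that is locally quadratic, this recovers the true subpixel peak exactly; on real data it typically reduces the shift error well below one pixel, which tightens the defocus estimate z = δs / A accordingly.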

Funding

National Natural Science Foundation of China (61735003, 61805011).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

2. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]  

3. P. Song, S. Jiang, H. Zhang, Z. Bian, C. Guo, K. Hoshino, and G. Zheng, “Super-resolution microscopy via ptychographic structured modulation of a diffuser,” Opt. Lett. 44(15), 3645–3648 (2019). [CrossRef]  

4. Z. Bian, S. Jiang, P. Song, H. Zhang, P. Hoveida, K. Hoshino, and G. Zheng, “Ptychographic modulation engine: a low-cost DIY microscope add-on for coherent super-resolution imaging,” J. Phys. D: Appl. Phys. 53(1), 014005 (2020). [CrossRef]  

5. J. Sun, Q. Chen, J. Zhang, Y. Fan, and C. Zuo, “Single-shot quantitative phase microscopy based on color-multiplexed Fourier ptychography,” Opt. Lett. 43(14), 3365–3368 (2018). [CrossRef]  

6. A. Pan, M. Zhou, Y. Zhang, J. Min, M. Lei, and B. Yao, “Adaptive-window angular spectrum algorithm for near-field ptychography,” Opt. Commun. 430, 73–82 (2019). [CrossRef]  

7. S. Jiang, K. Guo, J. Liao, and G. Zheng, “Solving Fourier ptychographic imaging problems via neural network modeling and TensorFlow,” Biomed. Opt. Express 9(7), 3306–3319 (2018). [CrossRef]  

8. M. R. Kellman, E. Bostan, N. A. Repina, and L. Waller, “Physics-based learned design: optimized coded-illumination for quantitative phase imaging,” IEEE Trans. Comput. Imaging 5(3), 344–353 (2019). [CrossRef]  

9. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Wirtinger flow optimization,” Opt. Express 23(4), 4856–4866 (2015). [CrossRef]  

10. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). [CrossRef]  

11. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016). [CrossRef]  

12. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]  

13. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015). [CrossRef]  

14. C. Zuo, J. Sun, J. Li, A. Asundi, and Q. Chen, “Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography,” Opt. Lasers Eng. 128, 106003 (2020). [CrossRef]  

15. M. Zhang, D. Y. LeiLei Zhang, H. Liu, and Y. Liang, “Symmetrical illumination based extending depth of field in Fourier ptychographic microscopy,” Opt. Express 27(3), 3583–3597 (2019). [CrossRef]  

16. R. Claveau, P. Manescu, M. Elmi, V. Pawar, M. Shaw, and D. Fernandez-Reyes, “Digital refocusing and extended depth of field reconstruction in Fourier ptychographic microscopy,” Biomed. Opt. Express 11(1), 215–226 (2020). [CrossRef]  

17. P. Memmolo, V. Bianco, M. Paturzo, and P. Ferraro, “Numerical manipulation of digital holograms for 3-D imaging and display: an overview,” Proc. IEEE 105(5), 892–905 (2017). [CrossRef]  

18. A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29(8), 1606–1614 (2012). [CrossRef]  

19. J. M. Rodenburg, “Ptychography: early history and 3D scattering effects,” Proc. SPIE 8678, 867809 (2012). [CrossRef]  

20. P. Song, S. Jiang, H. Zheng, X. Huang, Y. Zhang, and G. Zheng, “Full-field Fourier ptychography (FFP): Spatially varying pupil modeling and its application for rapid field-dependent aberration metrology,” APL Photonics 4(5), 050802 (2019). [CrossRef]  

21. L. Bian, G. Zheng, K. Guo, J. Suo, C. Yang, F. Chen, and Q. Dai, “Motion-corrected Fourier ptychography,” Biomed. Opt. Express 7(11), 4543–4553 (2016). [CrossRef]  

22. A. Pan, Y. Zhang, T. Zhao, Z. Wang, D. Dan, M. Lei, and B. Yao, “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22(09), 1–11 (2017). [CrossRef]  

23. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]  

24. C. Guo, Z. Bian, S. Jiang, M. Murphy, J. Zhu, R. Wang, P. Song, X. Shao, Y. Zhang, and G. Zheng, “OpenWSI: a low-cost, high-throughput whole slide imaging system via single-frame autofocusing and open-source hardware,” Opt. Lett. 45(1), 260–263 (2020). [CrossRef]  

25. G. Zhou, S. Zhang, Y. Zhai, Y. Hu, and Q. Hao, “Single-shot through-focus image acquisition and phase retrieval from chromatic aberration and multi-angle illumination,” Front. Phys. 9, 137 (2021). [CrossRef]  

26. D. G. Voelz, Computational Fourier Optics: A MATLAB Tutorial (SPIE Press, 2011).

27. E. A. E. Habib, “Mean absolute deviation about median as a tool of explanatory data analysis,” Int. J. Res. Rev. Appl. Sci. 11(3), 517–523 (2012).

28. D. Guevorkian, A. Launiainen, P. Liuha, and V. Lappalainen, “Architectures for the sum of absolute differences operation,” IEEE Workshop on Signal Processing Systems, (2002): 57–62.

29. A. Alba, R. M. Aguilar-Ponce, J. F. Vigueras-Gómez, and E. Arce-Santana, Phase Correlation Based Image Alignment with Subpixel Accuracy (Springer Berlin Heidelberg, 2013), pp. 171–182.

30. B. B. Deo, B. P. Das, and A. C. Naik, “Solution of Ornstein-Zernike equation for one-dimensional fluids,” Pramana 18(1), 89–98 (1982). [CrossRef]  

31. R. Eckert, Z. F. Phillips, and L. Waller, “Efficient illumination angle self-calibration in Fourier ptychography,” Appl. Opt. 57(19), 5434–5442 (2018). [CrossRef]  

32. V. Bianco, B. Mandracchia, J. Běhal, D. Barone, P. Memmolo, and P. Ferraro, “Miscalibration-tolerant Fourier ptychography,” IEEE J. Sel. Top. Quantum Electron. 27(4), 7500417 (2021). [CrossRef]  

33. S. Marchesini, “Invited article: A unified evaluation of iterative projection algorithms for phase retrieval,” Rev. Sci. Instrum. 78(1), 011301 (2007). [CrossRef]  

Supplementary Material (1)

Visualization 1: Lateral shifts between different LR images under a 200 µm defocus distance.




Figures (12)

Fig. 1.
Fig. 1. System setup of FPM. (a) Imaging principle of FPM. (b) System setup in actual experiments.
Fig. 2.
Fig. 2. Imaging model for the lateral shift of the image of a defocused sample under oblique illumination.
Fig. 3.
Fig. 3. Examples of lateral shifts under a 200 µm defocus distance (see Visualization 1). (a) Image lateral shift with correction corresponding to the 16th LED. (b) Image lateral shift without correction corresponding to the 16th LED.
Fig. 4.
Fig. 4. Lateral shift mapping calculated in our method. (a) Examples of dividing subregions, where each subregion is set to 512 × 512. (b) Lateral shift mapping between LR and reference images. (c1) Enlarged subregion corresponding to the red-boxed area in (b). (c2) Enlarged subregion corresponding to the blue-boxed area in (b).
Fig. 5.
Fig. 5. Algorithm outline of digital refocusing in the proposed method.
Fig. 6.
Fig. 6. Flow diagram of the proposed fast digital refocusing FPM.
Fig. 7.
Fig. 7. SSIM value of the EPRY and the proposed algorithms. The blue-lined and red-lined curves correspond to the EPRY and the proposed method respectively.
Fig. 8.
Fig. 8. Reconstructed HR amplitude and phase corresponding to the proposed method and the EPRY method with 0, 100 and 200 µm defocus distances, respectively.
Fig. 9.
Fig. 9. Spatially varying pupil aberration recovery result in FPM. (a1-a2) Ground truth of amplitude and phase of object respectively. (a3) Spatially varying pupil aberration generated by Ornstein-Zernike equations in simulations. (b1-b2) Amplitude and phase of object recovered by the EPRY algorithm. (b3) Coupling pupil function recovered by the EPRY algorithm. (c1-c2) Amplitude and phase of object recovered by the proposed method. (c3) Pupil function recovered by the proposed method.
Fig. 10.
Fig. 10. System scale factor calibration. (a) Space distributions of BF LR images corresponding to different LEDs. The yellow-marked images are used in calibrations. And the red-boxed and blue-boxes subregions are used for calculating defocus distance. (b) Linear fitting results corresponding to BF images 2nd and 6th, where ${A_{x2}} = 0.0264$ and ${A_{x6}} ={-} 0.0260$.
Fig. 11.
Fig. 11. High resolution reconstruction results of conventional FPM and the proposed method with different defocus distances.
Fig. 12.
Fig. 12. Phase recovery results of the biological sample with the conventional FPM and the proposed method. (a1-a4) Reconstructed amplitude and phase with the proposed method and the conventional FPM at a 77 µm defocus distance. (b1-b4) Reconstructed amplitude and phase with the proposed method and the conventional FPM at a 69 µm defocus distance. (c1-c4) Reconstructed amplitude and phase with the proposed method and the conventional FPM at a 71 µm defocus distance.

Tables (1)


Table 1. Zernike polynomials of spatially varying pupil aberration surface

Equations (5)

$$I_n(x, y) = \left| \mathcal{F}^{-1}\left( \mathcal{F}(t(x, y)) \cdot P(u, v) \right) \right|^2, \tag{1}$$

$$\delta s_i = \eta \tan\theta_i \cdot z = A_i z, \quad (i = 2, 3, 4), \tag{2}$$

$$(\Delta x_i, \Delta y_i) = \mathop{\arg\min}_{(\Delta x_i, \Delta y_i)} \left( I_c(x, y) - I_i(x + \Delta x_i, y + \Delta y_i) \right), \quad (i = 2, 3, 4), \tag{3}$$

$$s_1(x, y) = \mathcal{F}^{-1}\left\{ \mathcal{F}(s_0(x, y)) \cdot H(k_x, k_y, z) \right\}, \tag{4}$$

$$H(k_x, k_y, z) = \exp\left( j \frac{2\pi}{\lambda} z \sqrt{1 - k_x^2 - k_y^2} \right). \tag{5}$$
© Copyright 2024 | Optica Publishing Group. All rights reserved, including rights for text and data mining and training of artificial technologies or similar technologies.