
Fourier ptychographic microscopy with untrained deep neural network priors


Abstract

We propose a physics-assisted deep neural network scheme for Fourier ptychographic microscopy (FPM) using untrained deep neural network priors (FPMUP) to achieve high-resolution image reconstruction from multiple low-resolution images. Unlike conventional training-based deep neural networks that require a large labelled dataset, the proposed scheme requires no training and instead outputs the high-resolution image by optimizing the parameters of the neural networks to fit the experimentally measured low-resolution images. Besides the amplitude and phase of the sample function, two additional parallel neural networks that generate the generalized pupil function and the illumination intensity factors are incorporated into the carefully designed architecture, which effectively improves the image quality and robustness when both aberration and illumination intensity fluctuation are present in FPM. Reconstructions using simulated and experimental datasets demonstrate that the FPMUP scheme yields better image quality than traditional iterative algorithms, especially for phase recovery, at the expense of increased computational cost. Most importantly, we find that the FPMUP scheme can predict the Fourier spectrum of the sample outside the synthetic aperture of FPM and thus eliminate the ringing effect of the recovered images caused by spectral truncation. Inspired by the deep image prior in the field of image processing, we attribute this expansion of the Fourier spectrum to the deep prior rooted in the architecture of the carefully designed four parallel deep neural networks. We envisage that the resolution of FPM will be further enhanced if the Fourier spectrum of the sample outside the synthetic aperture of FPM can be accurately predicted.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fourier ptychographic microscopy (FPM), first demonstrated in 2013 [1], is a promising computational imaging technique that expands the space-bandwidth product (SBP) of an optical system by integrating synthetic aperture imaging with a phase retrieval algorithm [2,3]. By varying the illumination angle in FPM, high-frequency information of the sample is shifted into the low-frequency passband of the optical system. Consequently, a series of low-resolution images is acquired under angularly varying illumination, and a high-resolution, complex-valued sample function is then decoded from these low-resolution images with a phase retrieval algorithm, namely the alternating projection (AP) algorithm [4]. In this way, imaging with both high resolution and a large field of view (FOV) is achieved using a low-NA objective lens [2,5]. This merit, together with the advantages of low cost, non-invasiveness, and no need for interferometric phase measurement, has enabled successful applications of FPM in quantitative phase imaging [6,7], digital pathology and cytometry [8,9], surface inspection [10,11], and other areas [12,13].

However, the original FPM scheme suffers from tedious data acquisition, systematic errors, parameter imperfections, and so on [2,3,7]. Efforts have continued over the last decade to improve the original FPM, for example by reducing the data acquisition time [14–19], providing robust optimization against noise [20–22], and addressing model limitations [23–25]. It is known that systematic errors arising from illumination intensity fluctuations and aberrations of the optical system degrade the imaging quality of FPM [2]. An adaptive system correction method was proposed to calibrate illumination intensity fluctuations or to perform pupil aberration correction in FPM [26], but only a limited number of low-order aberrations is considered, so the accuracy of the aberration reconstruction is limited [26,27]. Ou et al. addressed systematic aberrations by incorporating an embedded pupil function recovery (EPRY) into the AP algorithm, which simultaneously reconstructs both the sample and the pupil function [27]. Whereas these two aberration recovery methods [26,27] divide the entire FOV into segments and treat the pupil aberration as constant within each segment, Song et al. modeled the spatially varying aberration over the entire FOV using power-series aberration modes and their field-dependent functions, which speeds up the optimization process and enhances the robustness of the algorithm [28]. However, each of the above methods is optimized to solve only one aspect of the problem. When pupil aberration and illumination intensity fluctuation exist simultaneously, simply stacking their correction algorithms causes them to conflict with each other [7]. Pan et al. proposed a system calibration procedure to calibrate the mixed systematic errors from an overall perspective [7]. Nevertheless, this calibration procedure is handcrafted for the specific systematic errors and the corresponding algorithms, which increases the operational difficulty and computational complexity to some extent.

Recently, deep learning, which employs multi-layer neural networks to learn a general model from a large labeled dataset, has demonstrated its capability of solving inverse problems in various applications, such as optical metrology [29], computed tomography [30], optical microscopy [31], and holography [32]. Such data-driven schemes using deep neural networks (DNNs) have been successfully applied to FPM [33–35]. However, the performance of these training-based DNNs depends heavily on labeled training data; the desired high-resolution image may be unattainable owing to the lack of massive labeled datasets or a mismatch between the test and training datasets [35]. In contrast, the untrained neural network (UNN), which requires only a single dataset, has the potential to resolve these issues by combining the physical model with DNNs [36–40]. Jiang et al. modeled the FPM forward imaging process with a DNN in which the real and imaginary parts of the sample spectrum are separately treated as trainable parameters [36]. Following this work, researchers incorporated pupil recovery [37], Zernike aberration recovery with a total variation constraint [38], and positional misalignment of the LED array [39] into the physics-based neural network (PBNN) for better high-resolution reconstructions. Zhang et al. proposed another PBNN with channel attention, in which a complex field rather than separate real and imaginary parts serves as the trainable parameters, making it more concise and flexible [40]. Despite the success of UNNs in FPM, the learning ability of the DNN does not appear to be fully exploited when it is combined with the physical model of FPM.

In fact, the above-mentioned PBNN approaches remain essentially iterative algorithms, and thus we should reconsider what the PBNN contributes to the optimization process when it is used in such algorithms. Fortunately, in the field of image processing, Ulyanov et al. showed that a randomly initialized convolutional neural network (CNN) can serve as a handcrafted prior with excellent results, dubbed the deep image prior (DIP) [41,42]. Specifically, the structure of the CNN can inherently capture natural images without an external training dataset. The incorporation of the DIP into task-specific physical models has since been demonstrated for phase imaging [43,44], ghost imaging [45], and focus shaping [46]. The most significant advantage of the DIP is that a generator network can be used without prior training, thus eliminating the need for massive labeled data.

Inspired by the DIP and related follow-up works [41–46], we propose a physics-assisted DNN architecture to achieve FPM imaging with both high resolution and a large FOV. A deep prior is introduced through a careful design of the untrained DNN, and the proposed approach is therefore termed FPM using untrained DNN priors (FPMUP). The word “untrained” indicates that the proposed scheme does not need to be trained on a large labeled dataset; instead, it directly reconstructs the high-resolution image from the low-resolution measurements through an iterative process. It therefore requires more computational resources than training-based DNN methods. Specifically, the untrained DNN consists of four parallel neural networks targeted at different functions, whose inputs can be chosen randomly. The outputs of the first two networks are the amplitude and phase of the complex-valued sample function, respectively, while those of the last two are the generalized pupil function and the illumination intensity fluctuation factors. The combination of the outputs of the four parallel DNNs is fed into the FPM forward imaging model to produce estimates of the multiple low-resolution images. By minimizing a loss function defined between these estimates and the true measurements, the parameters of the untrained DNNs are optimized to produce the high-resolution complex-valued sample function with pupil aberration and illumination intensity corrections. Superior reconstruction performance is demonstrated on both simulated and experimental datasets.

The contributions of the proposed FPMUP scheme are summarized as follows. First, the FPMUP scheme can recover the Fourier spectrum of the sample outside the synthetic aperture of the FPM system, which is probably attributable to the prior information derived from the properly designed untrained DNNs [42,47]. To the best of our knowledge, this is the first report of predicting the Fourier spectrum outside the synthetic aperture of FPM. Although the predicted Fourier spectrum does not exactly match the original spectrum, it is still conducive to eliminating the ringing effect of the reconstructed images caused by spectral truncation. Secondly, by introducing the last two parallel networks, the proposed FPMUP scheme can correct either the pupil aberration or the illumination intensity fluctuation, or both simultaneously, while reconstructing the sample function; the quality of the correction surpasses that of traditional algorithms such as AP and EPRY. Finally, compared with state-of-the-art deep learning methods [33–36], the proposed FPMUP scheme uses a generator network that requires no prior training, thus eliminating the need for massive labeled data, and the design of the neural network can be flexibly adapted to the requirements of the FPM system. We note, however, that its iterative nature demands more computational resources.

The rest of the paper is organized as follows: Section 2 presents the principle of FPM in detail. Section 3 provides the framework of physics-assisted DNNs combined with the FPM model and describes the architecture of the proposed untrained neural networks. In Section 4, simulated and experimental results are provided for demonstration. Finally, we conclude the paper with a summary and discussion in Section 5.

2. Principle of FPM

The fundamental theory of FPM divides into two parts: the forward imaging model and the inverse reconstruction. The forward imaging model describes the process in which the sample is illuminated by light sources and the transmitted light is collected as images by the sensor. The inverse reconstruction recovers the high-resolution, complex-valued sample function from the measured low-resolution intensity images. The schematic diagram of the FPM forward imaging model is shown in Fig. 1. The LED array consists of N LEDs that successively illuminate the sample at different angles, and an image sensor then collects low-resolution intensity images representing different frequency regions of the sample's Fourier spectrum. Without loss of generality, assume the sample is thin and is illuminated by the nth LED; the light emitted from this LED does not change direction upon transmission, so the transmitted light field through the sample $t_n(x,y)$ is given by

$$t_n(x,y) = o(x,y) \cdot e^{j k_0 (k_{xn} x + k_{yn} y + k_{zn} z)} \tag{1}$$
where $o(x,y)$ represents the sample function, $\mathbf{k}_n = (k_{xn}, k_{yn}, k_{zn})$ is the normalized wave vector of the plane wave emitted by the nth LED, and $k_0 = 2\pi/\lambda$ is the wave number. The light field at the imaging plane under the nth illumination can therefore be expressed as
$$e_n(x,y) = h(x,y) \otimes t_n(x,y) \tag{2}$$
where $h(x,y)$ is the point spread function of the optical system and $\otimes$ denotes the convolution operation. The counterpart of Eq. (2) in Fourier space is
$$E_n(k_x, k_y) = H(k_x, k_y)\, T_n(k_x, k_y) \tag{3}$$
$$H(k_x, k_y) = \begin{cases} 1 & \text{if } k_x^2 + k_y^2 \le k_c^2 \\ 0 & \text{otherwise} \end{cases} \tag{4}$$
where $E_n(k_x,k_y)$, $T_n(k_x,k_y)$ and $H(k_x,k_y)$ are the Fourier transforms of $e_n(x,y)$, $t_n(x,y)$ and $h(x,y)$, respectively. $H(k_x,k_y)$ is the coherent transfer function (CTF) of the optical system, with cutoff spatial frequency $k_c = k_0 \cdot \mathrm{NA}$, where $\mathrm{NA}$ is the numerical aperture of the objective lens. Finally, the intensity of the nth low-resolution image collected at the imaging plane is
$$I_n(x,y) = \left| F^{-1}\{ E_n(k_x,k_y) \} \right|^2 = \left| F^{-1}\{ H(k_x,k_y)\, O(k_x - k_0 k_{xn},\, k_y - k_0 k_{yn}) \} \right|^2 \tag{5}$$
where $F^{-1}$ denotes the inverse Fourier transform and $O(k_x,k_y)$ is the Fourier transform of the sample function $o(x,y)$. In Fourier space, the optical microscope is thus equivalent to a low-pass filter: the high-frequency information is filtered out, degrading the imaging resolution. By illuminating the sample from different angles, the FPM system linearly shifts the high-frequency information of the sample into the low-frequency region of Fourier space. As a result, a series of low-resolution intensity images representing different regions of Fourier space is measured at the imaging plane. However, the phase is usually difficult to measure directly in an optical system, so a high-resolution image cannot simply be synthesized in the Fourier domain.
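To make the forward model concrete, the following minimal NumPy sketch implements Eqs. (4) and (5): the sub-spectrum selected by the nth illumination is cropped from the sample spectrum, filtered by the circular CTF, and inverse-transformed into an intensity image. The function names (`fpm_forward`, `led_wavevector`), argument layout, and sign convention of the wave vector are our illustrative assumptions, not those of the released code.

```python
import numpy as np

def led_wavevector(i, j, pitch=4.0, height=90.0):
    """Normalized wave-vector components (k_xn, k_yn) of the LED at grid
    offset (i, j) from the optical axis; pitch and height in mm (Section 3.2).
    The sign convention is an assumption."""
    x, y = i * pitch, j * pitch
    r = np.sqrt(x**2 + y**2 + height**2)
    return -x / r, -y / r

def fpm_forward(O_hr, kx_n, ky_n, na, wavelength, dx_hr, m):
    """Eq. (5): low-resolution intensity image under the nth illumination.

    O_hr       : centered (fftshifted) Fourier spectrum of the HR sample o(x, y)
    kx_n, ky_n : normalized wave-vector components of the nth LED
    na         : numerical aperture of the objective lens
    wavelength : illumination wavelength (same length unit as dx_hr)
    dx_hr      : pixel size of the high-resolution grid
    m          : side length of the low-resolution image in pixels
    """
    M = O_hr.shape[0]
    k0 = 2 * np.pi / wavelength              # wave number
    dk = 2 * np.pi / (M * dx_hr)             # spectral sampling interval
    # Eq. (4): circular CTF on the cropped m x m sub-spectrum
    idx = np.arange(m) - m // 2
    KX, KY = np.meshgrid(idx * dk, idx * dk)
    ctf = (KX**2 + KY**2 <= (k0 * na)**2).astype(float)
    # oblique illumination shifts the sample spectrum by k0 * (k_xn, k_yn)
    cy = M // 2 + int(round(k0 * ky_n / dk))
    cx = M // 2 + int(round(k0 * kx_n / dk))
    sub = O_hr[cy - m // 2: cy + m // 2, cx - m // 2: cx + m // 2] * ctf
    e_n = np.fft.ifft2(np.fft.ifftshift(sub))
    return np.abs(e_n) ** 2                  # low-resolution intensity
```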

To reconstruct a high-resolution image, a phase retrieval algorithm, the AP algorithm, is used in the inverse reconstruction for phase recovery and spectral synthesis. Specifically, an initial high-resolution spectrum estimate is alternately constrained in real space and Fourier space, and the high-resolution image is obtained after repeated iterations. The AP algorithm and its variants are already popular in FPM; their specific architectures and operating procedures can be found in the literature [1–3,7] and are not detailed here.
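For readers unfamiliar with the two alternating constraints, a schematic sketch of the basic AP loop [4] is given below; it reuses the conventions of the forward-model sketch above and is a bare-bones illustration, not the implementation used in the cited works.

```python
def ap_reconstruct(I_meas, centers, ctf, M, n_iter=50):
    """Schematic alternating-projection loop: real-space amplitude
    replacement alternated with Fourier-space spectrum stitching.
    I_meas  : list of measured low-resolution intensities (m x m arrays)
    centers : list of (cy, cx) sub-spectrum centers, as in fpm_forward
    ctf     : m x m binary pupil mask from Eq. (4)
    M       : side length of the high-resolution spectrum
    """
    m = ctf.shape[0]
    O = np.zeros((M, M), dtype=complex)
    O[M // 2, M // 2] = 1.0                      # flat-field initial guess
    for _ in range(n_iter):
        for I_n, (cy, cx) in zip(I_meas, centers):
            win = (slice(cy - m // 2, cy + m // 2),
                   slice(cx - m // 2, cx + m // 2))
            e = np.fft.ifft2(np.fft.ifftshift(O[win] * ctf))
            # real-space constraint: keep the phase, impose measured amplitude
            e = np.sqrt(I_n) * np.exp(1j * np.angle(e))
            sub = np.fft.fftshift(np.fft.fft2(e))
            # Fourier-space constraint: overwrite spectrum inside the pupil
            O[win] = O[win] * (1 - ctf) + sub * ctf
    return np.fft.ifft2(np.fft.ifftshift(O))     # HR complex-valued estimate
```

Note that the spectrum is only ever updated inside the pupil windows; this is exactly the truncation that the FPMUP scheme of Section 3 is shown to overcome.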


Fig. 1. The schematic diagram of the FPM imaging system


3. FPMUP scheme

3.1 Framework of FPMUP

Inspired by the deep image prior [41,42], we propose an untrained deep neural network combined with the FPM forward imaging model to achieve both high-resolution and wide-FOV imaging. A deep prior is introduced by a careful design of the DNNs, and the proposed approach is therefore termed FPM using untrained deep neural network priors (FPMUP). The framework of FPMUP is shown in Fig. 2 and consists of four properly designed parallel DNNs. The input of the four DNNs can be chosen randomly (here all inputs are set to 1). Networks (a) and (b) in Fig. 2 output the amplitude and phase distributions of the high-resolution complex sample function, while networks (c) and (d) calibrate the systematic errors, namely the aberrations and the illumination intensity fluctuations. When the four DNNs are randomly initialized, the first two networks output the estimated sample function $\tilde O(x,y) = A(x,y) e^{i\phi(x,y)}$ and the last two output the generalized pupil function $P$ and the LED fluctuation factors $c_n$. Subsequently, $\tilde O(x,y)$ together with $P$ and $c_n$ is fed into the forward imaging model to generate the low-resolution images $I_n(x,y)$ at different illumination angles:

$$I_n(x,y) = f_n\{ \tilde O(x,y),\, c_n,\, P \} \tag{6}$$
where $f_n$ represents the propagator of the forward FPM physical model under the nth LED illumination, as given in Eq. (5). With respect to $I_n(x,y)$ and the measured intensity $I_{rn}(x,y)$, the loss function is defined as
$$Loss(\tilde O) = \sum_{n=1}^{N} \left( c_n \sqrt{I_n} - \sqrt{I_{rn}} \right)^2 + \alpha\, ||W||_2^2 \tag{7}$$
where the first term is a fidelity term measuring the closeness of the estimated intensity to the measured intensity, $||\cdot||_2$ denotes the Euclidean norm, and the second term is an L2 regularization term, which constrains the parameters $\{W\}$ of the DNNs, helping to avoid model degradation and to achieve improvements with a large number of kernels [48]. The regularization parameter $\alpha$ plays an important role; a suitable $\alpha$ strikes the right balance between simplicity and fit to the data. In this paper, we randomly subsample the data a number of times and observe the variation in the estimate, then repeat the process for a larger value of $\alpha$ to see how it affects the variability of the estimate. By this procedure, the regularization parameter $\alpha$ is finally chosen as 0.001 for the simulated datasets and 0.01 for the experimental datasets in Section 4.
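Equation (7) translates almost line by line into TensorFlow, the framework used for our implementation; the sketch below is illustrative, with a function name and argument layout of our choosing rather than those of the released code.

```python
import tensorflow as tf

def fpmup_loss(I_est, I_meas, c, weights, alpha=0.001):
    """Eq. (7): amplitude-domain fidelity plus L2 weight regularization.
    I_est   : estimated LR intensities from the forward model, shape (N, m, m)
    I_meas  : measured LR intensities I_rn, same shape
    c       : LED intensity fluctuation factors c_n, shape (N, 1, 1)
    weights : list of trainable DNN weight tensors {W}
    alpha   : regularization parameter (0.001 for the simulated datasets)
    """
    fidelity = tf.reduce_sum(tf.square(c * tf.sqrt(I_est) - tf.sqrt(I_meas)))
    # tf.nn.l2_loss(w) returns sum(w**2) / 2, hence the factor of 2
    l2 = 2.0 * tf.add_n([tf.nn.l2_loss(w) for w in weights])
    return fidelity + alpha * l2
```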

The FPMUP scheme thus combines the physical model of Eq. (6) with the properly designed untrained DNNs. By minimizing the loss function of Eq. (7), the back-propagation algorithm optimizes the weights $\{W\}$ of the DNNs via gradient descent, and the corresponding output of the DNNs is the high-resolution sample function. Accordingly, the reconstruction of the high-resolution sample function $O(x,y)$ is converted into the retrieval of the DNN weights:

$$W^{*} = \mathop{\arg\min}\limits_{W} \{ Loss(\tilde O) \} \tag{8}$$


Fig. 2. The schematic diagram of the FPMUP scheme. (a)–(d) represent the four parallel neural networks; this careful design of the network architectures is to some extent considered the deep prior [42]. Specifically, a fully connected layer is followed by two convolution layers to generate the amplitude and phase of the complex-valued sample function in (a) and (b), respectively, while separate fully connected layers are used for pupil aberration and illumination fluctuation factor correction in (c) and (d), respectively. The combination of the four network outputs is fed into the FPM forward imaging model, and the loss function defined with respect to the real measurements is then used to optimize the weights and biases via the back-propagation algorithm. The details of each layer are shown at the bottom: Conv (3${\times}$3, 1024) denotes a convolution layer with a kernel size of 3${\times}$3 and 1024 kernels, while Dense ($M \times M$) denotes a fully connected layer with $M \times M$ nodes.


Compared with conventional training-type DNNs, FPMUP iteratively reconstructs the high-resolution sample function from a single dataset instead of a mass-data training process, which benefits applications such as medical imaging where large datasets are not available. Most importantly, inspired by deep prior approaches [42,47], we properly design the architecture of the four parallel neural networks so that it acts as a deep prior, outperforming conventional hand-crafted priors and offering good performance. The details of the networks are described in Section 3.2 and the superior performance is demonstrated in Section 4.
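In practice, the optimization of Eq. (8) reduces to a standard gradient-descent loop over the weights of all four networks. The sketch below uses TensorFlow's GradientTape; `nets` (the four parallel networks), `forward` (a differentiable version of the FPM model), and `fpmup_loss` (sketched above) follow interfaces we assume here for illustration.

```python
def fpmup_step(nets, forward, I_meas, optimizer, alpha=0.001):
    """One iteration of Eq. (8): forward pass through the four parallel
    networks, loss of Eq. (7), gradient update of all weights {W}."""
    ones = tf.ones((1, 1))                        # the fixed all-ones input
    variables = [w for net in nets for w in net.trainable_weights]
    with tf.GradientTape() as tape:
        A, phi, P, c = (net(ones) for net in nets)
        # assemble the complex sample estimate A * exp(i * phi)
        O_est = tf.complex(A, tf.zeros_like(A)) * \
                tf.exp(tf.complex(tf.zeros_like(phi), phi))
        I_est = forward(O_est, P)                 # N estimated LR intensities
        c = tf.reshape(c, (-1, 1, 1))             # broadcast c_n over pixels
        loss = fpmup_loss(I_est, I_meas, c, variables, alpha)
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss
```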

3.2 Architecture of neural networks

In Section 3.1, the FPMUP scheme was explicitly depicted, showing that the reconstruction of the high-resolution sample function is converted into the optimization of the DNN weights. Inspired by the DIP, we hypothesize that the careful design of the DNN architectures acts as the deep prior. As shown in Fig. 2, the neural networks are divided into two parts. In the first part, which outputs the amplitude (a) and phase (b) of the complex sample function in parallel, each branch consists of an input layer followed by a fully connected layer of M${\times}$M nodes (Dense 1), then a reshape layer followed by CNNs (Conv 1 and Conv 2). Compared with a fully connected layer, a CNN can deepen and widen the network with relatively few parameters while adding more nonlinearity. Conv 1 extracts the spatial feature information of the image by deepening and widening the network, and Conv 2 combines the feature maps of the preceding CNN to output the high-resolution image. Consequently, the randomly initialized CNNs may be regarded as a deep prior [42] that accelerates convergence and improves the quality of the reconstructed images.
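In Keras terms, one such branch could be sketched as follows. The layer sizes follow the text and Fig. 2; the activation functions are our assumptions, since they are not specified here.

```python
from tensorflow import keras
from tensorflow.keras import layers

def make_sample_branch(M=256, kernels=1024):
    """One branch of Fig. 2(a)/(b): Dense 1 -> reshape -> Conv 1 -> Conv 2,
    mapping the constant input to an M x M amplitude (or phase) map."""
    return keras.Sequential([
        keras.Input(shape=(1,)),                      # constant input (all ones)
        layers.Dense(M * M, activation="relu"),       # Dense 1 (M x M nodes)
        layers.Reshape((M, M, 1)),
        layers.Conv2D(kernels, 3, padding="same",     # Conv 1: widen/deepen,
                      activation="relu"),             # extract spatial features
        layers.Conv2D(1, 3, padding="same",           # Conv 2: combine feature
                      activation="sigmoid"),          # maps into one output map
    ])
```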

To demonstrate this point, we simulated the high-resolution image reconstruction using the proposed networks with and without the CNNs. In the simulation, the network structure is that of Fig. 2 but contains only the first two parallel deep neural networks ((a) and (b) in Fig. 2), each comprising a fully connected layer of 256${\times}$256 nodes followed by the CNNs. Two 256${\times}$256-pixel images, shown in Fig. 3(e) and 3(j), serve as the amplitude and phase of the sample function, namely the ground truth (GT). A 15${\times}$15 LED array with a pitch of 4 mm illuminates the sample at a wavelength of 630 nm. The simulated low-resolution measurements are generated by the forward FPM model and have size 64${\times}$64${\times}$225. The NA of the objective lens is 0.08, the pixel size of the CCD is 2.75 ${\mathrm{\mu}\textrm{m}}$, and the distance between the sample and the LED array is 90 mm. Figures 3(a) and 3(f) show the amplitude and phase reconstructed by the network with the fully connected layer only; a severe conflict between the amplitude and phase reconstructions is evident. When the CNNs (Conv 1 with 256 convolution kernels, plus Conv 2) are introduced, the amplitude reconstruction is similar (Fig. 3(b)) but the conflict is slightly alleviated (Fig. 3(g)). As the number of convolution kernels increases (columns A3 and A4 of Fig. 3), the reconstruction quality improves progressively and then plateaus; the number of Conv 1 kernels is therefore set to 1024 in the following simulations and experiments. For clarity, quantitative indicators, namely the mean square error (MSE) and the structural similarity index (SSIM) comparing the reconstructions with the GT, are provided in Table 1, from which the same trends are evident.
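Under the forward-model sketch of Section 2, the 64${\times}$64${\times}$225 measurement stack of this simulation could be synthesized roughly as below. The 4${\times}$ upsampling between the 64-pixel measurements and the 256-pixel grid sets the assumed HR pixel size, and `amplitude` and `phase` are random stand-ins for the GT images of Fig. 3(e) and 3(j).

```python
rng = np.random.default_rng(0)
amplitude = rng.uniform(0.5, 1.0, (256, 256))   # stand-ins for the GT images
phase = rng.uniform(0.0, np.pi, (256, 256))     # of Fig. 3(e) and 3(j)
O_hr = np.fft.fftshift(np.fft.fft2(amplitude * np.exp(1j * phase)))

# 15 x 15 LEDs, 4 mm pitch, 90 mm below the sample, 630 nm illumination;
# assumed HR pixel size = 2.75 um / 4 (upsampling from 64 to 256 pixels)
stack = np.stack([
    fpm_forward(O_hr, *led_wavevector(i, j, pitch=4.0, height=90.0),
                na=0.08, wavelength=0.63, dx_hr=2.75 / 4, m=64)
    for j in range(-7, 8) for i in range(-7, 8)])   # shape (225, 64, 64)
```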


Fig. 3. Reconstructed complex sample function when the first two parallel neural networks consist of one fully connected layer only (first column, A1) or of one fully connected layer with CNNs (second to fourth columns, A2 to A4). The last column, (e) and (j), shows the ground truth (GT) of the complex-valued sample function. The first row (a-e) and second row (f-j) present the recovered amplitude and phase, respectively. The second to fourth columns correspond to reconstructions with a Conv 1 layer of 256, 512, and 1024 convolution kernels (kernel size 3${\times}$3), respectively.



Table 1. The mean square error (MSE) and structural similarity index (SSIM) of the reconstructions in Fig. 3

The second part of the DNNs ((c) and (d) in Fig. 2) comprises two parallel networks targeted at correcting the pupil aberrations and the LED illumination intensity fluctuations. In actual optical experiments, aberrations arise, for example, from defocusing of the sample and from the lens manufacturing process. According to wave aberration theory, aberrations force the actual wavefront at the exit pupil to deviate from the ideal wavefront, resulting in image distortion. It is therefore reasonable to replace the CTF of the FPM system, a truncated circle function, with a generalized pupil function that describes the aberration of the system. Here, Zernike polynomials are employed to represent the aberration because they correspond well to the geometric aberrations and have few degrees of freedom, which is beneficial for reconstructing the aberration properly. The generalized pupil function $P$ including aberration can thus be expressed as the product of the CTF and ${e^{i{\phi _{{\rm{dev}}}}}}$, where ${\phi _{{\rm{dev}}}}$ is the phase deviation caused by the aberration:

$$\phi_{\rm dev}(r,\theta) = \sum_{j=0}^{15} a_j Z_j\!\left( \frac{r}{R},\, \theta \right) \tag{9}$$
where $Z_j$ is the Zernike polynomial, $a_j$ its coefficient, $(r,\theta)$ the polar coordinates in the pupil plane, and $R = \mathrm{NA} \cdot k_0$ the scaling factor related to the numerical aperture of the objective lens. It is only necessary to replace $H(k_x,k_y)$ in Eq. (5) with the generalized pupil function $P$ to correct the forward model of FPM. The coefficients $a_j$ are the output of Dense 4 (Fig. 2(c)) and are automatically updated and optimized according to the defined loss function.
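A sketch of the resulting generalized pupil follows; for brevity only the first few Zernike modes are written out (the sum in Eq. (9) runs over j = 0 to 15), and the mode ordering and normalization follow a common convention that may differ from the one used in our implementation.

```python
import numpy as np

def generalized_pupil(KX, KY, a, na, k0):
    """Generalized pupil P = CTF * exp(i * phi_dev), phi_dev from Eq. (9).
    KX, KY : spatial-frequency grids of the pupil plane
    a      : Zernike coefficients a_j (only the first four are used here)
    """
    R = na * k0                                  # pupil radius in k-space
    rho = np.sqrt(KX**2 + KY**2) / R             # normalized radial coordinate
    theta = np.arctan2(KY, KX)
    ctf = (rho <= 1.0).astype(float)             # unaberrated CTF, Eq. (4)
    zernike = [np.ones_like(rho),                # Z0: piston
               2 * rho * np.cos(theta),          # Z1: tilt x
               2 * rho * np.sin(theta),          # Z2: tilt y
               np.sqrt(3) * (2 * rho**2 - 1)]    # Z3: defocus
    phi_dev = sum(a_j * Z for a_j, Z in zip(a, zernike))
    return ctf * np.exp(1j * phi_dev)
```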

Besides aberration correction, correction of the LED illumination intensity fluctuation is also necessary, since such fluctuation degrades the reconstructed image [2]. In fact, both aberrations and illumination intensity fluctuations exist in optical experiments [3]. When CTF updating is introduced into the AP-based intensity correction method, the reconstruction shows a strong conflict between LED intensity and aberration correction, which may be imputed to the mutual conversion of the different errors [7]. Instead of designing a complex system calibration procedure, we add two parallel DNNs containing fully connected layers only, which output the generalized pupil function and the LED intensity fluctuation factors in the FPMUP scheme. Through the back-propagation algorithm, the parameters are jointly optimized, and a high-resolution reconstructed image is finally obtained, as demonstrated in the following sections.

In the following simulations, a standard personal computer (CPU: Intel Core i5-9400F 3.20 GHz, RAM: 16 GB, GPU: GTX 1650 4 GB) is used for all computation. For the experimental datasets, which have larger dimensions, a machine with an Intel Xeon Platinum 8255C 2.50 GHz CPU, 45 GB RAM, and an RTX 2080Ti 11 GB GPU is used. We use TensorFlow 2.3 and Python 3.7.7 to construct the FPMUP pipeline. The optimizer is adaptive moment estimation (Adam) [49], a stochastic gradient descent method based on adaptive estimates of first- and second-order moments. We use a learning rate of 0.1 with a decay step size of 100 and a decay rate of 0.95, a good setting determined by simulation tests. Our implementation of the FPMUP scheme is available as open-source Python software at https://github.com/SYSU-SP/FPMUP.
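These optimizer settings translate directly into a TensorFlow learning-rate schedule, sketched below; whether the decay is staircased is not stated above, so that flag is an assumption.

```python
import tensorflow as tf

# Adam [49] with initial learning rate 0.1, decayed by 0.95 every 100 steps
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1,
    decay_steps=100,
    decay_rate=0.95,
    staircase=True)  # staircase decay is an assumption
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```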

4. Result and discussion

4.1 Reconstruction with simulation dataset

To show the effectiveness of the proposed FPMUP scheme and to further elucidate the role of the carefully designed neural network architecture as a deep prior, we perform reconstructions using simulated datasets in this subsection. For comparison, simulations with the AP algorithm [1], the EPRY algorithm [27], and/or a deep learning method (Jiang's method [36]) are also provided. The FPM system parameters are the same as in Section 3.2. The simulation is threefold: reconstruction without any systematic errors, reconstruction with aberrations only, and reconstruction with both aberrations and LED illumination intensity fluctuations.

Figure 4 shows the reconstructions without any systematic errors, and a quantitative comparison of the SSIM and MSE of the reconstructed amplitude and phase against the ground truth is given in Table 2. As shown in Fig. 4(a) and 4(e), the traditional AP algorithm reconstructs the amplitude well but fails to recover the phase with good quality. This is because the AP algorithm only alternately updates the sample's Fourier spectrum within the synthetic aperture, as shown in Fig. 4(i), so information outside the synthetic aperture is lost and the phase quality degrades. Similarly, the Fourier spectrum recovered by the deep learning method of Jiang [36] is also limited to the synthetic aperture (Fig. 4(j)), and some components of the spectrum appear to be lost, which may account for the poor reconstruction quality and the larger crosstalk between amplitude and phase in Fig. 4(b) and 4(f). In contrast, the proposed FPMUP scheme reconstructs the amplitude and phase well (Fig. 4(c) and 4(g)); in particular, the quality of the reconstructed phase is greatly improved, as also reflected quantitatively in Table 2. Most distinctively, FPMUP predicts the Fourier spectrum outside the synthetic aperture of the FPM system, so it is not surprising that it achieves the best results compared with the AP algorithm and Jiang's deep learning method [36].


Fig. 4. Complex sample function reconstructions of different algorithms without any systematic errors: (a, e and i) AP algorithm, (b, f and j) deep learning method (Jiang [36]), and (c, g and k) the proposed FPMUP scheme. The last column (d, h and l) serves as the GT of the complex sample function. Besides the amplitude (a-d, first row) and phase (e-h, second row), the Fourier spectra of the reconstructed high-resolution images are given in the last row (i-l).



Table 2. SSIM and MSE indicators of reconstructed images in Fig. 4

To investigate the effect of aberration on the proposed FPMUP scheme, Fig. 5 shows the reconstructions of the different algorithms when the FPM system has aberrations only. Here, the aberration is introduced by displacing the sample from the focal plane, and two defocus distances, 20 µm and 200 µm, are compared. At a defocus of 20 µm (Fig. 5(a2)-5(l2)), the AP algorithm reconstructs the amplitude (Fig. 5(a2)), but the reconstructed phase is contaminated by amplitude features (Fig. 5(b2)). The EPRY algorithm obtains better results (Fig. 5(d2)-5(f2)) than the AP algorithm, since EPRY is a modified version of AP that incorporates embedded pupil function recovery [27]. As expected, the FPMUP scheme offers an amplitude reconstruction (Fig. 5(g2)) equivalent to that of EPRY and a much better phase recovery (Fig. 5(h2)), both of which can be attributed to the prediction of the Fourier spectrum outside the synthetic aperture of the FPM system shown in Fig. 5(i2). In addition, both EPRY and FPMUP recover the pupil function (Fig. 5(k2)-5(l2)), although the FPMUP recovery is slightly closer to the GT in Fig. 5(j2). When the defocus increases to 200 µm and the aberration grows accordingly, both the AP and EPRY algorithms fail to obtain satisfactory results (Fig. 5(a1)-5(f1)), as confirmed by the spectral truncation and the irregular Fourier spectra in Fig. 5(c1) and 5(f1). For the proposed FPMUP scheme, the Fourier spectrum in Fig. 5(i1) shows a discernible but less distinct cutoff boundary compared with the smaller-aberration case in Fig. 5(i2); nevertheless, the reconstructed amplitude and phase images remain satisfactory. Regarding the pupil function, the EPRY recovery fails (Fig. 5(k1)), while the FPMUP recovery still performs well (Fig. 5(l1)).


Fig. 5. Reconstructions of different algorithms with aberrations only: (first row) AP algorithm, (second row) EPRY algorithm [27], and (third row) the proposed FPMUP scheme. The last row shows the pupil function recovered by (k) EPRY and (l) FPMUP; the GT of the pupil function is provided in (j). The aberration is introduced by displacing the sample from the focal plane. (a1)-(l1) are the reconstructions at a defocus of 200 µm, while (a2)-(l2) are those at a defocus of 20 µm. For each defocus, the reconstructed amplitude and phase as well as the Fourier spectrum are provided in columns.


Further, we simulate the reconstructions in the presence of both aberrations and LED intensity fluctuations, since LED intensity fluctuation strongly affects reconstruction in real experiments. In this simulation, the aberration setting is the same as in the previous case, and the LED intensity fluctuations are modeled by multiplying each low-resolution measurement by a random attenuation factor between 0 and 1, as shown in Fig. 6(j). For comparison, the EPRY algorithm incorporating LED illumination correction [26,27], termed EPRY + LEDIC, is provided. Figure 6 shows the amplitude and phase reconstructions of the complex sample function using the EPRY + LEDIC algorithm and the proposed FPMUP scheme. At a defocus of 20 µm (Fig. 6(a2)-6(l2)), EPRY + LEDIC reconstructs the amplitude (Fig. 6(a2)), but the phase is contaminated by amplitude features (Fig. 6(b2)). When the defocus increases to 200 µm, EPRY + LEDIC fails to obtain satisfactory results (Fig. 6(a1)-6(c1)), because it corrects the LED intensity error using the ratio between the estimated intensity and the measured low-resolution image in each iteration [26]. Compared with EPRY + LEDIC, the FPMUP scheme obtains better results at both 20 µm and 200 µm defocus, as shown in Fig. 6(d)-6(f). As for the Fourier spectrum, the same spectral expansion as in the previous simulation (Fig. 5) is observed in the reconstructions, reflected in Fig. 6(f). The recovered pupil functions of both approaches in the third row of Fig. 6 show the same behavior as in Fig. 5.


Fig. 6. Reconstructions of different algorithms in the presence of both aberrations and LED intensity fluctuation: (first row) EPRY + LEDIC algorithm, (second row) the proposed FPMUP scheme. The aberration setting is the same as in Fig. 5. (a1)-(l1) are the reconstructions at a defocus of 200 µm, while (a2)-(l2) are those at a defocus of 20 µm. For each defocus, the reconstructed amplitude and phase as well as the Fourier spectrum are provided in columns. The recovered pupil functions and LED intensity factors are shown at the bottom: (g)-(i) are the ground truth (GT) of the pupil function and the pupil functions recovered by the EPRY + LEDIC algorithm and the FPMUP scheme, respectively; (j)-(l) are the GT of the LED array intensity distribution and the intensity distributions recovered by the EPRY + LEDIC algorithm and the FPMUP scheme, respectively. The normalized mean-square error (NMSE) with respect to the GT is provided at the bottom of the corresponding images.


Regarding the correction of the LED intensity fluctuation factors, the normalized mean-square error (NMSE) with respect to the GT is defined as $\sum |A - GT|^2 / \sum |GT|^2$, where $A$ denotes the reconstructed LED intensity factors. When the aberration is small (defocus of 20 µm), the proposed FPMUP scheme (Fig. 6(l2), NMSE: 0.002) gives a slightly better reconstruction than the EPRY + LEDIC algorithm (Fig. 6(k2), NMSE: 0.009). When the defocus increases to 200 µm, the FPMUP scheme remains robust (Fig. 6(l1)), whereas the EPRY + LEDIC algorithm fails to recover the correct intensity factors, deviating much more strongly (NMSE: 0.389) from the GT in Fig. 6(j1).
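For reference, this definition transcribes into a one-line NumPy function (the function name is ours):

```python
import numpy as np

def nmse(A, GT):
    """Normalized mean-square error of Fig. 6: sum|A - GT|^2 / sum|GT|^2."""
    return np.sum(np.abs(A - GT) ** 2) / np.sum(np.abs(GT) ** 2)
```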


Fig. 7. Reconstructions from experimental datasets with the AP algorithm (first row), the EPRY + LEDIC algorithm (second row) and the proposed FPMUP scheme (third row). Two open-source datasets [2], (a1-m1) the blood cell sample and (a2-m2) the USAF resolution test chart, are used. The reconstructed amplitude and phase of the sample function as well as the Fourier spectrum are provided in columns. The (j-k) pupil functions and (l-m) LED intensity factors recovered by the EPRY + LEDIC algorithm and the proposed FPMUP scheme are shown at the bottom.


Based on the above simulations, the FPMUP scheme can jointly correct the aberration and the intensity fluctuation within a single reconstruction. When there are no systematic errors in the FPM system, the reconstruction of the proposed scheme shows less crosstalk between amplitude and phase. Even when the FPM system has both aberrations and LED illumination intensity fluctuations, the proposed scheme can still reconstruct the sample function well. The first reason is that the FPMUP scheme predicts the spectrum outside the synthetic aperture of the FPM system and thus eliminates the ringing effect caused by spectral truncation, which benefits the reconstruction. The second reason is that the last two parallel neural networks in Fig. 2 are incorporated into the FPMUP scheme to calibrate the aberration and the intensity fluctuation, which preserves the reconstruction in the presence of systematic errors. It should be noted that this improvement comes at the expense of more computation than traditional iterative algorithms such as AP and EPRY: taking the case in Fig. 5 as an example, the reconstruction time of the proposed FPMUP scheme is about 230 seconds (500 epochs), whereas the traditional EPRY method needs about 30 seconds (500 iterations). To further validate the effectiveness of the proposed scheme, reconstructions from experimental datasets are shown in the next subsection.

4.2 Reconstruction with experiment datasets

In this subsection, the proposed FPMUP scheme is demonstrated on real experimental datasets. The open-source blood cell and USAF resolution test chart datasets provided by Zheng et al. [2] (https://github.com/SmartImagingLabUConn/Fourier-Ptychography) are used for reconstruction; the AP and EPRY + LEDIC algorithms are provided for comparison. In these experimental datasets, the objective lens was a Nikon 2X with a numerical aperture of 0.1, the pixel size of the original low-resolution image on the sampling plane was 1.845 ${\mathrm{\mu}\textrm{m}}$, the slide thickness was 1 mm, and the refractive index was 1.52. A 15${\times}$15 LED array was used for illumination; the distance between the LED array and the sample was 90.88 mm, the spacing between adjacent LEDs was 4 mm, and the illumination wavelength was 630 nm. The size of the measurements is 128${\times}$128${\times}$225 and the size of the high-resolution reconstruction is 512${\times}$512.

Figure 7 shows the reconstructed amplitude and phase of the blood cells and the USAF chart as well as their Fourier spectra. For the AP algorithm, the reconstruction is poor (Fig. 7(a)-7(b)) owing to the system aberrations and LED intensity fluctuation errors. The EPRY + LEDIC algorithm improves the reconstruction (Fig. 7(d)-7(e)) because it augments the AP algorithm with embedded pupil function recovery and LED illumination fluctuation correction. Compared with these two traditional iterative algorithms, the amplitude and phase images of the blood cells reconstructed by the FPMUP scheme (Fig. 7(g1)-7(h1)) are clearly sharper and show finer features. The prediction of the Fourier spectrum outside the synthetic aperture of the FPM in Fig. 7(i1) may account for this improvement.

Usually, an expansion of the Fourier spectrum implies a resolution enhancement. The USAF chart is therefore imaged with the proposed FPMUP scheme, which yields almost the same amplitude but a much better phase reconstruction than the EPRY + LEDIC algorithm. However, all three approaches resolve only element 8-4 of the USAF chart (Figs. 7(a2), 7(d2) and 7(g2)), meaning that the proposed FPMUP scheme does not improve the resolution even though it predicts the Fourier spectrum outside the synthetic aperture. This is because the predicted high-frequency spectrum does not exactly match the original spectrum of the sample function. The improved reconstruction of the FPMUP scheme is therefore imputed to the elimination of the ringing effect caused by spectral truncation; the resolution of FPM cannot be enhanced until the Fourier spectrum outside the synthetic aperture is accurately predicted.

The profile of the pupil function recovered by EPRY + LEDIC (Fig. 7(j)) is similar to that recovered by the FPMUP scheme (Fig. 7(k)); nevertheless, the FPMUP recovery is smoother, which may contribute to a better reconstruction. In Fig. 7(l) and 7(m), the LED intensity factors recovered by both methods show a distribution in which the value increases toward the center, consistent with the real experimental situation.

5. Conclusion

In this paper, we build a physics-assisted untrained deep neural network scheme for FPM that combines the FPM forward imaging model to reconstruct a high-resolution, complex-valued sample function with a large FOV. The proposed FPMUP scheme not only achieves better amplitude and phase reconstructions than the traditional algorithms, but also recovers the pupil function and the LED intensity fluctuation factors. The scheme could be further extended to three-dimensional imaging such as Fourier ptychographic diffraction tomography [50]. We also observe that the proposed scheme can predict the Fourier spectrum outside the synthetic aperture of FPM, which, to the best of our knowledge, has not previously been reported for FPM; this may explain the better image quality and the high robustness to mixed systematic errors. Inspired by the DIP demonstrated in the field of image processing [42,47], we impute the expansion of the Fourier spectrum to deep priors rooted in the careful design of the four-parallel-DNN architecture.

However, two points should be noted. First, the proposed FPMUP scheme does not improve the resolution of FPM, even though it predicts the Fourier spectrum outside the synthetic aperture, because the predicted high-frequency spectrum does not exactly match the original spectrum of the sample function; it only eliminates the ringing effect caused by spectral truncation. In this regard, we envisage that the resolution of FPM imaging could be further enhanced if the Fourier spectrum outside the synthetic aperture could be accurately predicted by a properly designed DNN architecture. However, exploring the most relevant neural architecture for a specific task such as image restoration or FPM remains an open research problem [47], and how to design such DNN architectures for FPM would be interesting and meaningful future work [51]. Secondly, as an iterative optimization process, the proposed FPMUP scheme requires more computational resources than training-based deep learning methods, and more reconstruction time than traditional iterative algorithms because the optimization runs in the high-dimensional parameter space of the neural networks. We may therefore exploit high-performance GPUs and optimized Python code to accelerate the algorithm and alleviate the computational cost of its inherent iterative nature.

Funding

National Natural Science Foundation of China (61905291); Basic and Applied Basic Research Foundation of Guangdong Province (2020A1515010626); Fundamental Research Funds for the Central Universities (22qntd3002).

Disclosures

The authors declare no conflict of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

2. G. Zheng, C. Shen, S. Jiang, P. Song, and C. Yang, “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021). [CrossRef]  

3. A. Pan, C. Zuo, and B. Yao, “High-resolution and large field-of-view Fourier ptychographic microscopy and its applications in biomedicine,” Rep. Prog. Phys. 83(9), 096101 (2020). [CrossRef]  

4. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004). [CrossRef]  

5. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express 23(3), 3472–3491 (2015). [CrossRef]  

6. J. Sun, C. Zuo, J. Zhang, Y. Fan, and Q. Chen, “High-speed Fourier ptychographic microscopy based on programmable annular illuminations,” Sci. Rep. 8(1), 7669 (2018). [CrossRef]  

7. A. Pan, Y. Zhang, T. Zhao, Z. Wang, D. Dan, M. Lei, and B. Yao, “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22(09), 1 (2017). [CrossRef]  

8. A. Williams, J. Chung, X. Ou, G. Zheng, S. Rawal, Z. Ao, R. Datar, C. Yang, and R. Cote, “Fourier ptychographic microscopy for filtration-based circulating tumor cell enumeration and analysis,” J. Biomed. Opt. 19(6), 066007 (2014). [CrossRef]  

9. R. Horstmeyer, X. Ou, G. Zheng, P. Willems, and C. Yang, “Digital pathology with Fourier ptychography,” Comput. Med. Imaging Graph. 42, 38–43 (2015). [CrossRef]  

10. H. Lee, B. H. Chon, and H. K. Ahn, “Reflective Fourier ptychographic microscopy using a parabolic mirror,” Opt. Express 27(23), 34382–34392 (2019). [CrossRef]  

11. C. Shen, A. C. S. Chan, J. Chung, D. E. Williams, A. Hajimiri, and C. Yang, “Computational aberration correction of VIS-NIR multispectral imaging microscopy based on Fourier ptychography,” Opt. Express 27(18), 24923–24937 (2019). [CrossRef]  

12. J. Holloway, Y. C. Wu, M. K. Sharma, O. Cossairt, and A. Veeraraghavan, “SAVI: Synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography,” Sci. Adv. 3(4), e1602564 (2017). [CrossRef]  

13. C. Detlefs, M. A. Beltran, J. P. Guigay, and H. Simons, “Translative lens-based full-field coherent X-ray imaging,” J. Synchrotron Radiat. 27(1), 119–126 (2020). [CrossRef]  

14. Y. Zhou, J. Wu, Z. Bian, J. Suo, G. Zheng, and Q. Dai, “Fourier ptychographic microscopy using wavelength multiplexing,” J. Biomed. Opt. 22(6), 066006 (2017). [CrossRef]  

15. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]  

16. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). [CrossRef]  

17. P. C. Konda, J. M. Taylor, and A. R. Harvey, “Parallelized aperture synthesis using multi-aperture Fourier ptychographic microscopy,” arXiv preprint arXiv:1806.02317 (2018).

18. T. Aidukas, P. C. Konda, J. M. Taylor, and A. R. Harvey, “Multi-camera Fourier Ptychographic Microscopy,” in Imaging and Applied Optics 2019 (COSI, IS, MATH, pcAOP), OSA Technical Digest (Optical Society of America, 2019), CW3A.4.

19. Y. Zhang, W. Jiang, L. Tian, L. Waller, and Q. Dai, “Self-learning based Fourier ptychographic microscopy,” Opt. Express 23(14), 18471–18486 (2015). [CrossRef]  

20. Y. Fan, J. Sun, Q. Chen, M. Wang, and C. Zuo, “Adaptive denoising method for Fourier ptychographic microscopy,” Opt. Commun. 404, 23–31 (2017). [CrossRef]  

21. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]  

22. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Wirtinger flow optimization,” Opt. Express 23(4), 4856–4866 (2015). [CrossRef]  

23. P. Li, D. J. Batey, T. B. Edo, and J. M. Rodenburg, “Separation of three-dimensional scattering effects in tilt-series Fourier ptychography,” Ultramicroscopy 158, 1–7 (2015). [CrossRef]  

24. X. Ou, J. Chung, R. Horstmeyer, and C. Yang, “Aperture scanning Fourier ptychographic microscopy,” Biomed. Opt. Express 7(8), 3140–3150 (2016). [CrossRef]  

25. P. Song, S. Jiang, H. Zhang, Z. Bian, C. Guo, K. Hoshino, and G. Zheng, “Super-resolution microscopy via ptychographic structured modulation of a diffuser,” Opt. Lett. 44(15), 3645–3648 (2019). [CrossRef]  

26. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]  

27. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]  

28. P. Song, S. Jiang, H. Zhang, X. Huang, Y. Zhang, and G. Zheng, “Full-field Fourier ptychography (FFP): Spatially varying pupil modeling and its application for rapid field-dependent aberration metrology,” APL Photonics 4(5), 050802 (2019). [CrossRef]  

29. C. Zuo, J. Qian, S. Feng, W. Yin, Y. Li, P. Fan, J. Han, K. Qian, and Q. Chen, “Deep learning in optical metrology: a review,” Light: Sci. Appl. 11(1), 39 (2022). [CrossRef]  

30. K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep Convolutional Neural Network for Inverse Problems in Imaging,” IEEE Transactions on Image Processing 26(9), 4509–4522 (2017). [CrossRef]  

31. Y. Rivenson, Z. Gorocs, H. Gunaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

32. Y. Jo, S. Park, J. Jung, J. Yoon, H. Joo, M. H. Kim, S. J. Kang, M. C. Choi, S. Y. Lee, and Y. Park, “Holographic deep learning for rapid optical screening of anthrax spores,” Sci. Adv. 3(8), e1700606 (2017). [CrossRef]  

33. T. Nguyen Thanh, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach to Fourier ptychographic microscopy,” Opt. Express 26(20), 26470–26484 (2018). [CrossRef]  

34. Y. Cheng, M. Strachan, Z. Weiss, M. Deb, D. Carone, and V. Ganapati, “Illumination pattern design with deep learning for single-shot Fourier ptychographic microscopy,” Opt. Express 27(2), 644–656 (2019). [CrossRef]  

35. J. Zhang, T. Xu, Z. Shen, Y. Qiao, and Y. Zhang, “Fourier ptychographic microscopy reconstruction with multiscale deep residual network,” Opt. Express 27(6), 8612–8625 (2019). [CrossRef]  

36. S. Jiang, K. Guo, J. Liao, and G. Zheng, “Solving Fourier ptychographic imaging problems via neural network modeling and TensorFlow,” Biomed. Opt. Express 9(7), 3306–3319 (2018). [CrossRef]  

37. M. Sun, X. Chen, Y. Zhu, D. Li, Q. Mu, and L. Xuan, “Neural network model combined with pupil recovery for Fourier ptychographic microscopy,” Opt. Express 27(17), 24161–24174 (2019). [CrossRef]  

38. Y. Zhang, Y. Liu, S. Jiang, K. Dixit, P. Song, X. Zhang, X. Ji, and X. Li, “Neural network model assisted Fourier ptychography with Zernike aberration recovery and total variation constraint,” J. Biomed. Opt. 26(03), 036502 (2021). [CrossRef]  

39. J. Zhang, X. Tao, L. Yang, R. Wu, P. Sun, C. Wang, and Z. Zheng, “Forward imaging neural network with correction of positional misalignment for Fourier ptychographic microscopy,” Opt. Express 28(16), 23164–23175 (2020). [CrossRef]  

40. J. Zhang, T. Xu, J. Li, Y. Zhang, S. Jiang, Y. Chen, and J. Zhang, “Physics-based learning with channel attention for Fourier ptychographic microscopy,” J. Biophotonics 15, e202100296 (2022). [CrossRef]  

41. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep Image Prior,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (IEEE2018), 9446–9454.

42. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep Image Prior,” Int. J. Comput. Vis. 128(7), 1867–1888 (2020). [CrossRef]  

43. E. Bostan, R. Heckel, M. Chen, M. Kellman, and L. Waller, “Deep phase decoder: self-calibrating phase microscopy with an untrained deep neural network,” Optica 7(6), 559–562 (2020). [CrossRef]  

44. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, “Phase imaging with an untrained neural network,” Light: Sci. Appl. 9(1), 77 (2020). [CrossRef]  

45. F. Wang, C. Wang, M. Chen, W. Gong, Y. Zhang, S. Han, and G. Situ, “Far-field super-resolution ghost imaging with a deep neural network constraint,” Light: Sci. Appl. 11(1), 1 (2022). [CrossRef]  

46. Z.-Y. Chen, Z. Wei, R. Chen, and J.-W. Dong, “Focus shaping of high numerical aperture lens using physics-assisted artificial neural networks,” Opt. Express 29(9), 13011–13024 (2021). [CrossRef]  

47. A. Qayyum, I. Ilahi, F. Shamshad, F. Boussaid, M. Bennamoun, and J. Qadir, “Untrained Neural Network Priors for Inverse Imaging Problems: A Survey,” TechRxiv Preprint. https://doi.org/10.36227/techrxiv.14208215.v1 (2021)

48. C. Cortes, M. Mohri, and A. Rostamizadeh, “L2 regularization for learning kernels,” arXiv preprint arXiv:1205.2653 (2012).

49. D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv preprint arXiv:1412.6980 (2014).

50. C. Zuo, J. S. Sun, J. J. Li, A. Asundi, and Q. Chen, “Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography,” Opt. Lasers Eng. 128, 106003 (2020). [CrossRef]  

51. S. Dittmer, T. Kluth, P. Maass, and D. Otero Baguer, “Regularization by Architecture: A Deep Prior Approach for Inverse Problems,” J. Math. Imaging Vis. 62(3), 456–470 (2020). [CrossRef]  
