
Redundant information model for Fourier ptychographic microscopy

Open Access

Abstract

Fourier ptychographic microscopy (FPM) is a computational optical imaging technique that overcomes the traditional trade-off between resolution and field of view (FOV) by exploiting abundant redundant information in both spatial and frequency domains for high-quality image reconstruction. However, the redundant information in FPM remains ambiguous or abstract, which presents challenges to further enhance imaging capabilities and deepen our understanding of the FPM technique. Inspired by Shannon's information theory and extensive experimental experience in FPM, we define the specimen complexity and the reconstruction algorithm utilization rate and report a redundant information model for FPM that predicts reconstruction results and guides the optimization of imaging parameters. The model has been validated through extensive simulations and experiments. In addition, it provides a useful tool to evaluate different algorithms, revealing a utilization rate of 24% ± 1% for the Gauss-Newton, LED multiplexing, wavelength multiplexing, EPRY-FPM, and GS algorithms. In contrast, mPIE exhibits a lower utilization rate of 19% ± 1%.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Invented by Zheng, Horstmeyer, and Yang in 2013 [1], Fourier ptychographic microscopy (FPM) overcomes the trade-off between high resolution (HR) and large field of view (FOV) by combining optical phase retrieval [2–7] with synthetic aperture imaging [8]. It transforms the diffraction limit of the coherent imaging system from ${\lambda / {N{A_{obj}}}}$ to ${\lambda / {({N{A_{obj}} + N{A_{illu}}} )}}$ [9,10], a considerable increase in resolution, where $N{A_{obj}}$ and $N{A_{illu}}$ are the numerical apertures (NA) of the objective lens and the illumination, respectively, and $\lambda$ is the imaging wavelength. FPM acquires a series of low-resolution (LR) images by using an LED array instead of the traditional condenser for multi-angle illumination, and a reconstruction algorithm then iteratively restores the HR image from the LR images. Its essence is to use redundant, overlapping information in the frequency domain for phase recovery. Digital pathology is one of the earliest and most successful applications of FPM [11], where it has achieved high-precision, high-efficiency imaging and fast full-color imaging [12,13]; it is now developing towards high-throughput imaging [14,15], 3D imaging [16,17], and remote sensing [8].

To enhance imaging efficiency, some approaches aim to better exploit data redundancy, such as LED multiplexing [18,19] and deep learning methods [20–22]. Tian et al. [18] reported a sequence-coded FPM method that achieves 1.25 Hz in vitro dynamic imaging of living cells by collecting 21 images (4 bright-field and 17 dark-field images) with 8 randomly chosen LEDs lit simultaneously in each multiplexed pattern. The method is based on a 4×/0.2 NA objective lens with an additional 2× lens and achieves a 0.8 synthetic NA without any sacrifice compared to traditional FPM, but the imaging speed is still limited. Other approaches instead aim to minimize superfluous data acquisition: Li et al. [23] significantly reduced the volume of acquired data by using the information entropy of the image as a judgment function, and typical methods also include sparse LED arrays [24] and lossy compression algorithms [25,26]. All of these methods indicate that the redundant information of FPM could be further reduced, or its utilization rate improved, to achieve faster FPM imaging and thereby higher overall imaging throughput.

However, there is currently a lack of research evaluating the redundancy of information within the entire FPM system. Redundant information remains an abstract concept without a quantitative measure, which presents significant challenges for further exploration of the limits of FPM. The space-bandwidth product (SBP) [10] is a commonly used metric for quantifying the information throughput of an optical imaging system: it represents the number of distinguishable pixels within the system's FOV, and a larger SBP indicates that more information can be transmitted. The SBP can be calculated as $SBP = ({FOV/0.5r})^2$, where $r$ is the diffraction-limited resolution [27]. The achievable SBP of an imaging system is fundamentally limited by the optics employed, primarily the imaging objective and the detector; this results in a trade-off between resolution and FOV that is difficult to overcome solely through improvements in optical design. FPM, by contrast, simultaneously achieves quantitative phase imaging with high resolution and large FOV. The SBP of FPM can partially reflect the redundant information that the system can transmit, but its calculation does not directly reflect the impact of the imaging parameters on the redundant information of FPM, rendering it unsuitable for quantitatively evaluating redundant information or determining its limits.

So far, research on redundant information in FPM and its impact on reconstruction quality remains largely based on empirical hypotheses and experimental summaries. The system parameters that affect the redundant information include $N{A_{obj}}$, $\lambda$, the pixel size of the detector, the illumination height, and the LED settings. In addition, there are two empirical parameters: the sampling rate (${R_{cam}}$) in the spatial domain and the overlap rate (${R_{overlap}}$) in the frequency domain. We summarize the known relationships here. A larger $N{A_{obj}}$ yields a better-quality image, and $\lambda$ and $N{A_{obj}}$ together determine the cut-off frequency of the coherent transfer function (CTF). Sun et al. [24] experimentally found a compensation relationship between ${R_{cam}}$ and ${R_{overlap}}$, and that FPM requires an ${R_{overlap}}$ of at least 30% for samples such as a United States Air Force (USAF) resolution target. However, our experiments show that the complexity of the sample also affects the reconstruction quality, so the minimum required ${R_{overlap}}$ in the frequency domain is not fixed and can only serve as a qualitative reference (see Figure S1 in Supplement 1). Within a certain range, ${R_{cam}}$ and ${R_{overlap}}$ can approximately be regarded as linear compensations for each other, and both are proportional to the redundant information. The illumination height and the LED settings also affect ${R_{overlap}}$. In addition, the more non-repetitive LR images there are, the more redundant information is provided [28]; repeated LR images carry no additional redundant information and therefore do not help improve the final imaging results. The quality of the HR images is also closely related to the reconstruction algorithm employed. Evidently, the current understanding of redundant information in FPM and its sensitivity to experimental parameters has been obtained from specific sample experiments using particular algorithms. Hence, evaluating a reconstruction algorithm's efficacy in extracting information and assessing sample complexity are crucial for establishing a redundant information model of FPM.

To evaluate the redundant information, a redundant information model for FPM based on Shannon's information theory is reported in this work. The model defines three crucial quantities, namely the amount of redundant information, the sample complexity, and the algorithm utilization rate. It takes into account all imaging parameters that affect the redundant information and predicts imaging results by calculating the redundant information. Moreover, it can suggest modifications of the experimental parameters when the reconstruction results are unsatisfactory, and it offers a quantitative way to compare different reconstruction algorithms based on their information utilization rate. The validity of the redundant information model has been confirmed through numerous simulations and experiments, and the utilization rate of the Gauss-Newton, LED multiplexing, wavelength multiplexing, EPRY-FPM, and GS algorithms is calibrated as 24% ± 1%, while that of mPIE is 19% ± 1%.

The paper is organized as follows: in Section 2, we introduce the principles of our redundant information model for FPM. Then, in Section 3 and Section 4, we demonstrate the validation and application of the model with simulations and experiments. We provide final discussions and conclusions in Section 5.

2. Principle

2.1 Redundant information model for FPM

For a general communication system, as depicted in Fig. 1(a), the communication process can be described as follows: the message from the source is modulated or encoded by the transmitter to form a signal; during transmission this signal is corrupted by noise and then received by the receiver; finally, it is demodulated or decoded to recover the original message. Shannon's information theory [29] is a seminal work that investigates the transmission of information throughout this process, optimizing system performance through rigorous mathematical analysis, and it introduces the concept of entropy to measure information precisely. Since Fourier optics shares analogous concepts with signal processing theory, extending a one-dimensional signal to two-dimensional space allows the notion of information entropy to be transferred from communication systems to optical imaging systems. The FPM system model requires an additional reconstruction step, as depicted in Fig. 1(b): the sample is illuminated by LEDs from various angles, the resulting noisy LR images are collected by a CCD/sCMOS camera to obtain intensity information, and the HR image is then reconstructed using a reconstruction algorithm. Like a normal communication system, the FPM system is a linear space-invariant system. Therefore, drawing inspiration from Shannon's information theory, we aim to establish a model of redundant information for FPM.


Fig. 1. (a) A general communication system, (b) imaging process of FPM.


Shannon's information theory [29] refers to the Boltzmann form of thermodynamic entropy, $S = K\ln (W)$, and proposes information entropy to describe the amount of information handled by a communication system. The theory includes two important formulas: the entropy $H(x)$ (Eq. (1)) and the channel capacity $C$ (Eq. (2))

$$H(x) = -\sum\limits_i p_i \log p_i, \quad x \sim p(x)$$
$$C = 2W\log({1 + S/N})$$
where ${p_i}$, $W$, and ${S/N}$ represent the probability of message $i$, the bandwidth, and the signal-to-noise ratio, respectively. The entropy $H(x)$ is positively correlated with the degree of disorder of the system and with the amount of information. The channel capacity $C$ limits the information throughput, i.e., it is the maximum information throughput of the system.
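As a quick, self-contained illustration of Eqs. (1) and (2) (our own sketch, not code from the paper; the function and variable names are ours), the following Python snippet evaluates the entropy of a discrete source and the channel capacity of a noisy channel:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(x) = -sum_i p_i log2 p_i of a discrete distribution, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # terms with p_i = 0 contribute nothing
    return -np.sum(p * np.log2(p))

def channel_capacity(bandwidth, snr):
    """Channel capacity in the form of Eq. (2): C = 2 W log2(1 + S/N)."""
    return 2.0 * bandwidth * np.log2(1.0 + snr)

# A uniform 8-symbol source carries 3 bits per symbol; a 1 kHz channel at
# SNR = 100 carries about 1.33e4 bits per second under Eq. (2).
print(entropy([1 / 8] * 8))            # 3.0
print(channel_capacity(1e3, 100.0))    # ~13315.7
```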

We define $Q$ to quantitatively measure the redundant information of the FPM system; it determines the minimum amount of redundant information required for accurate reconstruction, i.e., for both intensity and phase to be accurately recovered. Following the empirical analysis in the introduction, the form of Eq. (2) can be transferred to construct the redundant information model of the FPM imaging system, Eq. (3).

$$Q = R_{overlap}\sum\limits_i^n Q_i = R_{overlap}\sum\limits_i^n R_{cam}\cdot W\cdot\ln({1 + SNR_i})$$
$$SNR_i = \frac{({\hat{S}_{x,y}^i - N_{x,y}^i})^2}{({N_{x,y}^i})^2}$$

${Q_i}$ denotes the amount of information in each LR image and shares the same form as Eq. (2), with only a slight difference in expression: physicists customarily describe physical phenomena with the natural logarithm, which we adopt here in place of the base-2 logarithm, a harmless modification since the choice of base does not affect the essence of the problem. The redundant information is generated by the aperture overlaps in the frequency domain and by oversampling in the spatial domain, so $Q$ is obtained from the accumulation of ${Q_i}$ weighted by these proportions. Physically, the circular pupil area of an FPM system plays the role of the bandwidth of a communication system; we therefore define $W = {{\pi NA_{obj}^2}/{\lambda^2}}$ as the bandwidth of the FPM system. $\hat{S}_{x,y}^i$ represents the signal of the LR image, $N_{x,y}^i$ represents the noise, and $({x,y})$ denotes the pixel position in the image. They are used to calculate $SNR_i$, $i = 1,2,\ldots,n$, for each LR image, which should be positively correlated with the redundant information. $SNR_i$ is directly related to the noise of the imaging system and the pre-processing method and can be regarded as prior information for the calculation. In our simulations the SNR is exact because the noise is added artificially, whereas in experiments the real noise cannot be completely separated, so only an approximate $SNR_i$ can be obtained. Given the robustness of the FPM algorithm, a complicated pre-processing method is not required; common pre-processing steps include background subtraction, coefficient multiplication, and low-pass filtering [30,31]. We tested these methods and found that they have minimal impact on the final SNR calculation (less than 2 dB), so we use background subtraction to preprocess our raw data. For systems with different cameras, multiple dark-field images can be taken without a sample and averaged to form the background noise image. Since ${R_{cam}}$ is not always equal to one in practice, according to the Nyquist sampling theorem, we replace the constant coefficient of Eq. (2) with ${R_{cam}}$. Here we directly give the calculation of ${R_{cam}}$ and ${R_{overlap}}$ as follows [24]:

$$R_{cam} = \frac{\lambda\cdot Mag}{2\Delta x_{cam}\cdot NA_{obj}}$$
$$R_{overlap} = \frac{1}{\pi}\left[{2\arccos\left({\frac{1}{2R_{LED}}}\right) - \frac{1}{R_{LED}}\sqrt{1 - \left({\frac{1}{2R_{LED}}}\right)^2}}\right]$$
where $Mag$ is the magnification of the microscope and $\Delta x_{cam}$ is the pixel size of the detector. $R_{LED}$ represents the spatial sampling rate of the illumination,
$$R_{LED} = \frac{NA_{obj}\cdot\sqrt{D_{LED}^2 + h^2}}{D_{LED}}$$
where $D_{LED}$ is the distance between adjacent LEDs and $h$ is the distance between the LED array and the sample.
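To make the bookkeeping concrete, the sketch below implements Eqs. (3)-(7) together with the background-subtraction SNR estimate of Eq. (4). All function and variable names are ours, and summing the per-pixel ratio of Eq. (4) over the whole image is one plausible reading; treat this as an illustration rather than the authors' reference code.

```python
import numpy as np

def estimate_snr(raw_image, dark_background):
    """SNR_i of one LR image, Eq. (4): signal = background-subtracted image,
    noise = averaged dark-field background; here both are summed over all pixels."""
    signal = np.asarray(raw_image, dtype=float) - dark_background
    return np.sum(signal ** 2) / np.sum(dark_background ** 2)

def r_cam(wavelength, mag, pixel_size, na_obj):
    """Spatial-domain sampling rate, Eq. (5)."""
    return wavelength * mag / (2.0 * pixel_size * na_obj)

def r_led(na_obj, d_led, h):
    """Illumination sampling rate, Eq. (7); d_led and h share one length unit."""
    return na_obj * np.sqrt(d_led ** 2 + h ** 2) / d_led

def r_overlap(rl):
    """Aperture overlap rate in the frequency domain, Eq. (6); needs rl >= 0.5."""
    x = 1.0 / (2.0 * rl)
    return (2.0 * np.arccos(x) - np.sqrt(1.0 - x ** 2) / rl) / np.pi

def redundant_information(raw_images, dark_background,
                          wavelength, mag, pixel_size, na_obj, d_led, h):
    """Total redundant information Q of the FPM system, Eq. (3)."""
    w = np.pi * na_obj ** 2 / wavelength ** 2        # system bandwidth, ~1/lambda^2
    rc = r_cam(wavelength, mag, pixel_size, na_obj)
    ro = r_overlap(r_led(na_obj, d_led, h))
    snrs = [estimate_snr(img, dark_background) for img in raw_images]
    return ro * sum(rc * w * np.log(1.0 + s) for s in snrs)
```

As a check, for the 2×/0.1 NA configuration of Section 4.2 ($\lambda$ = 0.6311 µm, $Mag$ = 2, $D_{LED}$ = 4 mm, $h$ = 80 mm, and assuming the same 3.75 µm camera pixels as in Section 4.1), these expressions reproduce the quoted $R_{cam}\approx 1.683$ and $R_{overlap}\approx 68.5\%$.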

2.2 Definition of sample complexity

The complexity of the sample itself also directly affects the final reconstruction results. To include this effect in the redundant information calculation, we define the sample complexity ($\eta$) and use it to impose a constraint on the redundant information of the FPM system: a sample with higher complexity requires more redundant information for accurate FPM reconstruction, so there exists no uniform limit of ${R_{overlap}}$ that applies to every sample. In imaging, image entropy, as a statistical feature, reflects the average information content of an image, so we introduce entropy to calculate $\eta$ for the sample. The commonly used one-dimensional entropy effectively captures the aggregation characteristics of the gray-level distribution but fails to depict the spatial characteristics adequately; its calculation is given in Eq. (8), where ${p_i} = f(i)/MN$ is the proportion of pixels in the image with gray level $i$ $({0 \le i \le 255})$, $f({\cdot})$ denotes the frequency of occurrence of the feature in the image, and $MN$ is the total number of pixels. Two-dimensional entropy [32] builds on one-dimensional entropy by incorporating the spatial characteristics of the gray-level distribution; its calculation is given in Eq. (9), where ${p_{ij}} = f({i,j})/MN$ is the proportion of pixels with the feature pair $({i,j})$ and $j$ $(0 \le j \le 255)$ is the average gray level of the neighborhood of the pixel. All simulations and experiments in this paper adopt the two-dimensional entropy.

$$\eta = -\sum\limits_{i = 0}^{255} p_i \log p_i$$
$$\eta = -\sum\limits_{i = 0}^{255}\sum\limits_{j = 0}^{255} p_{ij}\log p_{ij}$$
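A minimal Python sketch of the two-dimensional entropy of Eq. (9) is given below (our own implementation; the 3 × 3 neighborhood mean and the 8-bit gray levels follow the description above, everything else is an assumption):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def entropy_2d(img):
    """Two-dimensional image entropy, Eq. (9): joint histogram of each pixel's
    gray level i and the rounded mean gray level j of its 3 x 3 neighborhood."""
    img = np.clip(np.asarray(img, dtype=float), 0, 255)
    neigh = uniform_filter(img, size=3)                       # 3 x 3 neighborhood mean
    i = np.round(img).astype(int).ravel()
    j = np.round(neigh).astype(int).ravel()
    hist, _, _ = np.histogram2d(i, j, bins=256, range=[[0, 256], [0, 256]])
    p = hist / i.size                                         # p_ij = f(i, j) / (M N)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))                            # eta in bits
```

Base-2 logarithms are used so that $\eta$ is expressed in bits, matching the values quoted in Tables 1 and 2.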

To ensure the model remains effective for transparent or semi-transparent (i.e., phase-only) samples, we defocus the image captured under the central bright-field LED to introduce phase information, making $\eta$ more realistic; an in-focus image may be unable to distinguish between different transparent or semi-transparent samples. Defocus is expressed mathematically by introducing a defocus phase factor into the CTF, as shown in Eq. (10); it breaks the symmetry of the weak object transfer function (WOTF) and thereby yields phase-contrast information [30].

$$P(u) = |P(u)|\,e^{j\, k z \sqrt{1 - \lambda^2 |u|^2}},\quad \lambda|u| \le 1$$
where $P(u)$ is the CTF, $u$ denotes the coordinates in the frequency domain, $k = 2\pi/\lambda$ is the wave number, and $z$ is the defocus distance, which usually does not exceed 10 µm in FPM experiments [31]. The simulations in Section 3 support our findings: $\eta$ calculated from a defocused image differs clearly from that calculated from the focused image, whereas the $\eta$ values for different $z$ (0 < z < 10 µm) differ little. The choice of $z$ therefore has little influence on $\eta$, and to keep the model non-parametric, $z$ is set to an empirical constant in the simulations below. Usually the defocused image contains all the phase information; only the phase contrast differs.
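The defocused CTF of Eq. (10) can be built numerically as follows (a sketch under our assumptions: $k$ is taken as the wave number $2\pi/\lambda$, the pass-band is the ideal circular pupil of radius $NA_{obj}/\lambda$, and the grid and parameter names are ours):

```python
import numpy as np

def defocused_ctf(shape, pixel_size, wavelength, na_obj, z):
    """Ideal circular CTF multiplied by the defocus factor of Eq. (10):
    P(u) = |P(u)| exp(j k z sqrt(1 - lambda^2 |u|^2)), with |u| <= NA_obj / lambda."""
    ny, nx = shape
    fx = np.fft.fftfreq(nx, d=pixel_size)                # spatial frequencies, 1/length
    fy = np.fft.fftfreq(ny, d=pixel_size)
    u2 = fx[None, :] ** 2 + fy[:, None] ** 2             # |u|^2 on the grid
    pupil = (u2 <= (na_obj / wavelength) ** 2)           # ideal circular |P(u)|
    k = 2.0 * np.pi / wavelength                         # wave number
    phase = k * z * np.sqrt(np.clip(1.0 - wavelength ** 2 * u2, 0.0, None))
    return pupil * np.exp(1j * phase)

# Example: a 256 x 256 pupil for a 4x/0.1 NA system sampled at the object plane
# (pixel 3.75 um / 4 = 0.9375 um), lambda = 0.518 um, defocus z = 3 um.
ctf = defocused_ctf((256, 256), 0.9375, 0.518, 0.1, 3.0)
```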

2.3 Definition of algorithm utilization

Different reconstruction algorithms possess distinct information-extraction capabilities; as a result, certain algorithms can reconstruct from a limited number of images or even a single image. Hence, we introduce the utilization rate $(\alpha)$ to describe an algorithm's ability to extract information from LR images. As an inherent characteristic of the reconstruction algorithm, $\alpha$ should remain consistent across different experimental conditions once the algorithm has been chosen. Once $\alpha$ is determined, $\alpha Q$ represents the effective redundant information extracted by the algorithm. When $\alpha Q$ is greater than or equal to $\eta$, the FPM system provides sufficient redundant information for accurate sample reconstruction; otherwise, accurate reconstruction cannot be achieved. The value of $Q$ that satisfies Eq. (11) with equality represents the limit of redundancy required for accurate reconstruction.

$$\alpha Q \ge \eta$$

The successful calibration of $\alpha$ is crucial, as it determines whether the model can be used to evaluate reconstruction results and recommend parameter optimization. Based on the hypothesis that $\alpha$ is an inherent attribute of the algorithm, we estimate a statistical value by identifying ${Q_{limit}}$ for accurate reconstruction across various experimental groups, including different samples, objective lenses, cameras, wavelengths, and LED numbers, and then estimating the utilization rate as $\alpha = {\eta / {{Q_{limit}}}}$. The calibration process is illustrated in Fig. 2(a): we first acquire LR images and compute $\eta$ for the sample from the defocused image under the central bright-field LED, then preprocess the data to obtain the SNR of all LR images and derive $Q$ via Eq. (3), and finally determine whether ${Q_{limit}}$ has been reached by analyzing the reconstructed image. Once $\alpha$ is determined by the flow in Fig. 2(a), the model can predict and optimize the imaging system as shown in Fig. 2(b): the reconstruction outcome is predicted by evaluating whether the redundant information that the algorithm effectively extracts from the whole dataset exceeds the complexity of the sample, and optimization proposals are given as adjustments of the imaging parameters that satisfy Eq. (11).
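In pseudocode form, the calibration and the subsequent prediction of Fig. 2 reduce to two one-line rules based on Eq. (11); the numbers in the example are taken from the failure case of Section 4.2, and everything else is our own naming:

```python
def calibrate_alpha(eta, q_limit):
    """Utilization rate at the limit position: alpha = eta / Q_limit."""
    return eta / q_limit

def sufficient_redundancy(alpha, q, eta):
    """Eq. (11): accurate reconstruction is expected when alpha * Q >= eta."""
    return alpha * q >= eta

# Section 4.2: alpha = 0.23, Q = 5.9305 bit, eta = 4.5049 bit
# -> 0.23 * 5.9305 = 1.36 < 4.50, so the model predicts reconstruction failure.
print(sufficient_redundancy(0.23, 5.9305, 4.5049))   # False
```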


Fig. 2. The process of FPM redundancy information measurement. (a) During the calibration of the utilization rate α, a series of LR images is collected; the signal and noise information are separated by pre-processing and then used to calculate the total redundant information Q of the FPM system by Eq. (3); η is obtained by calculating the 2D information entropy from the gray-level statistics of the defocused image at the central bright field. $f(a,b)$ denotes the spatial feature pair of the image, consisting of a pixel's gray level and the mean gray level of its 3 × 3 neighborhood. When Q drops to Qlimit by adjusting parameters, α is calculated as η/Qlimit. (b) After α is calibrated, the redundant information model can be used to judge reconstruction feasibility and provide suggestions on parameter modification.


3. Simulation

To verify the correctness and feasibility of the model for FPM, we simulate different FPM imaging systems to calibrate $\alpha$ for the Gauss-Newton algorithm, which is widely used as the reconstruction algorithm for FPM. For the defocus distance, we first made a quantitative verification using the standard photographer test image with a 4×/0.1 NA objective and found that for z = 0, i.e., using the focused image, $\eta$ is 7.6311 bit, and that when z varies over 1 µm to 10 µm the corresponding $\eta$ stays within 7.6311 ± 0.2 bit, which can be ignored; we therefore set z to 3 µm in the simulations. Gaussian noise and Poisson noise are added to emulate the readout noise and photon shot noise in the LR images generated by the system. The proportion of Gaussian noise is set to 5%, mainly added to the dark-field images, and the mean of the Poisson noise is obtained by scaling the pixel values down by a factor of 10³; it is added to both bright-field and dark-field images. By observing the simulation results of the same sample under 2×/0.08 NA, 4×/0.1 NA, and 10×/0.25 NA objective lenses, we calibrate the corresponding limit positions. According to our hypothesis, $\alpha$ is a fixed value for a given reconstruction algorithm, so we need to check whether the values of $\alpha$ calculated at the limit positions of the different imaging systems are the same or similar.
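The noise model used in our simulations can be sketched as follows; this is a literal reading of the description above (Poisson term with mean equal to the pixel value scaled by 10⁻³, roughly 5% Gaussian noise on the dark-field images), and the exact conventions, such as what the 5% refers to, are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_simulated_noise(lr_image, dark_field, gauss_sigma_frac=0.05, poisson_scale=1e-3):
    """Add photon shot noise and readout noise to one simulated LR image."""
    img = np.asarray(lr_image, dtype=float)
    noisy = img + rng.poisson(img * poisson_scale)        # Poisson term, mean = pixel value * 1e-3
    if dark_field:
        # ~5% Gaussian readout noise, mainly relevant for the weak dark-field images
        noisy += rng.normal(0.0, gauss_sigma_frac * img.max(), size=img.shape)
    return np.clip(noisy, 0.0, None)
```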

Equation (11) is established on the premise of accurate reconstruction, and $\eta$ indicates the information required for accurate reconstruction, so sufficient redundant information must be provided on the left side of Eq. (11) for the inequality to hold. ${Q_{limit}}$ and $\alpha$ are calculated at the limit position by taking Eq. (11) as an equality. To find the limit position of accurate reconstruction, we take the limiting value of ${R_{cam}}$ that just satisfies the Nyquist sampling theorem, and then $N{A_{illu}}$ is adjusted to near 0.6 by tuning the illumination height (h), with a step size of 2 mm, and the number of LEDs for each simulation. Empirically, the quality of the LR images obtained when $N{A_{illu}}$ is around 0.6 is sufficient to meet the prior information required to reconstruct most samples. The work in [24] suggests that an Roverlap greater than 30% leads to successful reconstruction, but this applies only to intensity, not phase; for the accurate reconstruction required by the proposed redundant information model, both intensity and phase must be recovered, so Roverlap is set higher than 30%. We use the minimum root-mean-square errors of intensity $({I_{rmse}})$ and phase $({P_{rmse}})$ as the criterion for accurate reconstruction. Under the conditions described above, both ${I_{rmse}}$ and ${P_{rmse}}$ converge in every simulation, and lower error values correspond to higher reconstruction quality. The values of ${I_{rmse}}$ and ${P_{rmse}}$ obtained in the last iteration of the reconstruction at different heights are used to generate the three error curves corresponding to the three objectives, shown in Fig. 3(b1, c1, d1). ${I_{rmse}}$ decreases steadily as the height is fine-tuned upward, while ${P_{rmse}}$ shows a certain level of fluctuation; therefore, the position at which the error falls below the established threshold and ${P_{rmse}}$ starts to decrease steadily is taken as the limit position for accurate reconstruction. For the 2×/0.08 NA and 4×/0.1 NA objectives the error values are close, with thresholds set at 0.01 for ${I_{rmse}}$ and 0.1 for ${P_{rmse}}$ according to empirical reference values for simulated reconstructions. The 10×/0.25 NA objective, however, achieves superior reconstruction quality directly at a lower height; this is because the large differences in magnification and numerical aperture lead to large variations in the system cutoff frequency and up-sampling factor, which are equivalent to modifications of the image content. We also use 0.01 and 0.1 as the threshold reference values for ${I_{rmse}}$ and ${P_{rmse}}$ in the 10×/0.25 NA simulations. Finally, by applying the threshold judgment and considering the fluctuation of ${P_{rmse}}$, we determine the limit position of accurate reconstruction for the three simulated objectives. The reconstruction results at the limit positions are shown in Fig. 3(b, c, d).
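The limit-position search described above can be sketched as a simple scan over the illumination height. Here `simulate_and_reconstruct` is a placeholder for the user's FPM forward model plus Gauss-Newton solver, and the additional requirement that $P_{rmse}$ begin to decrease steadily is omitted for brevity:

```python
import numpy as np

def rmse(x, y):
    return np.sqrt(np.mean((np.asarray(x) - np.asarray(y)) ** 2))

def find_limit_height(heights, simulate_and_reconstruct, gt_intensity, gt_phase,
                      i_thresh=0.01, p_thresh=0.1):
    """Scan heights in ascending order (2 mm steps in the paper) and return the
    first height at which both error thresholds are met; that height defines Q_limit."""
    for h in sorted(heights):
        intensity, phase = simulate_and_reconstruct(h)
        if rmse(intensity, gt_intensity) < i_thresh and rmse(phase, gt_phase) < p_thresh:
            return h
    return None
```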


Fig. 3. The result of the limit position for accurate reconstruction with Gauss-Newton. (a) Ground truth; intensity and phase images reconstructed at ${Q_{limit}}$ with objectives of (b) 2×/0.08 NA, (c) 4×/0.1 NA, (d) 10×/0.25 NA. (b1, c1, d1) show the corresponding fluctuations of ${I_{rmse}}$ and ${P_{rmse}}$ with height (h). ${Q_{limit}}$ has been marked with coordinates.


The corresponding system settings, including the height, LED array (number N and spacing Δd), pixel size, the calculated ${R_{cam}}$ and ${R_{overlap}}$, and the redundant information computed by the model, are listed in Table 1. The $\eta$ values of the three systems differ slightly: 7.6316 for 2×/0.08 NA, 7.6311 for 4×/0.1 NA, and 7.6291 for 10×/0.25 NA. The trend of $\eta$ is consistent with empirical knowledge: a larger NA makes the observed image clearer, so the sample complexity calculated from the image at normal incidence is lower and the reconstruction requires less redundant information. Because the noise added in the simulation is random, it affects the calculation of $SN{R_i}$, so $\alpha$ fluctuates within a certain range. We studied the fluctuation of $\alpha$ caused by Gaussian noise within 10% under the three objective lenses, fine-tuning the corresponding heights (which affect the overlap rate) at different noise levels to keep ${Q_{limit}}$ relatively stable. The resulting fluctuation ranges of the three systems are 24.30% ± 0.56%, 24.46% ± 0.44%, and 24.29% ± 0.64%, respectively, as shown in Fig. 4. To make the calibration of $\alpha$ statistically significant, we take $\mathop{\alpha}\limits^{\sim}$ within the range of 24% ± 1% from multiple simulations with an appropriate margin.


Fig. 4. The fluctuation of $\alpha$ with Gaussian noise within 10%. The fluctuations for all three systems lie in a similar range, 24% ± 1%.



Table 1. The imaging parameters and calibration results by different imaging systems in simulation.

4. Experiment

This section has three parts: we first calibrate $\alpha$ on the real system with different samples to verify whether it matches the simulation results of Section 3, then present some possible applications of the proposed redundant information model, and finally calibrate other algorithms.

4.1 Calibration of $\alpha$ in experiment

The FPM system consists of a 32 × 32 programmable R/G/B LED array and a compact inverted microscope. All raw data are captured with a 4×/0.1 NA plan achromatic objective and an 8-bit CCD camera (DMK 23U445, Imaging Source, 1280 × 960 pixels, 3.75 µm pixel pitch). The LED source provides an illumination wavelength of 518.08 nm, and the distance between adjacent LEDs is 4 mm. The samples used in the experiment are a United States Air Force (USAF) resolution target and a biological sample, a section of a cotton leaf; the complexity of the former, as a general sample, is lower than that of the latter. We first make $N{A_{illu}}$ as close as possible to 0.6, then slowly reduce the height and observe the quality of the reconstructed intensity and phase images until obvious artifacts appear; the previous height is then regarded as the limit position. This is also a common way to specify the limit overlap rate or other parameters. We also tested multiple samples; the results are shown in Figs. S2-S5 of Supplement 1.

We collected raw data at heights of 39 mm to 68 mm using the central 13 × 13 LEDs. Reconstruction results of the two samples at and around the limit position are shown in Fig. 5 and Fig. 6. The redundancy analysis of the USAF target serves as a valuable reference for other sample experiments, given its known ground truth and ease of resolution assessment. The results in Fig. 5(b1, c1) demonstrate that group 9, element 5 of the USAF target, denoted (9-5), can be restored at heights of 41 mm and 43 mm; at 39 mm, however, only (9-2) can be restored, with noticeable artifacts in both intensity and phase images (Fig. 5(a1, a2)). For accurate reconstruction, we designate the recovery level for the intensity and phase images at 41 mm as (9-5). The theoretical horizontal spatial resolution achievable by FPM is calculated as ${\lambda / {({N{A_{obj}} + N{A_{illu}}} )}}$; note that this is an amplitude resolution, and the measured resolution may exceed it because of frequency broadening after intensity measurement [33]. In the USAF experiment, the theoretical resolution corresponding to 41 mm is 0.7023 µm. According to the spatial frequency table of the USAF 1951 resolution target (lp/mm), the measured resolution corresponding to recovery level (9-5) is 0.8127 (812.7 lp/mm). The theoretical resolution for 39 mm is 0.6848 µm, but the measured resolution for the (9-2) reconstruction is only 0.5747 (574.7 lp/mm), which falls short of its theoretical counterpart, indicating that accurate reconstruction cannot be achieved at a height of 39 mm and that the limiting redundant information occurs at 41 mm. The reconstruction results of the cotton leaf slice clearly show artifacts at 43 mm (Fig. 6(a1, a2)), while the artifacts gradually diminish in the results obtained at 45 mm (Fig. 6(b1, b2)) and 48 mm (Fig. 6(c1, c2)). The limit position of the cotton leaf slice is therefore chosen as 45 mm, based on the reference to the USAF limit position and the expectation that biological samples with higher complexity require more redundant information for accurate reconstruction. Detailed redundancy calculations are given in Table 2. The complexity of the cotton leaf slice (8.1087 bit) is higher than that of the USAF target (6.1250 bit) in this imaging system, so more redundant information, obtained by increasing the height, is required for the cotton leaf slice. Similarly, considering the effect of noise disturbance, we repeated the data collection for the two samples at the limit position several times and obtained a statistical result of 24% ± 1%, consistent with the calibration results of the simulations. For the sake of conservatism, the minimum statistical value $\mathop{\alpha}\limits^{\sim}$ ($\mathop{\alpha}\limits^{\sim}$ = 23%) can be selected as the utilization rate of the Gauss-Newton algorithm.
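As a consistency check of the resolution numbers quoted above (our own arithmetic, assuming that the corner LED of the central 13 × 13 array, centered on the optical axis, sets $N{A_{illu}}$):

```python
import numpy as np

wavelength = 0.51808        # um
na_obj = 0.1
d_led = 4.0                 # mm, LED pitch
n_half = 6                  # central 13 x 13 array: 6 LEDs from the axis
h = 41.0                    # mm, limit height found for the USAF target

r_corner = n_half * d_led * np.sqrt(2)            # in-plane offset of the corner LED
na_illu = r_corner / np.sqrt(r_corner ** 2 + h ** 2)
resolution = wavelength / (na_obj + na_illu)      # lambda / (NA_obj + NA_illu)
print(round(na_illu, 3), round(resolution, 4))    # 0.638, 0.7023 (um), as quoted above
```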


Fig. 5. Experimental results of USAF. (a1-a3) The reconstructed images at 39 mm; (b1-b3) the reconstructed images at 41 mm; (c1-c3) the reconstructed images at 43 mm. (d) Raw data (1280 × 960 pixels) and magnified image of the region of interest (80 × 80 pixels). (e) Line profile of group 9, element 5 in (a1) (b1) (c1). The results at 41 mm correspond to the critical position of redundant information required for accurate reconstruction, as indicated by the line profile (e) and resolution analysis.



Fig. 6. Experimental results of the cotton leaf slice. (a1-a3) The reconstructed images at 43 mm; (b1-b3) the reconstructed images at 45 mm; (c1-c3) the reconstructed images at 48 mm. (d) Raw data (1280 × 960 pixels) and magnified image of the region of interest (300 × 300 pixels). The results at 45 mm correspond to the critical position of the redundant information required for accurate reconstruction, as indicated by a detailed comparison of artifact suppression and by the redundant information analysis.



Table 2. The calibration results of USAF and cotton leaf slice in experiments.

4.2 Application and analysis of redundant information model

After the calibration of the algorithm utilization rate (23% for the Gauss-Newton algorithm), the redundant information model can be used to predict and interpret the experimental results for a new sample and to provide theoretical guidance for the selection of imaging parameters. We imaged another USAF target at a height of 80 mm under a 2×/0.1 NA objective lens. The illumination wavelength is 0.6311 µm; in this case, ${R_{cam}}$ and ${R_{overlap}}$ are calculated as 1.6830 and 68.54%, respectively, which would be judged sufficient for reconstruction by experience. However, according to the proposed redundant information model, $\eta$ is 4.5049 bit and the estimated ${Q_{limit}}$ is 19.5865 bit, whereas the total redundant information of the real data is only 5.9305 bit, which does not satisfy Eq. (11) and is much smaller than ${Q_{limit}}$; the model therefore predicts that this dataset cannot be reconstructed.

To verify the reliability of this prediction, the LR images were reconstructed with the Gauss-Newton algorithm, as shown in Fig. 7. The image quality after reconstruction is not improved effectively and the phase is almost invisible, which is consistent with our prediction and also explains why the data cannot be reconstructed accurately even though ${R_{cam}}$ and ${R_{overlap}}$ are high. Further, for this imaging configuration, the model suggests ways of increasing the information of the imaging system, including but not limited to: (1) replacing the 2×/0.1 NA objective lens with a 4×/0.1 NA objective lens; (2) collecting a larger number of LR intensity images; (3) improving the SNR by using a better detector. In practice, these suggestions are not exclusive and can be combined to increase the redundant information Q until ${Q_{limit}}$ is met.


Fig. 7. A failure case of reconstruction with the sample observed under a 2×/0.1 NA objective lens at a height of 80 mm. (a) Magnified images of the region of interest of the raw data; (b) intensity, (c) phase, and (d) Fourier spectrum after FPM reconstruction.


4.3 More calibration results of reconstruction algorithm

Theoretically, different algorithms possess distinct information-utilization capabilities, and the algorithm utilization rate is an inherent attribute, independent of samples and parameters and dependent only on the algorithm itself; consequently, any change to the algorithm core will naturally result in a different utilization rate. This attribute is one of the foundations of our redundancy evaluation model. We therefore calibrate the utilization rates of several reconstruction algorithms: sequential Gauss-Newton [34], GS [35], EPRY-FPM [36], the wavelength multiplexing algorithm [37], the LED multiplexing algorithm [18], and mPIE [38].

After replacing the reconstruction algorithm of the simulation group with the GS algorithm, the utilization rates $\alpha$ are 24.09%, 24.53%, and 23.88%, still within the statistical range of 24% ± 1%. However, the samples cannot be accurately reconstructed by mPIE at the limit positions found for Gauss-Newton, indicating that mPIE needs more redundant information to achieve accurate reconstruction. To find the utilization rate of mPIE, we conducted a large number of simulations under the three sets of objective lenses. Q was increased by adjusting imaging parameters to raise either Rcam or Roverlap. Since many imaging parameters enter the redundant information model, we start by changing only one parameter, such as the height or the number of LEDs, which is also the lowest-cost adjustment in experiments; in cases where accurate reconstruction still cannot be achieved, we additionally adjust the detector's pixel size. Each parameter adjustment requires finely tuned reconstructions (the height step is 2 mm) and redundancy assessments to determine the limit position based on the error threshold and the ${P_{rmse}}$ fluctuation trend (as demonstrated in Section 3), enabling the calculation of the mPIE utilization rate. The error curves for the different objectives and heights are shown in Fig. 8, and the imaging parameters at which Qlimit is located are summarized in Table 3. Similarly, each set of experiments is repeated several times to observe the fluctuation of α. We find that the utilization rate of mPIE is 19% ± 1%.


Fig. 8. The calibration of mPIE in simulation with different objective lenses. The corresponding fluctuations of ${I_{rmse}}$ and ${P_{rmse}}$ with height are shown for (a) 2×/0.08 NA, (b) 4×/0.1 NA, (c) 10×/0.25 NA. ${Q_{limit}}$ has been marked with coordinates.



Table 3. Calibration results of mPIE in simulation

The results in Table 4 were obtained by applying the same redundancy evaluation and simulation calibration processes to the different algorithms. Since FPM is essentially an ill-conditioned inverse problem, and a non-convex one, it is more intricate than convex problems with globally optimal solutions. Common algorithms such as GS, EPRY-FPM, sequential Gauss-Newton, LED multiplexing, and wavelength multiplexing primarily employ a sequential gradient-increment scheme: an approximate gradient increment is calculated from a single image, only a small part of the corresponding object function is updated each time, and each iteration must traverse all captured images. The distinguishing feature of mPIE is its incorporation of momentum, whereby the gradient is computed as a weighted combination of the current iteration's gradient and the previous iteration's gradient. Despite its faster convergence and lower error values, mPIE is more susceptible to parameter influence, which theoretically results in lower information utilization compared with the other algorithms; the simulation calibration in Table 4 substantiates this speculation.
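For illustration, a generic momentum-accelerated update of the kind mPIE builds on is sketched below; this shows only the momentum concept (current gradient mixed with accumulated past updates) and is not the exact object/probe update rule of [38]:

```python
def momentum_step(estimate, gradient, velocity, step_size=1.0, eta=0.9):
    """One momentum-accelerated update: the applied correction is a weighted
    combination of the current gradient and the accumulated previous corrections."""
    velocity = eta * velocity + step_size * gradient
    return estimate - velocity, velocity

# Typical use inside an iterative solver (names are hypothetical):
# obj, v = momentum_step(obj, compute_gradient(obj), v)
```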


Table 4. Calibration results of $\alpha$ for different algorithms

5. Conclusions and discussions

In this work, we reported a redundant information model for FPM. The model transfers the concepts of Shannon's information theory to FPM imaging by exploiting the similarity between a normal communication system and a typical FPM system, and it comprehensively considers the key parameters that affect reconstruction quality. We have verified the validity of the model through numerical simulations and experiments, demonstrating its application in predicting imaging results, optimizing imaging parameters, and evaluating algorithms. It provides a higher-level view for evaluating the entire FPM imaging system.

It must be acknowledged that our model still has limitations. Although this study offers valuable insights by applying Shannon's information theory to FPM for the first time and attempts to evaluate the redundant information of FPM from a holistic perspective, it currently lacks a rigorous mathematical derivation; we can only verify the rationality and feasibility of the model through numerous simulations and experiments. We have reviewed all data accumulated over years of FPM research and found the experimental results to be consistent, but it is difficult to state comprehensively how many samples are required, so we can only conduct a statistical study and enhance its randomness by using unrelated samples such as animal specimens, plant specimens, pathological specimens, and arbitrary images. In addition, the model is expected to be applicable to other algorithms; however, our attempts to calibrate deep-learning-based FPM reconstruction algorithms, which have attracted significant attention, were unsuccessful. Compared to traditional algorithms, this type of algorithm can reconstruct from just a few images, which may be explained by a higher utilization rate, but its calibration is difficult because the raw data must be processed multiple times to meet the training requirements. Another algorithm based on the Kramers-Kronig relations (KK-FPM) [39,40] is also worth discussing, as it requires only four images to reconstruct HR images. Unlike FPM, it cannot reconstruct the pupil, which may be because FPM acquires sufficient redundant information for pupil reconstruction; however, our model does not yet account for pupil redundancy separately and simply treats the pupil as an aberration included in the SNR calculation. This may lead to inaccurate calibration of KK-FPM by the model.

The proposed redundant information model for FPM involves intuitive and empirical judgments because of the complexity and nonlinearity of the mathematical derivation. Furthermore, most computations are performed on captured images that may contain operational inaccuracies introduced during acquisition, resulting in minor discrepancies in the outcomes. Even so, we believe the model is significant, as it offers a higher-level evaluation of FPM imaging, and we hope it will inspire further research toward a more complete redundant information theory.

Funding

National Natural Science Foundation of China (12104500); Key Research and Development Projects of Shaanxi Province (2023-YBSF-263).

Acknowledgments

An Pan thanks Prof. Changhuei Yang (California Institute of Technology, USA) for comprehensive discussions during his visit to Caltech in 2018–2019.

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Data availability

Data will be made available by the corresponding author on reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013).

2. P. C. Konda, L. Loetgering, K. C. Zhou, et al., “Fourier ptychography: current applications and future promises,” Opt. Express 28(7), 9603–9630 (2020).

3. A. Pan, C. Zuo, and B. Yao, “High-resolution and large field-of-view Fourier ptychographic microscopy and its applications in biomedicine,” Rep. Prog. Phys. 83(9), 096101 (2020).

4. A. Pan, Y. Zhang, T. Zhao, et al., “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22(9), 096005 (2017).

5. Y. Zhang, A. Pan, M. Lei, et al., “Data preprocessing methods for robust Fourier ptychographic microscopy,” Opt. Eng. 56(12), 123107 (2017).

6. A. Pan, C. Zuo, Y. Xie, et al., “Vignetting effect in Fourier ptychographic microscopy,” Opt. Lasers Eng. 120, 40–48 (2019).

7. A. Pan, K. Wen, and B. Yao, “Linear space-variant optical cryptosystem via Fourier ptychography,” Opt. Lett. 44(8), 2032–2035 (2019).

8. M. Xiang, A. Pan, Y. Zhao, et al., “Coherent synthetic aperture imaging for visible remote sensing via reflective Fourier ptychography,” Opt. Lett. 46(1), 29–32 (2021).

9. J. W. Goodman, Introduction to Fourier Optics (Macmillan Learning, New York, 2017).

10. J. Park, D. Brady, G. Zheng, et al., “Review of bio-optical imaging systems with a high space-bandwidth product,” Adv. Photonics 3(4), 044001 (2021).

11. R. Horstmeyer, X. Ou, G. Zheng, et al., “Digital pathology with Fourier ptychography,” Comput. Med. Imaging Graph. 42, 38–43 (2015).

12. Y. Gao, J. Chen, A. Wang, et al., “High-throughput fast full-color digital pathology based on Fourier ptychographic microscopy via color transfer,” Sci. China Phys. Mech. Astron. 64(11), 114211 (2021).

13. J. Chen, A. Wang, A. Pan, et al., “Rapid full-color Fourier ptychographic microscopy via spatially filtered color transfer,” Photonics Res. 10(10), 2410–2421 (2022).

14. A. Pan, Y. Zhang, K. Wen, et al., “Subwavelength resolution Fourier ptychography with hemispherical digital condensers,” Opt. Express 26(18), 23119–23131 (2018).

15. A. C. S. Chan, J. Kim, A. Pan, et al., “Parallel Fourier ptychographic microscopy for high-throughput screening with 96 cameras (96 Eyes),” Sci. Rep. 9(1), 11114 (2019).

16. R. Horstmeyer, J. Chung, and X. Ou, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016).

17. S. Chowdhury, M. Chen, R. Eckert, et al., “High-resolution 3D refractive index microscopy of multiple-scattering samples from intensity images,” Optica 6(9), 1211–1219 (2019).

18. L. Tian, X. Li, and K. Ramchandran, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014).

19. L. Tian, Z. Liu, L.-H. Yeh, et al., “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015).

20. Y. Xue, S. Cheng, Y. Li, et al., “Reliable deep-learning-based phase imaging with uncertainty quantification,” Optica 6(5), 618–629 (2019).

21. T. Nguyen, Y. Xue, Y. Li, et al., “Deep learning approach for Fourier ptychography microscopy,” Opt. Express 26(20), 26470–26484 (2018).

22. A. Robey and V. Ganapati, “Optimal physical preprocessing for example-based super-resolution,” Opt. Express 26(24), 31333–31350 (2018).

23. Y. Li, C. Liu, J. Liu, et al., “Adaptive and efficient Fourier ptychographic microscopy based on information entropy,” J. Opt. 22(4), 045702 (2020).

24. J. Sun, Q. Chen, Y. Zhang, et al., “Sampling criteria for Fourier ptychographic microscopy in object space and frequency space,” Opt. Express 24(14), 15765–15781 (2016).

25. L. Bian, J. Suo, G. Situ, et al., “Content adaptive illumination for Fourier ptychography,” Opt. Lett. 39(23), 6648–6651 (2014).

26. Y. Zhang, W. Jiang, and L. Tian, “Self-learning based Fourier ptychographic microscopy,” Opt. Express 23(14), 18471–18486 (2015).

27. D. Mendlovic and A. W. Lohmann, “Space–bandwidth product adaptation and its application to superresolution: fundamentals,” J. Opt. Soc. Am. A 14(3), 563–567 (1997).

28. G. Zheng, C. Shen, S. Jiang, et al., “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021).

29. C. Shannon, “A mathematical theory of communication,” Bell Syst. Tech. J. 27(3), 379–423 (1948).

30. Y. Zhang, Research on computational microscopic imaging methods based on Fourier ptychographic microscopy (Xi’an Institute of Optics & Precision Mechanics, Chinese Academy of Sciences, 2018).

31. A. Pan, High-resolution large field-of-view fast-speed Fourier ptychographic microscopy (Xi’an Institute of Optics & Precision Mechanics, Chinese Academy of Sciences, 2020).

32. S. Abdel-Khalek, A. Ben Ishak, O. A. Omer, et al., “A two-dimensional image segmentation method based on genetic algorithm and entropy,” Optik 131, 414–422 (2017).

33. G. Zheng, Innovations in imaging system design: gigapixel, chip-scale and multi-functional microscopy (California Institute of Technology, Pasadena, CA, USA, 2013).

34. L.-H. Yeh, J. Dong, J. Zhong, et al., “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23(26), 33214 (2015).

35. R. W. Gerchberg and W. O. Saxton, “Phase determination from image and diffraction plane pictures in the electron microscope,” Optik 34(3), 275–283 (1971).

36. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014).

37. W. Lee, D. Jung, S. Ryu, et al., “Single-exposure quantitative phase imaging in color-coded LED microscopy,” Opt. Express 25(7), 8398–8411 (2017).

38. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4(7), 736–745 (2017).

39. Y. Li, C. Shen, J. Tan, et al., “Fast quantitative phase imaging based on Kramers-Kronig relations in space domain,” Opt. Express 29(25), 41067–41080 (2021).

40. C. Shen, M. Liang, A. Pan, et al., “Non-iterative complex wave-field reconstruction based on Kramers–Kronig relations,” Photonics Res. 9(6), 1003 (2021).
