Optica Publishing Group

Generalized proximal smoothing (GPS) for phase retrieval

Open Access

Abstract

In this paper, we report the development of the generalized proximal smoothing (GPS) algorithm for phase retrieval of noisy data. GPS is an optimization-based algorithm, in which we relax both the Fourier magnitude and support constraints. We relax the support constraint by incorporating Moreau-Yosida regularization and heat kernel smoothing, and derive the associated proximal mapping. We also relax the magnitude constraint into a least squares fidelity term, whose proximal mapping is available as well. GPS alternately iterates between the two proximal mappings in the primal and dual spaces, respectively. Using both numerical simulations and experimental data, we show that the GPS algorithm consistently outperforms classical phase retrieval algorithms such as hybrid input-output (HIO) and oversampling smoothness (OSS) in terms of convergence speed, consistency of the phase retrieval, and robustness to noise.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Phase retrieval has been fundamental to several disciplines, ranging from imaging, microscopy, crystallography and optics to astronomy [1–4]. It aims to recover an object only from its Fourier magnitudes. Without the Fourier phases, the recovery can be achieved via iterative algorithms when the Fourier magnitudes are sampled at a frequency sufficiently finer than the Nyquist interval [5]. In 1972, Gerchberg and Saxton developed an iterative algorithm for phase retrieval, utilizing the magnitude of an image and the Fourier magnitudes as constraints [6]. In 1982, Fienup generalized the Gerchberg-Saxton algorithm by developing two iterative algorithms: error reduction (ER) and hybrid input-output (HIO), which use a support and non-negativity as constraints in real space and the Fourier magnitudes in reciprocal space [7]. In 1998, Miao, Sayre and Chapman proposed that, when the number of independently measured Fourier magnitudes is larger than the number of unknown variables associated with a sample, the phases are in principle encoded in the Fourier magnitudes and can be retrieved by iterative algorithms [5]. These developments finally led to the first experimental demonstration of coherent diffractive imaging (CDI) by Miao and collaborators in 1999 [8], which has stimulated widespread research activities in phase retrieval, CDI, and their applications in the physical and biological sciences ever since [2, 9, 10].

For a finite object, when its Fourier transform is sampled at a frequency finer than the Nyquist interval (i.e. oversampled), it is mathematically equivalent to padding zeros to the object in real space. In other words, when the magnitude of the Fourier transform is oversampled, the correct phases correspond to the zero-density region surrounding the object, which is known as the oversampling theorem [5, 11]. Phase retrieval algorithms iterate between real and reciprocal space, using the zero-density region and the Fourier magnitudes as dual-space constraints. A support is typically defined to separate the zero-density region from the object, and the non-negativity constraint is applied to the density inside the support. In the ER algorithm, the density outside the support and the negative density inside the support are set to zero in each iteration [7]. The HIO algorithm relaxes ER in the sense that it gradually reduces the densities that violate the support constraint instead of directly forcing them to zero [7]. This relaxation often leads to good reconstructions from noise-free patterns. However, in real experiments, the diffraction intensities, which are proportional to the square of the Fourier magnitudes, are corrupted by a combination of Gaussian and Poisson noise and missing data. In the presence of experimental noise and missing data, phase retrieval becomes much more challenging, and the ER and HIO algorithms may only converge to sub-optimal solutions. Simply combining ER and HIO still suffers from stagnation, and the iterations can get trapped at local minima [12]. To alleviate these problems, more advanced phase retrieval algorithms have been developed, such as the shrink-wrap algorithm and guided HIO (gHIO) [4, 13]. In 2010, a smoothness constraint in real space was first introduced to improve the phase retrieval of noisy data [14].
Later, a noise robust framework was implemented for enhancing the performance of existing algorithms [15]. Recently, Rodriguez et al. proposed to impose the smoothness constraint on the no-density region outside the support by applying Gaussian filters [16]. The resulting oversampling smoothness (OSS) algorithm successfully reduces oscillations in the reconstructed image, and is more robust to noisy data than the existing algorithms.

Since phase retrieval can be cast as a non-convex minimization problem, many efforts have been made to study phase retrieval algorithms from the viewpoint of optimization. For example, Bauschke et al. [17] related HIO to a particular relaxation of the Douglas-Rachford algorithm [18] and introduced the hybrid projection reflection algorithm [17, 19]. Using similar ideas, researchers further proposed several projection algorithms such as the iterated difference map [20] and relaxed averaged alternating reflections [21]. In [22], Chen and Fannjiang analyzed a Fourier-domain Douglas-Rachford algorithm for phase retrieval. By taking noise into account, the Wirtinger Flow [23] replaces the amplitude constraint by the intensity constraint, which is then relaxed into a least squares fidelity term measuring the misfit to the measured Fourier intensity data, to which gradient descent is applied. Other methods in this line include alternating direction methods [24–26] that have been widely used in image processing, as well as lifting approaches [27] such as PhaseLift [28, 29] by Candès et al. and its variants [30, 31].

In this paper, we propose an optimization-based phase retrieval method, termed generalized proximal smoothing (GPS), which effectively addresses the noise in both real and Fourier spaces. Motivated by the success of OSS [16], GPS incorporates the idea of Moreau-Yosida [32, 33] regularization with heat kernel smoothing, to relax the support constraint into a continuous penalty term. We further relax the magnitude constraint into a least squares fidelity term, for de-noising in Fourier space. To minimize the resulting primal-dual formulation, GPS iterates back and forth between efficient proximal mappings of the two relaxed functions, respectively. Our experimental results using noisy experimental data of biological and inorganic specimens demonstrate that GPS consistently outperforms the state-of-the-art algorithms HIO and OSS in terms of both speed and robustness. We also refer readers to the recent paper [34] about training quantized neural networks, which shows another success of using Moreau-Yosida regularization to relax the hard constraint.

Notations

Let us fix some notation. For any complex-valued vectors $u, v \in \mathbb{C}^n$, $\bar{u}$ is the complex conjugate of $u$, and $u^* := \bar{u}^{\mathsf{T}}$ is the Hermitian transpose. $\mathrm{Re}(u)$ and $\mathrm{Im}(u)$ are the real and imaginary parts of $u$, respectively.

$\langle u, v\rangle := u^* v = \sum_{i=1}^n \bar{u}_i v_i$ is the Hermitian inner product of $u$ and $v$. $u \odot v$ is the element-wise (Hadamard) product of $u$ and $v$, given by $(u \odot v)_i = u_i v_i$. $\|u\| := \sqrt{\langle u, u\rangle}$ denotes the $\ell_2$ norm of $u$. Given any Hermitian positive semi-definite matrix $K \in \mathbb{C}^{n\times n}$, we define $\|u\|_K := \sqrt{\langle u, Ku\rangle}$.

$\arg(u) := \tan^{-1}\big(\mathrm{Im}(u)/\mathrm{Re}(u)\big)$ denotes the element-wise argument (or phase) of a complex-valued vector $u = \mathrm{Re}(u) + i\cdot\mathrm{Im}(u)$. For convenience we set $\arg(0) = 1$.

$\chi_X$ is the indicator function of a closed set $X \subseteq \mathbb{C}^n$, given by $\chi_X(x) = 0$ if $x \in X$ and $\chi_X(x) = \infty$ otherwise.

$\mathrm{proj}_X(u) := \arg\min_{v\in X} \|v - u\|$ is the projection of $u$ onto $X$.

$\mathrm{prox}_f(u) := \arg\min_{v} \big\{ f(v) + \frac{1}{2}\|v - u\|^2 \big\}$ is the proximal mapping of the function $f$.
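As a quick numerical illustration of the last two definitions (our own sketch, not from the paper): when $f$ is the indicator of a closed set, its proximal mapping reduces to the projection onto that set. A brute-force one-dimensional check, taking the set to be the nonnegative half-line:

```python
import numpy as np

# For f = chi_X (indicator of a closed set X), prox_f(u) minimizes
# (1/2)||v - u||^2 over v in X, which is exactly proj_X(u).
# Brute-force check for X = [0, +inf) in one dimension.
def proj_nonneg(u):
    return max(u, 0.0)

def prox_indicator_nonneg_bruteforce(u):
    # Evaluate the prox objective on a dense grid of feasible points.
    grid = np.linspace(0.0, 10.0, 100001)
    return grid[np.argmin(0.5 * (grid - u) ** 2)]

for u in (-2.3, 0.4, 7.1):
    assert abs(prox_indicator_nonneg_bruteforce(u) - proj_nonneg(u)) < 1e-4
```

The same identity underlies the algorithm below: the prox of a hard constraint is a projection, while the prox of a *relaxed* constraint is a soft, averaged version of it.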

$h^*(y) := \sup_{u\in\mathbb{C}^n} \big\{ \mathrm{Re}\langle y, u\rangle - h(u) \big\}$ is the Legendre transformation of a real-valued function $h$, extended to the complex domain.

2. The proposed model

We first fix some settings of the phase retrieval problem. Consider the reconstruction of a 2D image $u$ defined on a discrete lattice $\Omega := \{(i, j) : 1 \le i \le n_1,\ 1 \le j \le n_2\}$.

For simplicity, we represent $u$ as a vector in $\mathbb{R}^n$ in lexicographical order, with $n = n_1 \times n_2$; then $u_i$ represents the density of the image at the $i$-th pixel. Due to oversampling, the object densities reside in a sub-domain $S \subset \Omega$ known as the support, and $u$ is supposed to be zero outside $S$. Throughout the paper, we assume that the support $S$ is centered in the domain $\Omega$. The support constraint set is $\mathcal{S} := \{u \in \mathbb{R}^n : u_i \ge 0 \text{ if } i \in S,\ u_i = 0 \text{ otherwise}\}$.

The Fourier magnitude data are obtained as $b = |\mathcal{F}u|$, where $\mathcal{F} : \mathbb{C}^{n_1\times n_2} \to \mathbb{C}^{n_1\times n_2}$ is the discrete Fourier transform (DFT). We denote the magnitude constraint set by $\mathcal{T} := \{u \in \mathbb{C}^n : |\mathcal{F}u| = b\}$.

In the absence of noise, the phase retrieval (PR) problem is simply to

find $u \in \mathbb{R}^n$ such that $u \in \mathcal{S} \cap \mathcal{T}$.

This amounts to the following composite minimization problem

$\min_{u\in\mathbb{R}^n}\ f(u) + g(\mathcal{F}u),$
where $f := \chi_{\mathcal{S}}$ and $g := \chi_{\{z\,:\,|z| = b\}}$ are two indicator functions that enforce the object and Fourier magnitude constraints, respectively. Note that $f$ is a closed and convex function, while $g$ is closed but non-convex. This gives rise to the non-convex optimization problem (2).

2.1. A new noise-removal model

In real experiments, the Fourier data are contaminated by experimental noise, and the densities outside the support are not exactly equal to zero. In the noisy case, the image to be reconstructed no longer fulfills either the Fourier magnitude or the support constraint. The ER algorithm (a.k.a. alternating projection), which alternately projects onto these two constraint sets, does not account for the noise. HIO "relaxes" the support constraint on densities wherever it is violated, but this relaxation only helps in the noiseless case. In the presence of noise, the feasible set $\mathcal{S} \cap \mathcal{T}$ can be empty, and alternating projection methods and their relaxations, such as ER and HIO, may fail to converge and keep oscillating. OSS [16] improves on HIO by applying extra Gaussian filters to smooth the densities outside the support at different stages of the iterative process. None of these methods, however, properly addresses the corruption of the Fourier magnitudes.

Introducing the splitting variable $z = \mathcal{F}u \in \mathbb{C}^n$, we reformulate (2) as

$\min_{u, z}\ f(u) + g(z) \quad \text{subject to} \quad z = \mathcal{F}u.$

In the presence of noise, we seek to relax the indicator functions $f$ and $g$ that enforce hard constraints into soft penalties. To this end, we first relax $g$ into a least squares fidelity term, the sum of squared errors

$g_\sigma(z) = \frac{1}{2\sigma}\big\| |z| - b \big\|^2.$

This fidelity term has been considered in the literature under the assumption that the measurements are corrupted by i.i.d. Gaussian noise; see [24] for example. In practice, we observe that it works well even with a combination of Gaussian and Poisson noise. This relaxation is related to the Moreau-Yosida regularization [32, 33], which is explained in detail in the appendix.

Following this line, we further relax $f = \chi_{\mathcal{S}}$ into

$f_G(u) := \inf_{v}\Big\{ f(v) + \frac{1}{2}\|v - u\|_{G^{-1}}^2 \Big\} = \inf_{v\in\mathcal{S}}\ \frac{1}{2}\|v - u\|_{G^{-1}}^2$
for some Hermitian positive definite matrix $G$. The choice of $G$ is delicate, and will be discussed in Section 3. Relaxing both constraints thus leads to the proposed noise-removal model
$\min_{u, z}\ f_G(u) + g_\sigma(z) \quad \text{subject to} \quad z = \mathcal{F}u.$

This constrained optimization problem can be solved by a splitting method, such as the alternating direction method of multipliers (ADMM) [35–37] or the primal dual hybrid gradient (PDHG) algorithm [38–41].

2.2. A primal-dual formulation

We solve the constrained optimization problem (6) by introducing the Lagrangian

$\mathcal{L}(u, z; y) = f_G(u) + g_\sigma(z) + \mathrm{Re}\langle y, \mathcal{F}^* z - u\rangle,$
where $\mathcal{F}^* = \mathcal{F}^{-1}$ is the adjoint of $\mathcal{F}$, i.e. the inverse DFT. The corresponding Karush-Kuhn-Tucker (KKT) condition is
$y \in \partial f_G(u), \quad -\mathcal{F}y \in \partial g_\sigma(z), \quad \mathcal{F}^* z = u \ \iff\ \mathcal{F}^* z \in \partial f_G^*(y), \quad -\mathcal{F}y \in \partial g_\sigma(z),$
where the right-hand side equivalence is obtained by the convex conjugate property, and is exactly the KKT condition of the following min-max saddle point problem
$\min_z \max_y\ g_\sigma(z) - f_G^*(y) + \mathrm{Re}\langle \mathcal{F}^* z, y\rangle,$
where $f_G^*$ is the generalized Legendre-Fenchel transformation of $f_G$, which has an explicit form thanks to the Moreau-Yosida decomposition
$f_G^*(y) = f^*(y) + \frac{1}{2}\|y\|_G^2,$
where $f^*$ is the Legendre-Fenchel transformation of $f$, given by
$f^*(y) = \sup_{v\in\mathcal{S}} \mathrm{Re}\langle y, v\rangle = \begin{cases} 0 & \text{if } \mathrm{Re}(y) \le 0 \text{ on } S, \\ \infty & \text{otherwise.} \end{cases}$

For notational convenience, we denote by $\mathcal{S}^* := \{y \in \mathbb{C}^n : \mathrm{Re}(y) \le 0 \text{ on } S\}$ the dual constraint set of $\mathcal{S}$. Our finalized min-max problem is thus given by

$\min_z \max_{y\in\mathcal{S}^*}\ \frac{1}{2\sigma}\big\||z| - b\big\|^2 - \frac{1}{2}\|y\|_G^2 + \mathrm{Re}\langle \mathcal{F}^* z, y\rangle.$

3. Generalized proximal smoothing (GPS) algorithm

We carry out the minimization of the saddle point problem (9) or (12) by a generalized primal dual hybrid gradient (PDHG) algorithm [38–41], which iterates

$\begin{cases} z^{k+1} = \mathrm{prox}_{t g_\sigma}\big(z^k - t\,\mathcal{F}y^k\big), \\ y^{k+1} = \mathrm{prox}_{s f_G^*}\big(y^k + s^{-1}\,\mathcal{F}^{-1}(2z^{k+1} - z^k)\big) \end{cases}$
for some step sizes s, t > 0. The update of zk+1 calls for computing the proximal mapping of tgσ [24], whose analytic solution is given by
$\mathrm{prox}_{t g_\sigma}(z) = \arg\min_{v\in\mathbb{C}^n}\ \frac{1}{2\sigma}\big\||v| - b\big\|^2 + \frac{1}{2t}\|v - z\|^2 = \frac{b\odot\exp(i\cdot\arg(z)) + (\sigma/t)\,z}{1 + \sigma/t},$
which is essentially a weighted average between z and its projection onto the magnitude constraint {z ∈ ℂn : |z| = b}.
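In NumPy this closed form is one line; the sketch below (our own illustration, with hypothetical variable names) also checks the two limiting cases of the weighting:

```python
import numpy as np

def prox_tg(z, b, sigma, t):
    """Closed-form prox of t*g_sigma: a weighted average between z and
    its projection b * exp(i*arg(z)) onto the magnitude constraint."""
    proj = b * np.exp(1j * np.angle(z))
    return (proj + (sigma / t) * z) / (1.0 + sigma / t)

rng = np.random.default_rng(0)
z = rng.standard_normal(8) + 1j * rng.standard_normal(8)
b = np.abs(rng.standard_normal(8))

# sigma -> 0 recovers the hard projection; sigma -> infinity leaves z unchanged.
assert np.allclose(prox_tg(z, b, 1e-12, 1.0), b * np.exp(1j * np.angle(z)))
assert np.allclose(prox_tg(z, b, 1e12, 1.0), z)
```

The parameter $\sigma$ thus interpolates continuously between an alternating-projection step ($\sigma = 0$) and no magnitude enforcement at all ($\sigma \to \infty$).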

Moreover, we need to find the proximal mapping of sfG* for updating yk+1, which reduces to

$\mathrm{prox}_{s f_G^*}(y) = \arg\min_{v\in\mathbb{C}^n}\ f_G^*(v) + \frac{1}{2s}\|v - y\|^2 = \arg\min_{v\in\mathbb{C}^n}\ f^*(v) + \frac{1}{2}\|v\|_G^2 + \frac{1}{2s}\|v - y\|^2 = \arg\min_{v\in\mathcal{S}^*}\ \frac{1}{2}\|v\|_G^2 + \frac{1}{2s}\|v - y\|^2.$

Eq. (15) has a closed-form solution only when $G$ is a diagonal matrix; otherwise, we provide an approximation.

We devise two versions of GPS algorithm based on different choices of G.

3.1. Real space smoothing

One choice of $G$ is $G = \gamma D^{\mathsf{T}}D$, where $D$ is the discrete gradient operator, so that $D^{\mathsf{T}}D$ is the negative of the discrete Laplacian. In this case,

$f_G^*(y) = f^*(y) + \frac{\gamma}{2}\|Dy\|^2,$
which we shall refer to as real space smoothing. Since $G$ is not diagonal, the closed-form solution to Eq. (15) is not available. For small $\gamma$, we approximate the solution in two steps, projection followed by smoothing:
$\mathrm{prox}_{s f_G^*}(y) \approx (I + s\gamma D^{\mathsf{T}}D)^{-1}\,\mathrm{proj}_{\mathcal{S}^*}(y).$

The linear inversion is in fact a Laplacian smoothing that follows the projection, ensuring the smoothness of the reconstructed image after each iteration. The real space smoothing is related to the diffusion process and can be approximated by a low-pass filter. Recall that $G_t(x) := \frac{1}{(4\pi t)^{n/2}}\exp\big(-\frac{\|x\|^2}{4t}\big)$ is the (heat) Gaussian kernel, and its Fourier transform $W_t(\xi) = \exp(-4\pi^2 t|\xi|^2)$ is a normalized Gaussian function. Replacing the linear inversion by convolution with the Gaussian kernel then gives a fast approximate implementation of Eq. (17) for small $\gamma$:

$y^{k+1} = G_{s\gamma} * \mathrm{proj}_{\mathcal{S}^*}\big(y^k + s^{-1}\,\mathcal{F}^{-1}(2z^{k+1} - z^k)\big).$

The projection onto $\mathcal{S}^*$ can be computed as follows:

$\mathrm{proj}_{\mathcal{S}^*}(y)_i = \begin{cases} \mathrm{Re}(y_i)_- + i\cdot\mathrm{Im}(y_i) & \text{if } i \in S, \\ y_i & \text{otherwise.} \end{cases}$

Here $x_- := \min(0, x)$ and $x_+ := \max(0, x)$ for $x \in \mathbb{R}$. The remaining convolution can be done via the efficient DFT by computing

$G_\gamma * u = \mathrm{Re}\big\{\mathcal{F}^{-1}(W_\gamma \odot \mathcal{F}u)\big\}.$
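The projection-then-smoothing dual update can be sketched in a few lines of NumPy. This is our own illustration, not the authors' code: the frequency grid, the handling of a complex dual variable (real and imaginary parts smoothed separately), and all function names are implementation assumptions.

```python
import numpy as np

def proj_S_star(y, support):
    """Project onto S*: inside the support, clip the real part to be
    nonpositive (Re(y_i)_-) and keep the imaginary part; leave the rest."""
    out = y.copy()
    out[support] = np.minimum(y[support].real, 0.0) + 1j * y[support].imag
    return out

def heat_smooth(u, t):
    """Heat-kernel smoothing via the DFT: multiply Fourier coefficients
    by W_t(xi) = exp(-4*pi^2*t*|xi|^2) and transform back."""
    fx = np.fft.fftfreq(u.shape[0])[:, None]
    fy = np.fft.fftfreq(u.shape[1])[None, :]
    W = np.exp(-4.0 * np.pi**2 * t * (fx**2 + fy**2))
    return np.real(np.fft.ifft2(W * np.fft.fft2(u)))

def gps_r_dual_update(w, support, t):
    """y^{k+1} = G_t * proj_{S*}(w), with w the ascent step computed by
    the caller; real and imaginary parts are smoothed componentwise."""
    p = proj_S_star(w, support)
    return heat_smooth(p.real, t) + 1j * heat_smooth(p.imag, t)
```

Since $W_t(0) = 1$, the smoothing preserves the DC component (the mean) of its input, which is one quick way to sanity-check the filter.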

Using Eq. (14), Eq. (19), and Eq. (20), we arrive at our first algorithm.

Algorithm 1 (GPS-R) features low-pass filters for smoothing; here we abuse notation and let $\gamma$ denote the filter. Inspired by OSS [16], we choose a sequence of filters $\{\gamma_l\}$ of increasing spatial frequency (i.e. progressively finer filters). In our experiments, we run 1000 iterations divided into 10 stages; each stage contains 100 iterations, during which the filter frequency is held constant. We monitor the R-factor (relative error) during the iterative process, which is defined as

$R_F(u^k) = \frac{\sum_i \big|\,|\mathcal{F}u^k|_i - b_i\,\big|}{\sum_i b_i}.$

The reconstruction with the minimal $R_F$ at each stage is fed into the next stage. By applying the smoothing on the entire domain, GPS-R can remove noise in real space and recover fine features at high spatial resolution.
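The R-factor monitoring is straightforward to implement; a minimal sketch (our own, assuming a 2D FFT for $\mathcal{F}$):

```python
import numpy as np

def r_factor(u, b):
    """R-factor: relative l1 misfit between the Fourier magnitudes of the
    current iterate u and the measured magnitudes b, normalized by sum(b)."""
    mag = np.abs(np.fft.fft2(u))
    return np.sum(np.abs(mag - b)) / np.sum(b)

rng = np.random.default_rng(0)
u = rng.random((16, 16))
b = np.abs(np.fft.fft2(u))
assert r_factor(u, b) < 1e-10      # noise-free data: RF is numerically zero
assert r_factor(0 * u, b) == 1.0   # zero iterate: RF = sum(b)/sum(b) = 1
```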


Algorithm 1. GPS-R: GPS with smoothing in real space.

3.2. Fourier space smoothing

Another simple choice of $G$ is the diagonal matrix $G = \gamma\,\mathrm{Diag}(r \odot r)$, where $r \in \mathbb{R}^n$ and $r_i$ is the distance in the original 2D lattice between the $i$-th pixel and the center of the image. Note that $G$ is not invertible, since $r_i = 0$ for the pixel at the center. By Eq. (41),

$f_G(u) = \begin{cases} \sum_i \frac{\big|u - \mathrm{proj}_{\mathcal{S}}(u)\big|_i^2}{2\gamma r_i^2} & \text{if } u_i \ge 0 \text{ at the center}, \\ \infty & \text{otherwise.} \end{cases}$

So $f_G(u)$ is a weighted sum-of-squares penalty on $u$. The weight is inversely proportional to the squared radius, and is infinite for the density at the center. The further the density is from the center, the smaller the penalty for violating the support constraint.

By Parseval’s identity, for a square-integrable function u, we have

$\int \Big|\frac{d}{d\xi}\hat{u}(\xi)\Big|^2 d\xi = \int \big|x\,u(x)\big|^2 dx,$
where $\hat{u}$ is the Fourier transform of $u$. In the discrete setting, this amounts to
$\|D\mathcal{F}u\|^2 = \|r \odot u\|^2.$

Therefore, by Eq. (10),

$f_G^*(y) = f^*(y) + \frac{\gamma}{2}\|r \odot y\|^2 = f^*(y) + \frac{\gamma}{2}\|D\mathcal{F}y\|^2.$

This means that we are smoothing $f^*(y)$ by regularizing the $\ell_2$ norm of the gradient of the Fourier coefficients of $y$. We thus refer to it as Fourier space smoothing.

Since G is diagonal, Eq. (15) has the closed-form solution

$\mathrm{prox}_{s f_G^*}(y) = \frac{1}{1 + s\gamma\, r\odot r}\odot \mathrm{proj}_{\mathcal{S}^*}(y) \approx \exp(-s\gamma\, r\odot r)\odot \mathrm{proj}_{\mathcal{S}^*}(y).$

Define $W_{s\gamma/4\pi^2} := \exp(-s\gamma\, r\odot r)$. The above quantity can be approximated by a direct multiplication for small $\gamma$. We update $y^{k+1}$ as

$y^{k+1} = W_{s\gamma/4\pi^2}\odot \mathrm{proj}_{\mathcal{S}^*}\big(y^k + s^{-1}\,\mathcal{F}^{-1}(2z^{k+1} - z^k)\big).$
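The GPS-F dual update can be sketched as follows (our own illustration; the centered distance grid and function names are assumptions about indexing, not the authors' code):

```python
import numpy as np

def gps_f_dual_update(w, support, s, gamma):
    """GPS-F: project w onto S*, then damp pixel i by exp(-s*gamma*r_i^2),
    approximating the exact factor 1/(1 + s*gamma*r_i^2) for small gamma.
    r_i is the distance from pixel i to the image center."""
    n1, n2 = w.shape
    ii, jj = np.meshgrid(np.arange(n1), np.arange(n2), indexing="ij")
    r2 = (ii - (n1 - 1) / 2.0) ** 2 + (jj - (n2 - 1) / 2.0) ** 2
    out = w.copy()
    out[support] = np.minimum(w[support].real, 0.0) + 1j * w[support].imag
    return np.exp(-s * gamma * r2) * out
```

With $\gamma = 0$ the damping factor is identically one and the update reduces to the bare projection; for $\gamma > 0$ every pixel is shrunk, more strongly the farther it lies from the center.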

GPS with smoothing in Fourier space (GPS-F) is summarized in Algorithm 2.


Algorithm 2. GPS-F: GPS with smoothing in Fourier space.

We illustrate the core of the GPS algorithm as a flowchart in Fig. 1.


Fig. 1: Flowchart of the GPS algorithm for stage 1. For simplicity, we denote $\epsilon = \frac{\sigma}{\sigma + 1}$ in Eq. (14). The input $z^0$ is initialized with random phases and magnitudes equal to the measurements, while $y^0$ is a zero vector. The $\mathrm{proj}_{\mathcal{S}^*}$ and the convolution $*$ follow Eq. (19) and Eq. (20), respectively. The $z^k$ and $y^k$ with the smallest $R_F$ are fed as input into the next stage, where a Gaussian kernel $G_\gamma$ with larger $\gamma$ is applied.


3.3. Incomplete measurements

In practice, not all diffraction intensities can be experimentally measured. For example, to prevent a detector from being damaged by an intense X-ray beam, either a beamstop has to be placed in front of the detector to block the direct beam, or a hole has to be made at the center of the detector, resulting in missing data at the center [42]. Furthermore, missing data may also be present in the form of gaps between detector panels. For incomplete data, alternating projection algorithms skip the projection onto the magnitude constraint in this region. Similarly, for GPS we only apply the relaxation $g_\sigma = \frac{1}{2\sigma}\sum_i \big||z_i| - b_i\big|^2$ on the known data. A simple exercise shows that

$z_i^{k+1} = \begin{cases} \big(\mathrm{prox}_{t g_\sigma}(z^k - t\,\mathcal{F}y^k)\big)_i & \text{if } b_i \text{ is known}, \\ (z^k - t\,\mathcal{F}y^k)_i & \text{otherwise.} \end{cases}$
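In code, the masked update changes only one line relative to the full-data prox; a sketch (our own, where `known` is a hypothetical boolean mask of measured pixels and `y_f` stands for the Fourier-transformed dual term):

```python
import numpy as np

def z_update_incomplete(z, y_f, b, known, sigma, t):
    """z-update with missing data: apply the prox of t*g_sigma only where
    b_i is measured; elsewhere keep the plain step z - t*y_f untouched."""
    step = z - t * y_f
    out = step.copy()
    proj = b[known] * np.exp(1j * np.angle(step[known]))
    out[known] = (proj + (sigma / t) * step[known]) / (1.0 + sigma / t)
    return out
```

In the missing region the iterate is thus driven only by the support/smoothness term, exactly as the alternating projection algorithms let the missing pixels float free of the magnitude constraint.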

4. Experimental results

4.1. Reconstruction from simulated data

We expect GPS to be a reliable PR algorithm for reconstructing images of weakly scattering objects, in particular biological specimens, which have become increasingly popular [43]. Since OSS has been shown to consistently outperform ER, HIO, ER-HIO, and NR-HIO [16], we perform both quantitative and qualitative comparisons between GPS and OSS.

Due to the discrete nature of photon counting, experimentally measured data inherently contain Poisson noise that is directly related to the incident photon flux on the object. In addition to Poisson noise, the data are also corrupted with zero-mean Gaussian noise to simulate CCD readout; any resulting negative values are set to zero. An accurate simulation of $b_i$ can therefore be calculated as

$b_i = \mathrm{Poisson}\Big(|\mathcal{F}u_o|_i^2\, \frac{\mathrm{flux}}{\|\mathcal{F}u_o\|^2}\Big) + N(0, \sigma),$
where $u_o$ is the noise-free model and $b$ are the noisy Fourier magnitudes; $\sigma$ is proportional to the readout noise [44]. We use $R_{\mathrm{noise}}$ to quantify the relative error with respect to the noise-free Fourier measurements:
$R_{\mathrm{noise}} = \frac{\sum_i \big|\, b_i - |\mathcal{F}u_o|_i \,\big|}{\sum_i |\mathcal{F}u_o|_i}.$

For the simulation, the Fourier magnitudes of a vesicle model are corrupted with Poisson and Gaussian noise via Eq. (29). The corresponding relative error is approximately $R_{\mathrm{noise}} \approx 6\%$.
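A literal reading of Eq. (29) can be sketched in NumPy as follows (our own illustration; `flux` and `sigma_read` are hypothetical parameter names, and the overall intensity scaling is taken exactly as stated in the text):

```python
import numpy as np

def simulate_noisy_magnitudes(u0, flux, sigma_read, rng):
    """Eq. (29), read literally: Poisson noise on the flux-normalized
    Fourier intensities plus zero-mean Gaussian readout noise, with any
    resulting negative values set to zero."""
    inten = np.abs(np.fft.fft2(u0)) ** 2
    lam = inten * flux / inten.sum()
    b = rng.poisson(lam).astype(float)
    b += rng.normal(0.0, sigma_read, b.shape)
    return np.maximum(b, 0.0)

def r_noise(b, b_clean):
    """Relative l1 error of the noisy data w.r.t. noise-free magnitudes."""
    return np.sum(np.abs(b - b_clean)) / np.sum(b_clean)
```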

In some cases, the reconstructed image yields a small relative error $R_F$ but has low quality. This is the issue of over-fitting, an example of which can be seen in certain reconstructions using ER-HIO [16]. Smoothing is a technique to avoid over-fitting the data. To validate the results and show that our algorithm does not over-fit, we measure the difference between the reconstructed image and the model by

$R_{\mathrm{real}} = \frac{\sum_i |u_i^k - u_i^o|}{\sum_i u_i^o}.$

In addition, we also examine the residual $\mathrm{Res} = \big\|\, |\mathcal{F}u| - b \,\big\|$, which measures the difference between the Fourier magnitudes of the reconstructed image and the experimental measurements. The residual validates the noise-removal model by indicating how much noise is removed. Figure 2 shows the reconstruction of the vesicle model from simulated noisy data using HIO, OSS, GPS-R, and GPS-F. GPS-F and GPS-R obtain lower $R_F$ and $R_{\mathrm{real}}$ than OSS. Moreover, GPS-F gets very close to the ground truth, with $R_F$ = 5.90% and $R_{\mathrm{real}}$ = 0.7%. In addition to lower R values, GPS-R and GPS-F converge to zero outside the support. They both obtain lower residuals than OSS; GPS-F produces the smallest residual. If we choose a larger parameter for the $\ell_2$ gradient regularizer in Fourier space, we obtain a smoother residual. Overall, for realistic Gaussian and Poisson noise in Fourier space, GPS-F is a suitable noise-removal model.


Fig. 2: First row: vesicle model with log-scaled diffraction pattern (left), zoom-in image (center) and residual (right). Second row: HIO: RF = 12.87%, Rreal = 21.14%. Third row: OSS: RF = 6.08%, Rreal = 3.59%. Fourth row: GPS-R: RF = 5.90%, Rreal = 2.85%. Fifth row: GPS-F: RF = 5.89%, Rreal = 0.7%. Third column: residuals.


Figure 3 shows the histogram and the convergence of $R_F$ and $R_{\mathrm{real}}$ over 100 independent, randomly seeded runs using HIO, OSS and GPS on the simulated vesicle data. The histogram shows that GPS is more consistent and robust than OSS: it has a higher chance of converging to good minima with lower $R_F$ and $R_{\mathrm{real}}$. Furthermore, the $R_F$ and $R_{\mathrm{real}}$ of OSS scatter widely due to dependence on the initial values. In contrast, GPS is more selective and less dependent on initialization; its $R_F$ and $R_{\mathrm{real}}$ concentrate at a lower mean minimum with less variance.


Fig. 3: Histogram (first row), and convergence (second row) of RF and Rreal on Vesicle data using HIO, OSS, GPS. GPS consistently produces smaller RF and Rreal than HIO or OSS. Moreover, GPS converges fastest with the fewest oscillations.


Similar to HIO and ER, OSS keeps oscillating until a finer low-pass filter is applied. In contrast, GPS converges faster and is less oscillatory than OSS. In the presence of noise, alternating projection methods (ER, HIO, OSS) keep oscillating but do not converge. Applying smoothing and replacing the measurement constraint by the least squares fidelity term $g_\sigma(z) = \frac{1}{2\sigma}\||z| - b\|^2$ helps to reduce the oscillations; hence, the method can converge to a stationary point. Note that a larger $\sigma$ reduces more oscillations, but also decreases the chance of escaping from local minima. Alternating projection methods have $\sigma = 0$, since they impose the measurement constraint exactly. GPS obtains smaller $R_F$ and $R_{\mathrm{real}}$, as well as lower variance. Even though the $R_F$ values are close to each other, the $R_{\mathrm{real}}$ of GPS is much smaller than that of OSS. This means GPS recovers the vesicle with higher quality than OSS.

Next, we test the consistency of the GPS algorithm on the vesicle model. Since OSS has been shown to outperform HIO, ER-HIO and NR-HIO [16], it suffices to carry out the comparison among HIO, OSS and GPS. We compute the mean and variance of the best five (lowest $R_F$) of 500 independent reconstructions by HIO, OSS, GPS-F and GPS-R. Figure 4 shows that GPS has much smaller variance than OSS and HIO. These tests show that GPS is more reliable and consistent than OSS and HIO.


Fig. 4: The (scaled) means and variances of the best five of 500 independent reconstructions by HIO, OSS, GPS-F and GPS-R.


4.2. Reconstructions from experimental data

4.2.1. S. pombe yeast spore

To demonstrate the applicability of GPS to biological samples, we perform phase retrieval on the diffraction pattern in Fig. 5, taken of an S. pombe yeast spore in an experiment conducted using beamline BL29 at SPring-8 [45]. We run 500 independent, randomly seeded reconstructions with each algorithm and record $R_F$, excluding the missing center. We choose default parameters for these experiments: $t = 1$, $s = 0.9$. The sequence of low-pass filters is chosen to be the same as in OSS [16]. For the first 400 iterations, $\sigma = 0.01$; it is then increased to $\sigma = 0.1$ for the remaining 600 iterations. The left column of Fig. 5 is the mean of the best 5 reconstructions obtained by the respective algorithm; the right column shows the variance of the same 5 reconstructions. It is evident from the variance that GPS achieves more consistent reconstructions. Figure 6 shows the histogram and convergence of $R_F$. We conclude that GPS-R results are not only the most consistent, but also the most faithful to the experimental data.


Fig. 5: S. pombe yeast spore log-scaled diffraction pattern, size 500 × 500 (top). Means (first row) and variance (second row) are computed from the best 5 of 500 independent reconstructions. HIO: RF = 15.697% ± 0.526%, OSS: RF = 9.775% ± 0.202%, and GPS: RF = 8.672% ± 0.025%.



Fig. 6: Top: histogram of $R_F$ in 500 independent runs. Bottom: the convergence curves of $R_F$ for a single reconstruction of the S. pombe yeast spore by GPS-R, OSS, and HIO.


4.2.2. Nanorice

To demonstrate the generality of GPS, we also test it on experimental data from inorganic samples. The diffraction patterns shown in the top row of Fig. 7, from ellipsoidal iron oxide nanoparticles (referred to as 'nanorice1' and 'nanorice2'), were taken at the AMO instrument at LCLS at an X-ray energy of 1.2 keV [46]. These data are freely available online on the CXIDB [47]. We choose default parameters for these experiments: $t = 1$, $s = 0.9$. The sequence of low-pass filters is chosen to be the same as in OSS [16]. The fidelity parameter $\sigma$ is chosen small for the first 800 iterations, specifically $\sigma \in [0, 0.01]$, to produce oscillations, which are necessary for the algorithm to escape bad local minima. Once the reconstruction arrives at a good local minimum region, we increase $\sigma$ to reduce oscillations. This later value of $\sigma$ depends on the noise level and the data. We tested different values of $\sigma$, and $\sigma = 1$ was found to be optimal for both nanorice data sets. Figure 7 shows OSS (second row) and GPS-F (third row) reconstructions of the two nanorice particles. Figure 8 again shows that GPS obtains more consistent and faithful reconstructions than OSS. GPS-F with $\sigma = 1$ converges to a lower relative error than OSS at all times; OSS cannot reach a lower relative error because $\sigma = 0$ does not work in this case. In general, alternating projection methods do not treat noise correctly; here, for example, HIO keeps oscillating but does not converge, so its results are omitted. The better approach, OSS, reduces oscillations by smoothing, but this is not enough. In contrast, the least squares fidelity $g_\sigma(z) = \frac{1}{2\sigma}\||z| - b\|^2$ of GPS works for noise removal, since relaxing the constraints allows GPS to reach a lower relative error. The values of $\sigma$ depend on the noise level and type. To optimize the convergence of GPS-F on nanorice2, we apply $\sigma = 0.01$ for the first 400 iterations, $\sigma = 0.1$ for the next 300 iterations, and $\sigma = 1$ for the last 300 iterations.
This test shows the effect of $\sigma$ on the convergence. OSS ($\sigma = 0$) oscillates the most; GPS with $\sigma = 0.01, 0.1, 1$ oscillates less and less. As $\sigma$ increases, GPS also reaches a lower $R_F$, and the algorithm finally settles at a stable minimum as $\sigma$ goes up to 1. Continuing to increase $\sigma$ does not further improve $R_F$. Choosing a large $\sigma$ at the beginning may reduce oscillations, but also limits the mobility needed to skip local minima. We recommend starting with a small $\sigma$ and then gradually increasing it until the iterative process reaches a stable minimum.


Fig. 7: Diffraction patterns of nanorice1 and nanorice2, size 253 × 253 (first column), and reconstructions using OSS (RF = 18.23%, 16.32%) and GPS-F (RF = 17.40%, 15.83%), respectively. GPS obtains less noise on the boundary and lower relative error RF.



Fig. 8: Histogram (first row) and convergence (second row) of OSS and GPS-F on nanorice1 (first column) and nanorice2 (second column). The results of HIO are omitted due to lack of convergence.


5. Conclusion

In this work, we have developed GPS, a fast and robust phase retrieval algorithm for the reconstruction of images from noisy diffraction intensities. Similar to [34], the Moreau-Yosida regularization is used to relax the hard constraints considered in the noiseless model. GPS utilizes a primal-dual algorithm and a noise-removal technique, in which the $\ell_2$ gradient smoothing is effectively applied to the dual variable in either real or Fourier space. GPS shows more reliable and consistent results than OSS and HIO for the reconstruction of weakly scattering objects such as biological specimens. Looking forward, we aim to explore the role of dual variables in non-convex optimization. Smoothing the dual variable, which is equivalent to smoothing the gradient of the convex conjugate, represents a new and effective technique that can in principle be applied to other non-smooth, non-convex problems.

Appendix

Proximal mapping on complex domain

We extend the definition of the Moreau-Yosida regularization [32, 33] to the complex domain. Let $G \in \mathbb{C}^{n\times n}$ be a Hermitian positive definite matrix. The Moreau-Yosida regularization of a lower semi-continuous extended-real-valued function $h : \mathbb{C}^n \to (-\infty, \infty]$, associated with $G$, is defined by

$h_G(u) := \inf_{v\in\mathbb{C}^n}\Big\{ h(v) + \frac{1}{2}\|v - u\|_{G^{-1}}^2 \Big\}.$

We see that $h_G$ converges pointwise to $h$ as $\|G\| \to 0^+$. In the special case where $G = tI$ is a multiple of the identity matrix with $t > 0$, $h_G$ reduces to

$h_t(u) := \inf_{v\in\mathbb{C}^n}\Big\{ h(v) + \frac{1}{2t}\|v - u\|^2 \Big\}.$

For any indicator function h=X of a closed set Xn,

$h_t(u) = \frac{1}{2t}\inf_{v\in X}\|v - u\|^2 = \frac{1}{2t}\big\|u - \mathrm{proj}_X(u)\big\|^2$
is $\frac{1}{2t}$ times the squared $\ell_2$ distance from $u$ to the set $X$. A similar idea of relaxing an indicator function into a distance function has been successfully applied to the quantization problem of deep neural networks in [34]. Taking $X = \{z \in \mathbb{C}^n : |z| = b\}$ to be the magnitude constraint set and $\sigma > 0$, we first relax $g = \chi_{\{|z| = b\}}$ in (3) into
$g_\sigma(z) := \frac{1}{2\sigma}\inf_{|v| = b}\|v - z\|^2.$

Since $\mathrm{proj}_{\{|z| = b\}}(z) = b\odot\exp(i\cdot\arg(z))$ is the projection of $z$ onto the set $\{z \in \mathbb{C}^n : |z| = b\}$, a simple calculation shows that

$g_\sigma(z) = \frac{1}{2\sigma}\big\|b\odot\exp(i\cdot\arg(z)) - z\big\|^2 = \frac{1}{2\sigma}\big\|(b - |z|)\odot\exp(i\cdot\arg(z))\big\|^2 = \frac{1}{2\sigma}\big\||z| - b\big\|^2$
is a least squares fidelity term, which measures the difference between the observed magnitudes and the fitted ones.
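This calculation is easy to verify numerically; a small Python check (our own sketch) confirms that the distance from $z$ to the magnitude set is attained at $b \odot \exp(i\cdot\arg(z))$ and equals $\|\,|z| - b\,\|$:

```python
import numpy as np

rng = np.random.default_rng(3)
z = rng.standard_normal(16) + 1j * rng.standard_normal(16)
b = np.abs(rng.standard_normal(16))

# Candidate minimizer: the projection b * exp(i*arg(z)).
v_star = b * np.exp(1j * np.angle(z))

# Its distance to z collapses to || |z| - b ||.
assert np.isclose(np.linalg.norm(v_star - z),
                  np.linalg.norm(np.abs(z) - b))

# No other point with |v| = b does better: random phases cannot improve it.
for _ in range(100):
    v = b * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, 16))
    assert np.linalg.norm(v - z) >= np.linalg.norm(v_star - z) - 1e-12
```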

Generalized Legendre-Fenchel transformation

We can express any function h : ℂn → ℝ as a function h˜ defined on ℝ2n in the following way

$h(u) := \tilde{h}(\mathrm{Re}(u), \mathrm{Im}(u)) = \tilde{h}(\tilde{u}),$
where $\tilde{u} = [\mathrm{Re}(u); \mathrm{Im}(u)] \in \mathbb{R}^{2n}$ and $\tilde{h} : \mathbb{R}^{2n} \to \mathbb{R}$. We say that $h$ is convex if $\tilde{h}$ is convex on $\mathbb{R}^{2n}$. Note that for any $u, y \in \mathbb{C}^n$,
$\mathrm{Re}\langle u, y\rangle = \langle \mathrm{Re}(u), \mathrm{Re}(y)\rangle + \langle \mathrm{Im}(u), \mathrm{Im}(y)\rangle = \langle \tilde{u}, \tilde{y}\rangle.$

We propose to generalize the Legendre-Fenchel transformation (a.k.a. convex conjugate) [48] of an extended-real-valued convex function h defined on ℂn as

$h^*(y) := \sup_{u\in\mathbb{C}^n}\big\{ \mathrm{Re}\langle y, u\rangle - h(u) \big\}.$

Infimal convolution and Moreau-Yosida decomposition

By Eq. (5), $f_G$ is the infimal convolution [49] of the convex functions $f$ and $\frac{1}{2}\|\cdot\|_{G^{-1}}^2$. The Moreau-Yosida decomposition decouples the Legendre-Fenchel transformation of $f_G$:

$f_G^*(y) = f^*(y) + \Big(\frac{1}{2}\|\cdot\|_{G^{-1}}^2\Big)^*(y),$
where $\big(\frac{1}{2}\|\cdot\|_{G^{-1}}^2\big)^*(y) = \frac{1}{2}\|y\|_G^2$. Here we remark that $G$ only needs to be positive semi-definite in Eq. (5), as $f_G$ can take the extended value $\infty$. In that case, although $G^{-1}$ does not exist, since $f_G$ is convex and lower semi-continuous and the strong duality $f_G^{**} = f_G$ holds, we can re-define $f_G$ via the biconjugate as
$f_G(u) = f_G^{**}(u) = \sup_{y\in\mathbb{C}^n}\big\{\mathrm{Re}\langle u, y\rangle - f_G^*(y)\big\} = \sup_{y\in\mathbb{C}^n}\Big\{\mathrm{Re}\langle u, y\rangle - f^*(y) - \frac{1}{2}\|y\|_G^2\Big\} = \sup_{y\in\mathcal{S}^*}\Big\{\mathrm{Re}\langle u, y\rangle - \frac{1}{2}\|y\|_G^2\Big\}.$

Funding

National Science Foundation Science & Technology Center (NSF) (DMR 1548924); U.S. Department of Energy (DOE) (DE-SC0013838)

Acknowledgments

We thank Professor Jose A. Rodriguez for his help and for providing the experimental yeast spore data.

References

1. R. Millane, "Phase retrieval in crystallography and optics," J. Opt. Soc. Am. A 7, 394–411 (1990).

2. J. Miao, T. Ishikawa, I. Robinson, and M. Murnane, "Beyond crystallography: Diffractive imaging using coherent X-ray light sources," Science 348, 530–535 (2015).

3. Y. Shechtman, Y. Eldar, O. Cohen, H. Chapman, J. Miao, and M. Segev, "Phase retrieval with application to optical imaging: a contemporary overview," IEEE Signal Process. Mag. 32, 87–109 (2015).

4. S. Marchesini, "Invited article: A unified evaluation of iterative projection algorithms for phase retrieval," Rev. Sci. Instrum. 78, 011301 (2007).

5. J. Miao, D. Sayre, and H. Chapman, "Phase retrieval from the magnitude of the Fourier transforms of nonperiodic objects," J. Opt. Soc. Am. A 15, 1662–1669 (1998).

6. R. Gerchberg and W. Saxton, "A practical algorithm for the determination of the phase from image and diffraction plane pictures," Optik 35, 237–246 (1972).

7. J. Fienup, "Reconstruction of an object from the modulus of its Fourier transform," Opt. Lett. 3, 27–29 (1978).

8. J. Miao, P. Charalambous, J. Kirz, and D. Sayre, "Extending the methodology of X-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens," Nature 400, 342–344 (1999).

9. H. Chapman and K. Nugent, "Coherent lensless X-ray imaging," Nat. Photon. 4, 833–839 (2010).

10. I. Robinson and R. Harder, "Coherent X-ray diffraction imaging of strain at the nanoscale," Nat. Mater. 8, 291–298 (2009).

11. J. Miao and D. Sayre, "On possible extensions of X-ray crystallography through diffraction-pattern oversampling," Acta Crystallogr. A 56, 596–605 (2000).

12. J. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. 21, 2758–2769 (1982).

13. C. Chen, J. Miao, C. Wang, and T. Lee, "Application of optimization technique to noncrystalline X-ray diffraction microscopy: Guided hybrid input-output method," Phys. Rev. B 76, 064113 (2007).

14. K. Raines, S. Salha, R. Sandberg, H. Jiang, J. Rodriguez, B. Fahimian, H. Kapteyn, J. Du, and J. Miao, "Three-dimensional structure determination from a single view," Nature 463, 214–217 (2010).

15. A. V. Martin, F. Wang, N. D. Loh, T. Ekeberg, F. R. N. C. Maia, M. Hantke, G. van der Schot, C. Y. Hampton, R. G. Sierra, A. Aquila, S. Bajt, M. Barthelmess, C. Bostedt, J. D. Bozek, N. Coppola, S. W. Epp, B. Erk, H. Fleckenstein, L. Foucar, M. Frank, H. Graafsma, L. Gumprecht, A. Hartmann, R. Hartmann, G. Hauser, H. Hirsemann, P. Holl, S. Kassemeyer, N. Kimmel, M. Liang, L. Lomb, S. Marchesini, K. Nass, E. Pedersoli, C. Reich, D. Rolles, B. Rudek, A. Rudenko, J. Schulz, R. L. Shoeman, H. Soltau, D. Starodub, J. Steinbrener, F. Stellato, L. Strüder, J. Ullrich, G. Weidenspointner, T. A. White, C. B. Wunderer, A. Barty, I. Schlichting, M. J. Bogan, and H. N. Chapman, "Noise-robust coherent diffractive imaging with a single diffraction pattern," Opt. Express 20, 16650–16661 (2012).

16. J. Rodriguez, R. Xu, C. Chen, Y. Zou, and J. Miao, "Oversampling smoothness (OSS): an effective algorithm for phase retrieval of noisy diffraction intensities," J. Appl. Crystallogr. 46, 312–318 (2013).

17. H. Bauschke, P. Combettes, and D. Luke, "Hybrid projection-reflection method for phase retrieval," J. Opt. Soc. Am. A 20, 1025–1034 (2003).

18. J. Douglas and H. Rachford, "On the numerical solution of heat conduction problems in two and three space variables," Trans. Amer. Math. Soc. 82, 421–439 (1956).

19. H. Bauschke, P. Combettes, and D. Luke, "Phase retrieval, error reduction algorithm, and Fienup variants: a view from convex optimization," J. Opt. Soc. Am. A 19, 1334–1345 (2002).

20. V. Elser, "Phase retrieval by iterated projections," J. Opt. Soc. Am. A 20, 40–55 (2003).

21. D. R. Luke, "Relaxed averaged alternating reflections for diffraction imaging," Inv. Prob. 21, 37–50 (2005).

22. P. Chen and A. Fannjiang, "Fourier phase retrieval with a single mask by Douglas-Rachford algorithms," Appl. Comput. Harmon. Anal. 44, 665–699 (2018).

23. E. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval via Wirtinger flow: Theory and algorithms," IEEE Trans. Info. Theory 61, 1985–2007 (2015).

24. H. Chang, Y. Lou, M. Ng, and T. Zeng, "Phase retrieval from incomplete magnitude information via total variation regularization," SIAM J. Sci. Comput. 38, A3672–A3695 (2016).

25. H. Chang, S. Marchesini, Y. Lou, and T. Zeng, "Variational phase retrieval with globally convergent preconditioned proximal algorithm," SIAM J. Imaging Sci. 11, 56–93 (2018).

26. Z. Wen, C. Yang, X. Liu, and S. Marchesini, "Alternating direction methods for classical and ptychographic phase retrieval," Inv. Prob. 28, 115010 (2012).

27. A. Chai, M. Moscoso, and G. Papanicolaou, "Array imaging using intensity-only measurements," Inv. Prob. 27, 015005 (2011).

28. E. Candès, Y. Eldar, T. Strohmer, and V. Voroninski, "Phase retrieval via matrix completion," SIAM J. Imaging Sci. 6, 199–225 (2013).

29. E. Candès, T. Strohmer, and V. Voroninski, "PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming," Comm. Pure Appl. Math. 66, 1241–1274 (2013).

30. I. Waldspurger, A. d'Aspremont, and S. Mallat, "Phase recovery, MaxCut and complex semidefinite programming," Math. Program. 149, 47–81 (2015).

31. P. Yin and J. Xin, "PhaseLiftOff: an accurate and stable phase retrieval method based on difference of trace and Frobenius norms," Commun. Math. Sci. 13, 1033–1049 (2015).

32. J.-J. Moreau, "Proximité et dualité dans un espace hilbertien," Bull. de la Société Mathématique de France 93, 273–299 (1965).

33. K. Yosida, Functional Analysis (Springer, 1964).

34. P. Yin, S. Zhang, J. Lyu, S. Osher, Y. Qi, and J. Xin, "BinaryRelax: A relaxation approach for training deep neural networks with quantized weights," SIAM J. Imaging Sci. 11, 2205–2223 (2018).

35. R. Glowinski and A. Marroco, "Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité d'une classe de problèmes de Dirichlet non linéaires," Revue française d'automatique, informatique, recherche opérationnelle. Analyse numérique 9, 41–76 (1975).

36. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations Trends Mach. Learn. 3, 1–122 (2011).

37. M. Yan and W. Yin, "Self equivalence of the alternating direction method of multipliers," in Splitting Methods in Communication, Imaging, Science, and Engineering (Springer, 2016), pp. 165–194.

38. A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," J. Math. Imaging Vis. 40, 120–145 (2010).

39. E. Esser, X. Zhang, and T. Chan, "A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science," SIAM J. Imaging Sci. 3, 1015–1046 (2010).

40. T. Goldstein, M. Li, X. Yuan, E. Esser, and R. Baraniuk, "Adaptive primal-dual hybrid gradient methods for saddle-point problems," https://arxiv.org/abs/1305.0546.

41. D. O'Connor and L. Vandenberghe, "On the equivalence of the primal-dual hybrid gradient method and Douglas-Rachford splitting," https://doi.org/10.1007/s10107-018-1321-1.

42. J. Miao, Y. Nishino, Y. Kohmura, B. Johnson, C. Song, S. H. Risbud, and T. Ishikawa, "Quantitative image reconstruction of GaN quantum dots from oversampled diffraction intensities alone," Phys. Rev. Lett. 95, 085503 (2005).

43. J. Miao, K. Hodgson, T. Ishikawa, C. Larabell, M. Legros, and Y. Nishino, "Imaging whole Escherichia coli bacteria by using single-particle X-ray diffraction," Proc. Natl. Acad. Sci. USA 100, 110–112 (2003).

44. C. Song, K. Tono, J. Park, T. Ebisu, S. Kim, H. Shimada, S. Kim, M. Gallagher-Jones, D. Nam, T. Sato, T. Togashi, K. Ogawa, Y. Joti, T. Kameshima, S. Ono, T. Hatsui, S. Iwata, M. Yabashi, and T. Ishikawa, "Multiple application X-ray imaging chamber for single-shot diffraction experiments with femtosecond X-ray laser pulses," J. Appl. Crystallogr. 47, 188–197 (2014).

45. H. Jiang, C. Song, C.-C. Chen, R. Xu, K. S. Raines, B. P. Fahimian, C.-H. Lu, T.-K. Lee, A. Nakashima, J. Urano, T. Ishikawa, F. Tamanoi, and J. Miao, "Quantitative 3D imaging of whole, unstained cells by using X-ray diffraction microscopy," Proc. Natl. Acad. Sci. USA 107, 11234–11239 (2010).

46. S. Kassemeyer, J. Steinbrener, L. Lomb, E. Hartmann, A. Aquila, A. Barty, A. V. Martin, C. Y. Hampton, S. Bajt, M. Barthelmess, T. R. Barends, C. Bostedt, M. Bott, J. D. Bozek, N. Coppola, M. Cryle, D. P. DePonte, R. B. Doak, S. W. Epp, B. Erk, H. Fleckenstein, L. Foucar, H. Graafsma, L. Gumprecht, A. Hartmann, R. Hartmann, G. Hauser, H. Hirsemann, A. Hömke, P. Holl, O. Jönsson, N. Kimmel, F. Krasniqi, M. Liang, F. R. Maia, S. Marchesini, K. Nass, C. Reich, D. Rolles, B. Rudek, A. Rudenko, C. Schmidt, J. Schulz, R. L. Shoeman, R. G. Sierra, H. Soltau, J. C. H. Spence, D. Starodub, F. Stellato, S. Stern, G. Stier, M. Svenda, G. Weidenspointner, U. Weierstall, T. A. White, C. Wunderer, M. Frank, H. N. Chapman, J. Ullrich, L. Strüder, M. J. Bogan, and I. Schlichting, "Femtosecond free-electron laser X-ray diffraction data sets for algorithm development," Opt. Express 20, 4149–4158 (2012).

47. F. R. N. C. Maia, "The Coherent X-ray Imaging Data Bank," Nat. Methods 9, 854–855 (2012).

48. R. T. Rockafellar, Convex Analysis (Princeton University Press, 2015).

49. R. Phelps, Convex Functions, Monotone Operators and Differentiability (Springer, 1991).



Figures (8)

Fig. 1: Flowchart of the GPS algorithm for stage 1. For simplicity, we denote ϵ = σ/(σ + 1) in Eq. (14). The input z0 is initialized with random phases and magnitudes equal to the measurements, while y0 is a zero vector. The projection proj S* and the convolution ∗ follow Eq. (19) and Eq. (20), respectively. The zk and yk with the smallest RF are fed as input into the next stage, where a Gaussian kernel Gγ with larger γ is applied.

Fig. 2: First row: vesicle model with log-scaled diffraction pattern (left), zoomed-in image (center), and residual (right). Second row: HIO, RF = 12.87%, Rreal = 21.14%. Third row: OSS, RF = 6.08%, Rreal = 3.59%. Fourth row: GPS-R, RF = 5.90%, Rreal = 2.85%. Fifth row: GPS-F, RF = 5.89%, Rreal = 0.7%. Third column: residual.

Fig. 3: Histograms (first row) and convergence (second row) of RF and Rreal on the vesicle data using HIO, OSS, and GPS. GPS consistently produces smaller RF and Rreal than HIO or OSS. Moreover, GPS converges fastest with the fewest oscillations.

Fig. 4: The (scaled) means and variances of the best five of 500 independent reconstructions by HIO, OSS, GPS-F, and GPS-R.

Fig. 5: S. pombe yeast spore log-scaled diffraction pattern, size 500 × 500 (top). Means (first row) and variances (second row) are computed from the best 5 of 500 independent reconstructions. HIO: RF = 15.697% ± 0.526%; OSS: RF = 9.775% ± 0.202%; GPS: RF = 8.672% ± 0.025%.

Fig. 6: Top: histogram of RF in 500 independent runs. Bottom: convergence curves of RF for a single reconstruction of the S. pombe yeast spore by GPS-RF, OSS, and HIO.

Fig. 7: Diffraction patterns of nanorice1 and nanorice2, size 253 × 253 (first column), and reconstructions using OSS (RF = 18.23%, 16.32%) and GPS-F (RF = 17.40%, 15.83%), respectively. GPS obtains less noise on the boundary and a lower relative error RF.

Fig. 8: Histograms (first row) and convergence (second row) of OSS and GPS-F on nanorice1 (first column) and nanorice2 (second column). The results of HIO are omitted due to lack of convergence.

Tables (2)

Algorithm 1: GPS-R: GPS with smoothing in real space.

Algorithm 2: GPS-F: GPS with smoothing in Fourier space.
