
Turbulence-immune computational ghost imaging based on a multi-scale generative adversarial network

Open Access

Abstract

There is a consensus that turbulence-free images cannot be obtained by conventional computational ghost imaging (CGI), because CGI is only a classical simulation that does not satisfy the conditions of turbulence-free imaging. In this article, we report a turbulence-immune CGI method based on a multi-scale generative adversarial network (MsGAN). The conventional CGI framework is left unchanged; instead, the conventional CGI coincidence-measurement algorithm is optimized by an MsGAN. Thus, a satisfactory ghost image can be reconstructed by training the network, and the visual quality is significantly improved.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI) can be traced to the pioneering work of Shih et al. [1], who exploited biphotons generated via spontaneous parametric down-conversion to realize the first entanglement-based ghost image, following the original proposals by Klyshko [2]. The framework of a quantum entangled light source requires the initial GI scheme to use two light paths [1]. One beam is the reference beam, which never illuminates the object and is measured directly by a detector with spatial resolution. The other beam is the object beam, which, after illuminating the object, is measured by a bucket detector with no spatial resolution. By correlating the photocurrents from the two detectors, one retrieves the “ghost” image. In the debate over the physical essence of GI [3], pseudothermal ghost imaging [4,5] and pure thermal ghost imaging [6] were successively confirmed, which marked great progress for GI. However, conventional GI is not well suited to practical applications because the two-optical-path structure limits the flexibility of the optical system. Fortunately, Shapiro proposed a single-optical-path GI scheme: computational ghost imaging (CGI) [7,8]. Because it has only one optical path, its imaging framework is closer to classical optical imaging. Consequently, it has important potential applications in remote sensing [9–11], lidar [12–16], and night-vision imaging [17,18].

Atmospheric turbulence is a serious problem for satellite remote sensing and for classical aircraft-to-ground imaging. Surprisingly, Meyers et al. found in 2011 that a turbulence-free image could be obtained by conventional dual-path GI [19–21]. This unique and practical property is an important milestone for optical imaging, because refractive-index fluctuations introduced anywhere in the optical path do not affect the image quality. Shih et al. revealed that the turbulence-free effect is due to two-photon interference [21,22]. Li et al. summarized the necessary conditions for turbulence-free imaging [23]. However, CGI is a classical simulation of GI, so it does not satisfy these conditions [22]. At present, this conclusion is generally accepted, and realizing turbulence-free CGI remains a challenging task. Fortunately, machine learning, e.g., the generative adversarial network, provides a promising solution [24–26].

In this article, we demonstrate that turbulence-immune CGI can be realized by a multi-scale generative adversarial network (MsGAN). In this scheme, the basic framework of conventional CGI is unchanged; instead, we optimize the coincidence-measurement algorithm of CGI with an MsGAN. In an atmospheric turbulence environment, satisfactory ghost images can be reconstructed by training the network. In the following, we illustrate this method theoretically and experimentally.

2. Theory and results

We depict the scheme in Fig. 1. A quasi-monochromatic laser illuminates an object $T\left ( \rho \right )$; the reflected light carrying the object’s information is received and modulated by a spatial light modulator (SLM). A photomultiplier tube (PMT) collects the light intensity $E_{di}\left ( \rho,t\right )$. Correspondingly, the calculated light field $E_{ci}\left ( \rho^{\prime},t\right )$ can be obtained from diffraction theory. By processing the two signals with the conventional CGI algorithm, the object’s image can be reconstructed, i.e.,

$$\begin{aligned}G(\rho,\rho^{\prime}) & =\frac{1}{n} {\displaystyle\sum_{i=1}^{n}} \left( \left\langle \left\vert E_{di}(\rho,t)\right\vert ^{2}\left\vert E_{ci}(\rho^{\prime},t)\right\vert ^{2}\right\rangle \right.\\ & \left. -\left\langle \left\vert E_{di}(\rho,t)\right\vert ^{2} \right\rangle \left\langle \left\vert E_{ci}(\rho^{\prime},t)\right\vert ^{2}\right\rangle \right) , \end{aligned}$$
where $\left \langle \cdot \right \rangle$ is an ensemble average. The subscript $i=1,2, \ldots n$ denotes the $i$th measurement, and $n$ is the total number of measurements.
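For concreteness, a minimal numerical sketch of this correlation computation is given below, assuming the bucket values $\left\vert E_{di}\right\vert ^{2}$ and the computed reference intensities $\left\vert E_{ci}(\rho^{\prime})\right\vert ^{2}$ are already available as arrays; the function and variable names are ours and are not part of the authors' code.

```python
import numpy as np

def cgi_reconstruction(bucket, reference):
    """Second-order correlation reconstruction of Eq. (1).

    bucket:    shape (n,)       -- bucket-detector values |E_di|^2, one per measurement
    reference: shape (n, H, W)  -- computed reference intensities |E_ci(rho')|^2
    Returns the ghost image G(rho') of shape (H, W).
    """
    bucket = np.asarray(bucket, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # <|E_di|^2 |E_ci|^2>: correlate each reference pattern with its bucket value
    cross = np.tensordot(bucket, reference, axes=(0, 0)) / len(bucket)
    # subtract <|E_di|^2><|E_ci|^2> to remove the background term
    return cross - bucket.mean() * reference.mean(axis=0)
```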

Fig. 1. Setup of turbulence-immune computational ghost imaging. BS: beam splitter; SLM: spatial light modulator; PMT: photomultiplier tube; BI: blurred image; RI: reconstructed image. The objects (Rubik’s Cubes) shown in the figures of this article were photographed in our laboratory; there are no copyright issues.

The flow chart of the MsGAN is shown in Fig. 2. The scheme consists of three main parts: (i) a conventional CGI algorithm that produces the distorted images; (ii) the generator G of the MsGAN, which, through continuous training, converts its input data into generated sample images; and (iii) the discriminator D of the MsGAN, which distinguishes the generated samples from real images.
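To make the interplay of these three parts concrete, a generic adversarial training step is sketched below. It uses a standard binary cross-entropy objective purely for illustration (assuming D ends in a sigmoid); the losses actually used in this work are given in Eqs. (6)–(9) below, and all names here (functions, optimizers, arguments) are placeholders rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def train_step(G, D, opt_G, opt_D, cgi_blurred, real, content_loss):
    """One adversarial training step: G restores a distorted CGI image, and D
    tries to tell restored images from real ones (placeholder objective)."""
    # (ii) Generator maps the distorted CGI image to a reconstructed sample.
    fake = G(cgi_blurred)

    # (iii) Discriminator learns to separate generated samples from real images.
    opt_D.zero_grad()
    d_real = D(real)
    d_fake = D(fake.detach())
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    opt_D.step()

    # Generator update: fool D while staying close to the ground-truth image.
    opt_G.zero_grad()
    d_fake = D(fake)
    g_loss = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake)) + content_loss(fake, real)
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```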

Fig. 2. Network structure of the MsGAN.

The network structure of generator G is shown in Fig. 3. The basic framework is a symmetric U-Net architecture, to which multi-scale attention feature extraction units and a multi-level feature dynamic fusion unit are added. The multi-scale attention feature extraction units use convolution kernels of different sizes to extract multi-scale feature information over larger receptive fields. The multi-level feature dynamic fusion unit dynamically fuses feature maps of different scales by adjusting their weights, mining the semantic information of different levels. Since a U-Net loses detailed image features during encoding and decoding, skip connections are added as hierarchical semantic guidance to improve the visual details of the reconstructed image. The network consists mainly of a fully convolutional part and an up-sampling part. The fully convolutional part of the U-Net consists of a pre-trained convolution module and multi-scale attention feature extraction units. The pre-trained convolution module uses the convolution layers and max-pooling layers of the Inception-ResNet-v2 backbone network.
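A minimal structural sketch of such a generator is given below. The channel counts, the depth, and the plain convolutions standing in for the Inception-ResNet-v2 stem and for the multi-scale attention units are our own placeholders; only the overall shape (contracting path, up-sampling path with skip connections, and a global input-to-output residual) follows the description above, and the input size is assumed to be divisible by 8.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneratorSketch(nn.Module):
    """U-Net-style generator: encoder stem, two further encoder stages standing
    in for the multi-scale attention units, up-sampling with skip connections,
    and a global residual from input to output."""
    def __init__(self, ch=32):
        super().__init__()
        # Stand-in for the Inception-ResNet-v2 convolution + max-pooling stem.
        self.stem = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # Stand-ins for the multi-scale attention feature extraction units.
        self.enc1 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.enc2 = nn.Sequential(nn.Conv2d(2 * ch, 4 * ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # Decoder convolutions applied after up-sampling and skip concatenation.
        self.dec2 = nn.Conv2d(4 * ch + 2 * ch, 2 * ch, 3, padding=1)
        self.dec1 = nn.Conv2d(2 * ch + ch, ch, 3, padding=1)
        self.out = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x):
        f0 = self.stem(x)                                   # 1/2 resolution
        f1 = self.enc1(f0)                                  # 1/4 resolution
        f2 = self.enc2(f1)                                  # 1/8 resolution
        u2 = F.interpolate(f2, scale_factor=2, mode='bilinear', align_corners=False)
        d2 = F.relu(self.dec2(torch.cat([u2, f1], dim=1)))  # skip connection from f1
        u1 = F.interpolate(d2, scale_factor=2, mode='bilinear', align_corners=False)
        d1 = F.relu(self.dec1(torch.cat([u1, f0], dim=1)))  # skip connection from f0
        y = F.interpolate(d1, scale_factor=2, mode='bilinear', align_corners=False)
        # Global skip connection: the network only has to learn the residual.
        return x + self.out(y)
```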

Fig. 3. Network structure of the generator G. 1 represents the pre-trained convolution module; 2 represents a multi-scale attention feature extraction unit; 3 represents an up-sampling layer; and 4 represents a multi-level feature dynamic fusion unit.

The multi-scale attention feature extraction units are composed of multi-branch convolution layers and an attention layer (Fig. 4). The multi-branch convolution layers consist of juxtaposed dilated convolutions of different sizes, corresponding to different receptive fields, to extract diverse features. Three branches with receptive fields of $3\times 3$, $5\times 5$, and $7\times 7$ simultaneously extract features of the input images. After the multi-scale feature maps are obtained, the concatenated feature maps are readjusted to the input size by a convolution operation.
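One way to realize three parallel branches with $3\times 3$, $5\times 5$, and $7\times 7$ receptive fields is to use dilated $3\times 3$ kernels with dilation rates 1, 2, and 3, as in the sketch below; the exact kernel and dilation choices, the channel counts, and the $1\times 1$ fusion convolution are our assumptions.

```python
import torch
import torch.nn as nn

class MultiBranchConv(nn.Module):
    """Three juxtaposed dilated 3x3 convolutions with effective receptive fields
    of 3x3, 5x5, and 7x7; the concatenated multi-scale features are projected
    back to the input channel count by a 1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in (1, 2, 3)]
        )
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(b(x)) for b in self.branches]       # multi-scale feature maps
        return self.act(self.fuse(torch.cat(feats, dim=1)))   # readjust to the input size
```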

Fig. 4. Structure of multi-scale attention feature extraction units.

During feature extraction, an attention mechanism is introduced to assign a different weight to each channel feature, distinguishing the low-frequency parts of an image (smooth or flat areas) from the high-frequency parts (lines, edges, and textures) so that the network attends to and learns the key content of the image. First, the spatial information of each channel is compressed by global average pooling of its global context:

$$z_{c}=G_{p}\left( X_{c}\right) =\frac{1}{H\times W} {\displaystyle\sum_{i=1}^{H}} {\displaystyle\sum_{j=1}^{W}} X_{c}\left( i,j\right)$$
where $X_{c}$ is the aggregated convolution feature map with size $H\times W\times C$, and $z_{c}$ is the compressed global pooling output with size $1\times 1\times C$. The ReLU and sigmoid activation functions implement a gating principle that learns the nonlinear synergy and mutual-exclusion relationships between channels, and the attention mechanism can be expressed as
$$r_{c} =\sigma\left\{ Conv\left[ \delta\left( Conv\left( z_{c}\right) \right) \right] \right\} , $$
$$\hat{X}_{c} =r_{c}\times X_{c}, $$
where $\delta$ and $\sigma$ are the ReLU and sigmoid activation functions, respectively, $r_{c}$ is the excitation weight, and $\hat{X}_{c}$ is the feature map after adjustment by the attention mechanism. The global pooling output $z_{c}$ passes through a down-sampling convolutional layer and the ReLU activation function, and the channel number is then recovered by an up-sampling convolutional layer. Finally, the sigmoid function is applied to obtain the channel excitation weight $r_{c}$. Each channel of the aggregated convolution feature map $X_{c}$ is multiplied by its weight to obtain the output $\hat{X}_{c}$ of the adaptively adjusted channel attention.
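A compact sketch of this channel attention, following Eqs. (2)–(4), is given below; the channel-reduction ratio of the gating branch is our assumption.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention of Eqs. (2)-(4): global average pooling compresses each
    channel to z_c, two 1x1 convolutions with ReLU/sigmoid gating yield the
    excitation weight r_c, and the input X_c is rescaled channel by channel."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                   # Eq. (2): z_c of size 1x1xC
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),    # channel down-sampling
            nn.ReLU(inplace=True),                            # delta
            nn.Conv2d(channels // reduction, channels, 1),    # channel recovery
            nn.Sigmoid(),                                     # sigma, so r_c lies in (0, 1)
        )

    def forward(self, x):
        r = self.gate(self.pool(x))   # Eq. (3): r_c = sigma{ Conv[ delta( Conv(z_c) ) ] }
        return x * r                  # Eq. (4): X_hat_c = r_c * X_c
```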

The up-sampling part of the U-Net architecture includes up-sampling layers and a multi-level feature dynamic fusion unit [27]. In the up-sampling part of the generator network, the feature maps at different levels contain different instance information. A multi-level feature fusion unit is introduced to enhance information transfer among the feature maps at different levels. In addition, we propose a dynamic fusion structure to resolve feature conflicts between levels: different weights are assigned to the spatial positions of the feature maps, so that valid features are retained and contradictory information is filtered out through learning. First, the feature maps of different scales are adjusted to the same size by up-sampling; spatial weights are then assigned to the feature maps of different levels during fusion to find the optimal fusion strategy. This can be expressed as

$$F^{{\ast}}=\omega_{1}\times F^{1\uparrow}+\omega_{2}\times F^{2\uparrow} +\omega_{3}\times F^{3\uparrow}+\omega_{4}\times F^{4\uparrow},$$
where $\omega _{i}$ is the weight of the $i$th-level feature map, $F^{i\uparrow }$ is the $i$th feature map after up-sampling to a uniform size, and $F^{\ast }$ is the final feature map output by dynamically fusing all levels through adaptive weight allocation. Finally, generator G introduces a skip connection directly from the input to the output, which forces the model to focus on learning the residual.
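A sketch of this dynamic fusion step is shown below. Equation (5) fixes only the weighted sum; the way the weights $\omega_{i}$ are predicted here (one $1\times 1$ convolution per level followed by a per-pixel softmax) and the assumption that all levels share the same channel count are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFusion(nn.Module):
    """Multi-level feature dynamic fusion of Eq. (5): feature maps from four
    levels are up-sampled to a common size and combined with learned spatial
    weights omega_1..omega_4 that sum to one at every pixel."""
    def __init__(self, channels, levels=4):
        super().__init__()
        self.weight_pred = nn.ModuleList([nn.Conv2d(channels, 1, 1) for _ in range(levels)])

    def forward(self, feats, size):
        # F^{i up}: up-sample every level to the common target size
        ups = [F.interpolate(f, size=size, mode='bilinear', align_corners=False) for f in feats]
        # omega_i: per-pixel weights, normalized across levels with a softmax
        logits = torch.cat([p(u) for p, u in zip(self.weight_pred, ups)], dim=1)
        w = torch.softmax(logits, dim=1)
        # F* = sum_i omega_i * F^{i up}
        return sum(w[:, i:i + 1] * ups[i] for i in range(len(ups)))
```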

We input the images generated by generator G and the real images into discriminator D for discrimination. Discriminator D adopts the PatchGAN structure and consists of four convolution layers with $4\times 4$ convolution kernels [28,29]. The adversarial loss is

$$\begin{aligned}L_{GAN} & =\mathbb{E}\left[ \left\Vert \log D\left( I_{gen}\right) \right\Vert _{2}^{2}\right]\\ & +\mathbb{E}\left[ \left\Vert \log\left( 1-D\left( I_{gt}\right) \right) \right\Vert _{2}^{2}\right] , \end{aligned}$$
where $I_{gt}$ denotes real images and $I_{gen}$ denotes generated images. For the content loss of image reconstruction, the mean square error between the generated and target images is chosen in order to obtain a higher peak signal-to-noise ratio (PSNR). The mean square error loss is
$$L_{MSE}=\frac{1}{M\times N} {\displaystyle\sum_{i=1}^{M}} {\displaystyle\sum_{j=1}^{N}} \left\Vert I_{gt,ij}-I_{gen,ij}\right\Vert ^{2},$$
where $M\times N$ is the image size. In addition, a perceptual (visual) loss is introduced as
$$L_{perc}= {\displaystyle\sum_{i=1}^{L}} \frac{1}{H_{i}W_{i}C_{i}}\left( \phi_{i}\left( I_{gt}\right) -\phi _{i}\left( I_{gen}\right) \right) ^{2}.$$

where $\phi_{i}\left( \cdot \right)$ denotes the feature map extracted from the $i$th layer of a feature-extraction network, $H_{i}\times W_{i}\times C_{i}$ is its size, and $L$ is the number of layers used. Thus, the total loss function is defined as

$$L=\alpha L_{MSE}+\beta L_{perc}+\gamma L_{GAN}.$$
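The combined objective can be sketched as follows, with the weights $\alpha$, $\beta$, $\gamma$ set to the values reported in the experimental section below; the discriminator D (assumed to output values in $(0,1)$) and the feature-extraction layers $\phi_{i}$ are placeholders for whatever networks are actually used.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()

def total_loss(D, phi_layers, I_gen, I_gt, alpha=0.5, beta=0.01, gamma=0.01):
    """Total objective L = alpha*L_MSE + beta*L_perc + gamma*L_GAN of Eq. (9).
    D is the PatchGAN discriminator; phi_layers is a list of feature extractors
    for the perceptual loss. Both are placeholders."""
    # Content loss, Eq. (7): pixel-wise mean square error
    l_mse = mse(I_gen, I_gt)
    # Perceptual loss, Eq. (8): mean squared feature differences over the layers
    l_perc = sum(torch.mean((phi(I_gt) - phi(I_gen)) ** 2) for phi in phi_layers)
    # Adversarial term, written to follow the form of Eq. (6) literally
    eps = 1e-8
    l_gan = torch.mean(torch.log(D(I_gen) + eps) ** 2) + \
            torch.mean(torch.log(1.0 - D(I_gt) + eps) ** 2)
    return alpha * l_mse + beta * l_perc + gamma * l_gan
```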

The experimental setup is shown schematically in Fig. 1. A standard monochromatic laser (30 mW, Changchun New Industries Optoelectronics Technology Co., Ltd. MGL-III-532) with wavelength $\lambda =532$ nm illuminates a $50:50$ beam splitter. The reflected light illuminates an object; the objects used in this article were photographed in our laboratory, so there are no copyright issues. The light reflected by the object passes through the beam splitter, and the transmitted light is modulated by a two-dimensional amplitude-only ferroelectric liquid-crystal spatial light modulator (Meadowlark Optics A512-450-850) with $512\times 512$ addressable $15\,\mu m\times 15\,\mu m$ pixels. Finally, a photomultiplier tube (PMT) collects the modulated light and outputs photocurrent signals. Correspondingly, the reference signal is obtained in software. Atmospheric turbulence is introduced by an improved turbulence-simulation device: we chose 550$^{\circ }$C as strong turbulence [19], and turbulence of lower intensities is obtained by reducing the temperature (400$^{\circ }$C and 200$^{\circ }$C). In data processing, the hyperparameters of the loss function are set to $\alpha =0.5$, $\beta =0.01$, $\gamma =0.01$. Adam is used for parameter optimization during training, and the batch size is set to 1 [30,31]. For the data set, 173 and 35 real object images (Rubik’s Cubes only) are collected as the training set and test set, respectively.

The effect of turbulence on a classical image is easily observed in Figs. 5(A)(b-d), where three classical images were taken by a conventional CCD camera. The experimental results show that classical images are significantly blurred by turbulence; moreover, the image distortion becomes more severe with increasing turbulence intensity. Figures 5(B)(b-d) show the experimental results of our method. To the human eye, the reconstructed computational ghost images have better quality than the turbulence-blurred images and are similar to the images without turbulence. To quantitatively evaluate the method, we use the structural similarity (SSIM) and PSNR to measure image quality [32,33]. Figures 5(C) and 5(D) show that the SSIM and PSNR of the reconstructed images are significantly better than those of the blurred classical images. Admittedly, the SSIM and PSNR also show that there is still a gap between the images obtained by this method and fully turbulence-free images; however, in terms of visual effect, the difference is acceptable. Consequently, we call this method turbulence-immune CGI. To further verify its effectiveness, we chose other Rubik’s Cubes as objects and repeated the experiment. The results are presented in Fig. 6. The two experiments show that the method performs consistently on different objects. Moreover, the experimental results show that the SSIM and PSNR of the reconstructed images are 35.6% and 71.3% higher than those of the turbulence-degraded images in a strong atmospheric turbulence environment; in a weak turbulence environment, the SSIM and PSNR increase by 10.3% and 16.1% on average. The method is therefore more effective in a strong turbulence environment.
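For reference, a short sketch of how the SSIM and PSNR values can be computed with scikit-image is given below; the turbulence-free image serves as the reference, and the array names are ours.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_quality(reference, reconstructed):
    """SSIM and PSNR of a reconstructed image against the turbulence-free
    reference, as plotted in Figs. 5(C)-(D) and 6(C)-(D). Both inputs are
    assumed to be 2-D grayscale arrays on the same intensity scale."""
    reference = np.asarray(reference, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    data_range = reference.max() - reference.min()
    ssim = structural_similarity(reference, reconstructed, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=data_range)
    return ssim, psnr
```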

Fig. 5. A(a) and B(a) are the classical image and the computational ghost image without atmospheric turbulence, respectively. A(b-d): the blurred classical images caused by different intensities of atmospheric turbulence. B(b-d): the corresponding computational ghost images reconstructed by the MsGAN. (C) and (D): the SSIM and PSNR values of the images under different intensities of atmospheric turbulence.

Fig. 6. A(a) and B(a) are the classical image and the computational ghost image without atmospheric turbulence, respectively. A(b-d): the blurred classical images caused by different intensities of atmospheric turbulence. B(b-d): the corresponding computational ghost images reconstructed by the MsGAN. (C) and (D): the SSIM and PSNR values of the images under different intensities of atmospheric turbulence.

3. Conclusion

Turbulence-immune computational ghost imaging was demonstrated in this article. Although the CGI framework does not satisfy the conditions of turbulence-free imaging, a satisfactory ghost image can be reconstructed with the MsGAN method in an atmospheric turbulence environment. We hope that this method provides a promising solution for overcoming atmospheric turbulence in applications of CGI and single-pixel imaging.

Funding

National Natural Science Foundation of China (11574178, 11704221, 61675115); Taishan Scholar Foundation of Shandong Province (tsqn201812059).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

2. D. Klyshko, “Combined EPR and two-slit experiments: Interference of advanced waves,” Phys. Lett. A 132(6-7), 299–304 (1988). [CrossRef]  

3. B. I. Erkmen and J. H. Shapiro, “Ghost imaging: From quantum to classical to computational,” Adv. Opt. Photonics 2(4), 405 (2010). [CrossRef]  

4. A. Valencia, G. Scarcelli, M. D’Angelo, and Y. Shih, “Two-photon imaging with thermal light,” Phys. Rev. Lett. 94(6), 063601 (2005). [CrossRef]  

5. R. S. Bennink, S. J. Bentley, and R. W. Boyd, “Two-photon coincidence imaging with a classical source,” Phys. Rev. Lett. 89(11), 113601 (2002). [CrossRef]  

6. X. H. Chen, Q. Liu, K. H. Luo, and L. A. Wu, “Lensless ghost imaging with true thermal light,” Opt. Lett. 34(5), 695–697 (2009). [CrossRef]  

7. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

8. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]  

9. B. I. Erkmen, “Computational ghost imaging for remote sensing,” J. Opt. Soc. Am. A 29(5), 782–789 (2012). [CrossRef]  

10. M. Y. Chen, H. Wu, R. Z. Wang, Z. Y. He, H. Li, J. Q. Gan, and G. P. Zhao, “Computational ghost imaging with uncertain imaging distance,” Opt. Commun. 445, 106–110 (2019). [CrossRef]  

11. D. Y. Duan, Z. X. Man, and Y. J. Xia, “Non-degenerate wavelength computational ghost imaging with thermal light,” Opt. Express 27(18), 25187–25195 (2019). [CrossRef]  

12. N. D. Hardy and J. H. Shapiro, “Computational ghost imaging versus imaging laser radar for three-dimensional imaging,” Phys. Rev. A 87(2), 023820 (2013). [CrossRef]  

13. W. L. Gong, C. Q. Zhao, H. Yu, M. L. Chen, W. D. Xu, and S. S. Han, “Three-dimensional ghost imaging lidar via sparsity constraint,” Sci. Rep. 6(1), 26133 (2016). [CrossRef]  

14. C. L. Wang, X. D. Mei, L. Pan, P. W. Wang, W. Li, X. Gao, Z. W. Bo, M. L. Chen, W. L. Gong, and S. S. Han, “Airborne near infrared three-dimensional ghost imaging lidar via sparsity constraint,” Remote. Sens. 10(5), 732 (2018). [CrossRef]  

15. W. L. Gong, H. Yu, C. Q. Zhao, Z. W. Bo, M. L. Chen, and W. D. Xu, “Improving the imaging quality of ghost imaging lidar via sparsity constraint by time-resolved technique,” Remote. Sens. 8(12), 991 (2016). [CrossRef]  

16. C. J. Deng, L. Pan, C. L. Wang, X. Gao, W. L. Gong, and S. S. Han, “Performance analysis of ghost imaging lidar in background light environment,” Photonics Res. 5(5), 431–435 (2017). [CrossRef]  

17. H. C. Liu and S. Zhang, “Computational ghost imaging of hot objects in long-wave infrared range,” Appl. Phys. Lett. 111(3), 031110 (2017). [CrossRef]  

18. D. Y. Duan and Y. J. Xia, “Pseudo color night vision correlated imaging without an infrared focal plane array,” Opt. Express 29(4), 4978–4985 (2021). [CrossRef]  

19. R. E. Meyers, K. S. Deacon, and Y. Shih, “Turbulence-free ghost imaging,” Appl. Phys. Lett. 98(11), 111115 (2011). [CrossRef]  

20. R. E. Meyers, K. S. Deacon, and Y. H. Shih, “Positive-negative turbulence-free ghost imaging,” Appl. Phys. Lett. 100(13), 131114 (2012). [CrossRef]  

21. Y. H. Shih, “The physics of turbulence-free ghost imaging,” Technologies 4(4), 39 (2016). [CrossRef]  

22. Y. H. Shih, An Introduction to Quantum Optics: Photon and Biphoton Physics (Series in Optics and Optoelectronics) (Taylor & Francis, 2011).

23. M. F. Li, L. Yan, R. Yang, J. Kou, and Y. S. Liu, “Turbulence-free intensity fluctuation self-correlation imaging with sunlight,” Acta Phys. Sin. 68(9), 094204 (2019). [CrossRef]  

24. C. Zhen, Y. S. Yang, Y. X. Li, and J. J. Zhong, “Atmospheric turbulence image restoration based on multi-scale generative adversarial network,” Comput. Eng. (to be published).

25. C. P. Lau, H. Souri, and R. Chellappa, “ATFaceGAN: single face image restoration and recognition from atmospheric turbulence,” arXiv:1910.03119 (2020).

26. Z. Gao, X. Cheng, K. Chen, A. Wang, Y. Hu, S. Zhang, and Q. Hao, “Computational ghost imaging in scattering media using simulation-based deep learning,” IEEE Photonics J. 12(5), 1–15 (2020). [CrossRef]  

27. V. Dumoulin and F. Visin, “A guide to convolution arithmetic for deep learning,” arXiv:1603.07285v2 (2016).

28. U. Demir and G. Unal, “Patch-based image inpainting with generative adversarial networks,” arXiv:1803.07422 (2018).

29. X. Mao, H. Lee, H. Tseng, S. Ma, and M. Yang, “Mode seeking generative adversarial networks for diverse image synthesis,” arXiv:1903.05628v6 (2019).

30. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv:1412.6980v9 (2017).

31. T. K. Belyalov and R. F. Khabibullin, “An algorithm for determining the optimal batch size,” J. Math. Sci. 50(5), 1797–1799 (1990). [CrossRef]  

32. W. Jiang, X. Li, X. Peng, and B. Sun, “Imaging high-speed moving targets with a single-pixel detector,” Opt. Express 28(6), 7889–7897 (2020). [CrossRef]  

33. X. Yang, Z. Yu, L. Xu, J. Hu, L. Wu, C. Yang, W. Zhang, J. Zhang, and Y. Zhang, “Underwater ghost imaging based on generative adversarial networks with high imaging quality,” Opt. Express 29(18), 28388–28405 (2021). [CrossRef]  
