
Deep sound-field denoiser: optically-measured sound-field denoising using deep neural network

Open Access

Abstract

This paper proposes a deep sound-field denoiser, a deep neural network (DNN)-based method for denoising optically measured sound-field images. Sound-field imaging using optical methods has gained considerable attention due to its ability to achieve high-spatial-resolution imaging of acoustic phenomena that conventional acoustic sensors cannot accomplish. However, optically measured sound-field images are often heavily contaminated by noise because of the low sensitivity of optical interferometric measurements to airborne sound. Here, we propose a DNN-based sound-field denoising method. Time-varying sound-field image sequences are decomposed into harmonic complex-amplitude images by a time-directional Fourier transform. The complex images are converted into two-channel images consisting of real and imaginary parts and denoised by a nonlinear-activation-free network. The network is trained on a sound-field dataset obtained from numerical acoustic simulations with randomized parameters. We compared the method with conventional ones, such as image filters, a spatiotemporal filter, and other DNN architectures, on numerical and experimental data. The experimental data were measured by parallel phase-shifting interferometry and holographic speckle interferometry. The proposed deep sound-field denoiser significantly outperformed the conventional methods on both the numerical and experimental data. Code is available on GitHub (https://github.com/nttcslab/deep-sound-field-denoiser).

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical imaging has recently been used for high-spatial-resolution imaging of acoustic phenomena in airborne sound fields that conventional acoustic sensors cannot accomplish [1]. The acousto-optic effect [2], in which the refractive index of a medium is changed by sound, allows sound to be measured from the optical phase variation. Various optical methods have been used, for example, laser Doppler vibrometry (LDV) [2,3], parallel phase-shifting interferometry (PPSI) [1], and digital holography [4-7]. The applications include investigation of acoustic phenomena [8-10] and sound-field back-projection [2,3,11-13]. Owing to their significant advantages, optical technologies are considered promising as a next-generation acoustic sensing modality.

The sound field measured by a high-speed camera or scanning laser beam can be represented as an image sequence. Each pixel value is proportional to the line integral of the sound pressure along the corresponding optical path, with noise superimposed. Because the phase fluctuation of light caused by audible sound is tiny owing to its physical origin, noise reduction of sound-field images is a fundamental concern. The noise involved in optical imaging of sound fields can include electronic noise of the image sensor; optical noise such as shot noise and laser intensity and phase noise; speckle noise; and environmental disturbances such as seismic vibration and atmospheric turbulence. To reduce these various noises and increase the signal-to-noise ratio, it is necessary to denoise the sound-field data. Whereas sound-field image denoising is typically conducted using filters, as discussed in Section 2.1, no machine-learning-based sound-field image denoising method has been proposed so far.

In this paper, we propose a denoising method for sound-field images that is based on a deep neural network (DNN) (Fig. 1). A sound-field image sequence is Fourier transformed along the time direction at each pixel, which yields complex-amplitude sound-field images corresponding to the frequency bins of the discrete Fourier transform (FT). Then, each complex-amplitude image is converted into a two-channel image consisting of real and imaginary parts and denoised by a trained DNN. To train the network, we generated training datasets by performing acoustic simulations with white and/or speckle noise. Randomizing the simulation parameters ensured variety in the training data. Numerical experiments confirmed that the proposed DNN-based method performs better than conventional methods. We also applied the method to data measured by PPSI and holographic speckle interferometry (HSI). It outperformed conventional methods on these experimental data without a priori knowledge of the sound field.


Fig. 1. Overview of the deep sound-field denoiser. (a) Training process. A sound-field dataset is generated in a 2D acoustic simulation with randomized parameters. Each data sample is a complex-amplitude sound-field image at a harmonic frequency $\omega$. A nonlinear-activation-free network (NAFNet) is trained using the clean and noisy pairs of the simulated sound fields. (b) Inference process. The time-sequential sound-field images are transformed into complex-amplitude images, and each image is denoised by the trained network.


2. Related works

2.1 Sound-field denoising

The physical properties of sound are commonly utilized for designing noise-reduction filters. These filters can be categorized into time-domain processing, spatial-domain processing, and spatiotemporal-frequency-domain processing.

Time-domain processing is typically the first choice. Because sound pressure varies over time, a high-pass filter with a very low cutoff frequency can eliminate static optical phase components and low-frequency fluctuations caused by air fluctuation and seismic vibration. Taking the difference between successive frames of an image sequence, which is a simple high-pass filter, has been used as an easy denoising method for sound-field image sequences [8]. When the frequencies of a measured sound field are known, the noise-reduction performance can be improved by designing an appropriate temporal filter [14].

Spatial-domain processing can be applied independently of the time-domain processing. A spatial filter is applied to the sound-field image at each frame. Since a sound field varies smoothly in space and usually lacks steep edges, typical image-processing filters, such as Gaussian and median filters, are effective [15].

Spatiotemporal-frequency-domain processing utilizes the fact that sound satisfies the equation: $k = \omega / c$, where $k$ is the acoustic wavenumber, $\omega$ is the acoustic angular frequency, and $c$ is the speed of sound. If we consider a two-dimensional space, this equation forms a cone in $k-\omega$ space [15]. Since all of the spatiotemporal components in the recorded images that do not exist on the cone are noise, they can be eliminated by filtering [15,16].

The methods used so far are all classical filters. We developed a DNN-based sound-field denoising method and confirmed that it outperforms these conventional methods.

2.2 Natural image denoising by DNNs

DNNs have been extensively applied to image-denoising tasks and have outperformed classical methods. Convolutional neural networks (CNNs) [17-22] and transformers [23-26] have been widely used. Among the numerous DNNs, the nonlinear-activation-free network (NAFNet) [22] has a simple and efficient structure and has achieved a peak signal-to-noise ratio (PSNR) of 40.30 dB on a smartphone image denoising dataset [27]. We chose this architecture for our sound-field denoiser.

2.3 DNNs for optical metrology

DNNs have been increasingly used in optical metrology [28]. DNNs have been used in many processes, including pre-processing (e.g., fringe denoising [29] and enhancement [30]), analysis (e.g., phase retrieval [31] and phase unwrapping [32]), and post-processing (e.g., phase denoising [33], error compensation [34], and digital refocusing [35]).

Several DNN-based methods have shown high performance in denoising fringe patterns and optical phase maps. For fringe denoising, Yan et al. proposed a deep CNN consisting of 20 layers, where the training dataset was generated from Zernike polynomials and additive white Gaussian noise [29]. Several methods have also applied DNNs to fringes corrupted by speckle noise [36-41]. Similar ideas have been used to denoise optical phase maps [33,42-46].

However, no research has used DNNs to denoise sound-field images measured by optical methods. Since the spatial and temporal features of sound-field images differ from those of interference fringes and typical optical phase maps, the previous methods may not be optimal for sound-field denoising. Our contribution is a DNN-based sound-field denoising method and a training dataset that take into account the physical nature of sound.

3. Methods

3.1 Acousto-optic measurement data

Here, let us briefly review the principle of acousto-optic measurement [2]. The acousto-optic effect is the change in the refractive index of a medium caused by sound. If light propagates along the $z$-axis, the phase shift of the light propagating through a sound field in air is given by

$$\phi_s (x, y, t) = k_L \frac{n_0-1}{\gamma P_0} \int_{z_1}^{z_2} p(x, y, z, t) dz,$$
where $k_L$ is the wavenumber of light, $\gamma$ is the specific heat ratio, $n_0$ and $P_0$ are the refractive index and pressure of air in a static condition, respectively, and $p$ is the sound pressure. The typical values in air are $n_0 = 1.000279$, $\gamma = 1.40$, and $P_0 = 101325$ Pa, and $k_L$ is calculated from the wavelength of light. The phase shift of light is proportional to the sound pressure along the laser path. When sound-field imaging is performed based on this principle, the observed data can be written as a three-dimensional array $\Phi_{\mathrm{noisy}}$ whose elements are of the form $\phi_s (x_i, y_j, t_m)$, where $(i, j)$ is the pixel index and $m$ is the time index. Any processing method that can extract $\phi_s$ from noisy data can be applied.
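To give a sense of scale, the following minimal sketch evaluates the proportionality constant $k_L (n_0-1)/(\gamma P_0)$ for an assumed wavelength of 532 nm (the paper does not state the laser wavelength; the value is chosen purely for illustration):

```python
import numpy as np

# Acousto-optic sensitivity: phi_s = k_L * (n0 - 1) / (gamma * P0) * integral of p dz.
wavelength = 532e-9                           # [m], assumed example value
k_L = 2 * np.pi / wavelength                  # wavenumber of light [rad/m]
n0, gamma, P0 = 1.000279, 1.40, 101325.0      # air at static conditions (values from the text)

sensitivity = k_L * (n0 - 1) / (gamma * P0)   # [rad/(Pa*m)]
print(f"{sensitivity:.2e} rad/(Pa*m)")        # ~2.3e-2 rad/(Pa*m)
# A 1-Pa sound wave over a 0.1-m path shifts the optical phase by only
# ~2.3 mrad, which illustrates why the measured images are noisy.
```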

3.2 DNN-based sound-field denoising

An overview of the inference process is shown in Fig. 1(b). First, temporal Fourier analysis is used to obtain the complex-valued amplitude at each acoustic frequency. A time-domain FT is calculated at every pixel of the time-sequential sound-field images. This can be written as $\Psi _{\mathrm{noisy}} = \mathcal{F}_t[\Phi _{\mathrm{noisy}}]$, where $\mathcal{F}_t$ denotes a 1D FT along the temporal axis. $\Psi _{\mathrm{noisy}}$ contains the complex-valued amplitude at each spatial position and Fourier frequency. Then, for each Fourier frequency, the 2D complex-valued amplitude is converted into a two-channel image with real and imaginary parts. The two-channel complex-amplitude image is normalized and input to the neural network. The network is trained to output a clean complex-amplitude image from the input noisy complex-amplitude image. The output image is multiplied by the reciprocal of the normalization factor to maintain the magnitude of the sound field. After processing all frequencies independently with the same DNN, the denoised complex amplitude, $\Psi _{\mathrm{denoise}}$, is inverse Fourier transformed, and the denoised sound field $\Phi _{\mathrm{denoise}} = \mathcal {F}_t^{-1}[\Psi _{\mathrm{denoise}}]$ is obtained.
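A minimal sketch of this pipeline is given below. It assumes `network` is any callable mapping a (2, H, W) real/imaginary array to a denoised array of the same shape; the peak-magnitude normalization is our assumption, as the text does not specify the normalization factor.

```python
import numpy as np

def denoise_sound_field(phi_noisy, network):
    """phi_noisy: real array of shape (frames, H, W); returns the denoised field."""
    psi_noisy = np.fft.fft(phi_noisy, axis=0)        # 1D FT along the time axis
    psi_denoised = np.empty_like(psi_noisy)
    for f in range(psi_noisy.shape[0]):              # each Fourier frequency bin
        img = psi_noisy[f]
        two_ch = np.stack([img.real, img.imag])      # two-channel image
        scale = np.abs(two_ch).max()                 # normalization factor (assumed)
        scale = scale if scale > 0 else 1.0
        out = network(two_ch / scale) * scale        # denoise, then undo normalization
        psi_denoised[f] = out[0] + 1j * out[1]
    # The inverse FT recovers the denoised time-domain sound field.
    return np.real(np.fft.ifft(psi_denoised, axis=0))
```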

Since the proposed method uses DNNs to denoise two-channel input images, any network capable of image denoising can be used with it. U-Net-based networks are often used in optical metrology [28]. In particular, we chose NAFNet [22], which has excellent performance and can run with relatively little memory and training time.

3.3 Training data

Although optical sound measurements have been actively studied in recent years, no dataset exists for training a neural network on them. It is difficult to collect sound-field data under various conditions through experiments. Therefore, this study used acoustic numerical simulation to create a training dataset, as shown in Fig. 1(a).

A 2D sound-field simulation with randomized parameters was used. To reduce computational cost and complexity, we generated the training data as 2D sound fields instead of computing line integrals of 3D sound fields. Figure 2(a) shows a schematic illustration of the simulation. The inner rectangle is the measurement area, outside of which is the sound source area where point sources are randomly placed. To generate sound fields with diverse spatial characteristics from simple to complex, the number of point sources was varied from 1 to 5, and the position and relative amplitude of each source were randomly assigned. Each true sound field is a superposition of the sound waves generated by these point sources and can be calculated as

$$p_{\mathrm{image, true}} \left(\boldsymbol{r}, k\right) = A \sum^{N}_{i=1} a_i \frac{j}{4} H^{(2)}_0 \left(k |\boldsymbol{r_i} - \boldsymbol{r}| \right),$$
where $\boldsymbol {r}=(x,y)$, $k$ is the magnitude of the acoustic wavenumber, $A$ is a constant determining the overall magnitude of the sound field, $N$ is the number of sound sources, $a_i$ and $\boldsymbol {r_i}=(x_i, y_i)$ are the relative amplitude and position of the $i$th sound source, respectively, and $H^{(2)}_0$ is the Hankel function of the second kind of order zero. The term $(j/4) H^{(2)}_0 \left (k |\boldsymbol {r_i} - \boldsymbol {r}| \right )$ is the Green's function of the 2D Helmholtz equation, which describes the sound field created by a point source [47]. Therefore, each term in the summation represents the sound field of a point source with position $\boldsymbol {r_i}$ and amplitude $a_i$.
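The following sketch generates one such random field using SciPy's Hankel function. The coordinate convention (measurement square centered at the origin) is our assumption; the parameter ranges follow those given in the text below.

```python
import numpy as np
from scipy.special import hankel2

def true_field(k, amps, src_pos, A=0.1, npix=128):
    """Superpose 2D point-source fields (j/4) * H0^(2)(k|r_i - r|) over a
    unit-square measurement area discretized into npix x npix pixels."""
    x = np.linspace(-0.5, 0.5, npix)
    X, Y = np.meshgrid(x, x)
    p = np.zeros((npix, npix), dtype=complex)
    for a, (xi, yi) in zip(amps, src_pos):
        p += a * (1j / 4) * hankel2(0, k * np.hypot(X - xi, Y - yi))
    return A * p

rng = np.random.default_rng(0)
N = rng.integers(1, 6)                                            # 1 to 5 sources
k = rng.uniform(1.26, 40.2)
amps = np.concatenate([[1.0], rng.uniform(0.1, 1.0, N - 1)])      # a_1 fixed to 1
pos = rng.uniform(0.5, 10, (N, 2)) * rng.choice([-1, 1], (N, 2))  # sources outside the area
field = true_field(k, amps, pos)
```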


Fig. 2. (a) Sound-field data generation. Point sources are randomly generated within the sound source area, and the 2D true sound fields in the center area are generated using the Green’s function of the 2D Helmholtz equation. (b) Examples of the generated sound-field data. $N$ represents the number of sound sources. Two examples are shown for each $N$. (c) Histogram of SNR of the generated white noise images.


The true sound fields were created by randomly selecting $k$, $a_i$, and $\boldsymbol {r}_i$ from uniform distributions. The measurement area was a square of side length 1, and the sound source area was ten times larger than that. The random parameters were generated from uniform distributions of $0.1 \leq a_i \leq 1$, $1.26 \leq k \leq 40.2$, $0.5 \leq |x_i| \leq 10$, and $0.5 \leq |y_i| \leq 10$. In this range of wavenumbers, the shortest wavelength is 0.156 times the size of the imaging field of view, while the longest wavelength is 5 times the size of the imaging field of view. The amplitude of the entire sound field was set to $A = 0.1$. These parameters were determined based on the authors’ experience with typical experimental conditions of this measurement technology. $a_1$ was set to 1 regardless of the number of sources to avoid all sources having small amplitudes. The simulated data was calculated by discretizing the measurement area into 128 $\times$ 128 pixels. The top row of Fig. 2(b) shows examples of the generated sound fields. It can be seen that the generated sound fields have different complexities, wavelengths, and directions of arrival.

Two types of noise were added to the training data: additive white Gaussian noise and speckle noise. Because of the lack of knowledge regarding the measurement noise of complex-valued sound fields in the frequency domain, white noise was used to represent such noise. The amplitude of the white noise was randomly selected from a uniform distribution between 0 and $0.1A$. The histogram of the SNR of the 50,000 white-noise images produced is shown in Fig. 2(c); the mean and standard deviation of the SNR were approximately -12.1 dB and 10 dB, respectively. The method of generating the speckle noise data is described in Section 1 of Supplement 1. Examples of the noisy training data are shown in Fig. 2(b). Data with different amounts of noise were generated. Although the differences between white and speckle noise may be difficult to recognize, spatially correlated random patterns appear in the speckle noise images. Such speckle noise can occur, for example, in sound-field observation using electronic speckle pattern interferometry or a holographic interferometer equipped with Fresnel lenses [48].
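As a sketch of the white-noise generation, the snippet below draws a noise amplitude uniformly from $[0, 0.1A]$ and reports the resulting SNR. Interpreting "amplitude" as the per-pixel standard deviation of complex Gaussian noise is our assumption.

```python
import numpy as np

def add_white_noise(field, A=0.1, rng=None):
    """Add complex white Gaussian noise with std drawn from U(0, 0.1*A);
    returns the noisy field and the SNR in dB."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = rng.uniform(0, 0.1 * A)
    noise = sigma * (rng.standard_normal(field.shape)
                     + 1j * rng.standard_normal(field.shape))
    snr_db = 10 * np.log10(np.mean(np.abs(field) ** 2)
                           / np.mean(np.abs(noise) ** 2))
    return field + noise, snr_db
```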

3.4 Implementation details

This study used almost the same network as in the original NAFNet article [22], except for the number of image channels. The network consisted of 32 blocks with a width of 32, two image channels (real and imaginary), and a 128 $\times$ 128 pixel image size. The root mean square error was used as the loss, and Adam was used as the optimizer. For the white noise dataset, the initial learning rate was set to 1e-3, while for the speckle noise dataset, it was set to 2e-4. The learning rate decreased exponentially by a factor of 0.95 per epoch. A total of 50,000 training data were created, 10,000 for each number of sources. The training batch size was 32, and training ran for 50 epochs. It took approximately 10 hours to complete the training process using a single NVIDIA RTX A4000 GPU. The network trained on the white noise dataset is denoted by Ours (W), and the one trained on the speckle noise dataset is denoted by Ours (W+S). The data for evaluation consisted of 2,500 sound fields (500 for each number of sound sources) generated by simulation under the same conditions as those used for generating the training data. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were used as evaluation metrics.
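A minimal PyTorch training loop matching these settings might look as follows; `model` is assumed to be a two-channel NAFNet instance from any available implementation, and `loader` a DataLoader yielding (noisy, clean) pairs of shape (B, 2, 128, 128).

```python
import torch

def train(model, loader, epochs=50, lr=1e-3, device="cuda"):
    """RMSE loss, Adam optimizer, learning rate decaying by 0.95 per epoch."""
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.95)
    for _ in range(epochs):
        for noisy, clean in loader:
            noisy, clean = noisy.to(device), clean.to(device)
            loss = torch.sqrt(torch.mean((model(noisy) - clean) ** 2))  # RMSE
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()  # lr *= 0.95 after each epoch
    return model
```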

3.5 Conventional methods

Six conventional denoising methods were used for comparison with the proposed method: a 2D Gaussian filter, a 2D median filter, non-local means (NLM) [49], block-matching and 3D filtering (BM3D) [50], windowed Fourier filtering (WFF) [51], and a spatiotemporal band-pass filter (ST BPF). The kernel sizes for the Gaussian and median filters were set to 7 pixels. NLM and BM3D used a standard deviation estimated from the noisy image. For NLM, the filtering parameter was experimentally determined to be twice the estimated standard deviation, with a patch size of $7 \times 7$ and a search area of $23 \times 23$. For BM3D, the default parameters of the Python package bm3d were used. These four filters were applied to the real and imaginary parts of the complex-amplitude image separately. WFF has demonstrated superior performance in denoising speckle noise [52]. WFF processed complex-valued images directly without splitting them into real and imaginary parts. The filtering threshold was set to three times the standard deviation of each image, as suggested in [53].
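A sketch of the per-channel application using SciPy filters is shown below; the mapping from the 7-pixel kernel to a Gaussian sigma is our assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def filter_complex(psi, kind="gaussian"):
    """Apply a real-valued image filter to the real and imaginary parts
    separately, as done for the per-channel baselines."""
    if kind == "gaussian":
        filt = lambda x: gaussian_filter(x, sigma=7 / 6)  # ~7-pixel support (assumed mapping)
    else:
        filt = lambda x: median_filter(x, size=7)
    return filt(psi.real) + 1j * filt(psi.imag)
```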

The ST BPF is a spatial frequency filter based on the wave equation [15]. In the wavenumber spectrum, sound components lie on the circle $k = (k_x^2+k_y^2)^{1/2} = \omega /c$. Therefore, noise can be reduced by removing the spatial frequencies that do not satisfy this equation. First, if the input signal is a time series of images, a 2D complex sound field for each frequency is obtained by taking a 1D FT. Next, a fourth-order Butterworth band-pass filter was created and applied to each 2D complex-amplitude image in the wavenumber domain. The lower cutoff frequency was set to $0.5 k$, and the higher cutoff frequency to $1.2 k$, where $k$ is determined by the center frequency of each Fourier frequency bin. Note that since the image resolution was not very high, the bandwidth of the band-pass filter was set wide to avoid removing components broadened by the low-resolution 2D FT. The lower cutoff frequency was determined carefully to avoid erasing too many components near the origin of the wavenumber spectrum.
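One plausible realization of such a radial Butterworth band-pass is sketched below; the exact filter construction used in the paper may differ.

```python
import numpy as np

def st_bandpass(psi, k, dx, order=4, lo=0.5, hi=1.2):
    """Band-pass one 2D complex-amplitude image around the acoustic circle
    k = sqrt(kx^2 + ky^2). `dx` is the pixel pitch; cutoffs are lo*k and hi*k."""
    n = psi.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx)
    kr = np.hypot(KX, KY) + 1e-12                      # avoid division by zero
    # Butterworth magnitudes: high-pass at lo*k times low-pass at hi*k.
    H = 1 / np.sqrt(1 + (lo * k / kr) ** (2 * order))
    H *= 1 / np.sqrt(1 + (kr / (hi * k)) ** (2 * order))
    return np.fft.ifft2(np.fft.fft2(psi) * H)
```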

In addition, for comparison with existing DNN-based methods used for optical fringe denoising, two DNN methods, DnCNN [54] and LRDUNet [41], were employed. DnCNN was originally proposed for natural image denoising [54] and subsequently applied to optical fringe denoising [29,33]. We followed the original network architecture by Zhang et al. with 20 layers. The initial learning rate was set to 1e-4 with exponential decay by a factor of 0.95. LRDUNet was recently proposed by Gurrola-Ramos et al. and showed superior performance for fringe denoising compared with conventional DnCNN and U-Net [41]. The same network architecture as in the original paper was used. The initial learning rate was set to 1e-3 and decayed as for DnCNN. DnCNN and LRDUNet were trained on the same sound-field datasets as the proposed method. Training ran for 50 epochs for both networks.

4. Numerical results

4.1 Denoising of white noise data

Table 1 and Fig. 3 show the evaluation metrics and denoised sound-field images of the conventional and proposed methods for the white-noise data. The table shows that Ours (W) scored the highest in terms of PSNR and SSIM for all $N$ included in the training data. In Section 2 of Supplement 1, generalization results of Ours (W) for $N \in \{6, 7, 8, 9, 10 \}$ are shown. Among the conventional methods, BM3D had the highest overall PSNR, and ST BPF had the highest SSIM. Figure 3 shows that the Gaussian filter smoothed the noisy wavefront but blurred short-wavelength sound waves. The median filter performed worse than the other methods. NLM, BM3D, and WFF restored sound fields relatively well when the noise was not severe, whereas they tended to lose almost all sound-field information when the noise was significant, particularly in the fourth, sixth, seventh, and tenth rows. The ST BPF showed good overall results except at very low wavenumbers, as shown in the second row. When the noise was significant, the denoised sound fields exhibited noticeably different patterns from the true data because noise components with spatial scales comparable to the acoustic wavelength passed through the spatiotemporal filters. The DnCNN model failed to learn an appropriate mapping from the noisy data to clean data. LRDUNet (W) and Ours (W) produced better noise-reduction results than the conventional methods did, regardless of the sound-field parameters, such as the number of sound sources and acoustic wavelength, and the amount of noise. LRDUNet (W+S) and Ours (W+S) seemed to properly restore the wavefronts; nevertheless, their scores were significantly lower than those of LRDUNet (W) and Ours (W). Comparing them with the true data, it is evident that both LRDUNet (W+S) and Ours (W+S) oversimplified the sound field. This may be because these models were trained to eliminate speckle noise, which resulted in oversimplifying data lacking speckle noise.


Fig. 3. Examples of denoised images for white-noise data. Two examples are shown for each $N$. The color bar range for all images is from -1 to 1.



Table 1. PSNR and SSIM of denoising results for white noise data; values are averages over the test dataset.

Figure 4 plots the PSNRs of each denoising method, except for DnCNN, on the 2,500 test data as a function of (a) wavenumber and (b) the PSNR of the input noisy data. Figure 4(a) shows how the denoising performance depends on the wavenumber. The Gaussian filter, NLM, and BM3D performed well at low wavenumbers, but their performance deteriorated as the wavenumber increased; the PSNR values approach the baselines (random data and all-zero data, shown by the solid and dashed lines). The ST BPF had low scores at very low wavenumbers because the spatial-frequency band-pass filter unintentionally eliminated the very low wavenumber components. Most of the data points of LRDUNet (W+S) and Ours (W+S) are aligned with the baseline.


Fig. 4. PSNR plotted as a function of (a) acoustic wavenumber and (b) input noisy data for white-noise data. Solid and dashed lines in (a) indicate the averaged PSNRs of the two baselines; the solid line is for random data from a uniform distribution on the interval (-1, 1), and the dashed line is data with all zeros.


Figure 4(b) indicates the change in PSNR before and after denoising. If denoising leaves the PSNR unchanged, the data points align on the straight line of slope one (black solid line); if the PSNR improves, the points lie above the line. The Gaussian filter, WFF, and ST BPF show a decrease in PSNR when the input noisy data has a high PSNR, i.e., clean data. This suggests that some signal components are inadvertently removed during the processing, leading to a decrease in PSNR. In contrast, NLM, BM3D, LRDUNet (W), and Ours (W) show improvements in most conditions. In particular, LRDUNet (W) and Ours (W) demonstrate substantial enhancements, highlighting the effectiveness of learning-based methods.

4.2 Denoising of speckle noise data

Table 2 and Fig. 5 show the evaluation metrics and denoised sound-field images for the speckle noise data. The conventional filters, LRDUNet (W), and Ours (W) scored lower compared with their white-noise results, while LRDUNet (W+S) and Ours (W+S) scored higher. Figure 5 indicates that the conventional filters struggle to restore the sound fields accurately, especially when the noise is considerable. In addition, the networks trained on the white noise dataset may appear to perform well, but changes in the wavefront shape and a reduction in magnitude are observed. By contrast, LRDUNet (W+S) and Ours (W+S) significantly removed the noise and restored the sound fields, except for LRDUNet (W+S) in the tenth row. The scatter plots of the PSNRs are shown in Fig. 6. The PSNRs of the conventional filters, LRDUNet (W), and Ours (W) leveled off around 20 dB for almost all wavenumbers, close to the zero-data baseline. LRDUNet (W+S) and Ours (W+S) showed significant improvement for most of the data regardless of the wavenumber. These results confirm that the network properly learned the nonlinear transformation caused by speckle noise from the created training dataset.


Fig. 5. Examples of denoised images for the data with white and speckle noises. Two examples are shown for each $N$. The color bar range for all images is from -1 to 1.



Fig. 6. PSNR plotted as a function of (a) acoustic wavenumber and (b) input noisy data for speckle noise data. Solid and dashed lines in (a) indicate the averaged PSNRs of the two baselines; the solid line is for random data from a uniform distribution on the interval (-1, 1), and the dashed line is data with all zeros.



Table 2. PSNR and SSIM of denoising results for data with white and speckle noises; values are averages over the test dataset.

4.3 Denoising of transient and broadband sound field

To verify the validity of the proposed method for transient and broadband signals, it was necessary to create and test sound-field data in the time domain. A transient Gaussian pulse propagating in a free field was simulated using COMSOL Multiphysics. The created sound field is shown in the top row of Fig. 7(a). Note that the Gaussian pulse is a transient and broadband signal because the Fourier transform of a Gaussian is also a Gaussian in the frequency domain.


Fig. 7. Denoising of the transient sound field of the Gaussian pulse propagation. (a) True images and denoised images from the true data, (b) and (c) temporal signals and power spectra extracted from the center 4 $\times$ 4 pixels of the images in (a). (d) Noisy data and denoised images from the noisy data, (e) and (f) temporal signals and power spectra extracted from the center 4 $\times$ 4 pixels of the images in (d). The frames shown in (a) and (d) are from left to right: 10, 14, 18, 22, 26, 30, and 34.


Figure 7(a) also shows the denoising results of the noiseless Gaussian-pulse sound field by the Gaussian filter and Ours (W). As no noise is present in the data, the denoising results should be identical to the true data. The Gaussian filter causes the wavefront's rise to become blurry, whereas Ours (W) remains almost identical to the true data. The waveforms extracted as an average of the 4 $\times$ 4 pixels in the center of the image and their power spectra are shown in Fig. 7(b) and (c). As frequency increases, the deviation from the true data becomes more significant for the Gaussian filter, while Ours (W) maintains almost the same agreement with the true data in most frequency bands.
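For reference, the center-pixel waveform and spectrum extraction used in Fig. 7(b,c) can be sketched as follows; the frame rate argument and array layout are assumptions.

```python
import numpy as np

def center_waveform_and_spectrum(phi, fs, n=4):
    """Average the center n-by-n pixels of each frame into a waveform and
    compute its power spectrum. `phi` has shape (frames, H, W); `fs` in Hz."""
    H, W = phi.shape[1:]
    r, c = H // 2, W // 2
    w = phi[:, r - n // 2 : r + n // 2, c - n // 2 : c + n // 2].mean(axis=(1, 2))
    freqs = np.fft.rfftfreq(len(w), d=1 / fs)
    power = np.abs(np.fft.rfft(w)) ** 2
    return w, freqs, power
```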

Figure 7(d)-(f) shows the results for noisy data with white noise. As shown in Fig. 7(d), Ours (W) removes the noise and restores the original sound field. The waveforms in Fig. 7(e) confirm that the temporal information is preserved after denoising by the DNN. Furthermore, the spectra in Fig. 7(f) show that Ours (W) restores the frequency spectrum of the clean signal buried in the white noise. These analyses indicate that the proposed deep sound-field denoiser effectively reduces the noise in transient and broadband sound fields without losing much information about the original sound field.

5. Experiments

We denoised experimental data measured by two optical systems: PPSI [1], in which the primary noise source was white noise, and HSI using Fresnel lenses [48], in which speckle noise was superimposed.

5.1 Parallel phase-shifting interferometry

PPSI is a system that combines a Fizeau interferometer and a high-speed polarization camera, as shown in Fig. 8(a). It measures four phase-shifted interference fringe images simultaneously, which enables instantaneous and quantitative observation of sound fields. For details of the measurement technique, see, for example, [1].


Fig. 8. (a) Schematic diagram of the PPSI measurement system. A three-cycle burst wave with a center frequency of 12 kHz was emitted from the loudspeaker. (b) Sound pressure waveform measured by the microphone placed 20 cm from the loudspeaker’s diaphragm. (c) Denoising results of transient sound fields measured by PPSI.


In this experiment, a 12-kHz burst wave generated from a loudspeaker (FOSTEX FT48D) was observed. The sound measured by a microphone placed 20 cm from the loudspeaker is shown in Fig. 8(b). The generated sound was a three-cycle 12 kHz burst wave with a peak sound pressure of 13 Pa at the microphone position. The frame rate of the high-speed camera was set to 50 kfps, the number of frames was 1000, the image resolution was 128 $\times$ 128, and the imaging area size was 80 mm $\times$ 80 mm. The optical phase map at each frame was calculated using a typical arctangent operation, followed by 1D unwrapping along the time direction for each pixel. Subsequently, a time-directional high-pass filter with a cutoff frequency of 500 Hz was applied to remove low-frequency noise components. We call this data the noisy data. The denoising was performed on the noisy data by using the same conventional filters and trained DNNs as in the previous section.
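The preprocessing described above can be sketched as follows; the filter order and zero-phase filtering are assumptions, as the text only specifies the 500-Hz cutoff.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def preprocess_ppsi(phase_wrapped, fs=50e3, fc=500.0):
    """1D unwrapping along time for each pixel, then a temporal high-pass.
    `phase_wrapped` has shape (frames, H, W); `fs` is the frame rate."""
    phi = np.unwrap(phase_wrapped, axis=0)                      # temporal unwrapping
    sos = butter(4, fc, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, phi, axis=0)                        # remove low-frequency drift
```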

Figure 8(c) shows the time-series sound-field images of the noisy and denoised data. In the noisy data, random noise and oblique noise patterns appeared in addition to sound waves propagating from outside the left edge of the image to the right. These oblique patterns are likely phase-shift errors caused by imperfections in the optical system. Except for the median filter, all methods effectively removed the apparent noise. Differences can be observed in the small-amplitude wavefronts at 60 $\mu$s and 360 $\mu$s. WFF and Ours (W) exhibit smoother restoration of these small-amplitude components than the other methods.

As a further example of a realistic sound field, we denoised the sound field radiated by a person playing castanets, a type of percussion instrument. Figure 9(a) shows a photograph of the castanets. The frame rate of the high-speed camera was set to 20 kfps, the number of frames was 1000, the image resolution was 128 $\times$ 128, and the size of the captured area was 100 mm $\times$ 100 mm.


Fig. 9. (a) Photo of castanets. (b) Imaging results. (c) and (d) temporal signals and power spectra extracted from the center 4 $\times$ 4 pixels of the images in (b).


Figure 9(b) presents the visualization results. A portion of the castanets' shadow is visible in the lower-left corner of each image. The waveforms extracted from the center 4 $\times$ 4 pixels of the images are plotted in Fig. 9(c), and the power spectra of the waveforms are shown in Fig. 9(d). Figure 9(d) shows a spectral peak at approximately 2.5 kHz. Since the wavelength of a 2.5 kHz sound wave is approximately 140 mm, only one wavelength or less is visible in the imaging area. Note that, because of this long wavelength, a spatiotemporal low-pass filter (ST LPF) was used instead of the ST BPF. The noisy data show that the pressure peak (red) and dip (blue) spread out circularly from the castanets' position at the lower left. Additionally, a noticeable diagonal linear pattern spans the entire image. This pattern does not indicate sound propagation but spatial noise, since it does not propagate through space. The denoising results demonstrate that the Gaussian filter, LRDUNet (W), and Ours (W) effectively smooth out this pattern. This means that non-moving noise components are removed while propagating sound components remain preserved. Additionally, as depicted in Fig. 9(c), the discrepancies among the temporal waveforms of the denoising methods are negligible. Similarly, in Fig. 9(d), the first signal component with a peak at 2.5 kHz and the second with a peak at 7.5 kHz remain nearly unaltered in the spectrum. These observations suggest that all denoising methods maintain the temporal and spectral content of the sound field, while specific techniques, such as Ours (W), can also remove spatial noise. Consequently, we can conclude that the proposed DNN methods are effective for denoising complex, practical sound fields.

5.2 Holographic speckle interferometry with Fresnel lens

An overview of the measurement using HSI is shown in Fig. 10(a). This experiment used a measurement system with Fresnel lenses, as proposed in [48]. It was proposed as a lightweight and inexpensive large-aperture sound-field imaging system using Fresnel lenses. However, the measured sound-field images showed significant spatial distortion due to speckle noise. In the original paper, narrow spatial band-pass filters were used for noise reduction, but such narrow filters may not be useful for practical applications. Here, we investigated the effectiveness of the proposed DNN-based denoising method.


Fig. 10. (a) Schematic diagram of the HSI measurement system. The sound field between the two Fresnel lenses is measured. Sinusoidal waves of 5, 10, and 15 kHz are emitted from the loudspeaker. (b) Denoising results of harmonic sound fields measured by HSI.


Sinusoidal signals of 5, 10, and 15 kHz were radiated from the same loudspeaker used in the PPSI experiment. The amplitudes were adjusted so that the sound pressure level at the microphone located 20 cm in front of the loudspeaker diaphragm was 6.3 Pa (110 dB SPL) at all frequencies. The frame rate of the high-speed camera was 50 kfps, the number of frames was 1000, the image resolution was 128 $\times$ 128, and the size of the captured area was 100 mm $\times$ 100 mm. The phase maps of the speckle interference fringes were estimated using the 2D FT method [55], and a complex sound field at each frequency was extracted via 1D FT along the time direction.

Figure 10(b) shows the real parts of the noisy and denoised complex amplitudes. The noisy data contain a spherical sound wave propagating from top to bottom and low-spatial-frequency wavy patterns that modulate the spherical wavefront. These patterns originate from the recorded specklegrams of this method, as explained in [48]. For 5 kHz, the ST BPF and the four DNNs restored smooth sound waves. For 10 and 15 kHz, Ours (W+S) provides the smoothest circular wavefronts compared with the other methods. Since the same loudspeaker as in the PPSI experiment was used, the harmonic wavefront should be smooth and circular. Therefore, it can be surmised that Ours (W+S) showed the best wavefront-restoration performance in speckle sound-field imaging, consistent with the numerical results.

6. Conclusions

We developed a DNN-based sound-field denoising method in which time-varying sound-field data are decomposed into 2D complex-amplitude images and each image is denoised by a trained network. A 2D sound-field simulation with random parameters was used to generate the training dataset. By taking into account the measurement process of the optical system, the network was successfully trained to remove not only white Gaussian noise but also speckle noise. We confirmed that the proposed method was effective on experimental data and that it outperformed conventional denoising methods.

There are questions to be tackled in future work. First, in this study, we employed DnCNN, LRDUNet, and NAFNet with fixed network sizes; therefore, the effect of the choice of network architecture and its size should be investigated. Second, the simulation method and the number of training data should also be investigated. The generalization ability with respect to the wavenumber range, the complexity of the sound fields, and the amount and types of noise must depend on the training dataset. Last but not least, it is important to extend the proposed method to different measurement situations, such as spatial 3D data, randomly sampled data, and data with occlusions, to provide a versatile denoiser for optically measured sound-field data.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. The code repository can be found in [56].

Supplemental document

See Supplement 1 for supporting content.

References

1. K. Ishikawa, K. Yatabe, N. Chitanont, Y. Ikeda, Y. Oikawa, T. Onuma, H. Niwa, and M. Yoshii, “High-speed imaging of sound using parallel phase-shifting interferometry,” Opt. Express 24(12), 12922–12932 (2016). [CrossRef]  

2. A. Torras-Rosell, S. Barrera-Figueroa, and F. Jacobsen, “Sound field reconstruction using acousto-optic tomography,” J. Acoust. Soc. Am. 131(5), 3786–3793 (2012). [CrossRef]  

3. Y. Oikawa, Y. Ikeda, M. Goto, T. Takizawa, and Y. Yamasaki, “Sound field measurements based on reconstruction from laser projections,” in IEEE International Conference on Acoustics, Speech, and Signal Processing 2005, vol. 4 (2005), pp. 661–664.

4. O. Matoba, H. Inokuchi, K. Nitta, and Y. Awatsuji, “Optical voice recorder by off-axis digital holography,” Opt. Lett. 39(22), 6549–6552 (2014). [CrossRef]  

5. Y. Takase, K. Shimizu, S. Mochida, T. Inoue, K. Nishio, S. K. Rajput, O. Matoba, P. Xia, and Y. Awatsuji, “High-speed imaging of the sound field by parallel phase-shifting digital holography,” Appl. Opt. 60(4), A179–A187 (2021). [CrossRef]  

6. S. K. Rajput, O. Matoba, M. Kumar, X. Quan, and Y. Awatsuji, “Sound wave detection by common-path digital holography,” Opt. Lasers Eng. 137, 106331 (2021). [CrossRef]  

7. S. Hassad, K. Ferria, L. Bouamama, and P. Picart, “Multi-view acoustic field imaging with digital color holography,” Front. Photon. 3, 929031 (2022). [CrossRef]  

8. K. Ishikawa, R. Tanigawa, K. Yatabe, Y. Oikawa, T. Onuma, and H. Niwa, “Simultaneous imaging of flow and sound using high-speed parallel phase-shifting interferometry,” Opt. Lett. 43(5), 991–994 (2018). [CrossRef]  

9. K. Ishikawa, K. Yatabe, and Y. Oikawa, “Seeing the sound of castanets: Acoustic resonances between shells captured by high-speed optical visualization with 1-mm resolution,” J. Acoust. Soc. Am. 148(5), 3171–3180 (2020). [CrossRef]  

10. R. Tanigawa, K. Yatabe, and Y. Oikawa, “Experimental visualization of aerodynamic sound sources using parallel phase-shifting interferometry,” Exp. Fluids 61(9), 206 (2020). [CrossRef]  

11. K. Yatabe, K. Ishikawa, and Y. Oikawa, “Acousto-optic back-projection: Physical-model-based sound field reconstruction from optical projections,” J. Sound Vib. 394, 171–184 (2017). [CrossRef]

12. S. A. Verburg and E. Fernandez-Grande, “Acousto-optical volumetric sensing of acoustic fields,” Phys. Rev. Appl. 16(4), 044033 (2021). [CrossRef]  

13. S. A. Verburg, E. G. Williams, and E. Fernandez-Grande, “Acousto-optic holography,” J. Acoust. Soc. Am. 152(6), 3790–3799 (2022). [CrossRef]  

14. K. Yatabe, R. Tanigawa, K. Ishikawa, and Y. Oikawa, “Time-directional filtering of wrapped phase for observing transient phenomena with parallel phase-shifting interferometry,” Opt. Express 26(11), 13705–13720 (2018). [CrossRef]  

15. N. Chitanont, K. Yatabe, K. Ishikawa, and Y. Oikawa, “Spatio-temporal filter bank for visualizing audible sound field by schlieren method,” Appl. Acoust. 115, 109–120 (2017). [CrossRef]  

16. R. Tanigawa, K. Yatabe, and Y. Oikawa, “Guided-spatio-temporal filtering for extracting sound from optically measured images containing occluding objects,” in IEEE International Conference on Acoustics, Speech, and Signal Processing 2019, (2019), pp. 945–949.

17. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Proceedings of the International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

18. J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, and T. Aila, “Noise2noise: Learning image restoration without clean data,” arXiv, arXiv:1803.04189 (2018). [CrossRef]  

19. A. Krull, T.-O. Buchholz, and F. Jug, “Noise2void-learning denoising from single noisy images,” in Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition (CVPR), (2019), pp. 2129–2137.

20. Z. Yue, Q. Zhao, L. Zhang, and D. Meng, “Dual adversarial network: Toward real-world noise removal and noise generation,” in Proceedings of the European Conference on Computer Vision (ECCV), (2020), pp. 41–58.

21. S. Zamir, A. Arora, S. Khan, M. Hayat, F. Khan, M. Yang, and L. Shao, “Multi-stage progressive image restoration,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (IEEE Computer Society, 2021), pp. 14816–14826.

22. L. Chen, X. Chu, X. Zhang, and J. Sun, “Simple baselines for image restoration,” in Proceedings of the European Conference on Computer Vision (ECCV), (Springer Nature Switzerland, Cham, 2022), pp. 17–33.

23. H. Chen, Y. Wang, T. Guo, C. Xu, Y. Deng, Z. Liu, S. Ma, C. Xu, C. Xu, and W. Gao, “Pre-trained image processing transformer,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (IEEE Computer Society, 2021), pp. 12294–12305.

24. J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool, and R. Timofte, “Swinir: Image restoration using swin transformer,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), (2021), pp. 1833–1844.

25. Z. Wang, X. Cun, J. Bao, W. Zhou, J. Liu, and H. Li, “Uformer: A general u-shaped transformer for image restoration,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2022), pp. 17683–17693.

26. S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, and M.-H. Yang, “Restormer: Efficient transformer for high-resolution image restoration,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2022), pp. 5728–5739.

27. A. Abdelhamed, S. Lin, and M. S. Brown, “A high-quality denoising dataset for smartphone cameras,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2018), pp. 1692–1700.

28. C. Zuo, J. Qian, S. Feng, W. Yin, Y. Li, P. Fan, J. Han, K. Qian, and Q. Chen, “Deep learning in optical metrology: a review,” Light: Sci. Appl. 11(1), 39 (2022). [CrossRef]  

29. K. Yan, Y. Yu, C. Huang, L. Sui, K. Qian, and A. Asundi, “Fringe pattern denoising based on deep learning,” Optics Communications 437, 148–152 (2019). [CrossRef]  

30. J. Shi, X. Zhu, H. Wang, L. Song, and Q. Guo, “Label enhanced and patch based deep learning for phase retrieval from single frame fringe pattern in fringe projection 3d measurement,” Opt. Express 27(20), 28929–28943 (2019). [CrossRef]  

31. S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(02), 1 (2019). [CrossRef]  

32. K. Wang, Y. Li, Q. Kemao, J. Di, and J. Zhao, “One-step robust deep learning phase unwrapping,” Opt. Express 27(10), 15100–15115 (2019). [CrossRef]  

33. S. Montresor, M. Tahon, A. Laurent, and P. Picart, “Computational de-noising based on deep learning for phase data in digital holographic interferometry,” APL Photonics 5(3), 030802 (2020). [CrossRef]  

34. T. Nguyen, V. Bui, V. Lam, C. B. Raub, L.-C. Chang, and G. Nehmetallah, “Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection,” Opt. Express 25(13), 15043–15057 (2017). [CrossRef]  

35. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018). [CrossRef]  

36. W. Jeon, W. Jeong, K. Son, and H. Yang, “Speckle noise reduction for digital holographic images using multi-scale convolutional neural networks,” Opt. Lett. 43(17), 4240–4243 (2018). [CrossRef]  

37. F. Hao, C. Tang, M. Xu, and Z. Lei, “Batch denoising of espi fringe patterns based on convolutional neural network,” Appl. Opt. 58(13), 3338–3346 (2019). [CrossRef]  

38. B. Lin, S. Fu, C. Zhang, F. Wang, and Y. Li, “Optical fringe patterns filtering based on multi-stage convolution neural network,” Opt. Lasers Eng. 126, 105853 (2020). [CrossRef]  

39. A. Reyes-Figueroa, V. H. Flores, and M. Rivera, “Deep neural network for fringe pattern filtering and normalization,” Appl. Opt. 60(7), 2022–2036 (2021). [CrossRef]  

40. L. Wang, R. Li, F. Tian, and X. Fang, “Application of attention-dncnn for espi fringe patterns denoising,” J. Opt. Soc. Am. A 39(11), 2110–2123 (2022). [CrossRef]  

41. J. Gurrola-Ramos, O. Dalmau, and T. Alarcón, “U-net based neural network for fringe pattern denoising,” Opt. Lasers Eng. 149, 106829 (2022). [CrossRef]  

42. K. Yan, Y. Yu, T. Sun, A. Asundi, and Q. Kemao, “Wrapped phase denoising using convolutional neural networks,” Opt. Lasers Eng. 128, 105999 (2020). [CrossRef]  

43. K. Yan, L. Chang, M. Andrianakis, V. Tornari, and Y. Yu, “Deep learning-based wrapped phase denoising method for application in digital holographic speckle pattern interferometry,” Applied Sciences 10(11), 4044 (2020). [CrossRef]  

44. J. Li, C. Tang, M. Xu, Z. Fan, and Z. Lei, “Dbdnet for denoising in espi wrapped phase patterns with high density and high speckle noise,” Appl. Opt. 60(32), 10070–10079 (2021). [CrossRef]  

45. J. Li, C. Tang, M. Xu, and Z. Lei, “Uneven wrapped phase pattern denoising using a deep neural network,” Appl. Opt. 61(24), 7150–7157 (2022). [CrossRef]  

46. Q. Fang, H. Xia, Q. Song, M. Zhang, R. Guo, S. Montresor, and P. Picart, “Speckle denoising based on deep learning via a conditional generative adversarial network in digital holographic interferometry,” Opt. Express 30(12), 20666–20683 (2022). [CrossRef]  

47. E. G. Williams, “Green Functions and the Helmholtz Integral Equation,” in Fourier Acoustics, E. G. Williams, ed. (Academic Press, 1999), chap. 8.

48. K. Ishikawa, K. Yatabe, Y. Oikawa, Y. Shiraki, and T. Moriya, “Speckle holographic imaging of a sound field using fresnel lenses,” Opt. Lett. 47(21), 5688–5691 (2022). [CrossRef]  

49. A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 2 (2005), pp. 60–65.

50. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. on Image Process. 16(8), 2080–2095 (2007). [CrossRef]  

51. Q. Kemao, “Windowed fourier transform for fringe pattern analysis,” Appl. Opt. 43(13), 2695–2702 (2004). [CrossRef]  

52. S. Montresor and P. Picart, “Quantitative appraisal for noise reduction in digital holographic phase imaging,” Opt. Express 24(13), 14322–14343 (2016). [CrossRef]  

53. Q. Kemao and S. H. Soon, “Two-dimensional windowed Fourier frames for noise reduction in fringe pattern analysis,” Opt. Eng. 44(7), 075601 (2005). [CrossRef]

54. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017). [CrossRef]  

55. J. W. Goodman, “Speckle and metrology,” in Speckle Phenomena in Optics: Theory and Applications (Ben Roberts, 2007), chap. 9.

56. K. Ishikawa, “Code repository for deep sound-field denoiser,” GitHub (2023) [accessed 13 September 2023], https://github.com/nttcslab/deep-sound-field-denoiser.
