
Adaptive terahertz image super-resolution with adjustable convolutional neural network


Abstract

During the real-aperture-scanning imaging process, terahertz (THz) images are often plagued by low spatial resolution. Therefore, an accommodative super-resolution framework for THz images is proposed. Specifically, a 3D degradation model for the imaging system is first proposed by incorporating the focused THz beam distribution, which determines the relationship between the imaging range and the corresponding image restoration level. Second, an adjustable CNN is introduced to cope with this range-dependent super-resolution problem. By simply tuning an interpolation parameter, the network can be adjusted to produce arbitrary restoration levels between the trained fixed levels without extra training. Finally, by selecting the appropriate interpolation coefficient according to the measured imaging range, each THz image can be processed by its matched network, reaching an outstanding super-resolution effect. Both simulated and real test data, acquired by a 160 ∼ 220 GHz imager, have been used to demonstrate the superiority of our method.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Due to its unique penetrating and nonionizing characteristics, together with its miniaturized and low-cost properties, the terahertz (THz) real-aperture-scanning imager offers new solutions for artwork conservation [1–3], industrial product quality control [4–6], nondestructive inspection of packaged integrated circuits (ICs) [7,8], and standoff personnel screening [9–11], etc. However, the intrinsically long wavelength and the range-dependent imaging process are great hindrances to high-resolution far-field THz imaging [4,5,7–11]. Therefore, a tremendous amount of effort has been dedicated to enhancing the spatial resolution of THz images using image processing techniques.

Traditionally, diversified analytic methods have been proposed for THz image resolution enhancement, including both non-blind and blind methods. Utilizing the simulated [7,8,12], indirectly measured [13–15], or directly measured [15–17] point-spread function (PSF) of the THz imaging system, the Lucy–Richardson [12–14,16–18] and Wiener [15] deconvolution methods have been widely used to reduce the impact of the PSF and thus raise the spatial resolution. Utilizing only the prior information of the blurred image itself, the total variation (TV) [12] and normalized sparsity measurement [19] blind deconvolution methods have also been used to estimate the PSF and improve the spatial resolution. However, these analytic methods often suffer from rapid noise amplification [20]. Furthermore, the image size remains the same after deconvolution, so the resolution enhancement ability is limited.

Recently, a learning-based method, which directly learns an end-to-end mapping between low- and high-resolution images, has also been proposed to implement THz image deconvolution and super-resolution [21]. Due to the powerful mapping ability of the convolutional neural network (CNN) on image restoration problems and the efficient training implementations on modern GPUs [22,23], the method achieves better enhancement results than the analytic algorithms [21].

However, one issue prevents the above CNN-based learning method from being widely used for THz images. When lenses or reflectors are used to focus the THz beam, a common strategy to raise the lateral resolution of real-aperture THz imaging systems [1,3–5,7,9–11], the PSF changes continuously with the imaging range [7,8,11]. Thus, the degree of blur of the obtained THz images is range dependent. Nevertheless, CNNs are usually trained at discrete fixed levels, which leads either to mismatched restoration (the restoration level of the network does not match that of the THz images, producing over-sharpened or over-smoothed results) [20,24,25] or to the enormous computational burden of training many discrete restoration levels [21].

In this paper, in order to solve this problem, an accommodative super-resolution framework for THz images is proposed. Specifically, by incorporating the focused THz beam distribution, the 3D degradation model for the THz real-aperture-scanning imager is first proposed; thus, the relationship between the imaging range and the corresponding restoration level of the image is determined. Second, an adjustable CNN is used to cope with this range-dependent super-resolution problem. By simply tuning an interpolation parameter, the network can be adjusted to produce arbitrary and continuous restoration levels between the trained fixed levels without extra training. Finally, by selecting the appropriate network interpolation coefficient according to the measured imaging range, each THz image can be processed by its matched network to reach the best super-resolution effect. The method is illustrated in Fig. 1.

Fig. 1. Geometry of a typical real-aperture-scanning imaging system (bottom) and the illustration of the accommodative super-resolution method.

Overall, the contributions of this paper are as follows:

  • 1) Considering the focused beam distribution and the precise ranging ability of common THz systems, the 3D degradation model for the real-aperture-scanning imaging process is proposed. Thus, utilizing the measured range information, the restoration level of each THz image can be determined, so that each image can be specifically treated.
  • 2) In order to cope with this continuous range-dependent restoration problem, an adjustable CNN is introduced. By simply tuning an input coefficient, the network can be adjusted to the corresponding matched restoration level without extra training, thus producing the best super-resolution effect.

2. Accommodative super-resolution framework

2.1 3D degradation model for the imaging process

For a typical real-aperture THz imaging system, lenses or reflectors are usually used to focus the beam, and the object under test is often placed near the focal plane in order to raise the lateral resolution; the target is then imaged by the raster-scanned THz beam [1,3–7,9–11]. Furthermore, in conjunction with the third-dimensional range information provided by the radar capability, THz tomography images can be acquired [3–5,10]. A reflective tomography imaging sketch is shown in Fig. 1 (bottom). The x-y plane represents the raster-scan coordinates (lateral plane) of the imaging system, the z-axis is the traveling path of the focused THz beam, and z = 0 is the focal plane.

This imaging process is mathematically modeled by [21]:

$$i(x,y,z) = [{PSF(x,y,z) \otimes o(x,y,z)} ]D_s^ \downarrow + n. $$

Considering the focused THz beam distribution (shown as the blue beam in Fig. 1), the ideal object function o is first sampled by the PSF (the normalized THz beam distribution on the imaging planes), and the two-dimensional raster scan is mathematically modeled by the convolution operation ${\otimes}$. However, due to the raster-scan step of the imaging system, the sampling interval on the lateral plane is limited, represented by the down-sampler $D_s^{\downarrow}$. Therefore, the output images are smoothed and down-sampled versions of the ideal object function o, as illustrated in the imaging process of Fig. 1 (top). Finally, the acquired low-resolution (LR) degraded image i is influenced by the added system noise n.

Furthermore, due to the focused beam distribution, the PSF changes with the range z, making the degree of blur (spatial resolution) range-variable. When the thickness of the target under test is non-negligible, the object functions are inevitably located at different ranges, and the spatial resolution of the THz tomography images becomes range dependent, hampering the testing to a great extent, as illustrated in Fig. 1 (top) with different imaging ranges (0, z1, and z2).

However, it is hard to accurately obtain the 3D PSF distribution experimentally [15,16]. Thus, the PSF of a typical THz imaging system is usually approximated by a TEM00-mode Gaussian beam [7,8]:

$$PSF(x,y,z) = \left( {\frac{2}{{\pi \omega {{(z)}^2}}}} \right)\exp \left( { - 2\left( {\frac{{{x^2} + {y^2}}}{{\omega {{(z)}^2}}}} \right)} \right), $$
where $\omega (z)$ is the spot radius of the beam at distance z:
$$\omega (z) = {\omega _0}{\left( {1 + {{\left( {\frac{{\lambda z}}{{\pi \omega_0^2}}} \right)}^2}} \right)^{1/2}}, $$
and ${\omega _0}$ is the spot radius at the beam waist (focal plane), which is determined by the working frequency, the antennas, and the configuration of the lenses/reflectors [7,8,15].
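To make the beam model concrete, the following is a minimal sketch of Eqs. (2) and (3). The grid size and pixel pitch are illustrative assumptions; ω0 = 3.78 mm and the 190 GHz center frequency follow the system described later in Sec. 2.3.

```python
# A minimal sketch of the TEM00 Gaussian-beam PSF of Eqs. (2)-(3).
# Grid size and pixel pitch are illustrative assumptions.
import numpy as np

C = 3e8                      # speed of light, m/s
LAM = C / 190e9              # wavelength at the 190 GHz center frequency

def spot_radius(z, w0=3.78e-3, lam=LAM):
    """Eq. (3): beam spot radius w(z) at offset z (m) from the focal plane."""
    return w0 * np.sqrt(1.0 + (lam * z / (np.pi * w0**2))**2)

def gaussian_psf(z, pitch=0.5e-3, size=31, w0=3.78e-3, lam=LAM):
    """Eq. (2): Gaussian PSF sampled on a size x size grid (pitch in m)."""
    w = spot_radius(z, w0, lam)
    ax = (np.arange(size) - size // 2) * pitch
    x, y = np.meshgrid(ax, ax)
    psf = (2.0 / (np.pi * w**2)) * np.exp(-2.0 * (x**2 + y**2) / w**2)
    return psf / psf.sum()   # normalize so convolution preserves energy

# ~3.78 mm at focus; roughly 7 mm at the 42 mm offset used for level L1
print(spot_radius(0.0), spot_radius(42e-3))
```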

With the Gaussian beam approximation, it can be seen from Eqs. (1)–(3) that when the system parameters (scan step and noise level) are fixed, the restoration level of the acquired image $i(x,y,z)$ is determined by the imaging range z. Moreover, utilizing the bandwidth information of some THz systems, such as frequency-modulated continuous wave (FMCW) [1,4,5,9–12] or time-domain spectroscopy (TDS) [2,3] systems, the range z of each object function can be precisely measured during the imaging process (illustrated in the circuit test of Sec. 3.2).

Therefore, with the proposed 3D degradation model and the measured range information z, the restoration level of each obtained image i can be determined, so that each image can be specifically treated, which makes accommodative image super-resolution possible. It should be mentioned that, for simplicity, the system noise n is regarded as range independent in this paper.

2.2 Network architecture

In recent years, diverse CNNs have been proposed to solve optical image restoration problems (denoising [23,25], deblurring (deconvolution) [20,23,24], and super-resolution [21,26,27], etc.) and reach state-of-the-art quality. In this paper, utilizing the powerful mapping ability of the CNN and the adjustable feature of the AdaFM layer [20,25], we propose an adjustable deep residual CNN to solve the accommodative super-resolution problem for THz images. Specifically, the deep residual basic CNN is first used to map the two endpoint fixed restoration levels. Then the AdaFM layers in the network are interpolated to produce smooth and continuous restoration levels between the learned fixed levels without extra training. Finally, the interpolation parameter of the network is linearly matched with the measured range z in order to adaptively restore the THz images.

2.2.1 Basic network

Based on the 3D degradation model for the imaging process, the mapping task for the network can be formulated as:

$$\textrm{LR}({\omega (z ),n,{D_s}} )= [{PSF({\omega (z )} )\otimes \textrm{HR}} ]D_s^ \downarrow + n, $$
where z is the offset imaging distance from the focal plane, which controls the spot radius $\omega (z)$ of the PSF (imaging beam), i.e., the smoothing extent of the LR image; n is the noise level; and Ds is the down-sampling interval. It can be seen from Eq. (4) that during the mapping from $\textrm{LR}({\omega (z),n,{D_s}} )$ to HR, the smoothing effect caused by the PSF and the impact of the noise can be eliminated (deconvolution and denoising). Moreover, the sampling interval can be reduced, producing a larger image size and recovering the aliased high-frequency components (due to the Nyquist criterion) in the images [28]. Therefore, through this super-resolution processing (in this paper, super-resolution is the combination of denoising, deconvolution, and sampling-interval reduction), the network can raise the spatial resolution of the image and lower its noise.
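As a hedged illustration of Eq. (4), the sketch below degrades an HR image into its LR counterpart: blur with the range-dependent PSF, down-sample by Ds, and add Gaussian noise. It reuses the `gaussian_psf` helper sketched above; the defaults Ds = 2 and noise standard deviation 1.5 anticipate the dataset settings of Sec. 2.3.

```python
# A sketch of the degradation in Eq. (4): PSF blur (the raster scan as a
# convolution), D_s down-sampling, and additive system noise n.
import numpy as np
from scipy.signal import fftconvolve

def degrade(hr, z, ds=2, noise_std=1.5, rng=None):
    """Produce LR(w(z), n, Ds) from a 2-D HR image scaled to [0, 255]."""
    rng = rng or np.random.default_rng()
    blurred = fftconvolve(hr, gaussian_psf(z), mode="same")
    lr = blurred[::ds, ::ds]                           # down-sampler D_s
    return lr + rng.normal(0.0, noise_std, lr.shape)   # system noise n
```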

Considering the super-resolution mapping task in Eq. (4), together with the mapping ability and the ease of training of the network, we use a deep residual CNN [26,27,29] as our basic network Nb. The architecture of the network is shown in Fig. 2 (left).

Fig. 2. The architecture of the basic network Nb (left) and the adaptive network Na (right).

For the basic network Nb, the highest restoration level $L1({\omega (\textrm{|}{z_{\max}}\textrm{|}),n,2} )$, representing the smoothest LR images, was used to create dataset1 ($\textrm{LR}({\omega (\textrm{|}{z_{\max}}\textrm{|}),n,2} )$ and HR), where zmax is the maximum offset imaging distance from the focal plane. In this paper, Ds was set to 2, halving the sampling interval of the image (also called super-resolution scale ×2 in computer vision [26]). Note that, by slightly adjusting the pixel-shuffle layer in the network, other Ds values can be applied similarly.

All the parameters $\Theta$ in the basic network Nb are optimized through the training process, in which the differences (loss function) between the reconstructed images ${\textrm{N}_\textrm{b}}[{\textrm{LR}({\omega (\textrm{|}{z_{\max}}\textrm{|}),n,2} ),\Theta} ]$ and the corresponding HR images are iteratively minimized. After that, the network deterministically matches the fixed restoration level $L1({\omega (\textrm{|}{z_{\max}}\textrm{|}),n,2} )$.
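The following PyTorch sketch shows one plausible layout of the basic network Nb under the settings stated later in Sec. 2.3 (3×3 filters, 64 feature maps, parameter-free shortcuts, a ×2 pixel-shuffle tail). The block count, single-channel input, and overall arrangement are assumptions in the spirit of the cited EDSR/SRResNet family, not the authors' exact architecture.

```python
# A hedged PyTorch sketch of the basic deep residual network Nb.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, feats=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, feats, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)          # parameter-free shortcut

class BasicNet(nn.Module):
    def __init__(self, feats=64, n_blocks=16):  # depth assumed (SRResNet-like)
        super().__init__()
        self.head = nn.Conv2d(1, feats, 3, padding=1)   # single-channel input
        self.body = nn.Sequential(*[ResBlock(feats) for _ in range(n_blocks)])
        self.tail = nn.Sequential(                      # x2 upsampler (Ds = 2)
            nn.Conv2d(feats, feats * 4, 3, padding=1),
            nn.PixelShuffle(2),
            nn.Conv2d(feats, 1, 3, padding=1))
    def forward(self, x):
        f = self.head(x)
        return self.tail(f + self.body(f))

print(BasicNet()(torch.randn(1, 1, 50, 50)).shape)  # -> (1, 1, 100, 100)
```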

2.2.2 Adjustable network

Inspired by the facts that AdaFM layers in a CNN can be interpolated to handle continuous denoising levels without extra training [25] and that, by simply adjusting an input parameter, a CNN can linearly produce customized intermediate products between two target domains [30], we introduce AdaFM layers and an input parameter to our network in order to smoothly adjust its image restoration level [20]. Thus, THz images with different restoration levels can be specifically treated.

The basic AdaFM layer is formulated as:

$$\textrm{AdaFM}(X) = G \ast X + B, $$
where X are the input feature maps (W, H, 64), * denotes the group convolution operation (the convolution works in two dimensions instead of the usual three [20,25], as illustrated in Fig. 3), $G({3,3,64} )$ are the filters, and $B({1,1,64} )$ are the biases. According to Eq. (5), by inserting an AdaFM layer after a convolutional layer, the statistics of the convolutional layer can be manipulated. In other words, each feature map output by the convolutional layer can be further adjusted by the following AdaFM layer.
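A minimal sketch of an AdaFM layer is given below: one 3×3 filter and one bias per feature map, implemented as a depthwise (group) convolution so that each of the 64 maps is adjusted independently, as Fig. 3(b) illustrates. Initializing G to the identity kernel and B to zero (an assumption here) leaves the trained Nb unchanged when the layer is first inserted.

```python
# A sketch of the AdaFM layer in Eq. (5): AdaFM(X) = G * X + B with one
# 3x3 filter and one bias per feature map (group convolution, Fig. 3(b)).
import torch.nn as nn

class AdaFM(nn.Module):
    def __init__(self, feats=64):
        super().__init__()
        # groups=feats: each filter sees only its own map (2-D, not 3-D)
        self.filter = nn.Conv2d(feats, feats, 3, padding=1, groups=feats)
        nn.init.zeros_(self.filter.weight)
        self.filter.weight.data[:, :, 1, 1] = 1.0  # start as identity kernel
        nn.init.zeros_(self.filter.bias)            # start with zero bias
    def forward(self, x):
        return self.filter(x)  # G * X + B (bias handled by the conv layer)
```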

Fig. 3. Schematic diagram of convolution operations (Ref. [20], Fig. 4). (a) Convolution in the convolutional layer; (b) group convolution in the AdaFM layer.

Therefore, by adding AdaFM layers to the trained basic network Nb, the new adaptive network Na, shown in Fig. 2 (right), can be adapted to a new restoration level. Then, by interpolating the added AdaFM layers, the adjustable network $\textrm{N}_\textrm{a}^\mathrm{\lambda}$ is produced, which can handle continuous restoration levels between the two learned fixed levels. The detailed working process of the adjustable network $\textrm{N}_\textrm{a}^\mathrm{\lambda}$ is as follows:

Step 1: The basic network Nb discussed above is first trained on dataset1 ($\textrm{LR}({\omega (\textrm{|}{z_{\max}}\textrm{|}),n,2} )$ and HR), so that Nb deterministically matches the fixed restoration level $L1({\omega (\textrm{|}{z_{\max}}\textrm{|}),n,2} )$ (corresponding to the most blurred restoration level).

Step 2: Based on the trained basic network Nb from step 1, the adaptive network Na is created by inserting AdaFM layers after specific convolutional layers, as shown in Fig. 2 (right). During training, all the parameters of the trained network Nb are fixed and only the new parameters in the AdaFM layers are optimized, based on dataset2 ($\textrm{LR}({\omega (0),n,2} )$ and HR). The adaptive network Na is thereby transformed to match the new fixed restoration level $L2({\omega (0),n,2} )$ (corresponding to the least blurred restoration level).

Step 3: By introducing an input parameter λ, we can easily interpolate the filters and biases of these AdaFM layers and produce the adjustable network $\textrm{N}_\textrm{a}^\mathrm{\lambda}$ without extra training:

$$G^{\ast}(\lambda) = I + \lambda ({G - I} ),\quad B^{\ast}(\lambda) = \lambda B,\quad 0 \le \lambda \le 1, $$
where ${G^{\ast}}(\lambda )$ and ${B^{\ast}}(\lambda )$ are the interpolated filters and biases of these AdaFM layers, and I is the identity filter. In this way, the closeness of the network $\textrm{N}_\textrm{a}^\mathrm{\lambda}$ to the two fixed restoration levels (L1 and L2) can be adjusted accordingly.
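A sketch of this interpolation is shown below, reusing the AdaFM module sketched above; `na` is assumed to be the trained adaptive network. At λ = 0 the AdaFM layers reduce to the identity, so the network behaves as Nb; at λ = 1 the fully trained AdaFM parameters apply. The interpolated values are written into a copy so that the trained G and B are preserved for other λ.

```python
# A sketch of the network interpolation in Eq. (6): blend each trained
# AdaFM filter with the identity kernel I and scale its bias by lambda.
import copy
import torch

def interpolated_network(na, lam):
    """Return N_a^lambda as a copy of the trained network na, 0 <= lam <= 1."""
    assert 0.0 <= lam <= 1.0
    net = copy.deepcopy(na)
    for layer in net.modules():
        if isinstance(layer, AdaFM):
            g = layer.filter.weight.data           # trained filters G
            identity = torch.zeros_like(g)
            identity[:, :, 1, 1] = 1.0             # I: the 3x3 identity kernel
            layer.filter.weight.data = identity + lam * (g - identity)
            layer.filter.bias.data *= lam          # B*(lambda) = lambda * B
    return net
```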

So far, the adjustable network $\textrm{N}_\textrm{a}^\mathrm{\lambda}$ has been created. Merely by tweaking the interpolation parameter λ, the network can produce arbitrary and continuous restoration results between a start level $L1({\omega (\textrm{|}{z_{\max}}\textrm{|}),n,2} )$ and an end level $L2({\omega (0),n,2} )$. However, the relationship between the parameter λ and the restoration level of the network $\textrm{N}_\textrm{a}^\mathrm{\lambda}$ may not be linear, as illustrated in Fig. 4. By tuning λ in the range [0,1], we obtain a sequence of points (restoration levels) flowing from Start (L1) to Stop (L2), but the path varies considerably with the restoration levels of the endpoints and the gap between them, as illustrated by the blue dashed lines in Fig. 4. Therefore, even though the restoration level of a THz image is determined by the measured range z, it is still hard to select the matched interpolation parameter λ (i.e., to create the network $\textrm{N}_\textrm{a}^\mathrm{\lambda}$ with the matched restoration level $L({\omega (z),n,2} )$).

Fig. 4. Relationship between the interpolation coefficient and the restoration level.

To create a linear relationship between the interpolation parameter and the restoration level $L({\omega (z),n,2} )$ of the network (illustrated by the red dashed line in Fig. 4), the current relationship between λ and the restoration level of $\textrm{N}_\textrm{a}^\mathrm{\lambda}$ must first be explored. A group of matched interpolation parameters ${\lambda ^i}$ can be selected by testing LR and HR image pairs at different imaging ranges ${z^i}$ (restoration levels $L(\omega ({z^i}),n,2)$). The relationship is then obtained by fitting these points $\{z^i,\lambda^i\}_{i = 0}^{M - 1}$ with a polynomial function $\lambda = F(z)$, where M is the number of test points. Finally, by linearly mapping the test range $[{0,|{{z_{\max}}} |} ]$ to a new interpolation parameter γ (0 ≤ γ ≤ 1) via $z = \gamma \cdot |{{z_{\max}}} |$, the relationship between λ and γ is determined by $\lambda = F({\gamma \cdot |{{z_{\max}}} |} )= T(\gamma )$. Thus, by modifying Eq. (6):

$$G^{\ast}(\gamma) = I + T(\gamma) \cdot ({G - I} ),\quad B^{\ast}(\gamma) = T(\gamma) \cdot B,\quad 0 \le \gamma \le 1, $$
the new adjustable network $\textrm{N}_\textrm{a}^\mathrm{\gamma}$ can be created.

The modified interpolation parameter γ can then be used to linearly and continuously tweak the restoration level $L({\omega (\gamma \cdot \textrm{|}{z_{\max}}\textrm{|}),n,2} )$ of the adjustable network $\textrm{N}_\textrm{a}^\mathrm{\gamma}$, which makes the accommodative super-resolution possible. Note that we chose only a relatively simple CNN structure as our basic network; other, more powerful CNNs could be used instead to explore better mapping ability.
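The calibration step can be sketched as follows. The matched pairs (z^i, λ^i) below are placeholders, not the paper's measurements; in practice each λ^i would be found by scoring candidate λ values on LR/HR pairs at range z^i (e.g., by PSNR/SSIM), and the degree-4 fit mirrors the quartic later reported in Eq. (8).

```python
# A hedged sketch of fitting T(gamma): map |z|/|z_max| to the matched
# interpolation coefficient with a quartic polynomial, as in Eq. (8).
import numpy as np

Z_MAX = 42.0                                    # mm, maximum offset range
# Placeholder calibration points {z^i, lambda^i}; illustrative values only
# (lambda decreases toward the most-blurred endpoint, where N_a^lambda
# reduces to Nb).
z_i = np.array([0.0, 10.0, 20.0, 30.0, 42.0])   # tested offset ranges (mm)
lam_i = np.array([0.46, 0.39, 0.35, 0.23, 0.0]) # matched lambdas (assumed)

T = np.poly1d(np.polyfit(z_i / Z_MAX, lam_i, deg=4))  # quartic T(gamma)

def matched_lambda(z_measured_mm):
    """Interpolation coefficient for a measured offset range (mm)."""
    return float(T(abs(z_measured_mm) / Z_MAX))
```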

2.3 Accommodative super-resolution for the FMCW THz imager

We used a reflection FMCW THz imager, a real-aperture two-dimensional scanning imaging system operating from 160 GHz to 220 GHz, to demonstrate our method. With an antenna half-power beam width of 7.9° (H plane)/8.1° (E plane) at 190 GHz, two Teflon plano-convex lenses (2 inch diameter, 75 mm focal length) and a parabolic mirror (2 inch diameter, 150 mm focal length) were used to focus the THz beam. Thus, the spot radius at the beam waist can be calculated: ${\omega _0} = 3.78\textrm{ mm}$. The imaging system is illustrated in Fig. 5. We measured the PSF of our system by scanning an r = 3.25 mm metal ball at different ranges, a compromise between measurement accuracy (the metal ball should be small enough) and signal-to-noise ratio (SNR) (the reflected energy should be strong enough) [8,15]. The measured PSF(x, y, z = 0) is shown in Fig. 6(d). As can be seen, the measured PSF suffers from noise, which would degrade the dataset creation. However, the cross-section comparisons of the simulated PSF (calculated from Eqs. (2) and (3) with the system parameters) and the measured PSF of our system, shown in Fig. 6, demonstrate the effectiveness of the TEM00-mode Gaussian beam assumption. Thus, in the remainder of this paper, the simulated PSF is used in the degradation model of Eq. (1). It should be mentioned that the PSF in Fig. 6 was simulated at the center frequency of 190 GHz; the spectral divergence of the PSF is not considered in this paper [7,8].

Fig. 5. Our FMCW imaging system.

Fig. 6. Comparison of the PSFs. (a) The XOZ plane (y=0) of the simulated PSF; (b) the XOZ plane (y=0) of the measured PSF; (c) the XOY plane (z=0) of the simulated PSF; (d) the XOY plane (z=0) of the measured PSF.

Since it is difficult to obtain THz image pairs (LR and HR) to train the network, we used the HR images of the standard DIV2K dataset [31] as our HR images. The flowchart of the super-resolution framework is shown in Fig. 7, and the working process and training details are introduced in the following steps:

Fig. 7. The flowchart of the accommodative super-resolution framework.

Dataset generation: Based on the 3D degradation model for the imaging process, the degraded LR images were produced by Eq. (4). In order to cope with images at different imaging ranges, the L1 LR images were blurred with the PSF of $\omega (\textrm{|}{z_{\max}}\textrm{|}) = 7\textrm{ mm}$ (|zmax| = 42 mm) and the L2 LR images were blurred with the PSF of $\omega = 3.2\textrm{ mm}$ (corresponding to 85% of $\omega (0) = {\omega _0} = 3.78\textrm{ mm}$), with the aim of fully covering the range of interest. The down-sampler Ds was set to 2. All measured values of our system were normalized by the response of a perfectly orthogonal metal plate at the focal plane (the strongest response) to the range [0, 255], the same range as the DIV2K images, and the measured thermal noise floor of our system was normalized to about 1.45. Thus, Gaussian white noise with a standard deviation of 1.5 was added to the LR images in the dataset, so that the noise reduction ability is learned during the LR-to-HR mapping. Dataset1 (HR and $\textrm{LR}(7,1.5,2)$) and dataset2 (HR and $\textrm{LR}(3.2,1.5,2)$) were then built to train the networks Nb and Na, respectively.

Training and testing: Dataset1 was used to train the basic deep residual network Nb. Then, with all the parameters of the trained network Nb fixed, the adaptive network Na, with the subsequently inserted AdaFM layers, was trained on dataset2.

The filter size was set to 3 for all filters and convolution kernels, and the feature size was set to 64. The shortcuts for residual learning were parameter free. All networks were trained with the ADAM optimizer, with the learning rate set to 10−4 and the L1 loss as the loss function [20,25,26,32]. The networks were implemented in the PyTorch framework on an NVIDIA GeForce GTX 1080Ti GPU; training the basic network Nb took about 7 h 27 min, and training the adaptive network Na took about 4 h 31 min. The testing time for the 100 images of the DIV2K validation set [31] was 51 s.
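A minimal training-loop sketch under the stated settings (ADAM, learning rate 10−4, L1 loss) is given below. The `loader` yielding (LR, HR) batches and the epoch count are assumptions; stage 1 optimizes all of Nb on dataset1, and stage 2 optimizes only the AdaFM parameters of Na on dataset2.

```python
# A hedged training-loop sketch: ADAM, lr = 1e-4, L1 loss, per the text.
import torch
import torch.nn as nn

def train(net, loader, params, epochs=100, device="cuda"):
    net.to(device).train()
    opt = torch.optim.Adam(params, lr=1e-4)
    l1 = nn.L1Loss()
    for _ in range(epochs):
        for lr_img, hr_img in loader:          # loader yields (LR, HR) pairs
            lr_img, hr_img = lr_img.to(device), hr_img.to(device)
            opt.zero_grad()
            loss = l1(net(lr_img), hr_img)
            loss.backward()
            opt.step()

# Stage 1: all parameters of Nb on dataset1.
#   train(nb, loader1, nb.parameters())
# Stage 2: only the AdaFM parameters of Na on dataset2 (Nb stays fixed).
#   adafm_params = [p for m in na.modules() if isinstance(m, AdaFM)
#                   for p in m.parameters()]
#   train(na, loader2, adafm_params)
```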

Spatial resolution enhancement: Eventually, after the interpolation and the polynomial-function fitting, the adjustable network $\textrm{N}_\textrm{a}^\gamma$ can be used to enhance the resolution of real THz images at ranges from -zmax to zmax.

3. Experiment results

3.1 Validation set and synthetic image evaluation

In order to quantitatively and qualitatively evaluate the fixed-level super-resolution ability of our network, all 100 HR images of the DIV2K validation set [31] were used as the test HR images, and the same degradation model, Eq. (4), was used to produce the L1 validation set1 (HR and $\textrm{LR}(7,1.5,2)$) and the L2 validation set2 (HR and $\textrm{LR}(3.2,1.5,2)$). The evaluation metrics peak signal-to-noise ratio (PSNR) [33] and structural similarity (SSIM) (with the constants set to C1 = 2.55² and C2 = 7.65²) [32,34] were used to assess the image quality from different perspectives. The benchmarks SRCNN (3 layers) [22] and SRResNet (16 ResBlocks) [35] were also tested for comparison. The evaluation results are shown in Table 1, with the best indexes in bold.
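As a sketch of the evaluation, note that with data_range = 255 the scikit-image defaults K1 = 0.01 and K2 = 0.03 give exactly C1 = 2.55² and C2 = 7.65², matching the constants stated above.

```python
# A sketch of the PSNR/SSIM evaluation on 8-bit-range images.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(sr, hr):
    """Both inputs: 2-D arrays scaled to [0, 255]."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
    ssim = structural_similarity(hr, sr, data_range=255)  # C1=2.55^2, C2=7.65^2
    return psnr, ssim
```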

Table 1. The evaluation results for the DIV2K based validation set

It can be seen that for the L1 test, the basic network Nb reaches results similar to the same-depth benchmark SRResNet, far better than the simple SRCNN. For the L2 test, the adaptive network Na (derived from the basic network Nb, with only the additional 23,680 parameters in the AdaFM layers optimized on dataset2) reaches results comparable to its counterpart trained from scratch on dataset2 (all 1,538,945 parameters) and to SRResNet, which proves the validity of our transfer training mode.

To further illustrate the super-resolution ability of our network, a synthetic image was used for evaluation. Analytic algorithms, including bicubic interpolation, the widely used Lucy–Richardson algorithm [12,13,17,18], and the normalized sparsity measurement blind-deconvolution method [19], are shown for contrast. In addition, SRCNN and SRResNet are displayed for comparison. To make a fair comparison, bicubic interpolation was also applied after the Lucy–Richardson algorithm and the blind-deconvolution method to reach the same pixel count.

The comparison results of the different methods at the same degradation level $L1(7,1.5,2)$ are shown in Fig. 8, and the evaluation indexes, statistically averaged over 10 runs, are shown in Table 2. As can be seen, for this fixed-level task, the deep CNNs (SRResNet and our network) produce outstanding results with higher spatial resolution. Moreover, the learning-based methods tend to reduce the noise at the same time.

Fig. 8. Comparison of different methods on the synthetic image. (a) The degraded synthetic image; (b) Bicubic interpolation; (c) Lucy–Richardson deconvolution; (d) normalized sparsity measurement blind-deconvolution; (e) SRCNN; (f) SRResNet; (g) basic network Nb; (h) the original synthetic HR image.

Table 2. The comparison results of different methods for the synthetic image

Based on the above synthetic scenario, by setting up a group of image pairs with different imaging ranges, the relationship between the spot radius $\omega (z)$ (restoration level) and the interpolation factor λ of the network was tested, as shown in Fig. 9. The fitted quartic polynomial function is shown with the white line in Fig. 9 (the PSNR and SSIM results share the weight equally). The corresponding relationship between λ and the new interpolation parameter γ in Eq. (7) can then be derived as:

$$T(\gamma ) = T(\frac{{|z |}}{{|{{z_{\max}}} |}}) = 1.49{\gamma ^4} - 3.42{\gamma ^3} + 2.05{\gamma ^2} - 0.59\gamma + 0.46. $$

This fitted equation is also used in the remainder of this paper to restore the real THz images.
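For reference, the fitted quartic of Eq. (8) can be applied directly to a measured range; e.g., the board at z = -2 mm in Sec. 3.2 maps to γ ≈ 0.05.

```python
# Eq. (8): map a measured offset range z (mm) to gamma and then to lambda.
def T(gamma):
    return (1.49 * gamma**4 - 3.42 * gamma**3
            + 2.05 * gamma**2 - 0.59 * gamma + 0.46)

gamma = abs(-2.0) / 42.0        # board 2 mm ahead of focus -> gamma ~ 0.05
print(round(gamma, 2), T(gamma))  # -> lambda ~ 0.44
```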

Fig. 9. The relationship between the spot radius $w(z)$ and the interpolation factor λ of the network. (a) The PSNR results of the synthetic images; (b) the SSIM results.

3.2 Experiments for real THz images

A steel test board, shown in Fig. 10, was scanned at ranges of z = -2 mm and z = 25 mm, respectively. The scanning step (sampling interval) was set to 1 mm. When the board was located 2 mm ahead of the focal plane, the raw THz image, with a size of 115×100 (1 mm pixel interval), is as shown in Fig. 11(a). As the restoration level of our adjustable network $\textrm{N}_\textrm{a}^\gamma$ changes from L2 to L1, the super-resolution results, with an image size of 230×200 (0.5 mm pixel interval), are shown in Fig. 11. Part of the letter “B” and the 4 mm vertical strips are enlarged for better comparison, as shown in the green and black boxes in Fig. 11, and all values in the figures were normalized by the metal-plate response, as discussed in Sec. 2.3. Figure 11 shows that as the restoration level increases, the results change from over-smoothed to over-sharpened. Based on the measured range z of the board, the interpolation coefficient was calculated by Eq. (8), producing the matched restoration result shown in Fig. 11(c). To evaluate the results more intuitively, Fig. 12(a) shows the transects along X at different restoration levels (the location is labeled with the white dotted line in Fig. 11(a)), and the black arrows in Fig. 12(a) mark the length (3 mm, corresponding to 6 pixels) of the hollowed-out strips based on the ground truth of the board (marked at 2/3 of the peak value). Though some over-sharpened results show better visual effects in some areas (while suffering from over-sharpening artifacts at the same time), as in Fig. 11(d) and (e), they also tend to over-focus the hollowed-out strips, deviating from the ground truth, as shown in Fig. 12(a). Figure 12(b) shows the transects marked with the black dotted line in Fig. 11(a); the hollowed-out ground-truth lengths (4 mm, corresponding to 8 pixels) are again labeled with black arrows. It shows that the matched restoration result is produced with the calculated interpolation coefficient.

Fig. 10. The test board. The widths of the hollowed-out strips are 4 mm, 3 mm, 2 mm, 1.5 mm, 1 mm, and 0.5 mm, which are the same as their intervals. The width of the letters is 3 mm.

Fig. 11. The super-resolution results of the test board at -2 mm with different restoration levels. (a) The measured THz image; (b) super-resolution result when the restoration level of the network corresponds to $L2(\omega = 3.2,1.5,2)$ (Na); (c) restoration level: $L(\omega = 3.9,1.5,2)$ (γ = 0.05, |z|=2 mm); (d) restoration level: $L(\omega = 5.3,1.5,2)$ (γ = 0.60, |z|=25 mm); (e) restoration level: $L(\omega = 6.1,1.5,2)$ (γ = 0.80, |z|=33.6 mm); (f) restoration level: $L1(\omega = 7,1.5,2)$ (γ = 1, |z|=42 mm, Nb). The predominant noise labeled by the red arrow is caused by the poor transmitted phase noise of the radar system, which will be explained later.

Fig. 12. Comparison of the transects along X. (a) The comparison results with different restoration levels $L(\omega ,1.5,2)$ at the white dotted line in Fig. 11; and (b) at the black dotted line in Fig. 11; (c) the comparison results with different methods at the white dotted line in Fig. 11; and (d) at the black dotted line in Fig. 11.

The results when the target was located at 25 mm are shown in Fig. 13. As the offset imaging range grows, the PSF gets broader and the image gets smoother. Accordingly, based on Eq. (8), the network with a higher restoration level was used to handle the image; the matched result is shown in Fig. 13(d). It should be noted that, due to the real asymmetric THz beam, the resolution of same-size strips with different orientations differs. Thus, the real PSF should be explored to create the dataset, which deserves further study.

Fig. 13. The super-resolution results of the test board at 25 mm with different restoration levels. (a) The measured THz image; (b) super-resolution result when the restoration level of the network corresponds to $L2(\omega = 3.2,1.5,2)$ (Na); (c) restoration level: $L(\omega = 3.9,1.5,2)$ (γ = 0.05, |z|=2 mm); (d) restoration level: $L(\omega = 5.3,1.5,2)$ (γ = 0.60, |z|=25 mm); (e) restoration level: $L(\omega = 6.1,1.5,2)$ (γ = 0.80, |z|=33.6 mm); (f) restoration level: $L1(\omega = 7,1.5,2)$ (γ = 1, |z|=42 mm, Nb).

Figure 14 shows the results of the different methods when the target was located at -2 mm. The SRResNet was trained with dataset1 (the L1 restoration level); thus it produces a mismatched over-sharpened result, which over-focuses the hollowed-out strips and introduces harmful artifacts. To obtain a better result, such a fixed network would have to be retrained from scratch at the matched restoration level, which is very time consuming (with the same computer configuration, each training from scratch takes about 8 h 9 min for SRResNet). In contrast, merely by tuning the interpolation coefficient, our method produces the matched network without extra training. Figure 14(f) shows the matched super-resolution result of our method (same as Fig. 11(c)). Compared with the analytic methods, our method restores the shapes of the objects more properly, producing more details while reducing the noise, whereas the analytic methods tend to amplify the noise. Figure 12(c) and (d) show the transects along X for the different methods (the same locations as in Fig. 12(a) and (b); the lines of our method are therefore the same as the corresponding yellow lines in Fig. 12(a) and (b)). It can be seen that sharper borders are produced (but not over-focused, as shown in Fig. 12(a) and (b)) and the noise is reduced (the sink in the middle peak in Fig. 12(c) is restored) by our method, which also demonstrates its effectiveness and superiority. Note that, without prior knowledge of the PSF, the blind-deconvolution method produces a result that deviates severely from the ground truth. Figure 15 shows the comparison results when the target was located at 25 mm.

Fig. 14. The comparison of the results when the board was located at -2 mm. (a) The measured THz image; (b) Bicubic interpolation; (c) Lucy–Richardson deconvolution; (d) normalized sparsity measurement blind-deconvolution; (e) SRResNet; (f) our network with the corresponding interpolation coefficient γ = 0.05 ($\omega = 3.9$).

Fig. 15. The comparison of the results when the board was located at 25 mm. (a) The measured THz image; (b) Bicubic interpolation; (c) Lucy–Richardson deconvolution; (d) normalized sparsity measurement blind-deconvolution; (e) SRResNet; (f) our network with the corresponding interpolation coefficient γ = 0.60 ($\omega = 5.3$).

Finally, a circuit with a plastic casing, shown in Fig. 16, was tested with a scanning step of 1 mm. The B-scan image (marked with the red dashed lines in Fig. 16) is shown in Fig. 17. Based on the precise ranging ability, the interpolation coefficient of our method for the inner circuit could be calculated by Eq. (8), producing the matched restoration result. The imaging results of the circuit are shown in Fig. 18. As can be seen, our method produces more details of the circuit: the structure of the crystal oscillator together with its surrounding capacitors, marked with the red box in Fig. 16, is reconstructed more clearly, as shown in the enlarged red boxes in Fig. 18. Furthermore, the length and width of the crystal oscillator were evaluated from the results in the red boxes, and our method produced the results closest to the ground truth, as shown in Table 3.

Fig. 16. The plastic casing and its inner structure.

Fig. 17. The B-scan cross section of the circuit with the casing.

Fig. 18. The images of the circuit by different methods. (a) The measured THz image; (b) Bicubic interpolation; (c) Lucy–Richardson deconvolution; (d) normalized sparsity measurement blind-deconvolution; (e) SRResNet; (f) our network with the corresponding interpolation coefficient γ = 0.02 ($\omega = 3.8$). The predominant noise labeled by the red arrows is caused by the poor transmitted phase noise of the radar system.

Table 3. The evaluated dimensions of the crystal oscillator

However, it should be mentioned that when there are strongly reflecting targets in the imaging region, such as the USB interface and the metal base in the circuit test (Fig. 16), or the metal clamp in the above board test (several millimeters ahead of the board, as shown in Fig. 5), the side lobes of the strongly reflecting targets may introduce interference in the imaging plane of interest due to the poor transmitted phase noise of the radar system [9,36], as marked by the red arrows in Fig. 11(a) and Fig. 18(a). The super-resolution methods regard these artifacts as measured strips and mistakenly restore them; because of the strong mapping ability of the CNN, the learning-based methods tend to over-restore them and introduce severe artifacts.

4. Conclusion

In this paper, an accommodative super-resolution framework for real-aperture-scanning THz images is proposed. Specifically, the 3D degradation model for the THz imaging system is first proposed by incorporating the focused THz beam distribution, which determines the relationship between the imaging range and the corresponding image restoration level. Thus, with the help of the measured imaging range, each acquired image can be specifically treated. Second, an adjustable CNN is introduced to cope with this range-dependent super-resolution problem. Simply by tuning the interpolation parameter, the network can be adjusted to produce arbitrary and continuous restoration levels between the trained fixed levels without extra training. Finally, by selecting the appropriate interpolation coefficient according to the measured imaging range, each THz image can be processed by its matched CNN to reach an outstanding super-resolution effect. Both simulated and real test data, acquired by a 160 ∼ 220 GHz FMCW real-aperture-scanning imager, have been used to demonstrate the superiority and effectiveness of our method, qualitatively and quantitatively.

Funding

National Natural Science Foundation of China (61527805, 61731001).

Disclosures

The authors declare no conflicts of interest.

References

1. C. Koch Dandolo, J. Guillet, X. Ma, F. Fauquet, M. Roux, and P. Mounaix, “Terahertz frequency modulated continuous wave imaging advanced data processing for art painting analysis,” Opt. Express 26(5), 5358–5367 (2018). [CrossRef]  

2. J. Dong, J. B. Jackson, M. Melis, D. Giovanacci, G. C. Walker, A. Locquet, J. W. Bowen, and D. S. Citrin, “Terahertz frequency-wavelet domain deconvolution for stratigraphic and subsurface investigation of art painting,” Opt. Express 24(23), 26972–26985 (2016). [CrossRef]  

3. E. M. Stübling, A. Rehn, T. Siebrecht, Y. Bauckhage, L. Öhrström, P. Eppenberger, J. C. Balzer, F. Rühli, and M. Koch, “Application of a robotic THz imaging system for sub-surface analysis of ancient human remains,” Sci. Rep. 9(1), 3390 (2019). [CrossRef]  

4. H. Quast, A. Keil, and T. Löffler, “Investigation of foam and glass fiber structures used in aerospace applications by all-electronic 3D Terahertz imaging,” in International Conference on Infrared Millimeter & Terahertz Waves, (IEEE, 2010), 1–2.

5. F. Friederich, K. H. May, B. Baccouche, C. Matheis, and N. Savage, “Terahertz Radome Inspection,” Photonics 5(1), 1–10 (2018). [CrossRef]  

6. N. Karpowicz, H. Zhong, C. Zhang, K. I. Lin, J. S. Hwang, J. Xu, and X. C. Zhang, “Compact continuous-wave subterahertz system for inspection applications,” Appl. Phys. Lett. 86(5), 054105 (2005). [CrossRef]  

7. K. Ahi, “Mathematical Modeling of THz Point Spread Function and Simulation of THz Imaging Systems,” IEEE Trans. Terahertz Sci. Technol. 7(6), 747–754 (2017). [CrossRef]  

8. K. Ahi, S. Shahbazmohamadi, and N. Asadizanjani, “Quality control and authentication of packaged integrated circuits using enhanced-spatial-resolution terahertz time-domain spectroscopy and imaging,” Opt. Lasers Eng. 104, 274–284 (2018). [CrossRef]  

9. K. B. Cooper, R. J. Dengler, N. Llombart, B. Thomas, and P. H. Siegel, “THz Imaging Radar for Standoff Personnel Screening,” IEEE Trans. Terahertz Sci. Technol. 1(1), 169–182 (2011). [CrossRef]  

10. J. Grajal, A. Badolato, G. Rubio-Cidre, L. Úbeda-Medina, B. Mencia-Oliva, A. Garcia-Pino, B. Gonzalez-Valdes, and O. Rubiños, “3-D high-resolution imaging radar at 300 GHz with enhanced FoV,” IEEE Trans. Microwave Theory Tech. 63(3), 1097–1107 (2015). [CrossRef]  

11. D. A. Robertson, D. G. Macfarlane, and T. Bryllert, “220 GHz wideband 3D imaging radar for concealed object detection technology development and phenomenology studies,” in Proc. SPIE 9830 (2016), 983009.

12. T. M. Wong, M. Kahl, P. H. Bolívar, and A. Kolb, “Computational image enhancement for frequency modulated continuous wave (FMCW) THz image,” J. Infrared, Millimeter, Terahertz Waves 40(7), 775–800 (2019). [CrossRef]  

13. S. H. Ding, Q. Li, R. Yao, and Q. Wang, “High-resolution terahertz reflective imaging and image restoration,” Appl. Opt. 49(36), 6834–6839 (2010). [CrossRef]  

14. P. Knobloch, C. Schildknecht, T. Kleine-Ostmann, and M. Koch, “Medical THz imaging: an investigation of histo-pathological samples,” Phys. Med. Biol. 47(21), 3875–3884 (2002). [CrossRef]  

15. D. C. Popescu and A. D. Hellicar, “Point spread function estimation for a terahertz imaging system,” EURASIP J. Adv. Signal Process. 2010(1), 575817 (2010). [CrossRef]  

16. Q. Li, Q. Yin, R. Yao, S. Ding, and Q. Wang, “Continuous-wave terahertz scanning image resolution analysis and restoration,” Opt. Eng. 49(3), 037007 (2010). [CrossRef]  

17. Y. Li, L. Li, A. Hellicar, and Y. J. Guo, “Super-resolution reconstruction of terahertz images,” in Proc. SPIE 6949 (2008), 69490J.

18. L. M. Xu, W. H. Fan, and J. Liu, “High-resolution reconstruction for terahertz imaging,” Appl. Opt. 53(33), 7891 (2014). [CrossRef]  

19. D. Krishnan, T. Tay, and R. Fergus, “Blind deconvolution using a normalized sparsity measure,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2011), 233–240.

20. Y. Li, W. Hu, S. Chen, W. Zhang, R. Guo, J. He, and L. Ligthart, “Spatial Resolution Matching of Microwave Radiometer Data with Convolutional Neural Network,” Remote Sens. 11(20), 2432 (2019). [CrossRef]  

21. Z. Long, T. Wang, C. You, Z. Yang, K. Wang, and J. Liu, “Terahertz image super-resolution based on a deep convolutional neural network,” Appl. Opt. 58(10), 2731–2735 (2019). [CrossRef]  

22. C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016). [CrossRef]  

23. K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), 2808–2817.

24. N. Efrat, D. Glasner, A. Apartsin, B. Nadler, and A. Levin, “Accurate blur models vs. image priors in single image super-resolution,” in Proceedings of the IEEE International Conference on Computer Vision, (IEEE, 2013), 2832–2839.

25. J. He, C. Dong, and Y. Qiao, “Modulating image restoration with continual levels via adaptive feature modification layers,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2019), 11056–11064.

26. B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (IEEE, 2017), 1132–1140.

27. J. Kim, J. Kwon Lee, and K. Mu Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (IEEE, 2016), 1646–1654.

28. W. Hu, Y. Li, W. Zhang, S. Chen, X. Lv, and L. Ligthart, “Spatial resolution enhancement of satellite microwave radiometer data with deep residual convolutional neural network,” Remote Sens. 11(7), 771 (2019). [CrossRef]  

29. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (IEEE, 2016), 770–778.

30. R. Gong, W. Li, Y. Chen, and L. Van Gool, “DLOW: domain flow for adaptation and generalization,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2019), 2477–2486.

31. E. Agustsson and R. Timofte, “Ntire 2017 challenge on single image super-resolution: Dataset and study,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), 126–135.

32. H. Zhao, O. Gallo, I. Frosio, and J. Kautz, “Loss functions for image restoration with neural networks,” IEEE Trans. Comput. Imaging 3(1), 47–57 (2017). [CrossRef]  

33. N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, “Image quality assessment based on a degradation model,” IEEE Trans. Image Process. 9(4), 636–650 (2000). [CrossRef]  

34. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]  

35. C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), 105–114.

36. G. Rubio-Cidre, A. Badolato, L. Úbeda-Medina, J. Grajal, B. Mencia-Oliva, and B.-P. Dorta-Naranjo, “DDS-based signal-generation architecture comparison for an imaging radar at 300 GHz,” IEEE Trans. Instrum. Meas. 64(11), 3085–3098 (2015). [CrossRef]  
