Optica Publishing Group

Fast generation of computer-generated holograms using wavelet shrinkage

Open Access

Abstract

Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.

© 2017 Optical Society of America

1. Introduction

Computer-generated holograms (CGHs) [1] can generate arbitrary wavefronts from computer-synthesized interference patterns and are used in a wide range of optical applications, including three-dimensional (3D) displays, holographic tweezers, optical elements, and beam generation. These applications require real-time interactive operation; therefore, fast CGH calculation algorithms are strongly demanded.

CGHs are calculated by superimposing complex amplitudes emitted from object points. The complex amplitude on a CGH plane can be expressed by the following equation:

$$u(x_h, y_h) = \sum_j^N a_j \exp\left(i\frac{2\pi}{\lambda} r_{hj}\right) = \sum_j^N a_j\, u_{z_j}(x_h - x_j,\, y_h - y_j), \tag{1}$$
where i is the imaginary unit, N is the number of object points, (xh, yh) are the coordinates of the CGH plane, (xj, yj, zj) and aj are the coordinates and amplitude of the j-th object point, λ is the wavelength, and uzj is the point spread function (PSF) at a distance of zj. rhj is the distance between a three-dimensional (3D) object point and a CGH coordinate and is calculated as follows:
$$r_{hj} = \sqrt{(x_h - x_j)^2 + (y_h - y_j)^2 + z_j^2} \approx z_j + \frac{(x_h - x_j)^2 + (y_h - y_j)^2}{2 z_j}. \tag{2}$$
The computational complexity of Eq. (1) is $O(N N_h^2)$, where the resolution of the CGH is $N_h \times N_h$ pixels. This superposition process is highly time-consuming even on the latest computers.
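For reference, the superposition of Eq. (1) under the Fresnel approximation of Eq. (2) can be sketched directly in NumPy; this naive $O(N N_h^2)$ version is the baseline that the fast algorithms below set out to accelerate (the function name and default parameters are illustrative only):

```python
import numpy as np

def cgh_direct(points, amps, nh=256, pitch=10e-6, wl=633e-9):
    """Naive superposition of point-source complex amplitudes (Eq. (1)),
    with r_hj evaluated via the Fresnel approximation (Eq. (2))."""
    ys, xs = np.meshgrid(np.arange(nh), np.arange(nh), indexing="ij")
    xh, yh = xs * pitch, ys * pitch
    u = np.zeros((nh, nh), dtype=complex)
    for (xj, yj, zj), aj in zip(points, amps):
        # Fresnel approximation of the point-to-plane distance r_hj.
        r = zj + ((xh - xj) ** 2 + (yh - yj) ** 2) / (2.0 * zj)
        u += aj * np.exp(1j * 2.0 * np.pi / wl * r)
    return u
```

The double loop over object points and the full-plane exponential make the $O(N N_h^2)$ cost explicit.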

A number of fast CGH calculation algorithms [2] have been proposed, e.g., look-up table (LUT) methods [3–5], recurrence relations [6–8], the image hologram method [9], and wavefront recording plane (WRP) methods [10–13]. LUT methods reduce the cost of evaluating the trigonometric functions and square roots in Eq. (1) by precalculating these functions and storing the results in LUTs. However, the computational complexity still remains $O(N N_h^2)$.

The image hologram method drastically reduces the computational complexity. In this method, the 3D object points are placed near the CGH plane so that the radius $W_j$ of the $j$-th PSF in the CGH plane becomes small. The computational complexity in this case is $O(\bar{W}^2 N)$, where $\bar{W}$ is the average radius of the PSFs on the CGH with respect to all object points [2]. The average radius is determined by

$$\bar{W} = \frac{1}{N}\sum_j^N W_j = \frac{1}{N}\sum_j^N z_j \tan\theta, \tag{3}$$
where the maximum divergence angle of the PSFs is $\theta = \sin^{-1}(\lambda/(2p))$, with $p$ the sampling pitch of the CGH plane. This shows that the computational complexity decreases as $z_j$ decreases. However, this method can only be used in the vicinity of the CGH plane.
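As a numerical illustration of Eq. (3), the divergence angle and PSF radius can be evaluated under the paper's conditions (wavelength 633 nm, pitch 10 μm); the depth values below are hypothetical:

```python
import math

# Paper's conditions: wavelength 633 nm, CGH sampling pitch p = 10 um.
wl, p = 633e-9, 10e-6
theta = math.asin(wl / (2 * p))  # maximum divergence angle of the PSF

def psf_radius(zj):
    """Radius W_j of the PSF on the CGH plane for an object point at depth zj."""
    return zj * math.tan(theta)

# Average radius W_bar over a hypothetical set of object-point depths (Eq. (3)).
depths = [0.05, 0.075, 0.1]
w_bar = sum(psf_radius(z) for z in depths) / len(depths)
```

This confirms the trend stated above: the PSF radius, and hence the superposition cost, grows linearly with the object distance $z_j$.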

WRP methods extend the image hologram method to make it applicable at an arbitrary distance from the CGH. These methods comprise two steps: (1) recording the object points on a virtual plane placed near the 3D object using Eq. (1) and (2) calculating the numerical diffraction from the virtual plane to the CGH plane. The computational complexity is then $O(\bar{W}^2 N) + O(N_h^2 \log N_h)$. WRP methods can accelerate the CGH calculation at an arbitrary distance; however, the first step still incurs a heavy computational cost.

To further accelerate the CGH calculation, we propose a fast CGH calculation algorithm using wavelet shrinkage, named WAvelet ShrinkAge-Based superpositIon (WASABI). Wavelet shrinkage [14, 15] eliminates small wavelet coefficient values of the PSF $u_{z_j}$, resulting in an approximated PSF expressed by a few representative wavelet coefficients. The use of these representative coefficients accelerates the superposition of PSFs in the wavelet domain. Several wavelet-transform-based methods have been proposed for digital holography [16–19]; however, to the best of our knowledge, this is the first attempt at developing a wavelet-based CGH acceleration method. This paper is structured as follows: Section 2 describes the WASABI method, Section 3 demonstrates the images reconstructed using the proposed method and its computational performance, and Section 4 presents our conclusions.

2. Proposed method WASABI

The superposition calculation illustrated in Eq. (1) is time-consuming. The objective of using WASABI is to superpose the PSF as quickly as possible. The WASABI calculations are performed as follows:

  1. Precomputation: we precompute the PSF uzj (xh, yh) and then transform the space-domain PSF into a wavelet-domain PSF. The number of wavelet coefficients is reduced by wavelet shrinkage.
  2. Superposition: the reduced wavelet coefficients are superposed in the wavelet domain.
  3. Inverse transformation: the conversion back from the wavelet domain to the space domain is performed.

2.1. Precomputation

In the precomputation step, a PSF $u_{z_j}(x_h, y_h)$ is calculated from an object point at $(0, 0, z_j)$ and is then transformed from the space domain into the wavelet domain using a fast wavelet transform (FWT) [14]. The computational complexity of the forward and inverse FWTs is $O(N_h^2)$.

According to wavelet theory, a PSF $u_{z_j}(x_h, y_h)$ is decomposed using the scaling coefficients $s^{(\ell)}_{m,n}$ as follows:

$$u_{z_j}(x_h, y_h) = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} s^{(0)}_{m,n}\, \phi(x_h - m)\, \phi(y_h - n), \tag{4}$$

where $m, n \in \mathbb{Z}$, $\ell$ denotes the level of the wavelet decomposition, and $\phi$ is the scaling function. In wavelet theory, we do not need the explicit mathematical form of the scaling function; we only need the two-scale sequences in Eq. (6). Mallat showed that the zero level of the scaling coefficients can be regarded as the original signal [20], so that $u_{z_j}(x_h, y_h)$ can be expressed as

$$u_{z_j}(x_h, y_h) \equiv s^{(0)}_{m,n}. \tag{5}$$

The FWT decomposes $u_{z_j}(x_h, y_h)$ into scaling coefficients $s^{(\ell)}_{m,n}$ and wavelet coefficients $w^{(\ell)}_{LH,m,n}$, $w^{(\ell)}_{HL,m,n}$, and $w^{(\ell)}_{HH,m,n}$ by

$$s^{(\ell+1)}_{m,n} = \sum_{k_2}\sum_{k_1} p_{k_1-2m}\, p_{k_2-2n}\, s^{(\ell)}_{k_1,k_2}, \quad w^{(\ell+1)}_{LH,m,n} = \sum_{k_2}\sum_{k_1} p_{k_1-2m}\, q_{k_2-2n}\, s^{(\ell)}_{k_1,k_2},$$
$$w^{(\ell+1)}_{HL,m,n} = \sum_{k_2}\sum_{k_1} q_{k_1-2m}\, p_{k_2-2n}\, s^{(\ell)}_{k_1,k_2}, \quad w^{(\ell+1)}_{HH,m,n} = \sum_{k_2}\sum_{k_1} q_{k_1-2m}\, q_{k_2-2n}\, s^{(\ell)}_{k_1,k_2}, \tag{6}$$

where $p$ and $q$ are the two-scale sequences. We recursively apply Eq. (6) until the desired level $\ell = L$ is reached. Several two-scale sequences have been proposed previously. In this paper, we use Daubechies4 [14] for the two-scale sequences and choose $L = 2$ because we empirically found that these settings yield good-quality reconstructed images.
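One analysis level of Eq. (6) can be sketched as follows. For brevity, the sketch uses the orthonormal Haar two-scale sequences ($p = (1,1)/\sqrt{2}$, $q = (1,-1)/\sqrt{2}$) rather than the Daubechies4 sequences used in the paper, and the LH/HL/HH naming follows the paper's convention (row/column ordering conventions vary between texts):

```python
import numpy as np

def fwt2_level(s):
    """One level of the 2D FWT (Eq. (6)) with Haar two-scale sequences:
    split s^(l) into s^(l+1) and the LH/HL/HH wavelet coefficients."""
    rt2 = np.sqrt(2.0)
    a = (s[:, 0::2] + s[:, 1::2]) / rt2   # low-pass along rows
    d = (s[:, 0::2] - s[:, 1::2]) / rt2   # high-pass along rows
    s_next = (a[0::2, :] + a[1::2, :]) / rt2
    w_lh   = (a[0::2, :] - a[1::2, :]) / rt2
    w_hl   = (d[0::2, :] + d[1::2, :]) / rt2
    w_hh   = (d[0::2, :] - d[1::2, :]) / rt2
    return s_next, w_lh, w_hl, w_hh
```

Applying `fwt2_level` again to `s_next` gives the level $L = 2$ decomposition used in the paper; because the transform is orthonormal, the coefficient energy equals the signal energy.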

After the FWT, the scaling and wavelet coefficients are generally distributed as shown in Fig. 1. Figure 1(c) shows the wavelet decomposition of a PSF (Fig. 1(b)) whose calculation conditions are zj = 100 mm, a pixel pitch of 10 μm, and a wavelength of 633 nm. The number of raw scaling and wavelet coefficients equals the number of pixels in the CGH plane, $N_h^2$. If the next step (the superposition step) used the raw coefficients, no acceleration of the CGH calculation could be expected; therefore, we apply wavelet shrinkage, which eliminates small wavelet coefficient values so that the approximated PSF is expressed by a few representative coefficients.

Fig. 1 Distribution of the scaling and wavelet coefficients: (a) distribution of the scaling and wavelet coefficients, (b) a PSF whose calculation conditions are zj = 100 mm, a pixel pitch of 10 μm, and a wavelength of 633 nm, and (c) the wavelet decomposition of Fig. 1(b).

After sorting the scaling and wavelet coefficients by their amplitudes $|s^{(\ell)}_{m,n}|$ and $|w^{(\ell)}_{m,n}|$, where $|a| = \sqrt{\mathrm{Re}\{a\}^2 + \mathrm{Im}\{a\}^2}$, in descending order, we select the $N_r$ coefficients with the largest amplitudes. In this paper, $N_r$ is defined as

$$N_r = 2\pi W_j^2 \times r, \tag{7}$$

where $W_j$ is the radius of the PSF $u_{z_j}$ and $r$ is the selection rate among the larger amplitudes. The selected coefficients are stored in a vector $v_z$, which is described as

$$v_z = [c_{z,0}, \alpha_{z,0}, \ldots, c_{z,N_r-1}, \alpha_{z,N_r-1}], \tag{8}$$

where

$$c_{z,k} \in \{s^{(\ell)}_{m,n},\, w^{(\ell)}_{LH,m,n},\, w^{(\ell)}_{HL,m,n},\, w^{(\ell)}_{HH,m,n}\}, \tag{9}$$

that is, $c_{z,k}$ takes any one of $s^{(\ell)}_{m,n}$, $w^{(\ell)}_{LH,m,n}$, $w^{(\ell)}_{HL,m,n}$, and $w^{(\ell)}_{HH,m,n}$ for a PSF $u_{z_j}$; $\alpha_{z,k}$ is the shift weight of $c_{z,k}$ used in the superposition step and is expressed as

$$\alpha_{z,k} = 2^{\ell}. \tag{10}$$

As shown in Fig. 1, the level $\ell$ takes different values depending on $(m, n)$.

We prepare $N_z$ such vectors, where $N_z$ is the number of z-direction slices of the 3D objects, and store them in an LUT for the numerical calculation.
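The shrinkage and LUT-vector construction described above can be sketched as follows (the helper names `shrink` and `to_vector` are hypothetical, and the per-subband bookkeeping of Fig. 1 is omitted for brevity):

```python
import numpy as np

def shrink(coeffs, n_r):
    """Wavelet shrinkage: keep only the n_r largest-magnitude coefficients
    (|a| = sqrt(Re^2 + Im^2)) and zero the rest."""
    flat = coeffs.ravel()
    keep = np.argsort(np.abs(flat))[-n_r:]   # indices of the n_r largest |c|
    out = np.zeros_like(flat)
    out[keep] = flat[keep]
    return out.reshape(coeffs.shape)

def to_vector(coeffs, alphas, n_r):
    """Pack the kept coefficients and their shift weights into a LUT entry
    v_z = [(c_0, alpha_0), ..., (c_{Nr-1}, alpha_{Nr-1})] (cf. Eq. (8))."""
    flat, aflat = coeffs.ravel(), alphas.ravel()
    keep = np.argsort(np.abs(flat))[-n_r:]
    return [(flat[k], aflat[k]) for k in keep]
```

One such vector would be built per depth slice $z$ and stored in the LUT.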

2.2. Superposition and inverse transformation

In the superposition step, we superpose the reduced coefficients of all object points in the wavelet domain. For example, to calculate an object point at $(x_1, y_1, z_1)$, we first read the reduced coefficients of $u_{z_1}$ from the vector $v_{z_1}$. Each reduced coefficient $c_{z,k}$ at position $(m, n)$ is then shifted by $2^{\ell} x_1 = \alpha_{z,k} x_1$ and $2^{\ell} y_1 = \alpha_{z,k} y_1$ in the horizontal and vertical directions, and the shifted coefficients are added to the wavelet space $\psi(m, n)$. The mathematical expression of the superposition is as follows:

$$\psi(m,n) = \sum_{j=0}^{N} a_j \sum_{k=0}^{N_r-1} c_{z_j,k}\, \delta(m - \alpha_{z_j,k} x_j,\, n - \alpha_{z_j,k} y_j), \tag{11}$$

where $\delta(m, n)$ is the Dirac delta function. Note that a shifted $c_{z_j,k}$ must remain in the same region as the original $c_{z_j,k}$. For example, in Fig. 1, if point “a” of $c_{z_j,k}$ is shifted to point “b,” the shifted coefficient can be superposed using Eq. (11) because both points lie in the same region. Conversely, if point “a” is shifted to point “c” or “d,” the shifted coefficient must not be superposed because those points lie in a different region. The computational complexity of Eq. (11) is $O(r\bar{W}^2 N)$, which is clearly less than that of Eq. (1).
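The scatter-add of Eq. (11) can be sketched as follows (names are hypothetical; object-point coordinates are assumed to be in integer pixel units, and the subband-region check described above is reduced to a simple bounds check):

```python
import numpy as np

def superpose(psi, lut_entries, positions, obj_points, amps):
    """Sketch of Eq. (11): scatter each stored coefficient c_{z,k} into the
    wavelet-domain buffer psi, shifted by (alpha * xj, alpha * yj).
    lut_entries[k] = (c, alpha); positions[k] = the coefficient's (m, n)."""
    for (xj, yj), aj in zip(obj_points, amps):
        for (c, alpha), (m, n) in zip(lut_entries, positions):
            mm, nn = m + alpha * xj, n + alpha * yj
            if 0 <= mm < psi.shape[0] and 0 <= nn < psi.shape[1]:
                psi[mm, nn] += aj * c
    return psi
```

Because only the $N_r$ kept coefficients are visited per point rather than the full PSF footprint, the inner loop realizes the $O(r\bar{W}^2 N)$ cost of Eq. (11).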

In the inverse transformation step, to obtain the complex amplitude in the CGH plane, the superposed coefficients $\psi(m, n)$ are transformed from the wavelet domain to the space domain using the inverse FWT. We extract the coefficients $s^{(\ell)}_{m,n}$, $w^{(\ell)}_{LH,m,n}$, $w^{(\ell)}_{HL,m,n}$, and $w^{(\ell)}_{HH,m,n}$ from $\psi(m, n)$; the inverse FWT then reconstructs the lower-level scaling coefficients from the higher-level scaling and wavelet coefficients as

$$s^{(\ell)}_{m,n} = \sum_{k_2}\sum_{k_1} \left[ p_{m-2k_1}\, p_{n-2k_2}\, s^{(\ell+1)}_{k_1,k_2} + p_{m-2k_1}\, q_{n-2k_2}\, w^{(\ell+1)}_{LH,k_1,k_2} + q_{m-2k_1}\, p_{n-2k_2}\, w^{(\ell+1)}_{HL,k_1,k_2} + q_{m-2k_1}\, q_{n-2k_2}\, w^{(\ell+1)}_{HH,k_1,k_2} \right]. \tag{12}$$

Eventually, from Eq. (5), we can regard the zero-level scaling coefficients $s^{(0)}_{m,n}$ obtained using Eq. (12) as

$$u(x_h, y_h) = \sum_j^N a_j\, u_{z_j}(x_h - x_j,\, y_h - y_j) \equiv s^{(0)}_{m,n}. \tag{13}$$

The computational complexity of the inverse FWT is $O(N_h^2)$. Finally, we convert the complex amplitude to the amplitude CGH $h(x_h, y_h)$ using $h(x_h, y_h) = \mathrm{Re}\{u(x_h, y_h)\}$, or to a kinoform $\theta(x_h, y_h)$ using $\theta(x_h, y_h) = \tan^{-1}\left(\mathrm{Im}\{u(x_h, y_h)\}/\mathrm{Re}\{u(x_h, y_h)\}\right)$.
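The final conversion step is a direct transcription of the two formulas above (function names are illustrative):

```python
import numpy as np

def to_amplitude_cgh(u):
    """Amplitude CGH: h(xh, yh) = Re{u(xh, yh)}."""
    return np.real(u)

def to_kinoform(u):
    """Kinoform (phase-only CGH): theta(xh, yh) = atan2(Im{u}, Re{u})."""
    return np.angle(u)
```

`np.angle` uses the two-argument arctangent, which resolves the quadrant ambiguity of a plain $\tan^{-1}$.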

The precomputation step need not be repeated during subsequent CGH calculations; for each new 3D object, we only repeat the superposition and inverse transformation steps. The computational complexity of the proposed method is therefore expressed as

$$O(r\bar{W}^2 N) + O(N_h^2). \tag{14}$$

When we set r = 1%, the computational complexity of the superposition is reduced by up to two orders of magnitude.

Figures 2(a)–2(d) show an original PSF and the PSFs approximated by WASABI with zj = 0.1 m and r = 20%, 10%, and 5%. Figure 3(a) shows the intensity profiles reconstructed from the original PSF of Fig. 2(a) and the approximated PSF of Fig. 2(d) with r = 5%. The intensity profiles are normalized to 1; the two profiles agree well.

Fig. 2 Point spread functions: (a) the original PSF and approximated PSFs with (b) r = 20%, (c) r = 10%, and (d) r = 5%. Interference patterns constructed from two object points using (e) the original PSF and approximated PSFs with (f) r = 20%, (g) r = 10%, and (h) r = 5%.

Fig. 3 (a) Intensity profiles reconstructed from the PSFs of Fig. 2(a) and Fig. 2(d). (b) Intensity profiles reconstructed from the interference pattern shown in Fig. 2(e) and the approximated interference pattern shown in Fig. 2(h) with r = 5%.

Figures 2(e)–2(h) show the interference patterns of two object points generated using Eq. (1) and using WASABI with r = 20%, 10%, and 5%. The result demonstrates that the interference of the two points is correctly generated via the wavelet domain. Figure 3(b) shows the intensity profiles reconstructed from the original interference pattern of Fig. 2(e) and the approximated pattern of Fig. 2(h) with r = 5%.

3. Results

In this section, we compare the computational performance of WASABI and the quality of images reconstructed using it with those of the conventional look-up table method [4]. The calculation conditions were a wavelength of 633 nm, a CGH sampling interval of 10 μm, and a CGH resolution of 2,048 × 2,048 pixels. The depth range of the 3D images was 5 mm, equally divided into 10 steps (Nz = 10). We used a computer with an Intel Core i7-6700 CPU, 64 GB of memory, the Microsoft Windows 8.1 operating system, and the Microsoft Visual Studio C++ 2013 compiler. All calculations were parallelized across eight CPU threads.

Figure 4(a) shows the reconstructed images of a dinosaur composed of 11,646 object points at two different distances of 0.5 m and 0.05 m. The images in the top row of Fig. 4(a) were obtained using the conventional method. The reconstructed images in the middle row were obtained via WASABI with r = 5%. The peak signal-to-noise ratios (PSNRs) of the reconstructed images are 30.2 dB at a distance of 0.5 m and 29.7 dB at a distance of 0.05 m, respectively. The reconstructed images in the bottom row were obtained via WASABI with r = 1%. The PSNRs of the reconstructed images are 29.3 dB at a distance of 0.5 m and 28.9 dB at a distance of 0.05 m, respectively. The PSNRs of WASABI with r = 1% are lower than those in the r = 5% case; however, WASABI with r = 1% achieves a greater acceleration of the calculation time.

Fig. 4 Reconstructed images at two different distances of 0.5 m and 0.05 m: (a) the dinosaur composed of 11,646 object points, (b) the merry-go-round composed of 95,949 points, and (c) the fountain composed of 978,416 points.

Table 1 shows the calculation times for the conventional method and WASABI. At a distance of 0.5 m, the calculation times for the conventional method, WASABI with r = 5%, and WASABI with r = 1% are approximately 33, 5.6, and 0.93 s, respectively. WASABI with r = 1% improves the calculation speed approximately 36-fold compared with the conventional method. In contrast, at a distance of 0.05 m, the improvement with r = 1% is only approximately 2.7-fold because of the cost of the superposition and inverse FWT steps of WASABI.


Table 1. Calculation times for the dinosaur image (11,646 object points) using the conventional method [4] and WASABI with r = 5% and r = 1%.

According to the computational complexity in Eq. (14), the first term is proportional to the number of object points and the area of the PSF, whereas the second term depends only on the CGH size and not on the number of object points or the PSF area. Because the dinosaur model is composed of a small number of object points (11,646) and the PSF area is small owing to the short distance of 0.05 m, the calculation time for the superposition step is only approximately 10 ms, whereas that of the inverse FWT is approximately 144 ms. Therefore, under these conditions, WASABI does not significantly accelerate the overall calculation.

In the case of a sufficiently large number of object points and/or a long distance between the 3D object and the CGH, the computational complexity of WASABI can be approximated as

$$O(r\bar{W}^2 N) + O(N_h^2) \approx O(r\bar{W}^2 N). \tag{15}$$
Under these conditions, the acceleration provided by WASABI improves significantly.

Figure 4(b) shows the reconstructed images of a merry-go-round composed of approximately 100,000 points. The reconstructed images in the top row of Fig. 4(b) were obtained using the conventional method at two different distances of 0.5 m and 0.05 m. The reconstructed images in the middle row were obtained using WASABI with r = 5%. The PSNRs of the reconstructed images relative to the conventional method are 30.6 dB at a distance of 0.5 m and 30.2 dB at a distance of 0.05 m, respectively. The reconstructed images in the bottom row were obtained via WASABI with r = 1%, and the PSNRs are 29.6 dB at a distance of 0.5 m and 29.4 dB at a distance of 0.05 m, respectively. The PSNRs obtained using WASABI with r = 1% are lower than those obtained with r = 5%; however, the calculation time is further accelerated. At a distance of 0.5 m, as shown in Table 2, the calculation times for the conventional method, WASABI with r = 5%, and WASABI with r = 1% are approximately 281, 45, and 6.9 s, respectively. WASABI with r = 1% provides an approximately 41-fold improvement in speed over the conventional method. At a distance of 0.1 m, WASABI can calculate the CGHs from the approximately 100,000 object points at approximately 2.8 frames per second (fps) using the CPU. In addition, at a distance of 0.05 m, WASABI can calculate the CGHs at approximately 5 fps using the CPU.


Table 2. Calculation time for the merry-go-round image (95,949 object points) using the conventional method [4] and WASABI with r = 5% and r = 1%.

In the case of a greater number of object points, WASABI achieves even greater acceleration. Figure 4(c) shows the reconstructed images of a fountain composed of approximately one million points. At a distance of 0.5 m (Table 3), the calculation times of the conventional method, WASABI with r = 5%, and WASABI with r = 1% are approximately 2,835, 361, and 57 s, respectively. WASABI with r = 1% shows an approximately 50-fold improvement in speed over the conventional method. At a distance of 0.05 m, WASABI can calculate the CGHs from one million object points at approximately 1.7 fps using the CPU. All of these calculations were performed using our numerical wave optics library, CWO++ [21].


Table 3. Calculation time for the fountain image (978,416 object points) using the conventional method [4] and WASABI with r = 5% and r = 1%.

4. Conclusion

We proposed a CGH acceleration algorithm, WASABI, based on wavelet shrinkage, in which approximated PSFs are expressed by a few representative wavelet coefficients. We showed that this method can accelerate the superposition of many PSFs in the wavelet domain. Our future work will involve applying the proposed method to the WRP method [10–13] to further accelerate CGH calculations for long distances between 3D objects and the CGH plane. In addition, we will attempt to apply the proposed method to cylindrical and spherical CGH calculations and to holographic stereogram-based calculation.

Funding

This work was partially supported by JSPS KAKENHI Grant Numbers 16K00151 and 25240015.

Acknowledgments

We thank Dr. Hirotaka Nakayama for providing the 3D models and Dr. Takashi Kakue for his kind support.

References and links

1. T. C. Poon, ed., Digital Holography and Three-Dimensional Display - Principles and Applications (Springer, 2006). [CrossRef]  

2. T. Shimobaba, T. Kakue, and T. Ito, “Review of fast algorithms and hardware implementations on computer holography,” IEEE Trans. Ind. Informat. 12, 1611–1622 (2016). [CrossRef]  

3. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2, 28–34 (1993). [CrossRef]  

4. S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47, D55–D62 (2008). [CrossRef]   [PubMed]  

5. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram using run-length encoding based recurrence relation,” Opt. Express 23, 9852–9857 (2015). [CrossRef]   [PubMed]  

6. K. Matsushima and M. Takai, “Recurrence formulas for fast creation of synthetic three-dimensional holograms,” Appl. Opt. 39, 6587–6594 (2000). [CrossRef]  

7. H. Yoshikawa, “Fast computation of Fresnel holograms employing difference,” Opt. Rev. 8, 331–335, (2001). [CrossRef]  

8. T. Shimobaba and T. Ito, “An efficient computational method suitable for hardware of computer-generated hologram with phase computation by addition,” Comput. Phys. Commun. 138, 44–52 (2001). [CrossRef]  

9. T. Yamaguchi and H. Yoshikawa, “Computer-generated image hologram,” Chin. Opt. Lett. 9, 120006 (2011). [CrossRef]  

10. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34, 3133–3135 (2009). [CrossRef]   [PubMed]  

11. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18, 19504–19509 (2010). [CrossRef]   [PubMed]  

12. P. Tsang, W.-K. Cheung, T.-C. Poon, and C. Zhou, “Holographic video at 40 frames per second for 4-million object points,” Opt. Express 19, 15205–15211 (2011). [CrossRef]   [PubMed]  

13. A. Symeonidou, D. Blinder, A. Munteanu, and P. Schelkens, “Computer-generated holograms by multiple wavefront recording plane method with occlusion culling,” Opt. Express 23, 22149–22161 (2015). [CrossRef]   [PubMed]  

14. C. K. Chui, ed., An Introduction to Wavelets (Academic Press, 2014).

15. D. L. Donoho, “De-noising by soft-thresholding,” IEEE Trans. Inf. Theory 41, 613–627 (1995). [CrossRef]  

16. L. Onural, “Diffraction from a wavelet point of view,” Opt. Lett. 18, 846–848 (1993). [CrossRef]   [PubMed]  

17. M. Liebling, T. Blu, and M. Unser, “Fresnelets: new multiresolution wavelet bases for digital holography,” IEEE Trans. Image Process. 12, 29–43, (2003). [CrossRef]  

18. M. Liebling and M. Unser, “Autofocus for digital Fresnel holograms by use of a Fresnelet-sparsity criterion,” J. Opt. Soc. Am. A 21, 2424–2430 (2004). [CrossRef]  

19. J. Weng, J. Zhong, and C. Hu, “Digital reconstruction based on angular spectrum diffraction with the ridge of wavelet transform in holographic phase-contrast microscopy,” Opt. Express 16, 21971–21981 (2008). [CrossRef]   [PubMed]  

20. S. G. Mallat, “A theory for multiresolution signal decomposition: the wavelet representation,” IEEE Trans. Pattern Anal. Mach. Intell. 11, 674–693 (1989). [CrossRef]  

21. T. Shimobaba, J. Weng, T. Sakurai, N. Okada, T. Nishitsuji, N. Takada, A. Shiraki, N. Masuda, and T. Ito, “Computational wave optics library for C++: CWO++ library,” Comput. Phys. Commun. 183, 1124–1138 (2012). [CrossRef]  
