Higher-order wide-angle split-step spectral method for non-paraxial beam propagation


Abstract

We develop a higher-order method for non-paraxial beam propagation based on the wide-angle split-step spectral (WASSS) method previously reported [Clark and Thomas, Opt. Quantum Electron. 41, 849 (2010)]. The higher-order WASSS (HOWASSS) method approximates the Helmholtz equation by keeping terms up to third-order in the propagation step size, in the Magnus expansion. A symmetric exponential operator splitting technique is used to simplify the resulting exponential operators. The HOWASSS method is applied to the problem of waveguide propagation, where an analytical solution is known, to demonstrate the performance and accuracy of the method. The performance enhancement gained by implementing the HOWASSS method on a graphics processing unit (GPU) is demonstrated. When highly accurate results are required the HOWASSS method is shown to be substantially faster than the WASSS method.

© 2013 Optical Society of America

1. Introduction

Beam propagation methods (BPMs) are a large class of numerical methods for solving the scalar Helmholtz equation, and are popular for simulating guided waves and laser beams because they are typically both fast and efficient. Early methods made use of the paraxial approximation, which greatly simplifies the problem by reducing the propagation equation to first order. However, these methods were severely limited in their application: any beam profile containing spatial frequencies at angles greater than a few degrees with respect to the propagation axis incurs significant phase errors. Many methods have been developed to drop the paraxial approximation and include wide-angle waves. In several of these, the Helmholtz equation is formally rewritten as a first-order differential equation that includes the square root of an operator. The square root operator is either approximated using real or complex Padé approximants, with a finite-difference or iterative method used to solve the equation [1, 2], or the analytical solution is found, which results in an exponential of the square root of an operator that is approximated with a Padé approximant [3]. Recently, Sharma extended an operator-splitting technique used on the paraxial wave equation to the non-paraxial wave equation (Helmholtz equation) [4]. The splitting allows diffraction and the refractive index variations to be handled separately. Various numerical methods can be used once the operator has been split, such as collocation or finite-difference methods [4, 5].

In a recent publication, we described a numerical beam propagation method that represents the beam profile in the basis of the eigenvectors of the Laplacian operator and uses a symmetric operator splitting technique to account for the refractive index variations, known as the wide-angle split-step spectral (WASSS) method [6]. In general, the method provided a two-fold speedup over the finite-difference method reported by Sharma [4]. This improvement could be increased by use of a fast Fourier transform algorithm. Here we develop a higher-order WASSS (HOWASSS) method that extends the approximation to higher order, providing a more efficient method when high accuracies are required. We apply the HOWASSS method to the problem of waveguide propagation to demonstrate the performance and accuracy of the method. An additional performance enhancement is obtained by implementing the method on a graphics processing unit (GPU) using compute unified device architecture (CUDA) technology from NVIDIA™.

2. Formulation

Beam propagation in a medium with a non-uniform refractive index is described by the scalar Helmholtz equation,

\[ \frac{\partial^2}{\partial z^2}\psi(z,\mathbf{r}) + \nabla_{\mathbf{r}}^2\psi(z,\mathbf{r}) + k_0^2\bar{n}^2\psi(z,\mathbf{r}) + k_0^2\bigl(n^2(z,\mathbf{r}) - \bar{n}^2\bigr)\psi(z,\mathbf{r}) = 0, \tag{1} \]
where ψ(z, r) is the complex scalar electric field, z is a Cartesian coordinate in the direction of propagation, r denotes the transverse coordinates, and ∇r² is the transverse Laplace operator. As usual, k0 is the free-space wavenumber, n(z, r) is the refractive index distribution, and n̄² = min[n²(z, x)]. For simplicity we will only consider the two-dimensional (2D) case; however, generalization to three dimensions (3D) is straightforward. In two dimensions, the electric field is denoted ψ(z, x). Note that x is not necessarily a Cartesian coordinate but could, for example, be a radial coordinate. Expanding ψ(z, x) in terms of the eigenfunctions of the transverse Laplace operator, we write
\[ \psi(z,x) = \sum_i a_i(z)\,\phi_i(x), \tag{2} \]
where
\[ \nabla_x^2\phi_i(x) = -\lambda_i^2\,\phi_i(x) \tag{3} \]
\[ \int \phi_i^*(x)\,\phi_j(x)\,dx = \delta_{i,j}. \tag{4} \]
Multiplying the 2D form of Eq. (1) by ϕj*(x), integrating over the transverse coordinate, and applying Eqs. (3) and (4) we are left with
\[ \frac{\partial^2}{\partial z^2}a_j(z) = -\bigl(k_0^2\bar{n}^2 - \lambda_j^2\bigr)a_j(z) - \sum_i a_i(z)\int k_0^2\bigl(n^2(z,x) - \bar{n}^2\bigr)\phi_j^*(x)\,\phi_i(x)\,dx. \tag{5} \]

To discretize the problem space, let Nx and Nz be the number of discrete grid points in x and z respectively, and let i, j ∈ [1, Nx] and l ∈ [1, Nz]. We let xi = iΔx + x0 and zl = lΔz + z0, where Δx and Δz are the corresponding step sizes. The matrix N(z) is defined by

\[ N_{i,j}(z) = k_0^2\bigl(n^2(z,x_i) - \bar{n}^2\bigr)\delta_{i,j}. \tag{6} \]
The discrete eigentransform matrix, S [6], transforms a vector into the spectral basis, while S⁻¹ transforms vectors back into the spatial basis. In the case of Cartesian coordinates with hard boundary conditions this matrix becomes the discrete Fourier transform matrix, and in cylindrical coordinates with azimuthal symmetry and hard boundary conditions, S takes the form of a discrete Hankel transform [7]. We also must define the constant matrix M with elements
\[ M_{i,j} = \sqrt{k_0^2\bar{n}^2 - \lambda_i^2}\;\delta_{i,j}. \tag{7} \]
Notice that if λi > k0n̄ then Mi,j becomes imaginary. If we allow eigenvalues such that λi > k0n̄, then the trigonometric functions appearing in P become hyperbolic (see Eq. (25)), making the method numerically unstable. For this reason, we limit Nx so that λi < k0n̄ for all retained modes. However, we must include at least enough eigenfrequencies to adequately represent the initial conditions, otherwise large errors will be incurred. Using these definitions, Eq. (5) becomes
\[ \frac{\partial^2}{\partial z^2}\mathbf{a}(z) = -\mathbf{M}^2\mathbf{a}(z) - \mathbf{S}\mathbf{N}(z)\mathbf{S}^{-1}\mathbf{a}(z). \tag{8} \]
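For the sine basis used in Section 3 (Eq. (29)) the eigenvalues are λi = iπ/(xf − x0), so, assuming that basis, the requirement that M remain real translates into an explicit bound on the number of retained modes:
\[ \lambda_i = \frac{i\pi}{x_f - x_0} \quad\Longrightarrow\quad \lambda_{N_x} < k_0\bar{n} \;\Longleftrightarrow\; N_x < \frac{k_0\bar{n}\,(x_f - x_0)}{\pi}. \]
This is the constraint referred to in Section 3, where k0 is chosen so that Nx = 1000 modes can be retained with M real-valued.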
Now if we define
\[ \mathbf{A}(z) = \begin{bmatrix} \mathbf{a}(z) \\ \mathbf{M}^{-1}\frac{\partial}{\partial z}\mathbf{a}(z) \end{bmatrix} \tag{9} \]
\[ \mathbf{H}(z) = \begin{bmatrix} 0 & \mathbf{M} \\ -\mathbf{M} - \mathbf{M}^{-1}\mathbf{S}\mathbf{N}(z)\mathbf{S}^{-1} & 0 \end{bmatrix}, \tag{10} \]
we can write the Helmholtz equation as a first order vector differential equation
\[ \frac{\partial}{\partial z}\mathbf{A}(z) = \mathbf{H}(z)\,\mathbf{A}(z). \tag{11} \]

The exact solution to Eq. (11) is given by a Magnus expansion [8],

\[ \mathbf{A}(z) = \exp\bigl(\boldsymbol{\Omega}_1(z,z_0) + \boldsymbol{\Omega}_2(z,z_0) + \cdots\bigr)\mathbf{A}(z_0). \tag{12} \]
The first two terms of the expansion are
\[ \boldsymbol{\Omega}_1(z,z_0) = \int_{z_0}^{z}\mathbf{H}(t)\,dt \tag{13} \]
\[ \boldsymbol{\Omega}_2(z,z_0) = \frac{1}{2}\int_{z_0}^{z}\!\int_{z_0}^{t_1}\bigl[\mathbf{H}(t_1),\mathbf{H}(t_2)\bigr]\,dt_2\,dt_1. \tag{14} \]
Here [H(t1), H(t2)] is the usual commutator. Note that Eq. (11) is essentially the time-dependent Schrödinger equation, in which context the Magnus expansion is better known as the time-ordered exponential operator [9]. However, we will refer to Eq. (12) as a Magnus expansion to be consistent with discussions of initial value problems of the form of Eq. (11).

H(z) can be written as the sum of two matrices, one that is constant and one that depends on z,

\[ \mathbf{H}(z) = \mathbf{H}_1 + \mathbf{H}_2(z) = \begin{bmatrix} 0 & \mathbf{M} \\ -\mathbf{M} & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ -\mathbf{M}^{-1}\mathbf{S}\mathbf{N}(z)\mathbf{S}^{-1} & 0 \end{bmatrix}. \tag{15} \]
Using this and the trapezoid rule to approximate the integral in Eq. (13) we obtain,
\[ \boldsymbol{\Omega}_1(z_{l+1},z_l) = \mathbf{H}_1\Delta z + \bigl(\mathbf{H}_2(z_{l+1}) + \mathbf{H}_2(z_l)\bigr)\frac{\Delta z}{2} + \mathcal{O}\bigl((\Delta z)^3\bigr). \tag{16} \]
To approximate Ω2 (zl+1, zl) we first need to compute the commutator. After making use of Eq. (15) and simplifying we find
\[ \bigl[\mathbf{H}(z_1),\mathbf{H}(z_2)\bigr] = \begin{bmatrix} \mathbf{S}\bigl(\mathbf{N}(z_1)-\mathbf{N}(z_2)\bigr)\mathbf{S}^{-1} & 0 \\ 0 & \mathbf{M}^{-1}\mathbf{S}\bigl(\mathbf{N}(z_2)-\mathbf{N}(z_1)\bigr)\mathbf{S}^{-1}\mathbf{M} \end{bmatrix}. \tag{17} \]
Note that [H2 (z1), H2 (z2)] = 0. We apply the trapezoid method twice to approximate the double integral in Eq. (14) and obtain
\[ \boldsymbol{\Omega}_2(z_{l+1},z_l) = \bigl[\mathbf{H}(z_{l+1}),\mathbf{H}(z_l)\bigr]\frac{(\Delta z)^2}{8} + \mathcal{O}\bigl((\Delta z)^3\bigr). \tag{18} \]
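For completeness, this is how the double application of the trapezoid rule leads to Eq. (18): approximating the inner integral of Eq. (14) first (the commutator vanishes at the upper limit, [H(t1), H(t1)] = 0) and then applying the trapezoid rule to the outer integral, whose integrand vanishes at t1 = zl, gives
\[ \int_{z_l}^{t_1}\bigl[\mathbf{H}(t_1),\mathbf{H}(t_2)\bigr]\,dt_2 \approx \frac{t_1 - z_l}{2}\bigl[\mathbf{H}(t_1),\mathbf{H}(z_l)\bigr], \qquad \boldsymbol{\Omega}_2 \approx \frac{1}{2}\cdot\frac{\Delta z}{2}\cdot\frac{\Delta z}{2}\bigl[\mathbf{H}(z_{l+1}),\mathbf{H}(z_l)\bigr] = \frac{(\Delta z)^2}{8}\bigl[\mathbf{H}(z_{l+1}),\mathbf{H}(z_l)\bigr]. \]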
The next term in the Magnus series will contribute a factor of (Δz)³ as it involves a triple integral. Keeping terms up to (Δz)³,
\[ \mathbf{A}(z_{l+1}) \approx \exp\!\left(\mathbf{H}_1\Delta z + \bigl(\mathbf{H}_2(z_{l+1})+\mathbf{H}_2(z_l)\bigr)\frac{\Delta z}{2} + \bigl[\mathbf{H}(z_{l+1}),\mathbf{H}(z_l)\bigr]\frac{(\Delta z)^2}{8}\right)\mathbf{A}(z_l). \tag{19} \]
Because this contains the exponential of a dense matrix, which is difficult to handle numerically, we will split this exponential into a form that will be easier to work with via the symmetric exponential splitting technique
\[ \exp\bigl((\mathbf{A}+\mathbf{B})\Delta z\bigr) = \exp\!\left(\mathbf{A}\frac{\Delta z}{2}\right)\exp\bigl(\mathbf{B}\Delta z\bigr)\exp\!\left(\mathbf{A}\frac{\Delta z}{2}\right) + \mathcal{O}\bigl((\Delta z)^3\bigr). \tag{20} \]
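As a quick check of the stated accuracy, expanding the right-hand side of Eq. (20) in powers of Δz gives
\[ \exp\!\left(\mathbf{A}\frac{\Delta z}{2}\right)\exp\bigl(\mathbf{B}\Delta z\bigr)\exp\!\left(\mathbf{A}\frac{\Delta z}{2}\right) = \mathbf{I} + (\mathbf{A}+\mathbf{B})\Delta z + \tfrac{1}{2}(\mathbf{A}+\mathbf{B})^2(\Delta z)^2 + \mathcal{O}\bigl((\Delta z)^3\bigr), \]
which matches the expansion of exp((A + B)Δz) through second order even when A and B do not commute, so the splitting error enters only at third order.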
First, we split the Δz terms in the exponential from the (Δz)2 terms. Then we split the terms involving H1 from those involving H2, and we are left with
\[ \mathbf{A}(z_{l+1}) \approx \mathbf{P}\,\mathbf{Q}(z_{l+1})\mathbf{Q}(z_l)\,\mathbf{P}\,\mathbf{C}(z_{l+1},z_l)\,\mathbf{P}\,\mathbf{Q}(z_{l+1})\mathbf{Q}(z_l)\,\mathbf{P}\,\mathbf{A}(z_l), \tag{21} \]
where
\[ \mathbf{P} = \exp\!\left(\mathbf{H}_1\frac{\Delta z}{4}\right) \tag{22} \]
\[ \mathbf{Q}(z) = \exp\!\left(\mathbf{H}_2(z)\frac{\Delta z}{4}\right) \tag{23} \]
\[ \mathbf{C}(z_1,z_2) = \exp\!\left(\bigl[\mathbf{H}(z_1),\mathbf{H}(z_2)\bigr]\frac{(\Delta z)^2}{8}\right). \tag{24} \]
Each operator P, Q(z), and C(z1, z2) is the exponential of a 2 × 2 block matrix with special structure (diagonal blocks, a nilpotent argument, or blocks similar to diagonal matrices), which allows the exponential operators to be evaluated in closed form by expanding each exponential into its power series:
\[ \mathbf{P} = \begin{bmatrix} \cos\!\left(\mathbf{M}\frac{\Delta z}{4}\right) & \sin\!\left(\mathbf{M}\frac{\Delta z}{4}\right) \\ -\sin\!\left(\mathbf{M}\frac{\Delta z}{4}\right) & \cos\!\left(\mathbf{M}\frac{\Delta z}{4}\right) \end{bmatrix} \tag{25} \]
\[ \mathbf{Q}(z) = \begin{bmatrix} \mathbf{I} & 0 \\ -\mathbf{M}^{-1}\mathbf{S}\mathbf{N}(z)\mathbf{S}^{-1}\frac{\Delta z}{4} & \mathbf{I} \end{bmatrix} \tag{26} \]
\[ \mathbf{C}(z_1,z_2) = \begin{bmatrix} \mathbf{S}\exp\!\left(\bigl(\mathbf{N}(z_1)-\mathbf{N}(z_2)\bigr)\frac{(\Delta z)^2}{8}\right)\mathbf{S}^{-1} & 0 \\ 0 & \mathbf{M}^{-1}\mathbf{S}\exp\!\left(\bigl(\mathbf{N}(z_2)-\mathbf{N}(z_1)\bigr)\frac{(\Delta z)^2}{8}\right)\mathbf{S}^{-1}\mathbf{M} \end{bmatrix}. \tag{27} \]
Now all matrices appearing inside functions are diagonal, making them simple to evaluate numerically. In this form the physical interpretation of these operators is most transparent. P is the operator corresponding to propagation of the pulse through a homogeneous medium with refractive index n̄. Q(z) accounts for the difference n(z, x) − n̄, while C(z1, z2) depends on the change in the refractive index over the step taken. In this sense, Q(z) acts like the constant term of a Taylor series expansion, and C(z1, z2) acts in a manner similar to the first derivative term in such a series. We can save some additional matrix-vector multiplications by combining Q(zl+1)Q(zl) into a single operator:
\[ \mathbf{Q}(z_{l+1})\mathbf{Q}(z_l) = \begin{bmatrix} \mathbf{I} & 0 \\ -\mathbf{M}^{-1}\mathbf{S}\bigl(\mathbf{N}(z_{l+1})+\mathbf{N}(z_l)\bigr)\mathbf{S}^{-1}\frac{\Delta z}{4} & \mathbf{I} \end{bmatrix}. \tag{28} \]
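To make the update of Eq. (21) concrete, the following C sketch (ours, not the authors' released code) applies the operators of Eqs. (25), (27), and (28) to the state vector for a number of steps, with the rightmost operator of Eq. (21) applied first. It assumes the sine basis of Section 3, so S is real and equal to its own inverse, stores the diagonals of M and N(z) as vectors, and uses toy, hypothetical parameter values chosen only so that the example runs; a production code would replace the dense transforms with a fast transform.

/* Sketch (not the authors' code) of the HOWASSS step, Eq. (21).
 * Assumes the sine basis of Section 3 (S real and its own inverse) and
 * toy, hypothetical parameters chosen only to make the example runnable. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <complex.h>

#define PI 3.14159265358979323846
typedef double complex cplx;

static int Nx;
static double *S;  /* dense eigentransform of Eq. (30); here S = S^{-1}      */
static double *m;  /* diagonal of M, Eq. (7): sqrt(k0^2 nbar^2 - lambda_i^2) */

static void transform(const cplx *x, cplx *y)  /* y = S x  (= S^{-1} x) */
{
    for (int i = 0; i < Nx; ++i) {
        cplx acc = 0.0;
        for (int j = 0; j < Nx; ++j) acc += S[i * Nx + j] * x[j];
        y[i] = acc;
    }
}

static void apply_P(cplx *a, cplx *b, double dz)  /* Eq. (25) */
{
    for (int i = 0; i < Nx; ++i) {
        double c = cos(m[i] * dz / 4.0), s = sin(m[i] * dz / 4.0);
        cplx ai = a[i], bi = b[i];
        a[i] = c * ai + s * bi;
        b[i] = -s * ai + c * bi;
    }
}

/* Eq. (28): b -= (dz/4) M^{-1} S (N_{l+1} + N_l) S^{-1} a */
static void apply_QQ(cplx *a, cplx *b, const double *Nl, const double *Nl1,
                     double dz, cplx *w1, cplx *w2)
{
    transform(a, w1);                                /* spectral -> spatial */
    for (int i = 0; i < Nx; ++i) w1[i] *= Nl[i] + Nl1[i];
    transform(w1, w2);                               /* spatial -> spectral */
    for (int i = 0; i < Nx; ++i) b[i] -= (dz / 4.0) * w2[i] / m[i];
}

/* Eq. (27): block-diagonal exponential of the commutator */
static void apply_C(cplx *a, cplx *b, const double *Nl, const double *Nl1,
                    double dz, cplx *w1, cplx *w2)
{
    double f = dz * dz / 8.0;
    transform(a, w1);                                 /* top block acts on a */
    for (int i = 0; i < Nx; ++i) w1[i] *= exp((Nl1[i] - Nl[i]) * f);
    transform(w1, a);
    for (int i = 0; i < Nx; ++i) w1[i] = m[i] * b[i]; /* bottom block on b   */
    transform(w1, w2);
    for (int i = 0; i < Nx; ++i) w2[i] *= exp((Nl[i] - Nl1[i]) * f);
    transform(w2, w1);
    for (int i = 0; i < Nx; ++i) b[i] = w1[i] / m[i];
}

int main(void)
{
    Nx = 64;
    double X = 100.0, dx = X / (Nx + 1), dz = 0.1;       /* um, toy values */
    double k0 = 2.0 * PI / 1.0, nbar = 1.5, dn = 0.01, w = 5.0;
    S = malloc(sizeof(double) * Nx * Nx);
    m = malloc(sizeof(double) * Nx);
    double *Nd = malloc(sizeof(double) * Nx);            /* diagonal of N  */
    cplx *a = malloc(sizeof(cplx) * Nx), *b = malloc(sizeof(cplx) * Nx);
    cplx *w1 = malloc(sizeof(cplx) * Nx), *w2 = malloc(sizeof(cplx) * Nx);
    for (int i = 0; i < Nx; ++i) {
        for (int j = 0; j < Nx; ++j)                     /* Eq. (30)        */
            S[i * Nx + j] = sqrt(2.0 / (Nx + 1)) * sin(PI * (i + 1) * (j + 1) / (Nx + 1));
        double lam = (i + 1) * PI / X;                   /* lam < k0*nbar, so m real */
        m[i] = sqrt(k0 * k0 * nbar * nbar - lam * lam);
        double x = (i + 1) * dx, sh = 1.0 / cosh(2.0 * (x - X / 2) / w);
        Nd[i] = 2.0 * k0 * k0 * nbar * dn * sh * sh;     /* Eq. (6), z-independent toy guide */
        w1[i] = exp(-pow(x - X / 2, 2) / (2.0 * w * w)); /* Gaussian input field */
    }
    transform(w1, a);                                    /* spatial -> spectral */
    for (int i = 0; i < Nx; ++i) b[i] = I * a[i];        /* approx. forward-propagating start */
    for (int step = 0; step < 100; ++step) {             /* rightmost operator of Eq. (21) first */
        apply_P(a, b, dz); apply_QQ(a, b, Nd, Nd, dz, w1, w2); apply_P(a, b, dz);
        apply_C(a, b, Nd, Nd, dz, w1, w2);
        apply_P(a, b, dz); apply_QQ(a, b, Nd, Nd, dz, w1, w2); apply_P(a, b, dz);
    }
    double p = 0.0;
    for (int i = 0; i < Nx; ++i) p += creal(a[i] * conj(a[i]));
    printf("sum |a_i|^2 after 100 steps: %g\n", p);
    free(S); free(m); free(Nd); free(a); free(b); free(w1); free(w2);
    return 0;
}

For a z-dependent index profile, the two diagonal vectors passed to apply_QQ and apply_C would differ, and C(zl+1, zl) would no longer reduce to the identity.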

3. Numerical example

To test our method we will simulate beam propagation through a two-dimensional (Cartesian) symmetric Epstein-layer waveguide tilted at an angle θ from the positive z axis [5]. Hard boundary conditions where ψ(z, x0) = 0 and ψ(z, xf) = 0 are assumed. The eigenfunctions of the transverse Laplace operator are then given by

\[ \phi_i(x) = \sin\!\left(\frac{i\pi}{x_f - x_0}\,x\right). \tag{29} \]
The resulting eigentransform matrix is given by the discrete Fourier matrix
\[ S_{i,j} = \sqrt{\frac{2}{N_x+1}}\,\sin\!\left(\frac{\pi(i+1)(j+1)}{N_x+1}\right) = S^{-1}_{i,j}. \tag{30} \]
This particular Fourier matrix was chosen because the hard boundary conditions at x0 and xf are automatically satisfied. We do not need to include these points in our grid, which is defined by Δx = (xf − x0)/(Nx + 1) and Δz = (zf − z0)/Nz. The refractive index profile is
\[ n(z,x) = \sqrt{\bar{n}^2 + 2\bar{n}(\Delta n)\,\mathrm{sech}^2\!\left(\frac{2\bigl(\tilde{x}\cos(\theta) - z\sin(\theta)\bigr)}{w}\right)}, \tag{31} \]
where x̃ is a shifted coordinate that aligns the waveguide with the center of the computational region, making the hard boundary conditions as negligible as possible, Δn is the height of the refractive index shift, and w is the width of the waveguide. The shifted coordinate is given by x̃ = x − (1/2)(xf − x0) + (1/2)(zf − z0) tan(θ). The initial electric field is given by the zeroth-order mode of the Epstein-layer waveguide,
\[ \psi(0,x) = \mathrm{sech}^{W}\!\left(\frac{2\tilde{x}\cos(\theta)}{w}\right)\exp\bigl(iK_0\tilde{x}\sin(\theta)\bigr). \tag{32} \]
Here, i is the unit imaginary number, not to be confused with the counting index used elsewhere. W and K0 are given by
\[ W = \frac{1}{2}\left(\sqrt{1 + 2w^2k_0^2\bar{n}\,\Delta n} - 1\right) \tag{33} \]
\[ K_0 = \sqrt{\left(\frac{2W}{w}\right)^2 + \bigl(k_0\bar{n}\bigr)^2}. \tag{34} \]
The exact solution for this tilted Epstein-layer waveguide is [10]
\[ \psi_e(z,x) = \mathrm{sech}^{W}\!\left(\frac{2\bigl(\tilde{x}\cos(\theta) - z\sin(\theta)\bigr)}{w}\right)\exp\Bigl(iK_0\bigl(\tilde{x}\sin(\theta) + z\cos(\theta)\bigr)\Bigr). \tag{35} \]
To measure the error we use the correlation factor,
\[ \mathrm{Error}(z) = \left|\,1 - \frac{\left(\int_{x_0}^{x_f}\psi^*(z,x)\,\psi(z,x)\,dx\right)^2}{\left(\int_{x_0}^{x_f}\psi_e^*(z,x)\,\psi_e(z,x)\,dx\right)^2}\right|, \tag{36} \]
as this provides a measure for both the profile shape and amplitude of the beam [5, 6].
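As an illustration of how Eqs. (32)-(36) translate into code, the following C sketch (ours, not the authors' code) evaluates the exact tilted-waveguide mode on the grid and the correlation-factor error with a simple Riemann sum; the parameter values are illustrative only, and the "numerical" field is a stand-in for the solver output.

/* Sketch (not the authors' code): exact tilted mode of Eq. (35) and the
 * correlation-factor error of Eq. (36). Parameter values are illustrative. */
#include <stdio.h>
#include <math.h>
#include <complex.h>

#define PI 3.14159265358979323846

int main(void)
{
    const int    Nx = 1000;
    const double x0 = 0.0, xf = 100.0, z0 = 0.0, zf = 100.0, z = 40.0; /* um */
    const double dx = (xf - x0) / (Nx + 1);
    const double k0 = 2.0 * PI / 1.0, nbar = 1.5;      /* lambda0 = 1 um     */
    const double dn = 0.01, w = 5.0, th = 50.0 * PI / 180.0;

    /* Mode exponent and propagation constant, Eqs. (33)-(34) */
    const double W  = 0.5 * (sqrt(1.0 + 2.0 * w * w * k0 * k0 * nbar * dn) - 1.0);
    const double K0 = sqrt(pow(2.0 * W / w, 2) + pow(k0 * nbar, 2));

    double Inum = 0.0, Iex = 0.0;
    for (int i = 1; i <= Nx; ++i) {
        double x  = x0 + i * dx;
        double xt = x - 0.5 * (xf - x0) + 0.5 * (zf - z0) * tan(th);  /* shifted coord. */
        double u  = 2.0 * (xt * cos(th) - z * sin(th)) / w;
        double complex pe = pow(1.0 / cosh(u), W) *
                            cexp(I * K0 * (xt * sin(th) + z * cos(th)));
        double complex p  = pe;      /* replace with psi(z, x_i) from the solver */
        Inum += creal(p  * conj(p))  * dx;
        Iex  += creal(pe * conj(pe)) * dx;
    }
    double err = fabs(1.0 - (Inum * Inum) / (Iex * Iex));
    printf("W = %.4f, K0 = %.4f 1/um, Error(z) = %.3e\n", W, K0, err);
    return 0;
}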

C and CUDA versions of both the WASSS and HOWASSS methods were implemented. The code was compiled with nvcc version 4.2 for the CUDA code and gcc version 4.6.3 for the C code. The code was run on a workstation equipped with an Intel Xeon X5960 CPU, a hyper-threaded six-core CPU clocked at 3.47 GHz with 48 GB of RAM, and an NVIDIA QUADRO 6000 GPU, which has 448 CUDA cores at a 574 MHz core clock (750 MHz memory clock) and 6 GB of dedicated GDDR5 memory.

The simulation was run using the parameters listed in Table 1. The value of k0 was selected to ensure that we could include 1000 transverse modes while still keeping M real-valued. When the waveguide was aligned (θ = 0°), roughly an order of magnitude decrease in the error was observed with the HOWASSS method over the WASSS method (see Fig. 1). The improvement was more pronounced when the waveguide was tilted at an angle of 50 degrees, especially for large step sizes (see Fig. 2). Figure 3 shows the calculated electric field next to the exact solution to further illustrate the accuracy of the HOWASSS method. The simulations shown in Figs. 1, 2, and 3 were run on the GPU using double-precision arithmetic. Both methods were capable of producing accurate results using single-precision arithmetic down to about Δz = 0.05 μm; below this, round-off error began to affect the results of the HOWASSS method due to the increased number of matrix-vector multiplications required in each step. The WASSS method does not suffer from this problem as quickly because it is computationally simpler. The limit at which round-off error begins to affect the double-precision code was not observed for either method.


Table 1. Parameters used in the numerical calculations


Fig. 1 Error as a function of propagation distance for an aligned waveguide with Nx = 1000 for the (a) WASSS and (b) HOWASSS methods. Note that we have sampled the error every 0.5 mm, and not at every z step calculated, because the rapid oscillations would make the graph difficult to read otherwise.


Fig. 2 Error as a function of propagation distance for a waveguide rotated 50 degrees with Nx = 1000 for the (a) WASSS and (b) HOWASSS methods.


Fig. 3 Plot of |ψ(z, x)| with Nx = 1000, Nz = 2000, and θ = 50° for the (a) HOWASSS and (b) exact solutions for a waveguide tilted at 50 degrees.


4. Discussion

4.1. Stability

By virtue of being a higher-order method, the HOWASSS method has improved stability over the WASSS method (see Fig. 4). As the tilt of the waveguide increases, the WASSS method performs progressively worse, and at large step sizes it even shows signs of instability. The HOWASSS method, by contrast, actually becomes more accurate at larger angles. Traditionally, beam propagation methods struggle to obtain accurate results when there are rapid changes in the index of refraction. To simulate a rapidly changing index of refraction we increase the index difference between the waveguide and the surrounding medium, Δn, while keeping the width of the waveguide fixed. At a 50° waveguide tilt the WASSS method becomes unstable at a large step size, while the HOWASSS method remains stable for at least an additional order of magnitude increase in Δn. Both methods do lose accuracy as the change in the refractive index becomes steeper, but by decreasing Δz accurate results can still be obtained in a reasonable run time. The same analysis was done for a 0° tilted waveguide; the results were quite similar and so are not shown here.


Fig. 4 (a) The maximum error obtained as a function of waveguide tilt angle showing that the HOWASSS method actually gains a small amount of accuracy at larger angles. (b) The maximum error obtained as a function of waveguide depth, Δn.


4.2. Speed

Computationally, the HOWASSS method requires the multiplication of matrices with vectors. If we restrict Nx so that the matrix M is real then, for the example in Section 3, all the matrices will be real; however, the vector A will be complex. In general, the eigenfunctions and the refractive index could be complex, but for simplicity we will assume these to be real.

Diagonal matrix multiplication is equivalent to element-by-element vector multiplication. Hence, multiplying an Nx × Nx diagonal matrix by an Nx-element complex vector requires 2Nx operations. Multiplying an Nx × Nx dense matrix by an Nx-element complex vector is equivalent to performing 2Nx dot products, each requiring Nx multiplications and Nx − 1 additions, for a total of 2Nx(2Nx − 1) = 4Nx² − 2Nx operations. Applying P requires a total of 4 diagonal matrix-vector multiplications, for a total of 8Nx operations. Applying Q(zl+1)Q(zl) requires 2 diagonal matrix-vector multiplications, 2 dense matrix-vector multiplications, 2 vector-vector additions, and 1 scalar-vector multiplication, for a total of 8Nx² + 6Nx = 2Nx(4Nx + 3) operations. Applying C(zl+1, zl) requires 4 dense matrix-vector multiplications, 1 vector-vector addition, and 2 element-by-element exponential operations (assumed to be only 1 operation per element), for a total of 16Nx² − 2Nx = 2Nx(8Nx − 1) operations. P is applied four times, Q(zl+1)Q(zl) is applied twice, and C(zl+1, zl) is applied once each step, giving a total of 32Nx² + 42Nx = 2Nx(16Nx + 21) operations. Following the same logic for the WASSS method, we find that it requires 8Nx² + 6Nx = 2Nx(4Nx + 3) operations. Note that this operation count differs from the one reported by Clark and Thomas [6] because we are counting both multiplications and additions in this calculation.
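As a quick numerical check of the leading-order cost ratio implied by these counts:
\[ \frac{2N_x(16N_x + 21)}{2N_x(4N_x + 3)} = \frac{16N_x + 21}{4N_x + 3} \;\longrightarrow\; 4 \quad (N_x \to \infty); \qquad N_x = 1000:\ \frac{16021}{4003} \approx 4.0. \]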

Thus, to leading order in Nx, the HOWASSS method is approximately a factor of 4 slower than the WASSS method for a given Δz (see Fig. 5). However, the hypothesis was that the significant improvement in accuracy could lead to a more efficient algorithm overall. To test this hypothesis, we executed the HOWASSS and WASSS methods using the GPU implementation with various step sizes, holding all other aspects of the problem constant. Figure 6 summarizes the results. For a propagation length of 100 micrometers and a waveguide tilt angle of 50 degrees, we show the compute time per micron of propagation as a function of the maximum absolute error. For two different values of Nx, the efficiency at constant error is better for the HOWASSS method, as indicated by shorter compute times. In fact, for error values smaller than about 10⁻⁴ the HOWASSS method is much better, with its advantage rapidly exceeding an order of magnitude for absolute errors below 10⁻⁶.


Fig. 5 (a) Run times of the HOWASSS and WASSS methods on the GPU and the single-core CPU for 1000 propagation steps. (b) Comparison of HOWASSS method run times using different numbers of cores on the CPU and using the GPU for 1000 propagation steps.


Fig. 6 Comparison of the HOWASSS method and WASSS method compute time per propagation distance with Nx = 1000 and Nx = 2000. Both methods were run on the GPU.


The data presented in this paper were generated using double-precision arithmetic. This does not result in a significant difference in run time when the code is run on a CPU. However, GPUs are intrinsically designed for single-precision arithmetic: to compute one double-precision result the GPU must perform two single-precision calculations plus some additional overhead to combine them, so using single precision gives at least a factor of two speed-up. The speed-up also depends on the specific graphics card used, as some have less double-precision capability than others. Rounding errors can affect the HOWASSS method if Nz is large, around 2000 for the specific example considered in this paper. If the application does not require extreme accuracy, then a substantial speed-up can be achieved by using single-precision arithmetic on the GPU.

4.3. Parallelization

Both the WASSS and HOWASSS methods readily lend themselves to parallelization. For our implementation we used both OpenMP, to make use of multi-core CPUs, and NVIDIA CUDA, to make use of the processing power of the GPU. Modern GPUs possess orders of magnitude more computational power than a typical CPU. However, to fully utilize this power the algorithm must possess a very high level of parallelism, enough to saturate the GPU's many processing units.

Unfortunately, neither the WASSS nor the HOWASSS method is ideal for utilizing the full power of the GPU, because many of the matrix-vector multiplications involve diagonal matrices. Although faster than their dense counterparts, these reduce computationally to simple element-by-element vector multiplications. While this is a perfectly parallel operation, it does not offer the large number of independent calculations needed to saturate the GPU. These element-by-element vector multiplications suffer additionally from the fact that they require two memory reads and one memory write per multiplication, and on the GPU reading and writing memory is much slower than arithmetic. Even so, there is still a significant speed-up when the code is run on the GPU versus a single core of the CPU (see Fig. 5). This speed-up was accomplished with little attention paid to optimization of the code; with further optimization an additional factor of two or more would be likely. Additionally, extending this technique to even higher orders might yield further improvements.
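To illustrate the element-by-element operation discussed above, here is a minimal C/OpenMP sketch (ours, not the authors' code; the array size is arbitrary) of a diagonal matrix-vector product. Each element requires two reads and one write for a single multiply, which is why the loop is memory-bound rather than compute-bound.

/* Sketch (not the authors' code): diagonal matrix-vector product as an
 * element-by-element multiply, parallelized with OpenMP. Compile: gcc -fopenmp */
#include <stdio.h>
#include <stdlib.h>
#include <complex.h>
#include <omp.h>

int main(void)
{
    const int Nx = 1 << 20;                       /* large vector, arbitrary size */
    double *d = malloc(sizeof(double) * Nx);      /* diagonal of an operator      */
    double complex *v = malloc(sizeof(double complex) * Nx);
    for (int i = 0; i < Nx; ++i) { d[i] = 1.0 / (i + 1); v[i] = i + I; }

    double t = omp_get_wtime();
    #pragma omp parallel for schedule(static)     /* perfectly parallel loop */
    for (int i = 0; i < Nx; ++i)
        v[i] *= d[i];                             /* two reads, one write per multiply */
    t = omp_get_wtime() - t;

    printf("%d elements in %.3f ms on %d threads\n", Nx, 1e3 * t, omp_get_max_threads());
    free(d); free(v);
    return 0;
}

The loop scales across CPU cores, but its arithmetic intensity is low, which is consistent with the observation above that such operations cannot by themselves saturate a GPU.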

4.4. Generalizations to other coordinate systems, boundary conditions, and higher dimensions

Both the WASSS and HOWASSS methods can be generalized to different coordinate systems or different boundary conditions. In fact the only complication is that the eigenfunctions and eigenvalues must be known. For a more detailed discussion on this topic see Clark and Thomas [6].

5. Conclusion

We have presented a method that is more efficient than previous non-paraxial beam propagation methods. The method casts the analytic solution of the Helmholtz equation as a Magnus expansion. Keeping terms up to (Δz)³ in the Magnus expansion, we use a symmetric operator splitting technique to analytically reduce the exponential matrices to a simpler form. The solution to the Helmholtz equation is then approximated via straightforward matrix multiplication. We have demonstrated the method in a simple geometry, 2D Cartesian coordinates with hard boundary conditions. The results show that our higher-order approach significantly improves the overall efficiency, measured as compute time per propagation distance, in cases where high-accuracy results are required. The method can be easily extended to more general coordinate systems, higher dimensions, and various boundary conditions, provided that the eigenfunctions and eigenvalues of the transverse Laplace operator can be found for that geometry.

Acknowledgments

The authors thank the Air Force Research Laboratory, and acknowledge support through the Human Effectiveness Directorate contract FA8650-08-D-6930 for this effort. This work was partially supported by National Science Foundation Grants ECCS-1250360, DBI-1250361, and CBET-125036. B. H. H. acknowledges the support of the High Energy Laser Joint Technology Office for their sponsorship of the Directed Energy Professional Society’s Summer Research Internship Program.

References and links

1. G. R. Hadley, "Multistep method for wide-angle beam propagation," Opt. Lett. 17, 1743–1745 (1992).

2. K. Q. Le, R. Godoy-Rubio, P. Bienstman, and G. R. Hadley, "The complex Jacobi iterative method for three-dimensional wide-angle beam propagation," Opt. Express 16, 17021–17030 (2008).

3. Y. Y. Lu and P. L. Ho, "Beam propagation method using a [(p−1)/p] Padé approximant of the propagator," Opt. Lett. 27, 683–685 (2002).

4. A. Sharma and A. Agrawal, "New method for nonparaxial beam propagation," J. Opt. Soc. Am. B 21, 1082–1087 (2004).

5. A. Sharma and A. Agrawal, "Non-paraxial split-step finite-difference method for beam propagation," Opt. Quantum Electron. 38, 19–34 (2006).

6. C. D. Clark and R. Thomas, "Wide-angle split-step spectral method for 2D or 3D beam propagation," Opt. Quantum Electron. 41, 849–857 (2010).

7. M. Guizar-Sicairos and J. C. Gutiérrez-Vega, "Computation of quasi-discrete Hankel transforms of integer order for propagating optical wave fields," J. Opt. Soc. Am. A 21, 53–58 (2004).

8. W. Magnus, "On the exponential solution of differential equations for a linear operator," Comm. Pure Appl. Math. 7, 649–673 (1954).

9. M. Bauer, R. Chetrite, K. Ebrahimi-Fard, and F. Patras, "Time-ordering and a generalized Magnus expansion," Lett. Math. Phys. 103, 331–350 (2012).

10. M. J. Adams, An Introduction to Optical Waveguides (Wiley, 1981).
