Optica Publishing Group

Convergence of the iterative T-matrix method

Open Access

Abstract

Among the various methods for computing the T-matrix in electromagnetic and acoustic scattering problems is an iterative approach that has been shown to be particularly suited for particles with small-scale surface roughness. This method is based on an implicit T-matrix equation. However, the convergence properties of this method are not well understood. Here, a sufficient condition for the convergence of the iterative T-matrix algorithm is derived by applying the Banach fixed point theorem. The usefulness of the criterion is illustrated by applying it to predict, as well as to systematically improve, the convergence of the iterative method.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In the electromagnetic scattering problem one considers an incident field ${\textbf E}_\textrm {inc}$, which, after introducing a scattering target, is modified to a field ${\textbf E}_\textrm {tot}$. The scattered field in the region outside the target is the difference in the fields, ${\textbf E}_\textrm {sca}={\textbf E}_\textrm {tot}-{\textbf E}_\textrm {inc}$. The field inside the target is referred to as the internal field ${\textbf E}_\textrm {int}$. A common approach to scattering problems is to expand the (complex) fields in a complete set of wave functions, and to determine the unknown complex expansion coefficients of the scattered and internal fields in terms of the known expansion coefficients of the incident field. Since the boundary conditions on the surface of the target are linear, one obtains linear relations among the expansion coefficients. One is specifically interested in the linear relation that gives the expansion coefficients of the scattered field in terms of those of the incident field. This relation is expressed by the T-matrix.

The traditional approach to computing the T-matrix is Waterman’s null-field method [1]. Other methods have been developed, based on, e.g., the separation-of-variables method [2], the point-matching method [3], and the invariant-imbedding T-matrix method [4,5]. For particles with a basic regular geometry and an impressed regular or irregular small-scale surface structure an approach has been proposed [6,7] that is based on solving an iterative equation (similar to a Lippmann-Schwinger equation) for computing the T-matrix. A short review of the method is given in the next section. A detailed explanation is given in [8].

The convergence properties of this method are, as yet, poorly understood. In previous applications [7] one has simply taken a pragmatic approach and tested the convergence by numerical experiments. However, this can be quite impractical, because it requires us to go through a potentially long iterative process before discovering whether or not the algorithm converges. It would be a significant improvement to the method if we had a convergence prediction before starting the iteration. If the prediction is negative, one can choose a different method, such as a group-theoretical method based on the irreducible representations of the particle’s symmetry group [9–11]. It is also mathematically unsatisfactory to develop a numerical method without formulating any criteria for its convergence. The aim of this note is to fill this gap. We will show that a convergence criterion can not only be used for making predictions, but also for systematically improving the chances of convergence of the iterative method.

2. Iterative T-matrix approach

Several methods for numerically solving electromagnetic scattering problems are based on expanding the incident, scattered, and internal fields according to

$${\mathbf{E}}^\textrm{inc}(k_0{\mathbf{r}}) = \sum_{n=1}^{N_\textrm{cut}}\sum_{m=-n}^{n} \sum_{q=1}^{2} a_{n,m,q} {\boldsymbol{\mathrm{\psi}}}_{n,m,q}^{(1)}(k_0{\mathbf{r}})$$
$${\mathbf{E}}^\textrm{sca}(k_0{\mathbf{r}}) = \sum_{n=1}^{N_\textrm{cut}}\sum_{m=-n}^{n} \sum_{q=1}^{2} p_{n,m,q} {\boldsymbol{\mathrm{\psi}}}_{n,m,q}^{(3)}(k_0{\mathbf{r}})$$
$${\mathbf{E}}^\textrm{int}(k{\mathbf{r}}) = \sum_{n=1}^{N_\textrm{cut}}\sum_{m=-n}^{n} \sum_{q=1}^{2} c_{n,m,q} {\boldsymbol{\mathrm{\psi}}}_{n,m,q}^{(1)}(k{\mathbf{r}}),$$
where $k_0$ and $k$ denote the wavenumbers in the surrounding medium and inside the scatterer, respectively, and $\mathbf {r}$ is the position vector. The subscripts $n$, $m$, and $q$ denote, respectively, the degree, order, and mode, while the superscript indicates the kind of the wavefunctions ${\boldsymbol{\mathrm{\psi}}}$. Wavefunctions of the first kind are regular at the origin, while wavefunctions of the third kind are outgoing functions that satisfy the radiation condition. Note that in numerical computations the infinite sums over the degree $n$ have to be truncated at a finite expansion order $N_\textrm {cut}$.

The linearity of the boundary conditions entails linear relations among the expansion coefficients, which in matrix-vector notation take the form

$${\mathbf{a}} = {\mathbf{Q}}\cdot{\mathbf{c}}$$
$${\mathbf{p}} = -Rg{\mathbf{Q}}\cdot{\mathbf{c}}$$
$${\mathbf{p}} = {\mathbf{T}}\cdot{\mathbf{a}}.$$
In Waterman’s T-matrix method (also known as the null-field method or extended boundary condition method) one derives surface-integral expressions for computing the matrices ${\mathbf {Q}}$ and $Rg{\mathbf {Q}}$. What we actually want is the T-matrix, which allows us to compute the scattered field from the incident field. Using the last three equations, one obtains
$${\mathbf{T}}=-Rg{\mathbf{Q}}\cdot{\mathbf{Q}}^{-1}.$$
Thus the T-matrix can be computed from the matrices ${\mathbf {Q}}$ and $Rg{\mathbf {Q}}$.

A potential disadvantage of the last equation is that it relies on inversion of the matrix ${\mathbf {Q}}$, which can become numerically ill-conditioned. Several methods have been devised to alleviate this problem, such as the use of expansion functions defined in spheroidal coordinates [2], the expansion of the fields about several expansion points [12], as in the method of discrete sources [13], or the use of group-theoretical methods [9,10]. Here we will consider the iterative T-matrix method [6–8], which proceeds as follows. Consider a reference geometry with Q-matrix ${\mathbf {Q}}_0$, as well as a geometry that can be seen as a small perturbation of the reference case with Q-matrix ${\mathbf {Q}}$. We formally introduce the difference of the two Q-matrices,

$$\Delta{\mathbf{Q}} = {\mathbf{Q}} - {\mathbf{Q}}_0 .$$
We can recast Eq. (7) into the form
$${\mathbf{T}}\cdot({\mathbf{Q}}_0+\Delta{\mathbf{Q}}) =-Rg{\mathbf{Q}},$$
which yields
$${\mathbf{T}}=-(Rg{\mathbf{Q}}+ {\mathbf{T}}\cdot\Delta{\mathbf{Q}})\cdot{\mathbf{Q}}_0^{-1}.$$
This equation is formally equivalent to Eq. (7). However, Eq. (10) involves inversion of the matrix ${\mathbf {Q}}_0$, while Eq. (7) requires inversion of the matrix ${\mathbf {Q}}$. If the former matrix inversion problem is numerically more stable than the latter, then Eq. (10) may help us to reduce ill-conditioning problems. As an example, consider a Chebyshev particle given by the surface parameterisation
$$r(\theta,\phi)=r_0 (1+\epsilon \cos \ell\theta \cos \ell\phi),$$
where $r_0$ denotes the radius of an unperturbed sphere, $\theta$ and $\phi$ are the polar and azimuth angles, $\epsilon$ is the deformation parameter, and $\ell$ is the order of the Chebyshev polynomial. The reference geometry is the unperturbed sphere with radius $r_0$. Its Q-matrix ${\mathbf {Q}}_0$ is diagonal. Hence, the matrix inversion is trivial. By contrast, the matrix ${\mathbf {Q}}$ of the Chebyshev particle is not diagonal. Its inversion has to be computed numerically, which may be an ill-conditioned problem. In such a case, the use of Eq. (10) can circumvent the ill-conditioning problems that are present in Eq. (7).
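As an aside, the surface parameterisation in Eq. (11) is straightforward to evaluate numerically. The following sketch (in Python with NumPy; it is illustrative only and not part of the Tsym code used later in this paper) evaluates the radius on an angular grid and illustrates that the deformation stays within $r_0(1\pm\epsilon)$:

```python
import numpy as np

def chebyshev_radius(theta, phi, r0=1.0, eps=0.03, ell=160):
    """Radial coordinate of the 3D-Chebyshev surface, Eq. (11)."""
    return r0 * (1.0 + eps * np.cos(ell * theta) * np.cos(ell * phi))

# Evaluate on a 1-degree angular grid; the radius oscillates within
# the bounds r0*(1 - eps) and r0*(1 + eps).
theta = np.linspace(0.0, np.pi, 181)
phi = np.linspace(0.0, 2.0 * np.pi, 361)
tt, pp = np.meshgrid(theta, phi)
r = chebyshev_radius(tt, pp)
```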

Equation (10) has one major drawback. It is only an implicit equation for computing the T-matrix, while Eq. (7) is an explicit equation. Equation (10) can be solved by the following iteration scheme.

$$ {\mathbf{T}}_0 = -Rg{\mathbf{Q}}\cdot{\mathbf{Q}}_0^{-1}$$
$$ {\mathbf{T}}_{n} = -(Rg{\mathbf{Q}}+{\mathbf{T}_{n-1}}\cdot{\Delta \mathbf{Q}})\cdot{\mathbf{Q}}_0^{-1}. $$
The first equation is a first guess for the T-matrix, which we obtain by setting ${\mathbf {T}}={\mathbf 0}$ on the rhs of Eq. (10). The second equation expresses the iterative application of Eq. (10) by which, starting from the first guess ${\mathbf {T}}_0$, our estimate of the T-matrix is successively improved.
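To make the scheme concrete, here is a minimal numerical sketch (Python/NumPy; the matrices are random stand-ins for the Q-matrices of an actual scatterer, chosen small enough that the iteration converges) of the iteration in Eqs. (12)–(13), compared against the direct solution of Eq. (7):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

# Hypothetical stand-ins: a diagonal reference matrix Q0 (trivial to
# invert) and a small perturbation dQ.
Q0 = np.diag(1.0 + rng.random(N)).astype(complex)
dQ = 0.05 * (rng.random((N, N)) + 1j * rng.random((N, N)))
Q = Q0 + dQ
RgQ = rng.random((N, N)) + 1j * rng.random((N, N))

Q0_inv = np.diag(1.0 / np.diag(Q0))     # diagonal inverse is trivial

T = -RgQ @ Q0_inv                       # first guess, Eq. (12)
for _ in range(200):                    # fixed-point iteration, Eq. (13)
    T = -(RgQ + T @ dQ) @ Q0_inv

T_direct = -RgQ @ np.linalg.inv(Q)      # explicit solution, Eq. (7)
```

For this small perturbation the fixed point of the iteration agrees with the direct solution to machine precision.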

Can we make a prediction whether or not this iteration scheme will converge? In the following section we derive a sufficient condition for the existence of a unique solution to which the method converges. The main tool for the derivation will be the Banach fixed-point theorem, which we briefly review there.

3. Sufficient condition for the convergence of the iterative T-matrix approach

3.1 Derivation based on the Banach fixed point theorem

For the following derivation we need to borrow some basic concepts from functional analysis.

Contraction mapping. Let $(X,g)$ denote a metric space, where $X$ is a vector space and $g:\, X\times X\rightarrow \mathbb {R}$ denotes a metric defined on $X$. A mapping $\hat {C}:\, X\rightarrow X$ is called a contraction mapping, if there exists a real number $\alpha <1$ such that

$$g(\hat{C}x, \hat{C}y)\le\alpha g(x,y)\;\;\;\forall x,y\in X.$$
Note that $\hat {C}$ is not required to be linear.

Contraction theorem or Banach fixed point theorem. Let $(X,g)$ denote a complete, non-empty metric space, and let $\hat {C}:\, X\rightarrow X$ denote a contraction on $X$. Then there exists one and only one point $x\in X$ for which $\hat {C}x=x$. (Such a point is called a fixed point of the mapping $\hat {C}$.)

The proof of this theorem can be found in the standard literature on functional analysis (e.g. [14,15]).

Given an equation of the form $\hat {C}x=x$ (such as the implicit T-matrix equation), we are now in a position to make a prediction about the existence and uniqueness of a solution to this equation. We have to check whether or not $\hat {C}$ is a contraction. This is the main idea for deriving a convergence criterion for the implicit T-matrix equation.

The T-matrix has elements $T_{n,m,q,n',m',q'}$, where $n,n'=1,\ldots ,N_{\mathrm {cut}}$, $m=-n,\ldots ,n$, $m'=-n',\ldots ,n'$, and $q,q'=1,2$. The T-matrix is an element of a vector space ${\cal M}_N$ of $(N\times N)$ matrices, where $N=2N_{\mathrm {cut}}(N_{\mathrm {cut}}+2)$. In this finite dimensional space, we can simply define the matrix norm by

$$\left\lVert{\mathbf{T}}\right\rVert=\max \left| T_{n,m,q,n',m',q'}\right|.$$
The norm induces a metric
$$g(\mathbf{T}_1,\mathbf{T}_2)=\left\lVert{\mathbf{T}_1-\mathbf{T}_2}\right\rVert.$$
The normed vector space $({\cal M}_N,\left \lVert {\cdot }\right \rVert )$ is complete; thus, $({\cal M}_N,\left \lVert {\cdot }\right \rVert )$ satisfies the prerequisite of the Banach fixed point theorem. (A metric space is said to be complete, if any Cauchy sequence in the space converges. A complete normed space is called a Banach space. The elements of ${\cal M}_N$ are mappings $\mathbf {T}:\,\mathbb {C}^{N}\rightarrow \mathbb {C}^{N}$, and $\mathbb {C}^{N}$ is complete. Further, linear operators on finite dimensional spaces, such as the elements of ${\cal M}_N$, are bounded. A basic theorem of functional analysis states that any space of bounded linear operators $\mathbf {T}:\,X\rightarrow Y$ is a Banach space if $Y$ is a Banach space (see, e.g., [14]). It follows that $({\cal M}_N,\left \lVert {\cdot }\right \rVert )$ is a Banach space, i.e., it is complete.)

Now we consider the mapping

$$\hat{C}:\,{\cal M}_N\rightarrow {\cal M}_N,\; \mathbf{T}\mapsto \hat{C}\mathbf{T} = -( Rg{\mathbf{Q}}+ {\mathbf{T}}\cdot\Delta{\mathbf{Q}})\cdot{\mathbf{Q}}_0^{-1}.$$
This is manifestly not a linear mapping. The implicit T-matrix Eq. (10) can now be written as
$$\mathbf{T}= \hat{C}\mathbf{T}.$$
If $\hat {C}$ is a contraction, then, by the Banach fixed point theorem, it possesses a unique fixed point; i.e., the implicit Eq. (18) has a unique solution. To investigate the contraction property, we have to consider the expression
$$g(\hat{C}\mathbf{T}_1,\hat{C}\mathbf{T}_2)=\left\lVert{(\mathbf{T}_1-\mathbf{T}_2)\cdot\Delta{\mathbf{Q}}\cdot{\mathbf{Q}}_0^{-1}}\right\rVert$$
for any two matrices $\mathbf {T}_1$, $\mathbf {T}_2\in {\cal M}_N$. $\hat {C}$ is a contraction if and only if there exists an $\alpha <1$ so that
$$\left\lVert{(\mathbf{T}_1-\mathbf{T}_2)\cdot\Delta{\mathbf{Q}}\cdot{\mathbf{Q}}_0^{-1}}\right\rVert\le\alpha\left\lVert{\mathbf{T}_1-\mathbf{T}_2}\right\rVert$$
for all $\mathbf {T}_1$, $\mathbf {T}_2\in {\cal M}_N$. Since $(\mathbf {T}_1-\mathbf {T}_2)\in {\cal M}_N$, we may simply replace $(\mathbf {T}_1-\mathbf {T}_2)$ with $\mathbf {T}$, so that the contraction property is fulfilled if and only if
$$\left\lVert{\mathbf{T}\cdot\Delta{\mathbf{Q}}\cdot{\mathbf{Q}}_0^{-1}}\right\rVert\le\alpha\left\lVert{\mathbf{T}}\right\rVert\;\;\forall\mathbf{T}\in{\cal M}_N.$$
If $\left \lVert {\mathbf {T}}\right \rVert =0$, then this condition is trivially fulfilled. If $\left \lVert {\mathbf {T}}\right \rVert \ne 0$, we obtain
$$\frac{\left\lVert{\mathbf{T}\cdot\Delta{\mathbf{Q}}\cdot{\mathbf{Q}}_0^{-1}}\right\rVert}{\left\lVert{\mathbf{T}}\right\rVert}\le\alpha\;\;\forall\mathbf{T}\in{\cal M}'_N,$$
where ${\cal M}'_N=\{\mathbf {T}\in {\cal M}_N: \left \lVert {\mathbf {T}}\right \rVert \ne 0\}$. This is true for all $\mathbf {T}\in {\cal M}'_N$ if and only if it is true for the smallest upper bound of ${\left \lVert {\mathbf {T}\cdot \Delta {\mathbf {Q}}\cdot {\mathbf {Q}}_0^{-1}}\right \rVert }/{\left \lVert {\mathbf {T}}\right \rVert }$, i.e.
$$\sup_{\mathbf{T}\in{\cal M}'_N} \frac{\left\lVert{\mathbf{T}\cdot\Delta{\mathbf{Q}}\cdot{\mathbf{Q}}_0^{-1}}\right\rVert}{\left\lVert{\mathbf{T}}\right\rVert}\le\alpha.$$
The expression on the left hand side defines a norm, which we will refer to as the supremum norm $\left \lVert {\cdot }\right \rVert _s$ of the matrix $\Delta {\mathbf {Q}}\cdot {\mathbf {Q}}_0^{-1}$,
$$\left\lVert{\Delta{\mathbf{Q}}\cdot{\mathbf{Q}}_0^{-1}}\right\rVert_s=\sup_{\mathbf{T}\in{\cal M}'_N} \frac{\left\lVert{\mathbf{T}\cdot\Delta{\mathbf{Q}}\cdot{\mathbf{Q}}_0^{-1}}\right\rVert}{\left\lVert{\mathbf{T}}\right\rVert}.$$
Thus the inequality (23) becomes
$$\left\lVert{\Delta{\mathbf{Q}}\cdot{\mathbf{Q}}_0^{-1}}\right\rVert_s\le\alpha.$$
In our case we can exploit the fact that in finite dimensions all norms are equivalent (see, e.g., [14]). (Two norms $\left \lVert {\cdot }\right \rVert _1$ and $\left \lVert {\cdot }\right \rVert _2$ on a vector space $X$ are said to be equivalent if there exist numbers $a>0$ and $b>0$ such that $a\left \lVert {x}\right \rVert _1 \le \left \lVert {x}\right \rVert _2 \le b\left \lVert {x}\right \rVert _1\;\;\;\forall x\in X$.) Hence we can simply replace the supremum norm in (25) with the maximum norm defined in (15). Thus we find that $\hat {C}$ is a contraction if and only if
$$\exists\alpha <1\; :\;\left\lVert{\Delta{\mathbf{Q}}\cdot{\mathbf{Q}}_0^{-1}}\right\rVert\le\alpha.$$
By the Banach fixed point theorem, the contraction property is only sufficient, but not necessary for the existence of a unique solution. Thus our final result is that (26) provides us with a sufficient condition for the existence of a unique solution to the iterative T-matrix equation.
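In practice, the criterion (26) amounts to a single matrix product and a maximum over its elements. The following sketch (Python/NumPy, applied to a toy matrix pair rather than actual Q-matrices) shows the check:

```python
import numpy as np

def contraction_norm(Q, Q0):
    """Left-hand side of criterion (26): the element-wise maximum
    norm, Eq. (15), of the product (Q - Q0) @ inv(Q0)."""
    return np.max(np.abs((Q - Q0) @ np.linalg.inv(Q0)))

# Toy example: a diagonal reference matrix and a mild perturbation.
rng = np.random.default_rng(1)
N = 4
Q0 = np.diag(2.0 + rng.random(N)).astype(complex)
Q = Q0 + 0.1 * rng.random((N, N))

converges = contraction_norm(Q, Q0) < 1.0   # sufficient, not necessary
```

If `converges` is true, the iteration in Eqs. (12)–(13) is guaranteed to converge; if false, no prediction can be made.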

3.2 Derivation based on a direct comparison test

Consider the matrix equation

$$\mathbf{A} = \mathbf{Q}\cdot \mathbf{Z}.$$
We can derive an implicit solution for $\mathbf {Z}$ by substituting $\mathbf {Q}$=$\mathbf {Q}_0$+$\Delta \mathbf {Q}$ — see Eq. (8). After rearranging some terms, this yields
$$\mathbf{Z} = \mathbf{Q}_0^{-1}\cdot (\mathbf{A} - \Delta\mathbf{Q} \cdot \mathbf{Z}).$$
If we set $\mathbf {Z}$=$\mathbf {Q}^{-1}$, then $\mathbf {A}$=$\mathbf {1}$, and the last equation becomes
$$\mathbf{Q}^{-1} = \mathbf{Q}_0^{-1}\cdot (\mathbf{1} - \Delta\mathbf{Q} \cdot \mathbf{Q}^{-1}).$$
Thus we obtain an implicit equation for inverting the Q-matrix, which could be solved by an iterative method analogous to that for the T-matrix given in Eqs. (12)–(13). We can also rearrange Eq. (29) in the form
$$(\mathbf{1} + \mathbf{Q}_0^{-1}\cdot \Delta\mathbf{Q}) \cdot \mathbf{Q}^{-1} = \mathbf{Q}_0^{-1},$$
from which we obtain the formal solution
$$\mathbf{Q}^{-1} = (\mathbf{1} - \mathbf{D})^{-1}\cdot \mathbf{Q}_0^{-1},$$
where we introduced
$$\mathbf{D} = - \mathbf{Q}_0^{-1} \cdot \Delta\mathbf{Q}.$$
The inverse matrix in Eq. (31) can be written as a limit of a geometric series, i.e.
$$(\mathbf{1} - \mathbf{D})^{-1} = \mathbf{1} + \mathbf{D} + \mathbf{D}^{2} + \ldots .$$
What can we say about the existence of this limit? Suppose there exists an $\alpha \ge 0$ such that $\left \lVert {\mathbf {D}}\right \rVert \le \alpha$. Then
$$\left\lVert{\mathbf{1} + \mathbf{D} + \mathbf{D}^{2} + \ldots}\right\rVert \le \left\lVert{\mathbf{1}}\right\rVert + \left\lVert{\mathbf{D}}\right\rVert + \left\lVert{\mathbf{D}^{2}}\right\rVert + \ldots \le \sum_{n=0}^{\infty} \alpha^{n}.$$
The geometric series on the rhs converges if and only if $\alpha < 1$. Further, the series expansion in (33) converges if the geometric series on the rhs of (34) converges. The latter criterion, known as the direct comparison criterion, provides a sufficient condition for the convergence of the series expansion in (33). Backsubstituting the definition of $\mathbf {D}$, we obtain the sufficient convergence criterion
$$\left\lVert{\mathbf{D}}\right\rVert = \left\lVert{\mathbf{Q}_0^{-1} \cdot \Delta\mathbf{Q}}\right\rVert \le \alpha < 1.$$
This is very similar to the criterion we obtained by use of the Banach fixed point theorem in (26), except that the order in the matrix product $\Delta \mathbf {Q} \cdot \mathbf {Q}_0^{-1}$ is now reversed.
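The geometric (Neumann) series in Eq. (33) is easy to verify numerically. A brief sketch (Python/NumPy, with a random matrix $\mathbf{D}$ chosen small enough that the comparison criterion (35) holds):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
D = 0.05 * rng.random((N, N))    # entries small, so the norm of D << 1

# Partial sums of the geometric series 1 + D + D^2 + ..., Eq. (33)
S = np.eye(N)
term = np.eye(N)
for _ in range(100):
    term = term @ D
    S = S + term

# The series converges to the inverse appearing in Eq. (31)
exact = np.linalg.inv(np.eye(N) - D)
```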

4. Applications of the convergence criterion

4.1 Testing convergence by use of the contraction criterion

As an illustration of the method, let us consider a 3D-Chebyshev particle characterised by the surface parameterisation in Eq. (11). We choose $\ell =160$, a size parameter $x=2\pi r_0/\lambda =40$ (where $\lambda$ is the wavelength of light), a refractive index of $m=3+0.1$i (typical for hematite at visible wavelengths), and five different deformation parameters, namely, $\epsilon =$0.0175, 0.02, 0.025, 0.03, and 0.04. This is a well-studied test case that has been considered in [11], where the reciprocity condition [16] has been used to show that the iterative T-matrix approach converges for $\epsilon \le$0.03, but not for $\epsilon =$0.04. In fact, for $\epsilon =$0.0175 it was found that the reciprocity error had dropped below 0.1 % after only three iterations. For $\epsilon =$0.03, the reciprocity error was less than 1.5 % after 39 iterations, while for $\epsilon =$0.04 the reciprocity error increased with the number of iterations, thus indicating divergence of the algorithm. As an illustration, Fig. 1 shows optical properties computed for a deformation parameter of $\epsilon =$0.03. The panels show the linear polarisation $-F_{12}/F_{11}$ (top), as well as $F_{22}/F_{11}$ (bottom), where $F_{ij}$, $i,j=1,\ldots ,4$ denote the elements of the Stokes scattering matrix. The inset in the bottom panel shows the linear depolarisation ratio $\delta _L$ as a function of scattering angle in the backscattering hemisphere, where

$$\delta_L = \frac{F_{11}-F_{22}}{F_{11}+2F_{12}+F_{22}}.$$
Between 30 and 40 iterations are required to achieve convergence in this case. The computations have been performed with the open-source T-matrix program Tsym [17].


Fig. 1. Degree of linear polarisation $-F_{12}/F_{11}$ (top), scattering matrix element $F_{22}/F_{11}$ (bottom), and linear depolarisation ratio $\delta _L$ (bottom insert) of randomly oriented 3D-Chebyshev particles with size parameter $x=40$, Chebyshev order $\ell$=160, refractive index $m=3+0.1$i, and deformation parameter $\epsilon =0.03$, computed by performing 6, 15, 27, and 39 iterations of Eq. (13).


The left hand side of the inequality in (26) has been implemented in Tsym and applied to the five test cases. Table 1 shows that the matrix norm is smaller than 1 for $\epsilon =$0.0175, 0.02, 0.025, and 0.03. Thus the algorithm is guaranteed to converge in these cases. The table also shows the number of iterations $n_{\mathrm {max}}$ that are required to reach convergence in Eq. (13). This can be taken as a measure for the speed of convergence. Evidently, the magnitude of $\left \lVert {\Delta {\mathbf {Q}}\cdot {\mathbf {Q}}_0^{-1}}\right \rVert$ provides us with a clear indication of the speed of convergence.


Table 1. $\left \lVert {\Delta{\mathbf Q}\cdot{\mathbf Q}_0^{-1}}\right \rVert$ from Eq. (26) and number of iterations $n_{\mathrm{max}}$ for Chebyshev particles with size parameter $x=40$, Chebyshev order $\ell$=160, refractive index $m=3+0.1$i, and with different deformation parameters $\epsilon$.

For $\epsilon =$0.04 we have $\left \lVert {\Delta {\mathbf {Q}}\cdot {\mathbf {Q}}_0^{-1}}\right \rVert >1$. Since the contraction criterion is sufficient but not necessary for convergence, we cannot make any predictions about the convergence of the iterative T-matrix equation in this case. The numerical results show that the iteration diverges.

We can conclude that the numerical tests of the iterative T-matrix approach are consistent with the predictions of the contraction criterion.

4.2 Nested iteration to compute the inverse Q-matrix

We can take the iterative solution of the T-matrix problem one step further. In the previous section, we had a reference geometry with Q-matrix $\mathbf {Q}_0$, and a target geometry with Q-matrices $\mathbf {Q}$ and $Rg\mathbf {Q}$. From these three matrices we obtained $\mathbf {T}$ by an iterative approach. Now we want to add an additional intermediate step.

Suppose we have a reference geometry with Q-matrix $\mathbf {Q}_0$, a second reference geometry with Q-matrix $\mathbf {Q}_1$, and a target geometry with Q-matrix $\mathbf {Q}$. To take a specific example, the first reference geometry may be a sphere, the second one a Chebyshev particle of order $\ell$ with a small deformation parameter $\epsilon _1$, and the target geometry a Chebyshev particle of the same order $\ell$ and a somewhat larger deformation parameter $\epsilon > \epsilon _1$. We can determine the T-matrix of the target geometry by the following steps.

  • 1. We mentioned earlier that the implicit equation for the inverse of the Q-matrix, Eq. (29), can be solved by iteration analogous to Eqs. (12)–(13). Thus, we can determine $\mathbf {Q}_1^{-1}$ by solving
    $$ {\mathbf{Q}}_{1,(0)}^{-1} = \mathbf{Q}_0^{-1}\cdot (\mathbf{1} - \Delta\mathbf{Q}_1 \cdot \mathbf{Q}_0^{-1})$$
    $$ {\mathbf{Q}}_{1,(n+1)}^{-1} = \mathbf{Q}_0^{-1}\cdot (\mathbf{1} - \Delta\mathbf{Q}_1 \cdot \mathbf{Q}_{1,(n)}^{-1}), $$
    where $\Delta \mathbf {Q}_1 = \mathbf {Q}_1 - \mathbf {Q}_0$, and where for the spherical reference geometry the inverse $\mathbf {Q}_0^{-1}$ is known. If the iteration converges, then for sufficiently large $n$ we obtain $\mathbf {Q}_1^{-1}$.
  • 2. Once we know $\mathbf {Q}_1^{-1}$, we can determine $\mathbf {Q}^{-1}$ by the same iteration scheme, i.e.
    $$ {\mathbf{Q}}_{2,(0)}^{-1} = \mathbf{Q}_1^{-1}\cdot (\mathbf{1} - \Delta\mathbf{Q}_2 \cdot \mathbf{Q}_1^{-1})$$
    $$ {\mathbf{Q}}_{2,(n+1)}^{-1} = \mathbf{Q}_1^{-1}\cdot (\mathbf{1} - \Delta\mathbf{Q}_2 \cdot \mathbf{Q}_{2,(n)}^{-1}), $$
    where $\Delta \mathbf {Q}_2 = \mathbf {Q} - \mathbf {Q}_1$. If convergent, the iteration yields $\mathbf {Q}_2^{-1}=\mathbf {Q}^{-1}$.
  • 3. Finally, the T-matrix is computed by use of Eq. (7).

We apply this method to the Chebyshev particle considered in the previous section. The first reference geometry is a sphere, the second one a Chebyshev particle with deformation parameter $\epsilon _1$, and the target geometry a Chebyshev particle with deformation parameter $\epsilon$. First we take $\epsilon _1$=0.02 and $\epsilon$=0.025. As indicated in Table 2, the convergence criterion yields 0.53 for the first iteration (which is identical with the corresponding value for $\epsilon$=0.02 in Table 1), and 0.40 for the second iteration. The iterations given in Eqs. (37)–(38) and (39)–(40) converge after $n_{\mathrm {max}}^{(1)}$=5 and $n_{\mathrm {max}}^{(2)}$=4 iterations, respectively. This should be compared to the corresponding values in Table 1 for a single application of the iteration scheme without a second reference geometry, which were $\left \lVert {\Delta {\mathbf {Q}}\cdot {\mathbf {Q}}_0^{-1}}\right \rVert$=0.73 and $n_{\mathrm {max}}$=15. Thus convergence is reached with relative ease in each of the two steps. The price we pay for this is that we need to compute an extra Q-matrix $\mathbf {Q}_1$, and we need to iteratively compute $\mathbf {Q}_1^{-1}$.


Table 2. Convergence criterion in Eq. (35) for Chebyshev particles with size parameter $x=40$, Chebyshev order $\ell$=160, refractive index $m=3+0.1$i, and with different deformation parameters $\epsilon$. The second reference geometry in the iteration has deformation parameter $\epsilon _1$. The number of iterations in the first iteration (37)–(38) is denoted by $n_{\mathrm {max}}^{(1)}$, that in the second iteration (39)–(40) is denoted by $n_{\mathrm {max}}^{(2)}$.

Similarly, for $\epsilon$=0.03 and $\epsilon _1$=0.025 we obtain $\left \lVert {{\mathbf {Q}}_0^{-1}\cdot \Delta {\mathbf {Q}}_1}\right \rVert$=0.73 and $n_{\mathrm {max}}^{(1)}$=15 (which, again, is identical with the corresponding result in Table 1 for $\epsilon$=0.025), and we get $\left \lVert {{\mathbf {Q}}_1^{-1}\cdot \Delta {\mathbf {Q}}}\right \rVert$=0.57, $n_{\mathrm {max}}^{(2)}$=4. Thus, in total, we perform $n_{\mathrm {max}}^{(1)}+n_{\mathrm {max}}^{(2)}$=19 iteration steps, instead of $n_{\mathrm {max}}$=39 (see Table 1).

Finally, for $\epsilon$=0.04 and $\epsilon _1$=0.03, the convergence criterion yields a norm larger than 1, and the iterative method does not converge. To tackle this problem, it would be possible to generalise the method in Eqs. (37)–(40). To this end, one could introduce $m$ reference geometries with Q-matrices $\mathbf {Q}_0,\, \mathbf {Q}_1,\ldots ,\mathbf {Q}_{m-1}$. If we label the target Q-matrix $\mathbf {Q}$ by $\mathbf {Q}_m$, then we can compute $\mathbf {Q}^{-1}$ by the nested iteration scheme

$$\Delta\mathbf{Q}_{i} = \mathbf{Q}_{i} - \mathbf{Q}_{i-1} $$
$$ {\mathbf{Q}}_{i,(0)}^{-1} = \mathbf{Q}_{i-1}^{-1}\cdot (\mathbf{1} - \Delta\mathbf{Q}_{i} \cdot \mathbf{Q}_{i-1}^{-1})$$
$$\begin{aligned} {\mathbf{Q}}_{i,(n+1)}^{-1} & = \mathbf{Q}_{i-1}^{-1}\cdot (\mathbf{1} - \Delta\mathbf{Q}_i \cdot \mathbf{Q}_{i,(n)}^{-1}), \\ i & = 1, 2, \ldots, m. \end{aligned}$$
The iteration over $i$ is the outer iteration, the one over $n$ the inner iteration.
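The nested scheme can be sketched as follows (Python/NumPy; the chain of Q-matrices here consists of random small perturbations, standing in for the sequence of reference geometries in Eqs. (41)–(43)):

```python
import numpy as np

def refine_inverse(Q_prev_inv, Q_i, Q_prev, n_iter=200):
    """One outer step: iterate Eqs. (42)-(43) to obtain inv(Q_i)
    from the previously computed inv(Q_{i-1})."""
    dQ = Q_i - Q_prev                            # Eq. (41)
    one = np.eye(len(Q_i))
    X = Q_prev_inv @ (one - dQ @ Q_prev_inv)     # first guess, Eq. (42)
    for _ in range(n_iter):                      # inner iteration, Eq. (43)
        X = Q_prev_inv @ (one - dQ @ X)
    return X

# Hypothetical chain Q0 -> Q1 -> Q, each a small step from the previous
rng = np.random.default_rng(3)
N = 5
Q0 = np.diag(2.0 + rng.random(N))
Q1 = Q0 + 0.1 * rng.random((N, N))
Q = Q1 + 0.1 * rng.random((N, N))

Q0_inv = np.diag(1.0 / np.diag(Q0))      # diagonal reference: trivial
Q1_inv = refine_inverse(Q0_inv, Q1, Q0)  # outer step i = 1
Q_inv = refine_inverse(Q1_inv, Q, Q1)    # outer step i = 2
```

Each outer step only needs the inverse produced by the previous one, so the scheme never inverts a matrix directly except the trivial diagonal reference.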

We have made no attempt to test the nested iteration method beyond a second-order outer iteration. Instead, we found a different approach for extending the applicability of the iterative T-matrix method, which is discussed in the following sections.

4.3 Optimising the convergence of the iterative approach by use of the contraction criterion

In the preceding sections, we have applied the contraction criterion to assess whether or not the iterative T-matrix scheme will converge. Now we turn this around and ask: Can we use the contraction criterion to set up the iterative method in a clever way such as to ensure its convergence? To get there, it is important to understand that our choice of the reference matrix $\mathbf {Q}_0$ was in no way unique. In fact, this matrix does not even have to be based on an actual reference geometry; it can be literally any regular matrix. Merely the matrices $\mathbf {Q}$ and $Rg\mathbf {Q}$ have to be computed for the actual target particle. Thus we will now drop the rather constraining assumption that $\mathbf {Q}_0$ is based on a reference geometry. Instead, we will tailor the matrix $\mathbf {Q}_0$ so that the iterative T-matrix scheme (13) converges. The guiding principle in the construction of $\mathbf {Q}_0$ is the contraction criterion (26).

Since we need the inverse of the matrix $\mathbf {Q}_0$, the matrix has to be regular. Further, it greatly simplifies the inversion if we assume that $\mathbf {Q}_0$ is diagonal. In that case the matrix product that occurs in the contraction criterion (26) has components

$$\left( \Delta\mathbf{Q}\cdot \mathbf{Q}_0^{-1} \right)_{i,j} = \frac{Q_{i,j}}{{Q_0}_{j,j}} - \delta_{i,j}.$$
Let us represent complex numbers by their magnitude and phase according to
$$Q_{i,j} = q_{i,j} \exp(\mathrm{i}\phi_{i,j}) $$
$${Q_0}_{j,j} = {q_0}_j \exp(\mathrm{i}{\phi_0}_j), $$
where $q_{i,j}, {q_0}_j\in \mathbb {R}$. For $i=j$, we have
$$\left( \Delta\mathbf{Q}\cdot \mathbf{Q}_0^{-1} \right)_{j,j} = \frac{q_{j,j}}{{q_0}_j} \exp[\mathrm{i}(\phi_{j,j}-{\phi_0}_j)] - 1.$$
We want to make the norm in the contraction criterion as small as possible, since, as we saw in the preceding sections, this is likely to speed up the rate of convergence. Thus, we want to make the distance in the complex plane between the complex number $q_{j,j}/{q_0}_j\exp [\mathrm {i}(\phi _{j,j}-{\phi _0}_j)]$ and 1 as small as possible. This is achieved by choosing the phase difference $\phi _{j,j}-{\phi _0}_j=0$. Accordingly, we construct $\mathbf {Q}_0$ such that each element ${Q_0}_{j,j}$ has the same phase as $Q_{j,j}$, so that $Q_{j,j}/{Q_0}_{j,j}$ is real and positive. Then
$$\left\lvert\left( \Delta\mathbf{Q}\cdot \mathbf{Q}_0^{-1}\right)_{i,j}\right\rvert = \left\{ \begin{array}{lcl} \lvert\frac{q_{j,j}}{{q_0}_j} - 1\rvert & : & i=j\\ \frac{q_{i,j}}{{q_0}_j} & : & i\ne j \end{array} \right. .$$
We still have the freedom to choose the magnitude ${q_0}_j = \lvert {Q_0}_{j,j}\rvert$. We will choose it such that the contraction criterion in (26) is satisfied,
$$\max\left\lvert\left( \Delta\mathbf{Q}\cdot \mathbf{Q}_0^{-1} \right)_{i,j}\right\rvert < 1,$$
and so that this matrix norm becomes as small as possible.

Let

$$\bar{q}_j = \max \{ q_{i,j} : i=1,\ldots, N,\; i\ne j\}.$$
Then the second term in Eq. (48) satisfies
$$\frac{q_{i,j}}{{q_0}_j} \leq \frac{\bar{q}_j}{{q_0}_j}.$$
Thus, the left hand side in the contraction criterion (26) becomes
$$\left\lVert{\Delta{\mathbf{Q}}\cdot{\mathbf{Q}}_0^{-1}}\right\rVert = \max \left\lvert\left( \Delta\mathbf{Q}\cdot \mathbf{Q}_0^{-1}\right)_{i,j}\right\rvert = \max \left\{ \left\lvert\frac{q_{j,j}}{{q_0}_j} - 1\right\rvert, \frac{\bar{q}_j}{{q_0}_j} \right\}.$$
The optimum choice of ${q_0}_j$ is found by searching the minimum of this matrix norm:
$$\min_{{q_0}_j} \left[ \max \left\{ \left\lvert\frac{q_{j,j}}{{q_0}_j} - 1\right\rvert, \frac{\bar{q}_j}{{q_0}_j} \right\}\right].$$
We can picture the minimisation problem by plotting the two functions $f({q_0}_j) = \bar {q}_j/{q_0}_j$ and $g({q_0}_j) = \lvert q_{j,j}/{q_0}_j - 1\rvert$ — see Fig. 2. The minimum in (53) is found at the intersection of the two curves, i.e. $f({q_0}_j) = g({q_0}_j)$. By considering the two cases $q_{j,j} \ge {q_0}_j$ and $q_{j,j} < {q_0}_j$ we find two solutions
$${q_0}_j = q_{j,j} - \bar{q}_j $$
$${q_0}_j = q_{j,j} + \bar{q}_j . $$
Back-substitution into (53) shows that the minimum of the term in brackets is obtained for ${q_0}_j = q_{j,j} + \bar {q}_j$, which is what we expected by inspecting Fig. 2. Substitution of ${q_0}_j = q_{j,j} + \bar {q}_j$ into Eq. (52) yields
$$\left\lVert{\Delta{\mathbf{Q}}\cdot{\mathbf{Q}}_0^{-1}}\right\rVert = \max_j \left\{ \left\lvert\frac{q_{j,j}}{q_{j,j} + \bar{q}_j} - 1\right\rvert, \frac{\bar{q}_j}{q_{j,j} + \bar{q}_j} \right\} = \max_j \left\{ \frac{\bar{q}_j}{q_{j,j} + \bar{q}_j}\right\}.$$
Mathematically, the norm is smaller than 1 if and only if $q_{j,j}>0$ for all $j$. It becomes equal to 1 if and only if there exists at least one $j$ for which $q_{j,j}=0$. Numerically, the norm can also become equal to 1 if $\bar {q}_j \gg q_{j,j}$. The likelihood of encountering such problems can be reduced by performing an appropriate permutation of the row and/or column vectors. We tested the method with partial pivoting of the row vectors of $\mathbf {Q}$.
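As a sanity check, the following short script (our own, with arbitrary sample values) evaluates the bracketed expression in (53) at the two candidate solutions and confirms that the larger root, ${q_0}_j = q_{j,j} + \bar{q}_j$, yields the smaller norm contribution:

```python
# Sanity check (not from the paper's code): evaluate the bracketed term
# in (53), max{|q_jj/q0 - 1|, qbar_j/q0}, at the two candidate solutions
# q0 = q_jj - qbar_j and q0 = q_jj + qbar_j, for sample magnitudes.

def norm_contribution(q_jj, q_bar, q0):
    """Column-j contribution to the max-norm of Delta Q . Q0^{-1}."""
    return max(abs(q_jj / q0 - 1.0), q_bar / q0)

q_jj, q_bar = 2.0, 0.5                                  # arbitrary sample values
lower = norm_contribution(q_jj, q_bar, q_jj - q_bar)    # -> 1/3
upper = norm_contribution(q_jj, q_bar, q_jj + q_bar)    # -> 0.2

assert upper < lower
assert abs(upper - q_bar / (q_jj + q_bar)) < 1e-12      # agrees with Eq. (56)
```

For these sample values the larger root gives 0.2 against 1/3 for the smaller root, in line with Eq. (56).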


Fig. 2. Generic plot of the functions $f({q_0}_j) = \bar {q}_j/{q_0}_j$ and $g({q_0}_j) = \lvert q_{j,j}/{q_0}_j - 1\rvert$.


In summary, we construct the matrix $\mathbf {Q}_0$ by the following recipe.

$$ \mathbf{Q}_0 = \mathrm{diag}\{ {Q_0}_{1,1},\ldots, {Q_0}_{N,N}\} $$
$${Q_0}_{j,j} = {q_0}_j\exp(i{\phi_0}_j) $$
$${\phi_0}_j = \arg Q_{j,j} $$
$${q_0}_j = \lvert Q_{j,j} \rvert + \bar{q}_j $$
$$ \bar{q}_j = \max \{ \lvert Q_{i,j}\rvert : i=1,\ldots, N,\; i\ne j\}, $$
where we perform partial pivoting of $\mathbf {Q}$ prior to constructing $\mathbf {Q}_0$. As long as the diagonal elements of $\mathbf {Q}$ (after pivoting) are non-zero, and as long as the column maxima $\bar {q}_j$ are not much larger than the corresponding diagonal elements $q_{j,j}$, this construction ensures convergence of the iterative method. Further, this $\mathbf {Q}_0$ minimises the matrix norm in the contraction criterion (26), which is likely to speed up the rate of convergence.
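This recipe translates into a few lines of code. The sketch below (our own, assuming NumPy; function names are ours) builds the diagonal reference matrix from a matrix $\mathbf{Q}$ that has already been row-pivoted, and evaluates the norm in the contraction criterion:

```python
import numpy as np

def diagonal_q0(Q):
    """Diagonal reference matrix Q0 according to Eqs. (57)-(61).

    Assumes Q has already been row-pivoted so its diagonal is non-zero.
    """
    N = Q.shape[0]
    # Eq. (61): largest off-diagonal modulus in each column
    q_bar = np.array([max(abs(Q[i, j]) for i in range(N) if i != j)
                      for j in range(N)])
    q0 = np.abs(np.diag(Q)) + q_bar           # Eq. (60)
    phi0 = np.angle(np.diag(Q))               # Eq. (59)
    return np.diag(q0 * np.exp(1j * phi0))    # Eqs. (57)-(58)

def contraction_norm(Q, Q0):
    """Max-modulus norm of Delta Q . Q0^{-1} in criterion (26)."""
    return np.max(np.abs((Q - Q0) @ np.linalg.inv(Q0)))
```

For a pivoted matrix with non-zero diagonal, `contraction_norm(Q, diagonal_q0(Q))` is mathematically below 1 by Eq. (56); numerically it can still round to 1 when $\bar q_j \gg q_{j,j}$, as noted above.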

4.4 Application of the optimisation of $\mathbf {Q}_0$

We revisit the Chebyshev particles encountered in Sect. 4.1. Before, we took $\mathbf {Q}_0$ to be the Q-matrix of the unperturbed sphere. Now we employ the procedure described in Eqs. (57)–(61). We obtain perfect agreement with the phase matrix elements computed earlier (not shown), but with significantly faster convergence rates.

Table 3 shows that the matrix norm in the contraction criterion is substantially reduced in all cases as compared to Table 1. For instance, for a Chebyshev deformation parameter of $\epsilon$=0.03, the norm has been reduced from 0.90 to 0.11. Correspondingly, the number of iterations $n_{\mathrm {max}}$ required for obtaining convergent results decreases from 39 to only 4. For $\epsilon$=0.04 the contraction criterion was previously violated, and we were not able to obtain convergent results. Now the contraction criterion is fulfilled, and convergence is achieved after only 10 iterations.


Table 3. As Table 1, but with $\mathbf {Q}_0$ constructed according to Eqs. (57)–(61)

Next, we consider a case in which the method described in Eqs. (57)–(61) fails. We consider an oblate spheroid with size parameter $x=20$ and aspect ratio $ar=a/b=1.5$ (where $b$ is the extent along the spheroid’s rotational symmetry axis, and $a$ is the maximum extent in the direction perpendicular to that axis). The refractive index is $m$=1.6+0.01i. The surface $r_s(\theta )$ of the spheroid is perturbed with a Chebyshev polynomial analogous to Eq. (11), i.e.

$$r(\theta,\phi)=r_s(\theta) (1+\epsilon \cos \ell\theta \cos \ell\phi).$$
The Chebyshev parameters are set to $\ell =160$ and $\epsilon$=0.03. The method for constructing the matrix $\mathbf {Q}_0$ described in the previous section fails to produce convergent results, even after pivoting of the matrix $\mathbf {Q}$. In that method we had imposed the constraint that $\mathbf {Q}_0$ be diagonal. Let us now drop that assumption.
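For concreteness, a hypothetical helper evaluating this perturbed surface might look as follows; the expression for the unperturbed radius $r_s(\theta)$ is the standard spheroid formula, which is our assumption rather than a formula quoted from the paper:

```python
import math

def r_surface(theta, phi, a, b, eps=0.03, ell=160):
    """Perturbed spheroid radius of Eq. (62).

    a: equatorial semi-axis, b: polar semi-axis (along the symmetry axis).
    r_s(theta) below assumes a standard spheroid surface (our assumption).
    """
    r_s = a * b / math.sqrt((b * math.sin(theta))**2
                            + (a * math.cos(theta))**2)
    return r_s * (1.0 + eps * math.cos(ell * theta) * math.cos(ell * phi))
```

With `eps=0` the unperturbed radius is recovered: $b$ at the poles and $a$ at the equator, as expected for an oblate spheroid with $a/b=1.5$.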

4.5 Construction of a non-diagonal reference matrix $\mathbf {Q}_0$

In Sect. 4.3 we tried something clever to construct $\mathbf {Q}_0$. Now let us try something simple. But as before, we use the contraction criterion as a compass. The goal is to alleviate ill-conditioning problems in the inversion of the Q-matrix. Ill-conditioning problems often occur when the matrix contains elements of largely different magnitude. Thus, let us introduce a number $P$ and partition the Q-matrix as follows:

$$\mathbf{Q} = \mathbf{Q}_0 + \Delta\mathbf{Q} $$
$$(\mathbf{Q}_0)_{i,j} = \left\{ \begin{array}{lll} Q_{i,j} & : & \mathrm{if}\; \lvert Q_{i,j}\rvert \ge P \\ 0 & : & \mathrm{otherwise} \end{array} \right. $$
Thus, $\mathbf {Q}_0$ contains all the large elements of $\mathbf {Q}$, while $\Delta \mathbf {Q}$ contains all the small elements, where the parameter $P$ determines what we mean by large and small. By a suitable choice of $P$ we may hope that $\mathbf {Q}_0$ becomes sufficiently well conditioned for numerical inversion, while the matrix norm $\left \lVert {\Delta {\mathbf {Q}}\cdot {\mathbf {Q}}_0^{-1}}\right \rVert$ is small enough to ensure fast convergence of the iterative method.

We implemented this method with an automated procedure for determining $P$. If $P$ is chosen too large, then $\mathbf {Q}_0$ may end up with row and/or column vectors that contain only zeros, thus making the matrix singular. If $P$ is chosen too small, then $\mathbf {Q}_0$ may be too similar to $\mathbf {Q}$, so that no significant improvement in the conditioning of the matrix is achieved. Thus, our algorithm starts with a high value of $P$ and successively lowers that value until $\mathbf {Q}_0$ becomes regular. Then the norm in the contraction criterion is computed, and $P$ is decreased further until the norm becomes as small as possible, stopping before $\mathbf {Q}_0$ becomes ill-conditioned. In practice, we found that the iterative method converges very fast if the matrix norm is smaller than 0.3. So, we can usually exit the search algorithm as soon as $\left \lVert {\Delta {\mathbf {Q}}\cdot {\mathbf {Q}}_0^{-1}}\right \rVert < 0.3$. The inversion of $\mathbf {Q}_0$ is performed by using standard LAPACK routines for LU decomposition. Also, we combined the iterative method with group theoretical methods. The use of irreducible representations, as described in [8,10], block-diagonalises the Q-matrix prior to applying the iterative method. This way we can determine a partition number $P$ and a corresponding matrix $\mathbf {Q}_0$ for each block matrix in $\mathbf {Q}$.
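The search loop can be sketched as follows. This is our own simplified rendition under stated assumptions: a fixed shrink factor for $P$, exact singularity detected via the LU-based inverse, and no conditioning check or group-theoretical block-diagonalisation, all of which a production implementation would add:

```python
import numpy as np

def partition(Q, P):
    """Split Q into Q0 (elements with |Q_ij| >= P) and Delta Q, Eqs. (63)-(64)."""
    Q0 = np.where(np.abs(Q) >= P, Q, 0.0)
    return Q0, Q - Q0

def find_q0(Q, shrink=0.7, tol=0.3):
    """Lower P until Q0 is regular and the contraction norm drops below tol.

    Assumes Q itself is regular; otherwise the loop would not terminate.
    """
    P = np.abs(Q).max()
    while True:
        Q0, dQ = partition(Q, P)
        try:
            norm = np.max(np.abs(dQ @ np.linalg.inv(Q0)))
        except np.linalg.LinAlgError:   # Q0 still singular: lower P
            P *= shrink
            continue
        if norm < tol:                  # fast convergence expected
            return Q0, dQ, P
        P *= shrink
```

Once $P$ falls below the smallest non-zero modulus in $\mathbf{Q}$, the partition degenerates to $\mathbf{Q}_0 = \mathbf{Q}$ and the norm vanishes, so the loop always terminates for a regular $\mathbf{Q}$.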

We now return to the Chebyshev spheroid mentioned in Sect. 4.4. The method of constructing a diagonal matrix $\mathbf {Q}_0$, developed in Sect. 4.3, failed to give convergent results for this problem. The simple construction of a non-diagonal matrix $\mathbf {Q}_0$ proposed here converges after only 3 iterations. The matrix norm in the contraction criterion is smaller than 0.3.

Figure 3 shows the elements $F_{11}$ (top), $-F_{12}/F_{11}$ (centre), and $F_{22}/F_{11}$ (bottom) of the scattering matrix for randomly oriented Chebyshev spheroids (solid line) and for randomly oriented unperturbed spheroids (dashed line). While the phase function (top) is hardly impacted by the surface perturbation, the other two elements display noticeable differences between smooth and rough spheroids. The differences are less dramatic than in Fig. 1. The main reason for this is that the spheroids considered here are optically softer and less strongly absorbing, which reduces the significance of surface roughness — see the discussion in [18].


Fig. 3. Phase function $F_{11}$ (top), degree of linear polarisation $-F_{12}/F_{11}$ (centre), as well as $F_{22}/F_{11}$ (bottom) for smooth oblate spheroid (dashed line) and Chebyshev spheroid (solid line). The particles have a size parameter of $x$=20, an aspect ratio of 1.5, and a refractive index of $m$=1.6+0.001i. The Chebyshev order and deformation parameter are $\ell$=160 and $\epsilon$=0.03, respectively.


5. Summary

The main goal of this study was to derive a convergence criterion for the iterative T-matrix method. From the contraction theorem, we obtained a sufficient condition for the convergence of the iteration in Eq. (13). Specifically, if there exists an $\alpha <1$ so that

$$\max |(\Delta\mathbf{Q}\cdot\mathbf{Q}_0^{-1})_{n,m,q,n',m',q'}|\le\alpha,$$
then the iteration algorithm has a unique solution. The main idea in deriving this result was to apply the Banach fixed point theorem. Alternatively, the convergence criterion can be derived based on a direct comparison test with a geometric series.
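The convergence statement can be demonstrated with a small self-contained experiment (our own, with random matrices; `RgQ` is just a stand-in). It runs the Neumann-type fixed-point update for the inverse Q-matrix, $\mathbf{X}_{n+1} = \mathbf{Q}_0^{-1}(\mathbf{1} - \Delta\mathbf{Q}\,\mathbf{X}_n)$ with $\Delta\mathbf{Q} = \mathbf{Q} - \mathbf{Q}_0$, and checks that it converges to $\mathbf{Q}^{-1}$ whenever the contraction criterion holds:

```python
import numpy as np

# Demonstration (ours) of the fixed-point iteration for the inverse
# Q-matrix, X_{n+1} = Q0^{-1} (1 - dQ X_n) with dQ = Q - Q0; its fixed
# point is X = Q^{-1}, from which the T-matrix follows via T = RgQ . Q^{-1}.

rng = np.random.default_rng(0)
N = 6
Q0 = np.diag(1.0 + rng.random(N))          # well-conditioned reference matrix
dQ = 0.05 * (rng.random((N, N)) - 0.5)     # small perturbation dQ = Q - Q0
Q = Q0 + dQ
RgQ = rng.random((N, N))                   # stand-in for the RgQ matrix

Q0_inv = np.linalg.inv(Q0)
assert np.max(np.abs(dQ @ Q0_inv)) < 1     # contraction criterion, Eq. (26)

X = Q0_inv.copy()                          # starting guess for Q^{-1}
for _ in range(60):
    X = Q0_inv @ (np.eye(N) - dQ @ X)      # fixed-point update

T = RgQ @ X                                # approximate T-matrix
assert np.allclose(X, np.linalg.inv(Q))    # converged to the exact inverse
```

The fixed point satisfies $\mathbf{Q}_0\mathbf{X} = \mathbf{1} - \Delta\mathbf{Q}\,\mathbf{X}$, i.e. $(\mathbf{Q}_0 + \Delta\mathbf{Q})\mathbf{X} = \mathbf{1}$, so $\mathbf{X} = \mathbf{Q}^{-1}$, and the error contracts by one power of the perturbation per step.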

The applications we showed were meant to illustrate the potential usefulness of the contraction criterion. We employed Eq. (65) to assess the chances of convergence of the iterative T-matrix method. First, we iteratively computed the T-matrix by use of a single reference geometry. Second, we performed a two-step iterative computation of the inverse Q-matrix by use of two reference geometries. In either case, the numerical tests confirm that the condition correctly predicts convergence of the algorithm. Also, if the criterion is fulfilled, then the magnitude of the matrix norm $\left \lVert {\Delta {\mathbf {Q}}\cdot {\mathbf {Q}}_0^{-1}}\right \rVert$ provides us with an indication of the prospective speed of convergence. Such a criterion is of great practical value, since it allows us to decide beforehand whether or not the iterative approach is a promising method for computing the T-matrix.

Next, we employed the contraction criterion in optimising the convergence of the method. To this end, we used the contraction criterion as a compass in constructing the reference matrix $\mathbf {Q}_0$. In a first attempt, we assumed $\mathbf {Q}_0$ to be diagonal and determined its elements by minimising the matrix norm in the contraction criterion. In a second attempt, we dropped the assumption that $\mathbf {Q}_0$ be diagonal. The obvious advantage is that one has more degrees of freedom in the construction of $\mathbf {Q}_0$. A major disadvantage is that it is not straightforward for a general matrix to impose the constraint that $\mathbf {Q}_0$ be regular, and even well-conditioned. We approached this problem by simply partitioning the Q-matrix into a matrix $\mathbf {Q}_0$ containing all the large elements of $\mathbf {Q}$, and a matrix $\Delta \mathbf {Q}$ containing all the small elements of $\mathbf {Q}$. The demarcation between small and large elements was defined by a parameter $P$. Although this is only an ad hoc approach, it gave us the flexibility of adjusting $P$ by use of the contraction criterion, with the constraint that the matrix $\mathbf {Q}_0$ must be well-conditioned. Numerical examples confirmed that the use of the contraction criterion in the construction of $\mathbf {Q}_0$ can, indeed, improve the range of applicability of the iterative T-matrix method, as well as speed up its convergence. The partitioning method, in conjunction with group theoretical methods, seemed to be a particularly promising candidate for further studies.

The matrix $\mathbf {Q}_0$ determines the starting value of the iteration (see Eq. (12)), which is critical for iterative methods. Other methods for constructing $\mathbf {Q}_0$ than the ones considered here are conceivable. For instance, it may be worth investigating whether the optimisation procedure discussed in Sect. 4.3 can be mapped to a corresponding eigenvalue analysis of the matrix $\Delta \mathbf {Q} \cdot \mathbf {Q}_0^{-1}$. The salient point in the context of this work is that the contraction criterion provides us with a powerful tool for optimising the choice of the reference matrix $\mathbf {Q}_0$ in the iterative T-matrix method.

Funding

Vetenskapsrådet (2016-03499).

Acknowledgments

M. Kahnert acknowledges funding by the Swedish Research Council (Vetenskapsrådet) under contract 2016-03499.

Disclosures

The authors declare no conflicts of interest.

References

1. P. C. Waterman, “Matrix formulation of electromagnetic scattering,” Proc. IEEE 53(8), 805–812 (1965). [CrossRef]  

2. F. M. Schulz, K. Stamnes, and J. J. Stamnes, “Scattering of electromagnetic waves by spheroidal particles: A novel approach exploiting the T-matrix computed in spheroidal coordinates,” Appl. Opt. 37(33), 7875–7896 (1998). [CrossRef]  

3. T. A. Nieminen, H. Rubinsztein-Dunlop, and N. R. Heckenberg, “Calculation of the T-matrix: general considerations and application of the point-matching method,” J. Quant. Spectrosc. Radiat. Transfer 79-80, 1019–1029 (2003). [CrossRef]  

4. B. Sun, L. Bi, P. Yang, M. Kahnert, and G. Kattawar, Invariant Imbedding T-matrix Method for Light Scattering by Nonspherical and Inhomogeneous Particles (Elsevier, Amsterdam, 2019).

5. B. R. Johnson, “Invariant imbedding T matrix approach to electromagnetic scattering,” Appl. Opt. 27(23), 4861–4873 (1988). [CrossRef]  

6. T. Rother and J. Wauer, “Case study about the accuracy behavior of three different T-matrix methods,” Appl. Opt. 49(30), 5746–5756 (2010). [CrossRef]  

7. M. Kahnert and T. Rother, “Modeling optical properties of particles with small-scale surface roughness: combination of group theory with a perturbation approach,” Opt. Express 19(12), 11138–11151 (2011). [CrossRef]  

8. T. Rother and M. Kahnert, Electromagnetic Wave Scattering on Nonspherical Particles: Basic Methodology and Simulations, 2nd ed. (Springer, Berlin, 2014).

9. F. M. Schulz, K. Stamnes, and J. J. Stamnes, “Point group symmetries in electromagnetic scattering,” J. Opt. Soc. Am. A 16(4), 853–865 (1999). [CrossRef]  

10. M. Kahnert, “Irreducible representations of finite groups in the T matrix formulation of the electromagnetic scattering problem,” J. Opt. Soc. Am. A 22(6), 1187–1199 (2005). [CrossRef]  

11. M. Kahnert, “T-matrix computations for particles with high-order finite symmetries,” J. Quant. Spectrosc. Radiat. Transfer 123, 79–91 (2013). [CrossRef]  

12. M. F. Iskander, A. Lakhtakia, and C. H. Durney, “A new procedure for improving the solution stability and extending the frequency range of the EBCM,” IEEE Trans. Antennas Propag. 31(2), 317–324 (1983). [CrossRef]  

13. A. Doicu, T. Wriedt, and Y. A. Eremin, Light scattering by systems of particles. Null-field method with discrete sources: theory and programs (Springer, Berlin, 2006).

14. E. Kreyszig, Introductory Functional Analysis with Applications (Wiley, Singapore, 1989).

15. A. N. Kolmogorov and S. V. Fomin, Elements of the Theory of Functions and Functional Analysis, Vols I and II (Martino Publishing, Mansfield Centre, 2012).

16. K. Schmidt, M. Yurkin, and M. Kahnert, “A case study on the reciprocity in light scattering computations,” Opt. Express 20(21), 23253–23274 (2012). [CrossRef]  

17. M. Kahnert, “The T-matrix code Tsym for homogeneous dielectric particles with finite symmetries,” J. Quant. Spectrosc. Radiat. Transfer 123, 62–78 (2013). [CrossRef]  

18. M. Kahnert, T. Nousiainen, M. A. Thomas, and J. Tyynelä, “Light scattering by particles with small-scale surface roughness: comparison of four classes of model geometries,” J. Quant. Spectrosc. Radiat. Transfer 113(18), 2356–2367 (2012). [CrossRef]  



Figures (3)

Fig. 1.
Fig. 1. Degree of linear polarisation $-F_{12}/F_{11}$ (top), scattering matrix element $F_{22}/F_{11}$ (bottom), and linear depolarisation ratio $\delta _L$ (bottom insert) of randomly oriented 3D-Chebyshev particles with size parameter $x=40$, Chebyshev order $\ell$=160, refractive index $m=3+0.1$i, and deformation parameter $\epsilon =0.03$, computed by performing 6, 15, 27, and 39 iterations of Eq. (13).

Tables (3)


Table 1. $\left\lVert\Delta\mathbf{Q}\cdot\mathbf{Q}_0^{-1}\right\rVert$ from Eq. (26) and number of iterations $n_{\mathrm{max}}$ for Chebyshev particles with size parameter $x=40$, Chebyshev order $\ell=160$, refractive index $m=3+0.1$i, and with different deformation parameters $\epsilon$.


Table 2. Convergence criterion in Eq. (35) for Chebyshev particles with size parameter $x=40$, Chebyshev order $\ell=160$, refractive index $m=3+0.1$i, and with different deformation parameters $\epsilon$. The second reference geometry in the iteration has deformation parameter $\epsilon_1$. The number of iterations in the first iteration (37)–(38) is denoted by $n_{\mathrm{max}}^{(1)}$, that in the second iteration (39)–(40) is denoted by $n_{\mathrm{max}}^{(2)}$.

