
A three-dimensional radiative transfer model for shallow water environments

Open Access

Abstract

A geometric optical model for three-dimensional radiative transfer capable of handling arbitrary arrangements of surfaces within anisotropic scattering media is described. The model operates by discretizing surfaces and volumes into patches and voxels and establishing the radiative transfer relationship between every pair of elements. In a plane-parallel configuration results for directional radiance agree closely with the invariant imbedded numerical integration method. Model accuracy for two examples incorporating surface water waves and complex benthic structures was assessed by conservation of energy; errors were less than 1%. Potential applications in remote sensing or photobiological studies of structurally complex benthos in shallow water environments are illustrated.

©2008 Optical Society of America

1. Introduction

Many shallow water ecosystems such as coral reefs or kelp forests exhibit complex underwater three-dimensional structures with vertical scales comparable to the height of the water column itself. The penetration of light into these ecosystems plays a significant role in influencing their productivity and biological composition [1], and the backscatter of this same light provides the primary means for monitoring these systems by optical remote sensing [2]. In addition to benthic complexity, the propagation of water surface waves causes fluctuations in depth and water surface slope which may have radiative transfer consequences at time scales relevant to photobiology and remote sensing: wave focusing [3] and spatial sunglint patterns [4] are just two phenomenological examples. Many current important research questions relate to these structurally complex aquatic ecosystems, and so there is an imperative to develop the capability for accurate modeling of the interaction of light with three-dimensional structures underwater. Some of these active research areas include the adaptive role of coral morphology [5, 6, 7]; primary productivity in macroalgal and seagrass canopies [8]; and advancing, and understanding the limitations of, remote sensing of coral reefs [9].

A variety of models for aquatic radiative transfer appropriate for radiometrically plane-parallel environments with time-averaged horizontally homogeneous boundary conditions have been developed [10, 11, 12, 2], but Monte Carlo methods [13] remain the only general approach for handling arbitrary three-dimensional aquatic radiative transfer. However, Monte Carlo methods face computational efficiency limitations, and thus far their application has been restricted to plane-parallel systems or to examples of simple inhomogeneities in benthic reflectance, substrate slope or one-dimensional water surface patterns [14, 15, 16, 17]. Application-specific geometrical optics models [18] have also been devised to address specific questions but are by definition limited in scope.

In this paper I present a radiative transfer model based on the Inherent Optical Properties (IOPs) of source-free waters which is capable of accommodating arbitrary surfaces and participating volumetric media within a three-dimensional domain. The algorithm applies a spatial and volumetric discretization, initializes the system with incident solar energy, establishes the radiant power transfer between every pair of elements and converges to a solution by Gauss-Seidel iteration [19] (Fig. 1). The model has design similarities to radiosity or global illumination methods, which had early applications in modeling the physics of thermal transfer in furnaces [20] and were subsequently developed for realistic computer rendering of three-dimensional scenes [21, 22]. Radiosity methods have also been applied to photobiology and remote sensing of terrestrial vegetation canopies [23, 24]. Here the method is extended to allow surface structures to be embedded in volumetric scattering media and to incorporate boundary surfaces between compartments of differing refractive index. The model contains two important features: 1) the capability for arbitrary anisotropic volumetric scattering functions, thus allowing the use of Petzold’s, Fournier-Forand or any other form of phase function [25, 26]; and 2) an analytical derivation of the radiative transfer relationships involving volumetric elements that significantly improves the accuracy compared to the original radiosity Zonal Method [27]. Other aspects of the model are identical to, or bear strong similarities to, existing methods from both hydrological [2] and atmospheric optics [28].

The next section establishes the model formulation in general terms, followed by the method of spatial and angular discretization (Sec. 3) and the solution algorithm (Sec. 4). Section 5.1 demonstrates that in a plane-parallel configuration, model outputs for directional radiance are virtually identical to the invariant imbedded method (as used in the commercial software HydroLight [29]). When three-dimensional structures are introduced (Secs. 5.2 and 5.3) one method for assessing accuracy is energy conservation. For the coral-like structures assessed here losses or gains are only a few percent and can be further constrained at computational expense.

Fig. 1. Overview of the 3D model. Surfaces are spatially discretized into patches while regions of space containing scattering media are discretized into cubic voxels (only a subset of the voxels are illustrated).

2. Model theoretical background

2.1. Optical properties of volumetric media

In the absence of inelastic scattering the parameters necessary and sufficient to calculate radiative transfer through a volumetric medium are the spectral absorption coefficient a(λ,p) and spectral volume scattering function (VSF), β(λ,p,ψ), where λ is the wavelength, p=〈px, py, pz〉 is a point in space, and ψ is the scattering angle, 0 ≤ ψ ≤ π [2]. The coefficients of scattering b(λ,p) and attenuation c(λ,p) are derived from the absorption coefficient and the VSF [2]:

b(\lambda, p) = 2\pi \int_0^{\pi} \beta(\lambda, p, \psi) \sin\psi \, d\psi
c(\lambda, p) = a(\lambda, p) + b(\lambda, p)

In some literature c is referred to as the extinction coefficient and denoted k [20].
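As a concrete illustration of Eqs. (1) and (2), the short sketch below integrates a tabulated VSF numerically to obtain b and then c. It is not taken from the paper's implementation; the VSF values and the absorption coefficient are placeholders chosen only to make the example runnable.

```python
import numpy as np

def scattering_coefficient(psi, beta):
    """Eq. (1): b = 2*pi * integral over [0, pi] of beta(psi) * sin(psi) d(psi).
    psi is in radians, beta in 1/(m sr); a trapezoidal rule is used."""
    integrand = beta * np.sin(psi)
    return 2.0 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(psi))

# Placeholder inputs: an arbitrary forward-peaked VSF and absorption coefficient.
psi = np.linspace(0.0, np.pi, 1801)
beta = 0.05 * np.exp(-5.0 * psi) + 1e-4      # 1/(m sr), illustrative only
a = 0.06                                      # 1/m, illustrative only
b = scattering_coefficient(psi, beta)
c = a + b                                     # Eq. (2): beam attenuation
print(f"b = {b:.4f} 1/m, c = {c:.4f} 1/m")
```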

Scattering of the radiance distribution at a point p in space, L(λ,p,v), by the VSF gives rise to the path radiance at that point,

L^*(\lambda, p, v_{out}) = L^*_E(\lambda, p, v_{out}) + \int_{\Xi} L(\lambda, p, v_{in}) \, \beta(\lambda, p, \cos^{-1}[v_{in} \cdot v_{out}]) \, d\Omega(v_{in})

where dΩ(v_in) is the element of solid angle associated with v_in over which the integral is performed and L*_E allows for radiant emission from the medium.

The radiance distribution at a point L(λ,p,v) is the sum of the path attenuated radiance received from point p S on the first surface encountered in the direction looking back along v, plus any radiance received from volumetric scattering or emission in the intervening path.

L(\lambda, p, v) = K(\lambda, p_S, p) \, L_{EX\text{-}G}(\lambda, p_S, v) + \int_{p}^{p_S} K(\lambda, p', p) \, L^*(\lambda, p', v) \, dx(p')

where L EX-G(λ,p S,v) is the radiance exitant from the surface at point p S in the direction vector v, expressed in the global co-ordinate system. The function K(λ,p 1,p 2) is the transmittance of radiance from point p 1 to p 2 in the medium,

K(\lambda, p_1, p_2) = \exp\!\left( -\int_{p_1}^{p_2} c(\lambda, p') \, dx(p') \right)
= \exp\!\left( -\int_{0}^{d} c\!\left(\lambda, \, p_1 + \frac{x}{d}(p_2 - p_1)\right) dx \right)

where d=|p 1-p 2| is the distance between p 1 and p 2. If the attenuation coefficient is spatially homogeneous then Eq. (5) reduces to K(λ,p 1,p 2)=exp[-c(λ)d].
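A minimal sketch of Eq. (5), assuming the two cases discussed above: a closed form for a homogeneous medium and a simple trapezoidal optical-depth estimate when c varies along the path. The sample values are illustrative only.

```python
import numpy as np

def transmittance_homogeneous(c, d):
    """K = exp(-c*d): spatially constant attenuation c (1/m) over distance d (m)."""
    return np.exp(-c * d)

def transmittance_sampled(c_samples, d):
    """K for spatially varying c, given equally spaced samples of c along the
    path from p1 to p2; the optical depth is estimated by the trapezoidal rule."""
    dx = d / (len(c_samples) - 1)
    tau = np.sum(0.5 * (c_samples[1:] + c_samples[:-1])) * dx
    return np.exp(-tau)

print(transmittance_homogeneous(0.2, 3.0))                       # e.g. c = 0.2 1/m, d = 3 m
print(transmittance_sampled(np.array([0.18, 0.20, 0.22]), 3.0))  # illustrative samples
```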

Note that there is no need to consider the path along the vector −v beyond the first intersecting surface even if that surface is transparent, since the effect of radiance sources on the far side of the surface is expressed as radiance exitant from the surface itself. A key part of the algorithm described below is the efficient detection of the first surface encountered looking back along a given incident vector from a point in space.

2.2. Optical properties of surfaces

Surfaces in the model may take arbitrary shapes in space subject to the assumption that at some small scale they may be approximated as locally flat. At this locally flat scale the optical properties of a general two-sided reflecting and transmitting surface, such as the air-water interface, can be described by two directional reflection functions, r_1|2(λ,p,u_in,u_out), for sides 1 and 2, and two directional transmission functions t_12|21(λ,p,u_in,u_out) for transmission in both directions [2]. Here u_in and u_out represent incident and exitant vectors in the surface’s local co-ordinate system, which is defined with the z-axis normal to the surface on side 1 and the x-axis and y-axis lying in the tangential plane. Premultiplying a vector by the 3×3 matrix M_{G→L}(p) performs the rotation from global co-ordinates to local surface co-ordinates at point p,

M_{G \to L}(p) = \begin{pmatrix} X_x(p) & X_y(p) & X_z(p) \\ Y_x(p) & Y_y(p) & Y_z(p) \\ n_x(p) & n_y(p) & n_z(p) \end{pmatrix}

where n(p)=〈nx(p), ny(p), nz(p)〉 is the unit normal on side 1 at point p. Thus defined, the local surface tangential x-axis and y-axis vectors 〈Xx(p), Xy(p), Xz(p)〉 and 〈Yx(p), Yy(p), Yz(p)〉 have rotational freedom about n(p). For surface types for which the optical properties are rotationally dependent, e.g. a wind-blown air-water interface [2], the orientation of x and y must be appropriately aligned across the surface. The inverse rotation, from local to global co-ordinates, can similarly be derived and is denoted M_{L→G}(p).
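The sketch below builds one valid M_{G→L}(p) from a side-1 unit normal, using the rotational freedom noted above to pick an arbitrary tangential frame; it assumes the surface optics are rotationally invariant, so any tangent orientation is acceptable. The normal used in the example is a placeholder.

```python
import numpy as np

def rotation_global_to_local(n):
    """Return M_{G->L}: rows are the local X(p), Y(p) and n(p) axes expressed
    in global co-ordinates, forming a right-handed orthonormal frame."""
    n = n / np.linalg.norm(n)
    # Choose a helper axis not parallel to n, then Gram-Schmidt a tangent axis.
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x_axis = helper - np.dot(helper, n) * n
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(n, x_axis)
    return np.vstack([x_axis, y_axis, n])

M_GL = rotation_global_to_local(np.array([0.1, 0.0, 1.0]))   # placeholder normal
M_LG = M_GL.T                                                # inverse rotation (orthogonal matrix)
u_local = M_GL @ np.array([0.0, 0.0, -1.0])                  # a global vector in local co-ordinates
```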

Since the four functions r 1, r 2, t 12 and t 21 jointly cover all hemispherical domain combinations in u in and u out it is convenient to combine them into a single surface transfer function with spherical domains in u in and u out,

s(\lambda, p, u_{in}, u_{out}) =
\begin{cases}
r_1(\lambda, p, u_{in}, u_{out}), & \text{if } u_{in} \in \Xi_{z-} \text{ and } u_{out} \in \Xi_{z+} \\
r_2(\lambda, p, u_{in}, u_{out}), & \text{if } u_{in} \in \Xi_{z+} \text{ and } u_{out} \in \Xi_{z-} \\
t_{12}(\lambda, p, u_{in}, u_{out}), & \text{if } u_{in} \in \Xi_{z-} \text{ and } u_{out} \in \Xi_{z-} \\
t_{21}(\lambda, p, u_{in}, u_{out}), & \text{if } u_{in} \in \Xi_{z+} \text{ and } u_{out} \in \Xi_{z+}
\end{cases}

The surface transfer function establishes the relationship between radiance exitant from a surface in its local co-ordinate system L EX-L(λ,p,u out) and the incident radiance, L IN-L(λ,p,u in),

L_{EX\text{-}L}(\lambda, p, u_{out}) = L_E(\lambda, p, u_{out}) + \int_{\Xi} L_{IN\text{-}L}(\lambda, p, u_{in}) \, s(\lambda, p, u_{in}, u_{out}) \, d\Omega(u_{in})

This expression also allows for emissive surfaces when L E(λ,p,u out)≠0 and treats both sides of the surface simultaneously, so u out ∈Ξ. The radiance incident on a surface is calculated in an identical manner to the radiance at a point in volumetric space (Eq. (4)), but note that Eq. (4) is defined in the global co-ordinate system while L IN-L(λ,p,u in) is in the local surface co-ordinate system, so rotation of the incident vector by M L→G(p) is required,

L_{IN\text{-}L}(\lambda, p, u_{in}) = L[\lambda, p, M_{L \to G}(p) \, u_{in}]

The inverse rotation from global to local co-ordinates using M_{G→L}(p) is similarly required to establish L_EX-G as used in Eq. (4) from L_EX-L(λ,p,u_out). Hence, the co-ordinate system rotations defined by M_{G→L}(p) and its inverse map the local surface radiance transfer functions onto a non-flat surface. The functions r_1, r_2, t_12 and t_21 with the p-dependence removed are identical in form to the air-water interface radiance transfer functions presented by Mobley [2].

For an opaque boundary surface, such as the substrate in a shallow water application, only r_1 is non-zero and,

r_1(\lambda, p, u_{in}, u_{out}) = R(\lambda, p, u_{in}, u_{out}) \cos\theta

where R(λ,p,u in,u out) is the bi-directional reflectance distribution function (BRDF) [30] of the surface at point p, and cosθ is the z-component of u in. For many applications it may be sufficient to consider the BRDF as constant for different surface types, so models can be built with a small set of BRDFs which in themselves are not p-dependent. Further, if it is appropriate to consider surfaces as locally Lambertian both R and the surface exitant radiance L EX-L lose their directional dependence. In this case significant computational storage savings can be made and r 1(λ,p,u in,u out)=[R D(λ,p)cosθ]/π, where R D(λ,p) is the diffuse reflectance at point p on the surface [2].
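A minimal sketch of the Lambertian special case just described, where the transfer value reduces to R_D cosθ / π. Taking the magnitude of the z-component for radiance incident on side 1 is an assumption of this sketch, and the reflectance value is a placeholder.

```python
import numpy as np

def r1_lambertian(R_D, u_in_local):
    """Opaque Lambertian substrate: r1 = R_D * cos(theta) / pi, with cos(theta)
    taken as the magnitude of the z-component of the incident vector in the
    local surface frame (incident directions on side 1 point downward)."""
    cos_theta = abs(u_in_local[2])
    return R_D * cos_theta / np.pi

print(r1_lambertian(0.3, np.array([0.0, 0.6, -0.8])))   # placeholder diffuse reflectance
```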

Fig. 2. Directional discretization schemes. (a) HydroLight standard spherical partition, (b) the cubic 8-partition and (c) as a spherical projection, (d) the corresponding hemicube 8-partition for surface directional functions and (e) its unfolded visualization as used in Fig. 4.

3. Discretization of model

3.1. Directional discretization

Like the plane-parallel invariant imbedded solution method [2], the model presented here calculates directional radiance as the mean over segments of finite solid angle defined by a discretization of the surface of a sphere (Fig. 2). However, instead of a system based on angles of constant zenith and azimuth [2], here the sphere surface is partitioned into cells (or ‘quads’ [2]) numbered q = 1, 2, 3, …, 6n^2 by a spherical projection of a cubic grid with n cells on a side, referred to as a cubic n-partition (Fig. 2). This partitioning scheme is identical to the hemicube and fullcube methods developed for computer graphics radiosity methods [21, 22] and is a requirement of the algorithm. The cubic 8-partition contains 384 cells and offers comparable directional resolution to the standard 482-cell spherical discretization used in the commercial invariant imbedded implementation HydroLight [29]. One advantage of the cubic partition over the constant zenith-azimuth angle partition is that directional resolution can be increased, i.e. n = 16, 24, 32, while retaining a relatively uniform sampling of the sphere surface. Ensuring n is even allows trivial conversion between functions defined on the sphere and functions of surfaces that only require a hemispherical or ‘hemicubic’ n-partition, such as BRDFs. For a given n-partition the solid angle Ω(q) of each cell can be pre-calculated by numerical integration.
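As an illustration of this pre-calculation, the sketch below numerically integrates the solid angle of each cell on one cube face (the six faces are identical by symmetry); summed over the sphere the values should total 4π. The midpoint-rule resolution is an arbitrary choice of this sketch.

```python
import numpy as np

def face_cell_solid_angles(n, sub=100):
    """Solid angles Omega(q) of the n x n cells on one face of a cubic n-partition.
    The face spans [-1, 1] x [-1, 1] at unit distance, so each surface element
    contributes dOmega = dx dy / (x^2 + y^2 + 1)^(3/2)."""
    edges = np.linspace(-1.0, 1.0, n + 1)
    omegas = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dx = (edges[i + 1] - edges[i]) / sub
            dy = (edges[j + 1] - edges[j]) / sub
            x = edges[i] + (np.arange(sub) + 0.5) * dx      # midpoint sample points
            y = edges[j] + (np.arange(sub) + 0.5) * dy
            X, Y = np.meshgrid(x, y)
            omegas[i, j] = np.sum((X**2 + Y**2 + 1.0) ** -1.5) * dx * dy
    return omegas

omega = face_cell_solid_angles(8)             # cubic 8-partition, one face
print(6.0 * omega.sum(), 4.0 * np.pi)         # both approximately 12.566
```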

The cubic n-partition of the sphere enables the cell-averaged tabulation of functions of one vector direction, such as incident solar radiance or the radiance distribution at a point, and of functions of two directions, such as the VSF. A mapping from an outward orientated vector direction u=〈ux, uy, uz〉 to a cell index q can be implemented highly efficiently, requiring just a single division as the most expensive computational operation,

q(u) = 1 +
\begin{cases}
0 + n\lfloor f_x u_y + h \rfloor + \lfloor f_x u_z + h \rfloor, & \text{for } u_x < -|u_y| \text{ and } u_x < -|u_z| \\
n^2 + n\lfloor f_x u_y + h \rfloor + \lfloor f_x u_z + h \rfloor, & \text{for } u_x > |u_y| \text{ and } u_x > |u_z| \\
2n^2 + n\lfloor f_y u_x + h \rfloor + \lfloor f_y u_z + h \rfloor, & \text{for } u_y < -|u_x| \text{ and } u_y < -|u_z| \\
3n^2 + n\lfloor f_y u_x + h \rfloor + \lfloor f_y u_z + h \rfloor, & \text{for } u_y > |u_x| \text{ and } u_y > |u_z| \\
4n^2 + n\lfloor f_z u_x + h \rfloor + \lfloor f_z u_y + h \rfloor, & \text{for } u_z < -|u_x| \text{ and } u_z < -|u_y| \\
5n^2 + n\lfloor f_z u_x + h \rfloor + \lfloor f_z u_y + h \rfloor, & \text{for } u_z > |u_x| \text{ and } u_z > |u_y|
\end{cases}

where h=n/2 and fx=h/|ux|, fy=h/|uy|, fz=h/|uz|. Each of the six options in Eq. (11) corresponds to one face of the projected cube. The situation in which a direction vector lies exactly on an edge between cube faces can be detected easily by the equality of the u values and is handled by consistently assigning such vectors to one of the adjacent faces. The computational efficiency of converting a vector direction to cell index is important in the implementation because it occurs a huge number of times in the solution process.
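The sketch below is one way to implement the lookup of Eq. (11): the face is selected by the component of largest magnitude and its sign, the offset within the face uses the floor of f·u + h, and directions lying exactly on a face edge are consistently assigned to the face tested first. The face ordering follows the case order of Eq. (11) as reconstructed above; this is an illustrative implementation rather than the paper's code.

```python
import numpy as np

def cell_index(u, n):
    """Map a unit direction u = (ux, uy, uz) to a cell index q in 1..6*n^2."""
    ux, uy, uz = u
    h = n / 2.0
    ax, ay, az = abs(ux), abs(uy), abs(uz)
    if ax >= ay and ax >= az:                       # -x or +x face
        f, face = h / ax, (0 if ux < 0.0 else 1)
        row, col = int(f * uy + h), int(f * uz + h)
    elif ay >= ax and ay >= az:                     # -y or +y face
        f, face = h / ay, (2 if uy < 0.0 else 3)
        row, col = int(f * ux + h), int(f * uz + h)
    else:                                           # -z or +z face
        f, face = h / az, (4 if uz < 0.0 else 5)
        row, col = int(f * ux + h), int(f * uy + h)
    row, col = min(row, n - 1), min(col, n - 1)     # clamp directions exactly on the far edge
    return 1 + face * n * n + n * row + col

print(cell_index(np.array([0.0, 0.6, -0.8]), 8))    # example direction, cubic 8-partition
```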

Using Eq. (11), directionally dependent quantities expressed in terms of vectors can be tabulated into q-dependent quantities, e.g. L(λ,p,v_in) in Eq. (3) becomes L(λ,p,q_in). In particular the VSF, β(λ,p,ψ), is tabulated as β(λ,p,q_in,q_out) by numerical integration [2]. If the VSF or scattering phase function is considered spatially invariant, by assuming Petzold’s [25] phase function, for example, then considerable computational savings can be made by pre-calculation of the tabulated VSF [2]. The current implementation of the model works directly with the tabulated directional values of radiance and does not utilize representation by orthogonal basis functions such as Spherical Harmonics [28, 31] or the Discrete Fourier Transform [2] to efficiently exploit the rotational invariance of the VSF [2, 28]. However the 48-fold symmetry of the cubic partition substantially reduces the required VSF data structure size.

3.2. Surface discretization

The scheme used to represent surfaces as a mesh of polygonal patches requires that 1) the radiance distribution at a patch center is representative of the whole surface of the patch, and 2) surfaces appear perfectly sealed when rendered from any view direction by a computer graphics algorithm. The model operates by establishing the radiance incident onto a patch from the perspective of a single view-point at the patch center, and then translating that into a uniform exitant radiance over the surface of the patch using Eq. (7). Requirement 1 implies that patches must be sufficiently small, especially on curved surfaces or on the boundaries of shadows. A consequence of the second requirement is that patches must be exactly flat to prevent rendering errors where cracks could allow leakage of radiance through opaque surfaces. This can be ensured by using only triangular patches on curved surfaces, restricting the use of quadrilaterals to flat surfaces. Depending on how the scene is constructed a polygon intersection algorithm may also be required as a pre-processing step to subdivide any intersecting pairs of patches along the intersection line. Overall, the requirements for surface meshing are identical to those for many computer graphics applications and are discussed in that literature [32].

Once the surface mesh has been established all positionally dependent surface properties can be tabulated over the finite number i = 1, 2, 3, …, n_P of patches. For example, with directional discretization applied as well, L_EX-L(λ,p,u_out) in Eq. (8) becomes L_EX-L(λ,i,q_out). The spatial and directional properties of all surfaces are thus finitely described, ready for algorithmic manipulation.

3.3. Volumetric discretization

Volumetric discretization is initiated by subdividing the model domain into cubic voxels (Fig. 1), with each voxel view-point at the voxel center. However, for voxels the translation of incident radiance into scattered radiance is not as straightforward as the direct application of the path radiance equation at the voxel center (Eq. (3)). This approach is taken in the Zonal Method for computer graphics [27] and, while producing visually acceptable solutions, it ignores processes which occur over the path length through the voxel itself, potentially leading to substantial violations of conservation of energy.

Due to the voxel’s cubic shape (Fig. 3), the incident radiance at a voxel center has a q-dependent voxel path length, x(q) (note that both the orientation of the voxel and its directional discretization are global-axes aligned). The distance x(q) can be numerically pre-calculated for all voxels of a given size as the mean over the solid angle of the cell q. Rather than using the radiance distribution at the voxel center, L(λ,i,q_in), to estimate the scattered radiant energy from the whole voxel, a more accurate estimation is obtained by integrating the source function (Eq. (3)) along the entire path length d = x(q). This method takes into account both the incident radiance attenuation and the increase due to in-scattering along the path in the voxel. If emission is zero the form of the source function (Eq. (3)) based on these path integrals is,

L^*(\lambda, i, q_{out}) = \sum_{q_{in}=1}^{6n^2} \beta(\lambda, i, q_{in}, q_{out}) \left[ L_{EXT}(\lambda, i, q_{in}) + L_{SELF}(\lambda, i, q_{in}) \right] \Omega(q_{in})

where

L_{EXT}(\lambda, i, q_{in}) = \int_{-d/2}^{d/2} L'(\lambda, i, q_{in}) \exp[-c(\lambda, i)\, x] \, dx

and

L_{SELF}(\lambda, i, q_{in}) = \int_{0}^{d} \left( \int_{0}^{x} L^*(\lambda, i, q_{in}) \exp[c(\lambda, i)\, t] \, dt \right) \exp[-c(\lambda, i)\, x] \, dx

Here L′(λ, i,q in) in Eq. (13) is the radiance distribution at the voxel center from external sources, i.e. excluding scattered radiance from the voxel itself. This is the quantity applied directly in Eq. (12) by the Zonal Method [27]. Eq. (13) extrapolates the radiance at the center point forward and backward to the voxel perimeter and integrates over the path length. Internal in-scattering to the incident radiance direction is handled by Eq. (14) which simultaneously integrates and attenuates the current voxel path radiance along the path through the voxel. Since the voxel-averaged VSF, β (λ, i,q in,q out), is assumed constant within the voxel it has been removed from both path integrals (Eqs. (13) and (14)) and incorporated into Eq. (12). The integrals can then be analytically derived as,

L_{EXT}(\lambda, i, q_{in}) = \frac{L'(\lambda, i, q_{in})}{c(\lambda, i)\, x(q_{in})} \left( \exp\!\left[\frac{c(\lambda, i)\, x(q_{in})}{2}\right] - \exp\!\left[-\frac{c(\lambda, i)\, x(q_{in})}{2}\right] \right)
L_{SELF}(\lambda, i, q_{in}) = \frac{L^*(\lambda, i, q_{in})}{c(\lambda, i)^2\, x(q_{in})} \left( c(\lambda, i)\, x(q_{in}) + \exp[-c(\lambda, i)\, x(q_{in})] - 1 \right)

For a given beam attenuation, c(λ,i), and voxel size, x(q_in), Eqs. (15) and (16) reduce to simple directional q-dependent multiplicative factors for L′(λ,i,q_in) and L*(λ,i,q_in). In many model applications the range of voxel sizes and beam attenuation coefficient values will be very restricted, with, for example, spatially uniform attenuation and a single uniform voxel size, so these multiplicative factors can be pre-calculated and tabulated for q with little computational overhead. Note that while Eqs. (12), (15) and (16) are circular for L*, the solution method is iterative, so L* is updated from the current value on each iteration.
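A sketch of that pre-calculation, assuming the reconstructed forms of Eqs. (15) and (16) above and spatially uniform attenuation: for a table of cell path lengths x(q) the two factors are computed once and then simply multiply L′ and the current L* on every iteration. The numerical values here are placeholders.

```python
import numpy as np

def voxel_factors(c, x_q):
    """Per-cell multiplicative factors of Eqs. (15) and (16).
    c: beam attenuation (1/m); x_q: array of mean in-voxel path lengths x(q) (m).
    Returns (f_ext, f_self): L_EXT = f_ext * L', L_SELF = f_self * L*."""
    cx = c * x_q
    f_ext = (np.exp(cx / 2.0) - np.exp(-cx / 2.0)) / cx
    f_self = (cx + np.exp(-cx) - 1.0) / (c * cx)
    return f_ext, f_self

# Placeholder inputs: c = 0.5 1/m and a few representative path lengths (m).
f_ext, f_self = voxel_factors(0.5, np.array([0.10, 0.12, 0.17]))
print(f_ext, f_self)
```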

Fig. 3. Voxelization of volumetric radiative transfer. (a) Scattered energy resulting from a beam of radiance passing through the voxel through solid angular cell q_in and the view-point p is found by integrating radiance along the in-voxel path length d. The resulting directional path radiances L*(q_out) are assumed uniform throughout the voxel volume, and the radiance leaving the voxel is found by evaluating the integral of Eq. (4) along the exitant direction path length. (b) When surfaces and voxels intersect the path lengths are truncated and the view-point is shifted to the center of the remaining voxel segment.

3.4. Voxel and surface intersections

The above discussion applies to voxels in open space filled only with volumetric scattering media. A specific modification is required to accommodate situations in which a voxel has surface elements within it:

1. Where a surface element intersects a voxel, it is determined whether the voxel is completely bisected by the surface and whether scattering is non-zero on both sides. If so, two notional voxels are superposed in space, one for each side of the surface.

2. If voxels are subdivided by a surface the view-point of the voxel is shifted to most closely represent the center point of the remaining voxel segment on the appropriate side of the surface.

3. The path length x(q) for each directional cell is re-calculated based on the path length through the view-point to the first intersecting surface or voxel perimeter in both directions (Fig. 3).

The above procedure maintains good theoretical accuracy only when voxels are wholly subdivided by surfaces of relatively low curvature compared to the voxel size. Where a large voxel contains a complex surface, errors are introduced due to parts of the voxel itself being no longer visible to the view-point. However, in practice these introduced errors often do not seriously confound overall accuracy, for two reasons: 1) the error only pertains to the volumetrically scattered component of the light in small interstices of complex structures, and for the intended applications in waters of relatively high clarity this energetic component is extremely small; and 2) the error can in any case be reduced to an arbitrary level by recursively subdividing voxels. The model implementation has the capability to adaptively refine both the surface and volume discretizations in areas of high radiance gradient by subdividing polygons and voxels between solution iterations [33]. However, to simplify the discussion this feature is disabled for the results presented in this paper.

3.5. Domain boundaries

In the model implementation the boundaries in the x, y, and z directions may be independently specified either as periodic, so that the model structures are repeated a set number of times, or as a fixed incoming boundary radiance distribution. For shallow water applications the most appropriate configuration is that the domain repeats horizontally (x and y directions) while the upper z boundary is initialized with a sky downwelling radiance model. Note that this scheme requires that opposite sides of the model domain match exactly, so for example the water surface must be periodic at the width of the model domain.

4. Algorithm operation

The key process in the algorithm is establishing the incident radiance distribution at the view-point of an element by rendering the entire model domain as seen through a hemispherical or spherical field of view (Fig. 4). This is achieved by projecting the current elements onto each face of a high-resolution hemicubic or cubic partition (e.g. n=128), using standard computer graphics techniques, and then down-sampling to the directional resolution of the element’s scattering function. The approach is conceptually identical to that of the hemicube radiosity method [21]. Figure 4 illustrates the steps taken in establishing the incident radiance distribution for a surface patch and translating it into the patch exitant radiance distribution. The overall sequence of events for model setup and solution is as follows:

1. The input surface vector meshes are pre-processed (§3.2)

2. The volumetric subdivision into voxels is established and pre-processed (§3.3, §3.4)

3. The upper z-boundary incident radiance is initialized with a sky radiance model (§3.5)

4. All elements have their exitant radiance distributions set to zero.

5. For each element in the model, exitant radiance is updated based on the current incident radiance distribution at the element view-point (Fig. 4). The change in exitant radiant energy for this iteration is summed over all elements (§3.2, §3.3).

6. Iteration continues from Step 5 until either the total change in exitant energy for the last iteration falls below a predefined fraction of the total energy (called the solution stability threshold) or the maximum permitted number of iterations has occurred.

Convergence is guaranteed provided some absorption occurs; however, there is no implicit constraint that the final solution will observe conservation of energy. In fact, small errors in the radiative transfer calculations can escalate to substantial discrepancies between the energy absorbed and input, e.g. a 10% loss in the radiant transfer between elements will increase the rate of convergence but result in nearly 70% energy loss over 10 iterations. In a shallow water application the radiant energy transferred upward through the water surface should equal the energy transferred downward through the surface minus the energy absorbed by patches and voxels. Any discrepancy in the final energy balance is therefore a useful metric for overall model accuracy.
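An illustrative skeleton of Steps 4 to 6, not the paper's code: the element interfaces (render_incident, update_exitant, total_energy) are hypothetical stand-ins for the rendering and update operations described above, and the loop terminates on the solution stability threshold or the iteration limit.

```python
def solve(elements, render_incident, update_exitant, total_energy,
          stability_threshold=1e-3, max_iterations=100):
    """Gauss-Seidel style sweep: each element's exitant radiance is updated from
    the current incident distribution, and updated values are used immediately
    by elements visited later in the same sweep."""
    for iteration in range(1, max_iterations + 1):
        delta = 0.0
        for element in elements:
            L_in = render_incident(element)          # spherical/hemispherical rendering (Fig. 4)
            delta += update_exitant(element, L_in)   # returns change in exitant radiant energy
        if delta < stability_threshold * total_energy():
            return iteration                         # converged
    return max_iterations                            # iteration limit reached
```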

Fig. 4. Processing chain for one surface patch from the model example run of Sec. 5.2. Sequential rendering processes are shown on an unfolded hemicubic 128-partition; an identical process occurs for voxels but the rendering occurs on a cubic partition data structure.

5. Model performance and applications

5.1. Plane-parallel performance

The outputs of the 3D model in a plane-parallel configuration were compared to the invariant imbedded numerical integration method [2] for radiative transfer through non-stratified IOPs with a total optical depth of 5 and single scattering albedos (ω 0) of 0 to 1 in 0.1 steps (an optical depth of 5 corresponds to approximately 10 m depth at 500 nm for the IOP data set used in the next section). Whereas the full 3D model is constrained by design to use cubic directional partitions, in a plane-parallel configuration the standard HydroLight spherical partition can be used (Fig. 2) and the horizontal periodic repeat can be effectively infinite. This enabled the accuracy and stability of the iterative solution of the volumetrically discretized radiative transfer equations (Eqs. (12), (13), (14)) to be assessed without the complicating factors of differing directional discretizations and finite horizontal periodic repeat.

Fig. 5. Modeled substrate-incident radiance distribution in the plane of the sun, for optical depth 5.0, substrate diffuse reflectance R_D=0.7 and single scattering albedos, 0≤ω_0≤1, with Petzold’s phase function. Sun zenith angle is 30° in a HydroLight idealized sky model (C=0, R_dif=0.3, irradiance=1.0 for all bands) [29]. The water surface is perfectly flat. (a) Comparison between invariant imbedded method (II) and 3D model algorithm (3Dpp) with 10 layers and 20 iterations. (b) 3D model convergence rate to invariant imbedded solution for substrate incident radiance at 60° from nadir. (c) Effect of vertical spatial discretization (no. of layers) on modeled substrate-incident radiance at 60° and 87° from nadir, runs of 100 iterations.

The results are shown in Fig. 5, and the caption gives further details on the model input parameters. In Fig. 5(a), two points of disagreement between the invariant imbedded method and the 3D model can be identified. Firstly, the 3D model underestimated radiance levels resulting from a single scattering albedo of 1.0 over the majority of substrate incident angles. Figure 5(b) shows that this is not a fundamental inaccuracy but simply due to insufficient solution iterations; after 100 iterations the solution converges to the invariant imbedded solution even for ω_0 equal to unity. Secondly, the invariant imbedded solution predicted a small increase in radiance incident on the substrate in directions close to horizontal across the substrate, i.e. near -90° and 90°. This phenomenon, due to multiple scattering between the highly reflective substrate (R_D=0.7) and the bottom of the water column, was less evident in the 3D model because the bottom 0.5 optical depths from the substrate were averaged into a single layer radiance distribution. Discretizing the water column into progressively thinner layers reduces this discrepancy to an arbitrarily small amount (Fig. 5(c)). Twenty uniform layers were sufficient to get close agreement with the invariant imbedded solution for substrate incident radiance at 60° from nadir, whereas radiance at 87° required 200 layers.

The results demonstrate that the volumetrically discretized treatment of radiative transfer (Eqs. (12), (13), (14)) is fundamentally accurate, since even the slightest inaccuracy would cause the result to drift away from the invariant imbedded solution as the number of iterations or layers increased. In addition, the degradation in accuracy for lower numbers of layers and iterations is small and acceptable for many applications. A substrate reflectance of 0.7 is very high compared to most naturally occurring substrate materials, but nevertheless even with only 10 layers and 20 iterations the overall error introduced by the discrepancy at substrate-incident angles greater than 60° in magnitude is extremely small, corresponding to approximately 0.2% of the total downwelling irradiance on the substrate for ω_0=0.9, or 0.008% for ω_0=0.3.

5.2. Three-dimensional structures - example model setup

This section presents two example model setups incorporating three-dimensional substrate structures and a macro-scale water surface wave structure; the intention is to illustrate the model’s potential for remote sensing and photobiology applications in shallow water coral reefs. While in an actual application it would be desirable to parametrize the three-dimensional structures from empirical data, here the water surface is modeled as a superposition of three Gerstner waves [36] of mean amplitude 0.025 m and wavelengths from 1 m to 1.4 m. Two differing substrate structures are used, being simplified representations of dome-shaped corals (e.g. Porites sp.) and branching corals (e.g. Acropora sp.) [37] standing approx. 0.5 m high on a sand substrate at 1 m depth (Fig. 6). The dome structures were formed by superposing two spheres using a patch intersection algorithm, giving a scene total of 3733 patches. The branches were composed of tubular segments with the patch intersection algorithm disabled, giving 3904 patches. The two model setups also differ with respect to downwelling sky radiance distributions, which were modeled [34] from total and diffuse sky spectral irradiances measured under clear sky conditions at Glover’s Reef, Belize, with sun zenith angles of 15° and 45°. The two model setups are identical in every other respect and will henceforth be referred to as ‘domes high sun’ and ‘branches low sun’.
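For illustration, the sketch below builds a water surface from a superposition of Gerstner waves following the general form described by Finch [36]. The amplitude and wavelength values echo those quoted above, but the wave directions, phases and the particular displacement convention are assumptions of this sketch rather than the paper's actual surface parameters.

```python
import numpy as np

def gerstner_surface(x, y, t, waves, g=9.81):
    """Displace a rest-position grid (x, y) by a sum of Gerstner waves.
    waves: list of (amplitude A in m, wavelength L in m, direction angle in rad)."""
    X, Y, Z = x.copy(), y.copy(), np.zeros_like(x)
    for A, L, angle in waves:
        k = 2.0 * np.pi / L                      # wavenumber
        w = np.sqrt(g * k)                       # deep-water dispersion relation
        dx, dy = np.cos(angle), np.sin(angle)    # unit wave direction
        phase = k * (dx * x + dy * y) - w * t
        X += A * dx * np.sin(phase)              # horizontal (trochoidal) displacement
        Y += A * dy * np.sin(phase)
        Z += A * np.cos(phase)                   # vertical displacement
    return X, Y, Z

x, y = np.meshgrid(np.linspace(0.0, 2.0, 64), np.linspace(0.0, 2.0, 64))
waves = [(0.025, 1.0, 0.0), (0.025, 1.2, 0.4), (0.025, 1.4, -0.3)]   # directions assumed
X, Y, Z = gerstner_surface(x, y, t=0.0, waves=waves)
```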

In the model setups the water surface was considered locally smooth; a Fresnel reflection and refraction function [2] was directionally discretized over a cubic 12-partition (864 cells) and mapped onto the triangulated Gerstner wave surface according to the local co-ordinate system (Eq. (9)). Coral and sand spectral reflectances, R_D(λ), assumed Lambertian, were from a previously published dataset [38]; to facilitate comparison the same coral reflectance was mapped onto both the dome and branching structures. Water IOPs were from a dataset collected at Glover’s Reef lagoon with a WETLabs AC-S and ECOBB3 IOP profiling package [40], giving volumetrically homogeneous a(λ) and c(λ); the VSF, β(λ,ψ), was modeled using a wavelength-dependent Fournier-Forand phase function [26, 39]. All spectral data were resampled to 17 wavelength bands of 20 nm width from 400 nm to 740 nm. Over these bands the range of single scattering albedos (ω_0) was 0.13 to 0.82, and optical depths to the sand substrate ranged from 0.3 to 2.1. Both setups had 1200 voxels arranged in a 10×10×12 grid.

The 3D model implementation allows a variety of virtual sensors to be placed anywhere within the model domain, such as cameras or spectroradiometers with a defined field of view, or orthographic projection area integrators for simulating a single above-water remote sensing pixel. After the model solution is obtained the virtual sensors estimate collected light by a modification of the same rendering method used in the solution algorithm (Fig. 4). In the two example setups, two remote sensing pixel sensors of 1 m×1 m were placed above and below the water surface, and 72 camera sensors were placed in a 360° circle to provide views of the benthic structures from all directions (Fig. 6). Each camera sensor collected a spectral radiance image of 600×600 pixels. A PNG or JPEG image requires a linear scale of 0–255 in the red, green and blue channels; conversion was performed by weighting the spectral data by the human eye tristimulus functions [2] and log-transforming the result multiplied by a user-defined parameter akin to camera exposure time. For each setup, 24 separate model solution runs were performed with the wave surface structure adjusted each time by propagating the Gerstner waves forward by 1/24 of the slowest wave time period. This enabled the variation of water surface effects, such as sun-glint patterns, to be estimated over a cyclic repeat of the water surface movement.
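A minimal sketch of that image conversion, assuming precomputed per-band tristimulus weights: the spectral radiance cube is weighted by the eye response, multiplied by an exposure-like parameter, log-transformed and rescaled to 0-255. The weight table and exposure value are placeholders; a real implementation would use tabulated CIE colour matching functions.

```python
import numpy as np

def spectra_to_rgb(spectra, tristimulus, exposure=1.0):
    """spectra: (rows, cols, bands) radiance image; tristimulus: (bands, 3) weights.
    Returns an 8-bit RGB image after exposure scaling and a log transform."""
    weighted = spectra @ tristimulus                    # weight by eye response per channel
    transformed = np.log1p(exposure * weighted)         # log transform with exposure parameter
    scaled = 255.0 * transformed / max(transformed.max(), 1e-12)
    return np.clip(scaled, 0.0, 255.0).astype(np.uint8)

# Placeholder data: a 4 x 4 pixel, 17-band image and random non-negative weights.
rng = np.random.default_rng(0)
image = spectra_to_rgb(rng.random((4, 4, 17)), rng.random((17, 3)), exposure=5.0)
```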

Fig. 6. Input vector meshes (a) and (c), and example camera sensor outputs (b) and (d) for the domes high sun and branches low sun model setups. On the left, red pyramids show a sample of the virtual camera FOV locations and blue rectangles are remote sensing pixel sensors above and below the water surface. Animations over the 24 time step solutions show effects of water movement (Media 1), (Media 2), and the 72 circularly placed cameras give a 360° rotational view of the 3D light field for a single time point (Media 3), (Media 4). Solution algorithm hemicubic and cubic resolutions (Fig. 4) were 96 and 48 respectively. Solution stability threshold was 0.001 with runs limited to 5 iterations. For clarity volumetric scattering is directionally interpolated in these visualizations.

5.3. Three-dimensional structures - results

Model outputs from the virtual camera sensors display a high level of visual realism (Figs. 6(b) and 6(d)). Note that these images and movies are directly calculated model outputs with no post-processing except for the choice of overall camera sensitivity. The capability of the model to capture various radiometric processes is apparent, such as: shading on structure sides and the surrounding substrate; total internal reflection on the underside of the water surface; directional volumetric scattering of light, especially when viewing towards the sun in the branches low sun setup; multiple scattering effects, where the water column horizontally scattered radiance becomes colored by the coral substrate; and wave focusing effects. An orthographic projection of a 1 m² above-water view simulates a remote sensing pixel (Figs. 7(a) and 7(b)). The significant contribution of sunglint to variance in above-water reflectance and its dependence on sun elevation is evident, but in addition the water surface also spatially distorts the substrate, and the branching structures are relatively darker due to within-structure shading.

Fig. 7. Upwelling radiance, L_u, integrated over 1 m² above the water surface by the virtual remote sensing pixel sensor for (a) domes high sun (Media 5) and (b) branches low sun (Media 6). The third panel (c) shows minimum and maximum percentage energy loss over the 24 runs for the domes and branches setups; ‘isec’ is one run of the branches setup using a patch intersection algorithm.

Energy conservation over all 24 solution runs for the domes high sun setup was within ±1% for almost all wavelengths (Fig. 7(c)). For the branches low sun solutions there was a net energy gain of 2% to 4%, corresponding in spectral shape to the water attenuation. This error was due to poor construction of the input vector mesh. Since a patch intersection algorithm was not applied, parts of individual patches were embedded behind surfaces, so the coral surface area and hence total absorbed light were overestimated. When a single solution run of the branches setup was performed with a pre-processing patch intersection algorithm, energy conservation improved to within 0.5% for almost all wavelengths (Fig. 7(c)). This highlights the importance of a well-constructed input vector mesh when assessing conservation of energy.

Figure 8 illustrates the potential use of the model for investigating three-dimensional processes in the vertical transmission of light through the water surface and benthic canopies. The periodic distortion of the sun image in the associated movies is the basis of the wave-focusing phenomena visible in Figs. 6(b) and 6(d). However, note that the model has some limitations with respect to high spatial and directional resolution phenomena such as wave focusing, since a finite directional resolution translates to reduced spatial resolution as the distance between elements increases. Consequently, with increased depth the narrow wave focusing peaks become spatially smeared out, although the overall spatially-averaged light energy remains accurate.

For either setup the average solution time was 2.5 hours on a standard Linux-based workstation. The basic time complexity of the algorithm is ~O(n²), but in practice performance is better due to various optimizations, e.g. exploiting the fact that patches on flat surfaces cannot see other patches on the same surface. The current implementation is not yet fully optimized; a GPU (Graphics Processing Unit) based implementation is feasible [41]. The current rapid rate of GPU development implies that scenes of the complexity presented here could be solvable in under a minute on a standard workstation within the next five years.

Fig. 8. Hemispherical ‘fish-eye’ projections of downwelling radiance, L_d, from (a) the sky, (b) directly under the water surface (Media 7), (c) on the substrate under the branching structures without water column scattering included in the visualization (Media 8) and (d) with scattering (Media 9).

6. Conclusion

A geometrical optics algorithm for solving three-dimensional radiative transfer for arbitrary surfaces in anisotropic scattering media has been presented. The 3D model is based on a spatial and directional discretization of radiative transfer theory. In a plane-parallel configuration the method produces essentially identical results to the invariant imbedded numerical integration technique, provided that sufficient volumetric resolution and solution iterations are used. Though general in design, the model is primarily intended for applications in natural shallow water environments with complex benthic structures. For media with single scattering albedo less than 1.0 and benthic reflectances less than 70%, solution convergence is sufficiently rapid to render many problems of interest computationally tractable. Model validation for complex structures is challenging, but internal conservation of energy implies that overall error can be constrained by careful model setups, e.g. within 1% for the example benthic structures presented here. Potential applications are numerous. Forward-modeling of structural effects and spatial patterns in water leaving radiance can be used to inform inversion methods for shallow water remote sensing, and since the model simultaneously provides light absorption by the benthic canopy there are clear applications in remote sensing of benthic photobiology. With suitably chosen input radiance distributions the model can construct a BRDF for either the total water column or below-water structures, which can then be used as input to a plane-parallel atmospheric or water column model. Modeling of instrument shading by boats, buoys or cages is also possible and could inform instrument design or deployment practices. The main limitation of the model is in capturing phenomena at small spatial scales that also rely on a fine directional resolution, such as wave focusing in deep water, since greater amounts of computer memory are required than most current workstations provide. Run times required for obtaining a solution are a less serious issue; a GPU-based implementation of the model is feasible and could reduce run times for problems of moderate complexity to a few minutes.

Acknowledgments

The author thanks Eugenio Méndez and an anonymous reviewer for comments on the manuscript, Stuart Phinn and Chris Roelfsema for field data on sky irradiances, and Alan Lim and Ellsworth LeDrew for validation of the invariant imbedded solution code against HydroLight. Reflectances and IOPs were collected using equipment held by the NERC Field Spectroscopy Facility. This work was funded by three NERC grants, NER/Z/S/2001/01029, NE/C513626/1, NE/E015654/1 (held in conjunction with Peter Mumby, Mark Cutler and Paul Blackwell) and also by the World Bank/GEF funded Coral Reef Targeted Research Program.

References and links

1. J. T. O. Kirk, Light and Photosynthesis in Aquatic Ecosystems, 2nd ed. (Cambridge University Press, Cambridge, 1994). [CrossRef]  

2. C. D. Mobley, Light and Water (Academic Press, San Diego, 1994).

3. D. Stramski, G. Rosenberg, and L. Legendre, “Photosynthetic and optical-properties of the marine chlorophyte Dunaliella-tertiolecta grown under fluctuating light caused by surface-wave focusing,” Mar. Biol. 115, 363–372 (1993). [CrossRef]  

4. J. D. Hedley, A. R. Harborne, and P. J. Mumby, “Simple and robust removal of sun glint for mapping shallow-water benthos,” Int. J. Remote Sens. 26, 2107–2112 (2005). [CrossRef]

5. K. R. N. Anthony and O. Hoegh-Guldberg, “Variation in coral photosynthesis, respiration and growth characteristics in contrasting light microhabitats: an analogue to plants in forest gaps and understoreys?” Func. Ecol. 17, 246–259 (2003). [CrossRef]  

6. K. R. N. Anthony, M. O. Hoogenboom, and S. R. Connolly, “Adaptive variation in coral geometry and the optimization of internal colony light climates,” Func. Ecol. 19, 17–26 (2005). [CrossRef]  

7. S. Enriquez, E. R. Mendez, and R. Iglesias-Prieto, “Multiple scattering on coral skeletons enhances light absorption by symbiotic algae,” Limnol. Oceanogr. 50, 1025–1032 (2005). [CrossRef]  

8. S. Enriquez and N. I. Pantoja-Reyes, “Form-function analysis of the effect of canopy morphology on leaf self-shading in the seagrass Thalassia testudinum,” Oecologia 145, 235–243 (2005). [CrossRef]   [PubMed]

9. J. D. Hedley and P. J. Mumby, “Biological and remote sensing perspectives of pigmentation in coral reef organisms,” Adv. Mar. Biol. 43, 277–317 (2002). [CrossRef]   [PubMed]  

10. K. Stamnes, S. C. Tsay, W. Wiscombe, and K. Jayaweera, “Numerically stable algorithm for discrete-ordinate-method radiative-transfer in multiple-scattering and emitting layered media,” Appl. Opt. 27, 2502–2509 (1988). [CrossRef]   [PubMed]  

11. D. Lubin, W. Li, P. Dustan, C. Mazel, and K. Stamnes, “Spectral signatures of coral reefs: Features from space,” Remote Sens. Environ. 75, 127–137 (2001). [CrossRef]  

12. C. D. Mobley, B. Gentili, H. R. Gordon, Z. H. Jin, G. W. Kattawar, A. Morel, P. Reinersman, K. Stamnes, and R. H. Stavn, “Comparison of numerical-models for computing underwater light fields,” Appl. Opt. 32, 7484–7504 (1993). [CrossRef]   [PubMed]  

13. J. Kirk, “Monte-Carlo study of the nature of the underwater light-field in, and the relationships between optical-properties of, turbid yellow waters,” Aus. J. Mar. Fresh. Res. 32, 517–532 (1981). [CrossRef]  

14. C. Mobley, H. Zhang, and K. Voss, “Effects of optically shallow bottoms on upwelling radiances: Bidirectional reflectance distribution function effects,” Limnol. Oceanogr. 48, 337–345 (2003). [CrossRef]  

15. C. Mobley and L. Sundman, “Effects of optically shallow bottoms on upwelling radiances: Inhomogeneous and sloping bottoms,” Limnol. Oceanogr. 48, 329–336 (2003). [CrossRef]  

16. K. L. Carder, C. C. Liu, Z. P. Lee, D. C. English, J. Patten, F. R. Chen, J. E. Ivey, and C. O. Davis, “Illumination and turbidity effects on observing faceted bottom elements with uniform Lambertian albedos,” Limnol. Oceanogr. 48, 355–363 (2003). [CrossRef]  

17. R. Deckert and K. J. Michael, “Lensing effect on underwater levels of UV radiation,” J. Geophys. Res. 111, C05,014 (2006). [CrossRef]  

18. J. Zaneveld and E. Boss, “The influence of bottom morphology on reflectance: Theory and two-dimensional geometry model,” Limnol. Oceanogr. 48, 374–379 (2003). [CrossRef]  

19. G. Strang, Linear algebra and its applications, 3rd ed. (Harcourt Brace Jovanovich, San Diego, 1988).

20. H. C. Hottel and A. F. Sarofim, Radiative Transfer (McGraw-Hill, New York, 1967).

21. M. F. Cohen and D. P. Greenberg, “The hemi-cube: a radiosity solution for complex environments,” in SIGGRAPH ’85: Proceedings of the 12th annual conference on Computer graphics and interactive techniques, pp. 31–40 (ACM Press, New York, NY, USA, 1985).

22. D. S. Immel, M. F. Cohen, and D. P. Greenberg, “A radiosity method for non-diffuse environments,” in SIGGRAPH ’86: Proceedings of the 13th annual conference on Computer graphics and interactive techniques, pp. 133–142 (ACM Press, New York, NY, USA, 1986).

23. C. Soler, F. X. Sillion, F. Blaise, and P. Dereffye, “An efficient instantiation algorithm for simulating radiant energy transfer in plant models,” ACM Trans. Graphics. 22, 204–233 (2003). [CrossRef]  

24. C. C. Borel, S. A.W. Gerstl, and B. J. Powers, “The radiosity method in optical remote sensing of structured 3-D surfaces,” Remote Sens. Environ. 36, 13–44 (1991). [CrossRef]  

25. T. Petzold, “Volume scattering functions for selected ocean waters,” Tech. Rep. SIO Ref. 72–78, Scripps Inst. Oceanogr., La Jolla (1972).

26. G. Fournier and J. L. Forand, “Analytical phase function for ocean water,” in Ocean Optics XII, J. S. Jaffe, ed., vol. 2258, pp. 194–201 (Proc. SPIE, 1994).

27. H. E. Rushmeier and K. E. Torrance, “The zonal method for calculating light intensities in the presence of a participating medium,” in SIGGRAPH ’87: Proceedings of the 14th annual conference on Computer graphics and interactive techniques, pp. 293–302 (ACM Press, New York, NY, USA, 1987).

28. K. F. Evans, “The spherical harmonics discrete ordinate method for three-dimensional atmospheric radiative transfer,” J. Atmos. Sci. 55, 429–446 (1998). [CrossRef]  

29. C. Mobley and L. Sundman, “Hydrolight 4.2 technical documentation,” Tech. rep., Sequoia Scientific, Inc., Westpark Technical Center, 15317 NE 90th Street, Redmond, WA 98052 (2001).

30. S. Liang, Quantitative Remote Sensing of Land Surfaces (John Wiley & Sons, Inc, New Jersey, 2004).

31. F. X. Sillion, J. R. Arvo, S. H. Westin, and D. P. Greenberg, “A global illumination solution for general reflectance distributions,” in SIGGRAPH ’91: Proceedings of the 18th annual conference on Computer graphics and interactive techniques, pp. 187–196 (ACM Press, New York, NY, USA, 1991).

32. J. D. Foley, A. V. Dam, S. Feiner, and J. F. Hughes, Computer Graphics, 2nd ed. (Addison-Wesley, New York, 1995).

33. I. A. T. Campbell and D. S. Fussell, “Adaptive mesh generation for global diffuse illumination,” in SIGGRAPH ’90: Proceedings of the 17th annual conference on Computer graphics and interactive techniques, pp. 155–164 (ACM Press, New York, NY, USA, 1990).

34. R. H. Grant, G. M. Heisler, and W. Gao, “Photosynthetically-active radiation: Sky radiance distributions under clear and overcast conditions,” Agr. For. Met. 82, 267–292 (1996). [CrossRef]  

35. C. Mobley, “Hydrolight technical note 2. How does angular resolution affect computed radiances?” Tech. rep., Sequoia Scientific, Inc., Westpark Technical Center, 15317 NE 90th Street, Redmond, WA 98052 (2002).

36. M. Finch, “Effective Water Simulation from Physical Models,” in GPU Gems: Programming Techniques, Tips and Tricks for Real-Time Graphics, R. Fernando, ed., pp. 5–29 (Pearson Higher Education, 2004).

37. J. E. N. Veron, Corals of the World (Sea Challengers, 2000).

38. J. D. Hedley, P. J. Mumby, K. E. Joyce, and S. R. Phinn, “Spectral unmixing of coral reef benthos under ideal conditions,” Coral Reefs 23, 60–73 (2004). [CrossRef]  

39. C. Mobley, L. Sundman, and E. Boss, “Phase function effects on oceanic light fields,” Appl. Opt. 41, 1035–1050 (2002). [CrossRef]   [PubMed]  

40. J. L. Mueller, “Ocean optics protocols for satellite ocean color sensor validation,” Tech. Rep. NASA/TM 2003–211621/Rev 4-Vol IV (Erratum 1) (2003).

41. G. Coombe, M. J. Harris, and A. Lastra, “Radiosity on Graphics Hardware,” in Graphics Interface, W. Heidrich and R. Balakrishnan, eds., pp. 161–168 (Canadian Human-Computer Communications Society, 2004).

Supplementary Material (9)

Media 1: AVI (3681 KB)     
Media 2: AVI (3681 KB)     
Media 3: AVI (4056 KB)     
Media 4: AVI (4056 KB)     
Media 5: AVI (3549 KB)     
Media 6: AVI (3549 KB)     
Media 7: AVI (3681 KB)     
Media 8: AVI (3681 KB)     
Media 9: AVI (3681 KB)     
