
End-to-end neural network for pBRDF estimation of object to reconstruct polarimetric reflectance

Open Access

Abstract

Estimating the polarization properties of objects from polarization images remains an important but severely under-constrained problem. Currently, there are two ways to probe the polarization properties of complex materials: direct instrument-based acquisition, whose cumbersome equipment and dense sampling make the collection of polarization information unsatisfactory, and probing with a polarized imaging model, whose accuracy is therefore crucial. From an imaging perspective, we propose an end-to-end learning method that predicts accurate, physically based polarimetric BRDF model parameters from a limited number of captured photographs of the object. In this work, we first design a novel pBRDF model as a powerful prior. This hybrid pBRDF model completely describes specular reflection, body scattering and directional diffuse reflection in imaging. Then, an end-to-end inverse rendering connects the multi-view measurements of the object with the geometry and pBRDF parameter estimation, and a reflectance gradient consistency loss is introduced to iteratively estimate the per-pixel normal, roughness, and polarimetric reflectance. Real-world measurement and rendering experiments show that the results obtained with our method are in strong agreement with ground truth, validating that we can reproduce the polarization properties of real-world objects using the estimated polarimetric reflectance.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In this work, we investigate the polarization properties of objects. Accurately capturing the shape and appearance of real-world objects has long been an active area of vision and graphics research. Polarization is a property of light distinct from features such as intensity and spectrum. It reflects the transverse-wave nature of light, i.e., its behavior in the cross section perpendicular to the direction of propagation, and it can therefore capture features that are not observed by the human visual system. Since the polarization state of light affects the interaction between light and matter, polarization has been widely used to solve many computer vision tasks such as Shape from Polarization (SfP) [1–3], polarization fusion [4], underwater image restoration [5] and 3D reconstruction [6,7].

Acquiring the polarization properties of the appearance of most real-world objects by direct measurement is a difficult and costly task. Therefore, physically based rendering is usually used to obtain the polarization properties of materials, i.e., a rational physical model of light-matter interaction enables polarimetric imaging. Reflection from a material surface is described by the bi-directional reflectance distribution function (BRDF). Compared with intensity alone, polarized light requires three additional variables beyond $S_{0}$, so the polarimetric bi-directional reflectance distribution function (pBRDF, i.e. the Mueller matrix) replaces the single-valued BRDF in the computational rendering of polarized light. We usually use a pBRDF to describe complex polarization phenomena, so a pBRDF that provides a realistic description of material appearance is an essential ingredient of physically based light transport simulation. The availability of measurement-based, data-driven pBRDF models [8] has significantly improved the description of polarimetric scattering in real scenes in existing studies. Although this approach is more accurate, an empirical library of material measurements covering intensity, spectrum and polarization of light would require large measurement devices, and obtaining this information at sufficiently high angular resolution is difficult. This has also stimulated the development and application of novel pBRDFs. Kondo et al. [9] proposed a pBRDF model that realistically reflects the actual polarization properties of specular and diffuse reflections, and applied this model to render polarized images in Blender. However, such rendering requires resetting viewpoints and lights that are not consistent with the actual object appearance. In contrast, neural networks have demonstrated excellent capabilities in learning complex patterns. By leveraging these qualities, we can effectively capture the intricate relationships between the captured polarization images and the underlying physical parameters of the pBRDF model.

To accurately reproduce real object appearance, we design an end-to-end learning strategy that combines physical priors with data to estimate the polarized reflectance of object appearance under natural illumination. We perform multi-view measurements on real-world objects, while introducing directional diffuse reflectance to construct a complete imaging model based on the existing hybrid polarimetric BRDF model, which serves as strong prior knowledge. Then, in this work, we use an inverse rendering method to connect the measurements with geometry and pBRDF parameter estimation. In the estimation process, the pBRDF parameters are first initialized with measurement data for complete Mueller matrix imaging; second, a reflectance gradient consistency loss is introduced into the iterative optimization to address the disentanglement challenge caused by inter-parameter interactions. To be precise, we make the following three contributions:

  • We propose an end-to-end polarimetric reflectance estimation method guided a priori by the pBRDF imaging model.
  • We construct a complete pBRDF model including specular reflection, body reflection and directional diffuse reflection to simulate the polarimetric reflection properties of the object; in particular, for the directional diffuse reflection we design an energy change function based on the scattering law of the incident radiation within the material.
  • To effectively improve the per-pixel polarimetric reflectance accuracy, we introduce a multi-view reflectance gradient consistency loss in the iterative estimation to constrain the training.

2. Related work

2.1 Polarimetric BRDF models

To accurately describe the polarized reflection properties of the interaction of light with material surfaces, many pBRDF models [10–13] have been proposed based on micro-facet theory [14,15]; to simplify the model, they assume diffuse reflection is unpolarized. Collett's treatment of Fresnel theory shows that the polarization effect of diffuse reflection is not negligible [16]. Therefore, polarimetric diffuse reflection terms have been added to several models [9,17–19] to simulate complete polarized light transport. However, several studies have shown that true multiple subsurface scattering is not fully described by a combination of polarized diffuse and specular reflection lobes. Thus, Hwang et al. [20] introduced an additional scattering term to the model, which fits measured polarization data better, but multiple sub-surfaces can affect the polarization state differently and their directional orientation is difficult to approximate with a single standard distribution.

2.2 Polarimetric inverse rendering

In the synthesis of visual scenes, it is essential to construct accurate bi-directional reflectance distribution functions for materials, which render realistic scene content by describing material appearance [21,22]. Kondo et al. [9] constructed a model that accurately describes the polarization properties of materials by introducing a diffuse reflection polarization term, and used this model to render polarized images. Subsequently, Baek et al. [17] defined a hybrid polarimetric BRDF model of specular and diffuse reflectance and proposed a new inverse rendering method that jointly optimizes the pBRDF and normals by iteration to obtain the per-pixel specular properties, diffuse reflectance and normals of materials. For a more realistic simulation of polarized radiative transfer, Baek et al. [8] proposed an image-based acquisition scheme for isotropic pBRDFs to simulate light transport with a data-driven pBRDF model. Concurrently, related work has exploited the fact that the diffuse and specular components have different polarization properties, and this distinction has been applied to pBRDF model estimation [17], reflectance separation [23,24] and 3D reconstruction [25,26].

3. Polarimetric image formation

3.1 Background on polarization

Stokes vector The polarization state of light is described by a Stokes vector with four components, $\boldsymbol{S}=\left [S_0, S_1, S_2, S_3\right ]^T$, where $S_0$ is the total radiance, $S_1$ and $S_2$ are the linearly polarized components of the total intensity, and $S_3$ is the circular polarization component, which is usually neglected. The Stokes vector can be parametrized by the degree of polarization ($DoP$) and the angle of polarization ($AoP$): $\psi =\sqrt {S_1^2+S_2^2+S_3^2} / S_0$ and $\zeta =\tan ^{-1}\left (S_2 / S_1\right ) / 2$.
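For concreteness, the following NumPy sketch computes the $DoP$ and $AoP$ from a Stokes vector exactly as parametrized above; the function name and the use of `arctan2` for numerical robustness are our own illustrative choices, not part of the paper's implementation.

```python
import numpy as np

def stokes_dop_aop(S):
    """Degree and angle of polarization from a Stokes vector [S0, S1, S2, S3].

    Minimal sketch of the parametrization above; arctan2 is used instead of
    tan^-1(S2/S1) only for numerical robustness.
    """
    S0, S1, S2, S3 = S
    dop = np.sqrt(S1**2 + S2**2 + S3**2) / S0   # psi
    aop = 0.5 * np.arctan2(S2, S1)              # zeta, in radians
    return dop, aop

# Example: horizontally linearly polarized light -> DoP = 1, AoP = 0
print(stokes_dop_aop(np.array([1.0, 1.0, 0.0, 0.0])))
```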

Mueller matrices When light interacts with a target surface, the polarization state of the light changes. Therefore, the reflection of polarized light can be expressed by the Stokes-Mueller relation $\boldsymbol{{S}_{out}}=\boldsymbol{M}\boldsymbol{{S_{in}}}$, where $\boldsymbol{{S_{in}}}$ and $\boldsymbol{{S}_{out}}$ are the incident and outgoing Stokes vectors, respectively, and $\boldsymbol{M}$ is a $4\times 4$ Mueller matrix. To formulate a complete pBRDF model, we employ four standard Mueller matrix transformations: Fresnel transmission/reflection, coordinate rotation, linear polarization, and depolarization.
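As a small illustration of the Stokes-Mueller relation, the sketch below applies the standard coordinate-rotation Mueller matrix (one of the four transformations listed above) to an incident Stokes vector; the specific matrix form is the textbook rotation operator and is not taken from the paper's code.

```python
import numpy as np

def rotation_mueller(phi):
    """Mueller matrix for rotating the polarization reference frame by phi (radians).

    Standard coordinate-rotation form, shown only to illustrate S_out = M @ S_in.
    """
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c,   s, 0.0],
                     [0.0,  -s,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

S_in = np.array([1.0, 0.5, 0.0, 0.0])        # partially linearly polarized light
S_out = rotation_mueller(np.pi / 4) @ S_in   # Stokes-Mueller relation S_out = M S_in
```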

3.2 Polarimetric reflectance model

The polarimetric bidirectional reflectance distribution function (pBRDF) describes the reflection of polarized light $\boldsymbol{S}_{i}$ at any point ${p}$ on the material surface and can be represented by a Mueller matrix $\boldsymbol{M}\in {R}^{4\times 4}$. The Mueller matrix predicts the polarization properties of light scattered from a material, and from it a pBRDF can be determined. Many pBRDF models exist, but most of them focus on accurately describing surface reflection and ignore the directional diffuse reflection component, while only the Lambertian model [27] is used to describe the bulk scattering component. Neither choice captures the polarization properties of most materials. To describe the polarization properties of the material more precisely, surface reflection, bulk reflection and directional diffuse reflection should be considered simultaneously; directional diffuse reflection generally occurs within a lobe around the surface reflection, and this phenomenon is more significant in sub-surface reflection [28]. Therefore, in this article we present a complete micro-facet polarization reflection model as a linear combination of surface reflection, body reflection and directional diffuse reflection, as shown in Fig. 1.

Fig. 1. Local spatial definition of the pBRDF model in the direction of light propagation.

Our novel pBRDF model is described as follows.

$$\boldsymbol{M}\left({p}, \boldsymbol{{\omega}_{i}}, \boldsymbol{{\omega}_{o}}\right)=\boldsymbol{{M}_{s}}\left({p}, \boldsymbol{{\omega}_{i}}, \boldsymbol{{\omega}_{o}}\right)+\boldsymbol{{M}_{b}}\left({p}, \boldsymbol{{\omega}_{i}}, \boldsymbol{{\omega}_{o}}\right)+\boldsymbol{{M}_{d}}\left({p}, \boldsymbol{{\omega}_{i}}, \boldsymbol{{\omega}_{o}}\right)$$
where $\boldsymbol{{M}_{s}}\left ({p}, \boldsymbol{{\omega }_{i}}, \boldsymbol{{\omega }_{o}}\right )$ is the surface reflection, $\boldsymbol{{M}_{b}}\left ({p}, \boldsymbol{{\omega }_{i}}, \boldsymbol{{\omega }_{o}}\right )$ is the body reflection and $\boldsymbol{{M}_{d}}\left ({p}, \boldsymbol{{\omega }_{i}}, \boldsymbol{{\omega }_{o}}\right )$ is the directional diffuse reflection. In the model construction we use three orthogonal vectors to form the Stokes vector coordinate system, as shown in Fig. 1. The propagation axes of the Stokes vectors follow the direction of light propagation: the z-axis represents the direction of incidence or emission. For outgoing light the y-axis is aligned with the up vector of the camera, and for incident light the y-axis is aligned with the direction orthogonal to the horizontal linear polarization filter of the light source; both are perpendicular to the x-axis. In this coordinate system, the half-vector $\boldsymbol{h}$ is defined as $\boldsymbol{h}=\frac {\boldsymbol{{\omega }_{i}}+\boldsymbol{{\omega }_{o}}}{\|\boldsymbol{{\omega }_{i}}+\boldsymbol{{\omega }_{o}}\|}$, where $\boldsymbol{{\omega }_{i}}$ denotes the direction of incidence and $\boldsymbol{{\omega }_{o}}$ denotes the direction of exit. The surface normal $\boldsymbol{n}$ and the half-vector $\boldsymbol{h}$ define the plane of incidence. The angle of incidence is ${\theta }_{i}={\cos }^{-1}\left (\boldsymbol{n}\cdot \boldsymbol{{\omega }_{i}}\right )$, the angle of exit is ${\theta }_{o}={\cos }^{-1}\left (\boldsymbol{n}\cdot \boldsymbol{{\omega }_{o}}\right )$, the half-angle is ${\theta }_{h}={\cos }^{-1}\left (\boldsymbol{n}\cdot \boldsymbol{h}\right )$ and the zenith angle is ${\theta }_{d}={\cos }^{-1}\left (\boldsymbol{h}\cdot \boldsymbol{{\omega }_{i}}\right )$ [29].
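The geometric quantities defined above can be computed directly from the unit vectors $\boldsymbol{n}$, $\boldsymbol{\omega}_{i}$ and $\boldsymbol{\omega}_{o}$; a minimal sketch (with illustrative function names) is given below.

```python
import numpy as np

def local_angles(n, wi, wo):
    """Half-vector and the four angles used by the model.

    n, wi, wo are unit vectors: surface normal, incident direction and
    exitant direction. A small sketch of the geometric definitions above.
    """
    h = (wi + wo) / np.linalg.norm(wi + wo)        # half-vector
    theta_i = np.arccos(np.clip(n @ wi, -1.0, 1.0))  # angle of incidence
    theta_o = np.arccos(np.clip(n @ wo, -1.0, 1.0))  # angle of exit
    theta_h = np.arccos(np.clip(n @ h, -1.0, 1.0))   # half-angle
    theta_d = np.arccos(np.clip(h @ wi, -1.0, 1.0))  # zenith (difference) angle
    return h, theta_i, theta_o, theta_h, theta_d
```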

Polarization of Specular Reflection When the incident light produces specular reflection on the appearance of the material, the surface can be assumed to consist of many micro-planes with specular reflection properties at different angles according to micro-facet theory [30]. To study the specular reflection properties of the micro-plane, the refractive index ${\eta }$ of the material is one of the first factors to be considered, and the matrix ${\boldsymbol{F}_{R}}$ is constructed following Fresnel's law of reflection [31]. In addition, the retardance introduced during light transport attenuates the intensity and polarization components of the incident light to some degree, so a delay matrix ${\boldsymbol{D}_{S}}\left ({\delta }\right )$ is introduced into the specular reflection model [32]. Finally, considering the shadowing and masking function $\boldsymbol{G}$ [33] and the GGX normal distribution function $\boldsymbol{D}$ [34], we design the Mueller matrix for specular reflection as follows.

$$\boldsymbol{{M}_{s}}\left({p}, \boldsymbol{{\omega}_{i}}, \boldsymbol{{\omega}_{o}}\right)={k}_{s} \boldsymbol{C}_{{i}\rightarrow{h}}{\left({\phi}_{l}\right)}\boldsymbol{D}_{s}{\left({\delta}\right)}\boldsymbol{F}_{R}{\left({{\theta}_{d},{\eta}}\right)}\boldsymbol{C}_{{h}\rightarrow{o}}{\left(-{\phi}_{c}\right)}$$
where ${k}_{s}=\rho _{s} \frac {{\boldsymbol{D}\left ({{\theta }_{h},{\sigma }_{s}}\right )\boldsymbol{G}\left ({{\theta }_{i},{\theta }_{o};{\sigma }_{s}}\right )}} {4\left ({\boldsymbol{n}\cdot \boldsymbol{{\omega }_{i}}}\right )\left ({\boldsymbol{n}\cdot \boldsymbol{{\omega }_{o}}}\right )}$, $\rho _{s}$ is the specular reflectance, ${\sigma }_{s}$ is the surface roughness, and ${\delta }$ is the phase delay angle. $\boldsymbol{C}_{{i}\rightarrow {h}}{\left ({\phi }_{l}\right )}$ denotes the rotation matrix through ${\phi }_{l}$ from the incident optical axis to the plane of incidence, and $\boldsymbol{C}_{{h}\rightarrow {o}}{\left (-{\phi }_{c}\right )}$ denotes the rotation matrix through ${\phi }_{c}$ from the plane of incidence to the camera optical axis.
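A possible scalar implementation of $k_s$ is sketched below. The GGX distribution follows Walter et al. [34]; the separable Smith form of the shadowing-masking term $\boldsymbol{G}$ is our assumption, since the paper only states that a shadowing-masking function [33] is used.

```python
import numpy as np

def ggx_D(theta_h, sigma_s):
    """GGX normal distribution with roughness sigma_s."""
    a2 = sigma_s**2
    c = np.cos(theta_h)
    return a2 / (np.pi * (c**2 * (a2 - 1.0) + 1.0) ** 2)

def smith_G1(theta, sigma_s):
    """Smith masking term for GGX; G is assumed separable: G1(theta_i) * G1(theta_o)."""
    a2 = sigma_s**2
    c = np.cos(theta)
    return 2.0 * c / (c + np.sqrt(a2 + (1.0 - a2) * c**2))

def k_s(rho_s, sigma_s, n, wi, wo, theta_h, theta_i, theta_o):
    """Scalar specular weight k_s = rho_s * D * G / (4 (n.wi)(n.wo))."""
    D = ggx_D(theta_h, sigma_s)
    G = smith_G1(theta_i, sigma_s) * smith_G1(theta_o, sigma_s)
    return rho_s * D * G / (4.0 * (n @ wi) * (n @ wo))
```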

Polarization of Body Reflection Body scattering, also known as diffuse reflection, is a phenomenon in which light is transmitted into the interior of a material and refracted back into the air after depolarization. The Fresnel transmission Mueller matrix $\boldsymbol{{F}_{T}}$ [27] describes the polarization changes that occur when light is transmitted at the interface of two dielectric media (air and the bulk material). The depolarizing Mueller matrix describes the diffraction and absorption of light within the material, and was introduced by Baek et al. [17] in the construction of their pBRDF model to describe the polarized diffuse reflection of the material. The polarized diffuse reflection term is defined as

$$\boldsymbol{M_b}\left(p, \boldsymbol{\omega_i}, \boldsymbol{\omega_o}\right)=\boldsymbol{C}_{i \rightarrow n}\left(\varphi_l\right) \boldsymbol{F}_T^i\left(\theta_i, \eta\right) \boldsymbol{D}_b\left(\rho_b\right) \boldsymbol{F}_T^o\left(\theta_o, \eta\right) \boldsymbol{C}_{n \rightarrow o}\left(-\varphi_c\right)$$
where ${\rho _b}$ is the diffuse albedo, ${\boldsymbol{C}_{i \rightarrow n}\left (\varphi _l\right )}$ and ${\boldsymbol{C}_{n \rightarrow o}\left (-\varphi _c\right )}$ denote the rotation matrices of the incident light and the outgoing light formed at some angle with respect to the surface normal.

Polarization of Directional Diffuse Reflection Directional diffuse reflection is usually a scattering phenomenon of light in a micro-plane with a large slope angle. The shadowing and masking effects in the micro-plane make the reflection angle smaller when diffuse reflection occurs. As a result, the energy of directional diffuse reflection is mainly concentrated in a small range of reflection directions around the surface normal, and it is easily neglected in model construction. To describe more accurately the change of polarization state when light interacts with the material appearance, we introduce a polarized directional diffuse reflection term to improve the pBRDF model. The light path of directional diffuse reflection consists of two parts: the Fresnel transmission that occurs when light enters the material and returns to the air, and the scattering of the incident radiation within the material. When light is scattered inside the material, its energy variation function $\boldsymbol{D}_{s s}\left (\sigma _{s s}\right )$ decreases as the reflection angle increases. Following this law, we propose a model of directional diffuse reflection in which the polarization state varies with the reflection angle, described as follows

$$\boldsymbol{M_d}\left(p, \boldsymbol{\omega_i}, \boldsymbol{\omega_o}\right)=k_{d d} \boldsymbol{C}_{i \rightarrow n}\left(-\varphi_l\right) \boldsymbol{F}_R\left(\theta_d, \eta_p\right) \boldsymbol{C}_{n \rightarrow o}\left(\varphi_c\right)$$
where $k_{d d}=\rho _{d d} \boldsymbol{D}_{s s}\left (\sigma _{s s}\right )$, and $\rho _{d d}$ is the albedo of the medium, numerically a colour vector, as opposed to the single-valued specular component. We tested different functions to fit the data for 10 different materials and found that the energy change function can be expressed as $\boldsymbol{D}_{s s}\left (\sigma _{s s}\right )=\frac {C}{\sqrt {2 \pi \sigma _{s s}}} \exp \left (-2 \tan \theta _o / \sigma _{s s}^2\right )$, where $\sigma _{s s}$ is the surface roughness for single scattering.
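The energy change function given above can be evaluated directly; the sketch below follows the stated formula, with the fitted constant $C$ left as a free parameter.

```python
import numpy as np

def D_ss(theta_o, sigma_ss, C=1.0):
    """Energy change function for directional diffuse (single) scattering.

    Implements D_ss = C / sqrt(2*pi*sigma_ss) * exp(-2*tan(theta_o) / sigma_ss^2)
    as stated in the text; C is a fitted normalization constant.
    """
    return C / np.sqrt(2.0 * np.pi * sigma_ss) * np.exp(-2.0 * np.tan(theta_o) / sigma_ss**2)

# k_dd = rho_dd * D_ss(theta_o, sigma_ss), with rho_dd a per-channel colour albedo
```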

In summary, we treat the interaction of polarized light with a surface at point $p$ as the sum of specular, diffuse and directional diffuse components. For imaging and inverse rendering applications, we re-parameterize the model as $\boldsymbol{M_s}\left (p, \boldsymbol{\omega _i}, \boldsymbol{\omega _o} ; \sigma, \eta, k_s, \boldsymbol{n}\right )$, $\boldsymbol{M_b}\left (p, \boldsymbol{\omega _i}, \boldsymbol{\omega _o} ; \rho _{b}, k_s, \boldsymbol{n}\right )$, and $\boldsymbol{M_{d}} \left (p, \boldsymbol{\omega _i}, \boldsymbol{\omega _o}; \sigma, \eta, \rho _{d d}, \boldsymbol{n}\right )$. Based on the parameterized model, we can predict the surface roughness $\sigma$, refractive index $\eta$, normals $\boldsymbol{n}$ and polarization changes of the different scattering components for a given surface.

4. Our approach

4.1 Overview

The overall implementation of our method is shown in Fig. 2. Our input is a set of polarimetric photographs $\boldsymbol{I}=\left \{I_k\right \}$ captured using our portable hardware (Fig. 6) for multi-view measurement. First, a deep learning architecture processes the $K$ input polarimetric images and generates multi-channel tensor parameters with the same spatial dimensions, i.e., normal, roughness, specular albedo and diffuse albedo, through a polarimetric spatially varying bidirectional reflectance distribution function mapping. Meanwhile, the proposed pBRDF model is applied as prior knowledge to initialize the parameter estimation. Then, the difference between the rendered image and the ground truth image is minimized in an end-to-end inverse rendering to iteratively update the estimated parameters. Finally, the polarization properties of the object appearance are reproduced from the optimal estimated parameters.

Fig. 2. Overview of our method. After measuring multiple views of the object appearance, the pBRDF is accurately estimated with an end-to-end inverse rendering.

The Mueller matrix $\boldsymbol{M} \left (p, \boldsymbol{\omega _i}, \boldsymbol{\omega _o}\right )$ describes the reflection at a pixel of the material microfacet for the incident direction $\boldsymbol{\omega _i}$ and the exitant direction $\boldsymbol{\omega _o}$. It is introduced into the rendering equation to describe the Stokes vector of the exitant ray for different incident rays.

$$\boldsymbol{S_o}\left(\boldsymbol{\omega_o}, p\right)=\int_{S^2} \boldsymbol{M}\left(p, \boldsymbol{\omega_i}, \boldsymbol{\omega_o}\right) \boldsymbol{S_i}\left(\boldsymbol{\omega_i}, p\right) \cos \theta_t d \boldsymbol{\omega_i}$$

Thus, the Stokes vector of the outgoing light at the material microfacet pixel $p$ is a function of the propagation direction and the pixel position. That is, when the irradiance of the incident light is known, we can recast the problem as finding the optimal parameters $\Theta$ of the imaging model, including the surface normal $\boldsymbol{n}$, the surface roughness $\sigma$ and the reflectance maps $\left \langle \theta _{spec+dir-diff}, \theta _{diff}\right \rangle$ of the parameterized pBRDF.
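As an illustration of how Eq. (5) could be evaluated numerically, the sketch below forms a Monte Carlo estimate of the outgoing Stokes vector with uniform hemisphere sampling; the sampling strategy and the function interfaces (`M_fn`, `S_i_fn`) are assumptions for illustration only.

```python
import numpy as np

def render_stokes(M_fn, S_i_fn, n, num_samples=256):
    """Monte Carlo estimate of Eq. (5) over the hemisphere around normal n.

    M_fn(wi) -> 4x4 Mueller matrix for the fixed (p, wo); S_i_fn(wi) -> incident
    Stokes vector. Uniform hemisphere sampling is an illustrative choice only.
    """
    rng = np.random.default_rng(0)
    S_o = np.zeros(4)
    for _ in range(num_samples):
        # Sample a direction uniformly on the sphere, flip it into the upper hemisphere
        wi = rng.normal(size=3)
        wi /= np.linalg.norm(wi)
        if wi @ n < 0.0:
            wi = -wi
        cos_theta = wi @ n
        pdf = 1.0 / (2.0 * np.pi)                      # uniform hemisphere pdf
        S_o += (M_fn(wi) @ S_i_fn(wi)) * cos_theta / pdf
    return S_o / num_samples
```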

4.2 Initial analytical separation

$T_{\{i, o\}}^{+}$ and $T_{\{i, o\}}^{-}$ are computed from the incident/exitant Fresnel transmission coefficients, and $R^{+}$, $R^{-}$ and $R^{\times }$ are computed from the Fresnel reflection coefficients. Adding Eq. (2), Eq. (3) and Eq. (4) and simplifying yields the complete Mueller matrix expression:

$$\mathbf{M} \approx\left[\begin{array}{cccc} \rho_d T^{+} T^{+}+\kappa_s R^{+}+\kappa_{d d} R^{+} & -\rho_d T^{-} T^{+} \beta & \rho_d T^{-} T^{+} \alpha & 0 \\ -\rho_d T^{-} T^{+} \beta & \kappa_s R^{+}+\kappa_{d d} R^{+} & 0 & 0 \\ -\rho_d T^{-} T^{+} \alpha & 0 & -\kappa_s R^{+}-\kappa_{d d} R^{+} & 0 \\ 0 & 0 & 0 & -\kappa_s R^{+}-\kappa_{d d} R^{+} \end{array}\right]$$
where $\alpha$ and $\beta$ denote $\sin 2 \varphi _l$ and $\cos 2 \varphi _c$ for the azimuth of incident/exitant polarized light, respectively. The information on the diffuse component can be obtained by subtracting the intensity $I_{45^{\circ }}$ from the intensity $I_{135^{\circ }}$, and we define the diffuse-dominated polarization observation as:
$$I_d={-}D \rho_d T^{-} T^{+} \alpha=I_{135^{{\circ}}}-I_{45^{{\circ}}}$$

In the case of a mixture of specular, diffuse and directional diffuse reflections in an observation, the specular-dominated polarization observation can be defined as:

$$I_s=D\left(k_s R^{+}+k_{d d} R^{+}-\rho_d T^{-} T^{+} \beta\right)=I_{0^{{\circ}}}-I_{90^{{\circ}}}$$

Based on the above definition, we perform a simple separation of the captured data to be used as an initialization for network learning. Fig. 3 shows the reflectance separation results for four scenes.
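In code, this initial separation amounts to simple differences of the four polarizer-angle images delivered by the snapshot sensor; a sketch is given below (the image names follow the polarizer angles and are illustrative).

```python
import numpy as np

def initial_separation(I0, I45, I90, I135):
    """Initial diffuse/specular-dominated observations, Eqs. (7) and (8).

    I0, I45, I90, I135 are the intensity images behind the four polarizer
    orientations of the division-of-focal-plane sensor.
    """
    I_d = I135 - I45                      # diffuse-dominated observation, Eq. (7)
    I_s = I0 - I90                        # specular(+directional diffuse)-dominated, Eq. (8)
    S0 = 0.5 * (I0 + I45 + I90 + I135)    # total intensity, standard for such sensors
    return I_d, I_s, S0
```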

Fig. 3. Separation of polarized reflectivity of different radiation components. In a strict sense this simple separation is inaccurate; the specular radiation component (marked by the red box) is mixed into the separated diffuse radiation. It also shows that the diffuse radiation carries some polarization information, which cannot be neglected in the estimation of polarized reflectivity.

4.3 Network structure

Our deep network architecture consists of two parts. The first part is the feature extraction structure, which includes several codec modules, i.e., polarization image codec modules, that extract features from the input images. We divide the feature maps into two parts, detail information and global information, and normalize all features with a pooling layer. The second part is the feature reconstruction structure, which fuses the feature information and generates four pBRDF parameter maps after three convolution layers.

Each image codec module consists of two branches, as shown in Fig. 4. The two-stream network comprises two separate branches with different functions, GF-Branch and LF-Branch, for global features and local features, respectively. GF-Branch is a global feature track that operates on vectors instead of feature maps. The whole branch consists of repeating blocks: six GF convolution blocks for feature mapping and downsampling, followed by a dilated GF convolution block and two upsampling blocks, each with LeakyReLU activation. In the seventh layer of the GF-Branch, a dilated GF convolution with a factor of 2 is used to enlarge the receptive field so that the network retains additional information in the bottleneck. Skip connections between encoder and decoder layers of the same size help the decoder retain as much detail as possible during feature extraction. After processing by the GF-Branch, the global information is injected into the local information to reduce bias. LF convolution is standard convolution and handles all input pixels. Since the material surface is assumed to consist of an infinite number of microfacet elements, the pBRDF is assumed to be spanned by a small set of basis pBRDFs; features from distant regions therefore need to be extracted to provide supplementary information for pBRDF recovery. For this, we introduce GF convolution to explicitly account for global information in feature extraction. Because of the nonlinearities present in the global features, we use the scaled exponential linear unit (SELU) activation function, which effectively stabilizes the training [35]. To enable use in conjunction with the local features, a fully connected layer transforms the global features after extraction.
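The following PyTorch sketch illustrates the overall structure of one codec module under the description above; channel widths, kernel sizes and the number of blocks are assumptions, not the values used in the paper.

```python
import torch
import torch.nn as nn

class CodecModule(nn.Module):
    """Two-stream image codec module: a local branch (LF, standard convolutions)
    and a global branch (GF) whose pooled feature vector is injected back into
    the local features. Structural sketch only; sizes are illustrative."""

    def __init__(self, in_ch=4, feat_ch=64, global_dim=128):
        super().__init__()
        # LF-Branch: standard convolutions over all input pixels
        self.lf = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.LeakyReLU(0.2))
        # GF-Branch: downsampling convolutions, a dilated block (factor 2) and
        # global pooling to a feature vector, with SELU activations
        self.gf = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.SELU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.SELU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=2, dilation=2), nn.SELU(),
            nn.AdaptiveAvgPool2d(1))
        # Fully connected layer so the global vector can be fused with local features
        self.fc = nn.Linear(feat_ch, global_dim)
        self.fuse = nn.Conv2d(feat_ch + global_dim, feat_ch, 1)

    def forward(self, x):
        local = self.lf(x)                                    # B x C x H x W
        g = self.fc(self.gf(x).flatten(1))                    # B x G global vector
        g = g[:, :, None, None].expand(-1, -1, *local.shape[2:])
        return self.fuse(torch.cat([local, g], dim=1))        # inject global into local
```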

Fig. 4. Image codec module for feature extraction.

4.4 Loss function

The loss function in the parametric polarimetric reflectance model estimation is crucial for the overall training as it reflects the optimization objective of the model. Since the ground truth of each parameter is difficult to obtain, the Stokes vector, DoP and AoP are used as the final output in the rendering. Correspondingly, a customized loss function associated with the output is defined in our approach using the $\mathcal {L}_1$ loss.

$$\mathcal{L}_{\text{out}}=\frac{1}{K} \sum_{k=1}^K\left(\lambda_1\left\|\hat{Y}_{S_0}-Y_{S_0}\right\|_1+\lambda_2\left\|\hat{Y}_{D o P}-Y_{D o P}\right\|_1+\lambda_3\left\|\hat{Y}_{A o P}-Y_{A o P}\right\|_1-\lambda_4 \log C\right)$$
where $K$ is the number of views, $\hat {Y}_{S_0}$, $\hat {Y}_{DoP}$ and $\hat {Y}_{AoP}$ are the rendered polarization properties, and ${Y}_{S_0}$, ${Y}_{DoP}$ and ${Y}_{AoP}$ are the captured polarization properties. $\lambda _1$, $\lambda _2$, $\lambda _3$ and $\lambda _4$ are the weight coefficients of the loss function, experimentally set to 0.1, 1, 0.05 and 0.02, respectively. The constraint term corrects the AoP during training; $C$ usually takes a value between 0 and 1 and describes how close the variance of the rendered AoP is to that of the ground truth image. If the variance of the rendered AoP image differs greatly from the ground truth, the constraint term incurs a large loss; if their variances are close, the loss is small. Therefore, minimizing the constraint term alleviates the homogenization of AoP gray values caused by the $\mathcal {L}_1$ loss and allows high-quality AoP images to be rendered.
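A possible implementation of Eq. (9) is sketched below; the constraint term $C$ is modelled as the ratio of the smaller to the larger AoP variance, one plausible reading of "closeness of the variance", and is an assumption rather than the paper's exact definition.

```python
import torch

def output_loss(pred, target, lam=(0.1, 1.0, 0.05, 0.02)):
    """L1 loss on S0, DoP and AoP over K views with an AoP variance constraint.

    pred/target: dicts of tensors shaped (K, H, W). Mean reduction is used in
    place of the per-view L1 norm; C is an assumed variance-ratio surrogate.
    """
    l1 = lambda a, b: (a - b).abs().mean()
    loss_s0  = l1(pred["S0"],  target["S0"])
    loss_dop = l1(pred["DoP"], target["DoP"])
    loss_aop = l1(pred["AoP"], target["AoP"])
    # Constraint term C in (0, 1], close to 1 when the AoP variances match
    var_p = pred["AoP"].flatten(1).var(dim=1)
    var_t = target["AoP"].flatten(1).var(dim=1)
    C = (torch.minimum(var_p, var_t) / torch.maximum(var_p, var_t).clamp_min(1e-8)).mean()
    return (lam[0] * loss_s0 + lam[1] * loss_dop + lam[2] * loss_aop
            - lam[3] * torch.log(C.clamp_min(1e-8)))
```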

In addition, to encourage more accurate parameter estimates, we further optimize the inverse rendering by computing the spatial gradient of the polarized reflectance maps $\left \langle \theta _{spec+dir-diff}, \theta _{diff}\right \rangle$. The reflectance gradient consistency loss function is designed as:

$$\mathcal{L}_{G C}=\sum_{p=1}^N\left[\frac{1}{K-1} \sum_{k=1}^K\left[\frac{\partial I_{\text{rendered}}^k}{\partial \theta_{\text{spec}}}+\frac{\partial I_{\text{rendered}}^k}{\partial \theta_{\text{diff}}}+\frac{\partial I_{\text{rendered}}^k}{\partial \theta_{\text{dir-diff}}}\right]_p^2\right]$$

In the inverse rendering, we compute the gradient of the reflectance at pixel $p$ across multiple views of the object appearance to overcome amplitude variations of the surface normal in the spatial dimension.
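One way to realize Eq. (10) is to obtain the per-pixel derivatives of the rendered views with respect to the three reflectance maps via automatic differentiation; the sketch below takes this route, which is an implementation assumption.

```python
import torch

def gradient_consistency_loss(I_rendered, theta_spec, theta_diff, theta_dirdiff):
    """Reflectance gradient consistency loss, Eq. (10).

    I_rendered: tensor of K rendered views that is differentiable w.r.t. the
    three per-pixel reflectance maps; autograd supplies the derivatives.
    """
    K = I_rendered.shape[0]
    loss = 0.0
    for k in range(K):
        grads = torch.autograd.grad(
            I_rendered[k].sum(), (theta_spec, theta_diff, theta_dirdiff),
            retain_graph=True, create_graph=True)
        g = grads[0] + grads[1] + grads[2]   # summed per-pixel gradient for view k
        loss = loss + g.pow(2)               # squared, accumulated over views
    return (loss / (K - 1)).sum()            # average over views, sum over pixels
```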

The total loss used to train all the networks in the pipeline is:

$$\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{out}}+\mathcal{L}_{G C}$$

5. Evaluation of our polarimetric BRDF

To further evaluate the accuracy of the proposed pBRDF model for refractive index estimation and Mueller matrix modeling, we compare the models of Baek et al. [17], Kondo et al. [9], and the proposed model. Since the refractive index varies with the wavelength of light, we use the red channel data in both measurements and simulations.

Refractive Index Estimation We first evaluate the accuracy of the optimized refractive index for real materials. We measure the object from multiple views and then invert the refractive index parameters with a genetic algorithm. Table 1 shows the refractive index estimation results. Comparing the ground truth of the material samples with the estimated refractive indices shows that the inversion results of our pBRDF model are the most consistent with the ground truth.

Table 1. Refractive index estimation results for typical materials

Mueller Matrix Imaging Simulations To evaluate the accuracy of the Mueller matrix represented by our pBRDF model, we compare the simulated Mueller matrix images of a ceramic horse produced by the models of Baek, Kondo and ours, and evaluate the rendered images qualitatively. As shown in Fig. 5, assuming a coaxial acquisition system, the Mueller matrix image simulated by our model is more theoretically plausible. Although all three methods agree well in predicting polarized shadows, there are significant differences between the matrix elements simulated by the different methods. It can be observed intuitively that the elements ${m}_{33}$ and ${m}_{44}$ in Fig. 5(c) behave more similarly, which indicates that our model better represents the specular reflection and body scattering of the material, and verifies that our model describes the polarized reflection properties of the material appearance better than the models of Baek et al. and Kondo et al. In addition, we calculate the average PSNR of the simulated Mueller matrix images. As shown in Fig. 5(d), our model achieves the best average PSNR. Since scattering is accounted for more fully in the material prediction, the accuracy of the prediction is greatly improved by the addition of the directional diffuse reflection term.

Fig. 5. Mueller matrix image of a ceramic horse. (a) Rendered Mueller matrix image reconstructed using the model of Baek et al. (b) Rendered Mueller matrix image reconstructed using the model of Kondo et al. (c) Rendered Mueller matrix image reconstructed using our model; and (d) the average PSNR values of the Mueller matrix image simulated by the above three methods.

6. Results and evaluation

6.1 Datasets

We use a self-developed measurement system for active detection of the polarization properties of materials to obtain their complete polarization properties, as shown in Fig. 6. Under structured illumination, we use the LUCID PHX050S-QC color polarization sensor to perform multi-view snapshot polarization acquisition of 100 real object samples. The dataset, shown in Fig. 7, consists mainly of planar samples but also includes non-planar objects such as mugs and soccer balls. It covers a rich diversity of materials, including diffusely or specularly reflecting wrapping paper, fabrics, plastics, ceramics, and glass.

Fig. 6. Measurement system for active detection of polarization properties of objects.

Fig. 7. Example set of images from the materials dataset consisting of 100 different captured sample objects.

6.2 Reconstruction of polarization properties

Synthetic data We validate our approach on synthetic data as shown in Fig. 8. We render the KAIST dataset in mitsuba3 to obtain $S_0$, $DoP$ and $AoP$ for different objects in multiple views. The synthetic data are then used as input, and the re-rendered $S_0$, $DoP$ and $AoP$ are obtained after our network learning architecture. The material test sphere is rendered under polarized illumination and skullpanda under natural illumination. Comparison of the detail information shows that the rendered images closely reproduce the reference images under either type of illumination; in particular, the edge information and specular reflections are detailed and rich.

Fig. 8. Ground truth image vs. rendered image in synthetic data. (a) The first column shows the intensity $S_0$, $DoP$ and $AoP$ of the object rendered in mitsuba3; (b) the second column corresponds to the rendered image after estimation of the parameterized pBRDF model; and (c) the estimated parameters $\Theta$ used to accurately reproduce the object appearance.

Real data To evaluate the accuracy of our pBRDF model for material description, we apply the proposed method to reproduce the appearance of real objects. In this evaluation, the polarization properties of the object are rendered with the pBRDF model after obtaining the estimated parameters. We compare the ground truth and rendered images in Fig. 9. Fig. 9(a) and Fig. 9(b) show strong agreement in the polarization states, with average PSNRs of 36.35 dB and 38.92 dB between ground truth and rendered images in the two scenes, respectively, which indicates that our pBRDF model can simulate the polarization behavior of each material under known illumination, i.e., it correctly characterizes the polarization properties of the material. Fig. 9(c) shows the estimated parameters ${\Theta }$ used for rendering. The experimental results show that the proposed method accurately reconstructs the experimental measurements of the real scene. Extending the model with directional diffuse reflection, which strongly affects both the wavelength dependence and the applicable range of materials, helps to accurately characterize the scattering properties of polarized light from the material.

Fig. 9. Ground truth image vs. rendered image in real data. (a) The first column shows the actual measurements of the intensity and DoP of the target appearance under unpolarized illumination; (b) The second column is the rendered image after estimation of the parameterized pBRDF model; and (c) The estimated parameters $\Theta$ used to accurately reproduce the object appearance.

6.3 Comparative experiment

We compare our method with that of [26] applied to the reproduction of measured object appearance. Dave et al. start with an initial diffuse-specular separation that accounts for specular and diffuse reflections and then pass it to a trained deep network to improve the separation. In this paper, on the other hand, the parameters of a pBRDF model that includes directional diffuse reflection are accurately estimated and applied to reproduce the polarization properties with end-to-end learning after multi-view measurements of real-world objects. In the experiment, we test four objects made of different dielectric materials: a pencil bag made of textile, a paper box, a racket and a mascot made of plastic. From the comparison results in Fig. 10, we observe that our method presents better object detail than [26], which also indicates that introducing directional diffuse reflectance makes the polarized reflectance estimation of the material more accurate. This improvement is particularly significant in the estimation of the specular reflection term. In addition, applying the reflectance gradient consistency loss in the iterative optimization makes our results appear sharper at the edges. To show the advantages of our method more clearly, the errors of the parameter maps estimated by the two methods are compared in Table 2. In terms of the average error of the estimated parameters, our method demonstrates excellent performance in both cases.

Fig. 10. Results of our method and Dave et al. for four examples of real objects with different dielectric materials. (a) Normal; (b) Roughness; (c) Specular; and (d) Diffuse.

Table 2. The average error of each parameter estimated by Dave et al. and our method

6.4 Ablation study

The polarized directional diffuse reflection and the reflectance gradient consistency loss function are the key components enabling polarized reflectance reconstruction in our method. Here, we analyze the role of these two components with the following experiments.

Ours w/o directional diffuse reflection Directional diffuse reflection occurs around the specular reflection lobe; therefore, introducing it can be regarded as accounting for multiple scattering in our pBRDF model. In the estimation of polarized reflectance, the directional diffuse reflection term allows the polarization information in the specular reflection to be expressed more fully, as shown in Fig. 11.

Fig. 11. Estimation of specular reflection components with and without directional diffuse reflections in synthetic and real data.

In addition, we compare the convergence of the whole training with and without directional diffuse reflection. Fig. 12 shows the convergence as a function of the number of samples (left) and the rendering cost (right). It can be clearly observed that although a phenomenon similar to multiple scattering is considered in our model, it does not significantly increase the variance or the cost.

Fig. 12. Convergence results for ours w/o directional diffuse reflection, both as a function of samples per-pixel and time (in seconds). Our model removes the single-scattering restriction without significantly increasing the rendering cost or variance.

Ours w/o reflectance gradient consistency loss function We use the above training loss functions in the polarimetric reflectance reconstruction. Under uniform sampling, we compare the training results with different loss functions for different numbers of views. Fig. 13 shows that the reconstruction error after introducing the reflectance gradient consistency loss is significantly better than that of the loss function $\mathcal {L}_{\text{out}}$ alone in most cases, but the optimization falls into local minima in some places. To address this problem, inspired by the literature [1], we use an initialization strategy to keep the loss function well-behaved when optimizing the parameters.

Fig. 13. Reconstruction errors of different loss function methods for different number of views.

7. Conclusion

We present a novel and complete polarimetric BRDF model that builds on existing mixed specular and diffuse reflection models by introducing a directional diffuse reflection component. To make the proposed pBRDF model extend accurately to the polarization properties of most materials, we use an end-to-end inverse rendering method to connect the multi-view measurements of the object with the estimation of the geometry and reflectance parameters. Throughout the iterative parameter optimization, we design a multi-view polarization reflectance gradient consistency loss function that allows more accurate per-pixel reflectance estimation. We validate the accuracy of our pBRDF model and the inverse rendering results in synthetic and real measurement experiments. Despite the strong performance of both our pBRDF model and the inversion results, our method still has some limitations. In particular, only reflection phenomena are considered in this work, and the method applies only to isotropic materials. We expect that our approach will contribute to a wider range of applications of accurate polarization imaging. Future work will focus on extending our model to estimate polarimetric multiple scattering on transparent and metallic surfaces.

Funding

Jilin Scientific and Technological Development Program (20210203181SF); Natural Science Foundation of Chongqing (cstc2021jcyj-msxmX0145); National Natural Science Foundation of China (62127813).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. A. Atkinson and E. R. Hancock, “Multi-view surface reconstruction using polarization,” in Proceedings of the Tenth IEEE International Conference on Computer Vision, vol. 1 (IEEE, 2005), pp. 309–316.

2. D. Miyazaki, R. T. Tan, K. Hara, and K. Ikeuchi, “Polarization-based inverse rendering from a single view,” in Computer Vision, IEEE International Conference on, vol. 3 (IEEE Computer Society, 2003), p. 982.

3. T. Huang, H. Li, K. He, C. Sui, B. Li, and Y.-H. Liu, “Learning accurate 3d shape based on stereo polarimetric imaging,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2023), pp. 17287–17296.

4. J. Liu, J. Duan, Y. Hao, G. Chen, and H. Zhang, “Semantic-guided polarization image fusion method based on a dual-discriminator gan,” Opt. Express 30(24), 43601–43621 (2022). [CrossRef]  

5. X. Li, J. Xu, L. Zhang, H. Hu, and S.-C. Chen, “Underwater image restoration via stokes decomposition,” Opt. Lett. 47(11), 2854–2857 (2022). [CrossRef]  

6. X. Huang, J. Bai, K. Wang, Q. Liu, Y. Luo, K. Yang, and X. Zhang, “Target enhanced 3d reconstruction based on polarization-coded structured light,” Opt. Express 25(2), 1173–1184 (2017). [CrossRef]  

7. X. Tian, R. Liu, Z. Wang, and J. Ma, “High quality 3d reconstruction based on fusion of polarization imaging and binocular stereo vision,” Inform. Fusion 77, 19–28 (2022). [CrossRef]  

8. S.-H. Baek, T. Zeltner, H. Ku, I. Hwang, X. Tong, W. Jakob, and M. H. Kim, “Image-based acquisition and modeling of polarimetric reflectance,” ACM Trans. Graph. 39(4), 139 (2020). [CrossRef]  

9. Y. Kondo, T. Ono, L. Sun, Y. Hirasawa, and J. Murayama, “Accurate polarimetric brdf for real polarization scene rendering,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIX 16, (Springer, 2020), pp. 220–236.

10. M. W. Hyde IV, J. D. Schmidt, and M. J. Havrilla, “A geometrical optics polarimetric bidirectional reflectance distribution function for dielectric and metallic surfaces,” Opt. Express 17(24), 22138–22153 (2009). [CrossRef]  

11. R. G. Priest and T. A. Gerner, “Polarimetric brdf in the microfacet model: Theory and measurements,” Tech. rep., Naval Research lab Washington DC (2000).

12. M. Mojzik, T. Skrivan, A. Wilkie, and J. Krivanek, “Bi-directional polarised light transport,” in EGSR (EI&I), (2016), pp. 97–108.

13. Y. Zhang, Y. Zhang, H. Zhao, and Z. Wang, “Improved atmospheric effects elimination method for pbrdf models of painted surfaces,” Opt. Express 25(14), 16458–16475 (2017). [CrossRef]  

14. R. L. Cook and K. E. Torrance, “A reflectance model for computer graphics,” ACM Trans. Graph. 1(1), 7–24 (1982). [CrossRef]  

15. K. Torrance and E. Sparrow, “Directional emittance of an electric nonconductor as a function of surface roughness and wavelength,” Int. J. Heat and Mass Transfer 10(12), 1709–1716 (1967). [CrossRef]  

16. E. Collett, Field Guide to Polarization (SPIE Press, Bellingham, WA, 2005).

17. S.-H. Baek, D. S. Jeon, X. Tong, and M. H. Kim, “Simultaneous acquisition of polarimetric svbrdf and normals,” ACM Trans. Graph. 37(6), 268 (2018). [CrossRef]  

18. Z. Cui, J. Gu, B. Shi, P. Tan, and J. Kautz, “Polarimetric multi-view stereo,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2017), pp. 1558–1567.

19. S.-H. Baek and F. Heide, “Polarimetric spatio-temporal light transport probing,” ACM Trans. Graph. 40(6), 1–18 (2021). [CrossRef]  

20. I. Hwang, D. S. Jeon, A. Munoz, D. Gutierrez, X. Tong, and M. H. Kim, “Sparse ellipsometry: portable acquisition of polarimetric svbrdf and shape with unstructured flash photography,” ACM Trans. Graph. 41(4), 1–14 (2022). [CrossRef]  

21. X. Wu, J. Zhang, Y. Chen, and X. Huang, “Real-time mid-wavelength infrared scene rendering with a feasible brdf model,” Infrared Phys. Technol. 68, 124–133 (2015). [CrossRef]  

22. I. G. Renhorn, T. Hallberg, and G. D. Boreman, “Efficient polarimetric brdf model,” Opt. Express 23(24), 31253–31273 (2015). [CrossRef]  

23. A. Dave, Y. Hold-Geoffroy, M. Hašan, K. Sunkavalli, and A. Veeraraghavan, “Snapshot polarimetric diffuse-specular separation,” Opt. Express 30(19), 34239–34255 (2022). [CrossRef]  

24. Y. Ding, Y. Ji, M. Zhou, S. B. Kang, and J. Ye, “Polarimetric helmholtz stereopsis,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, (2021), pp. 5037–5046.

25. Y. Ba, A. Gilbert, F. Wang, J. Yang, R. Chen, Y. Wang, L. Yan, B. Shi, and A. Kadambi, “Deep shape from polarization,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIV 16, (Springer, 2020), pp. 554–571.

26. A. Dave, Y. Zhao, and A. Veeraraghavan, “Pandora: Polarization-aided neural decomposition of radiance,” in European Conference on Computer Vision, (Springer, 2022), pp. 538–556.

27. M. Oren and S. K. Nayar, “Generalization of lambert’s reflectance model,” in Proceedings of the 21st annual conference on Computer graphics and interactive techniques, (1994), pp. 239–246.

28. J. Sun, X. Zhou, Z. Fan, and Q. Wang, “Investigation of light scattering properties based on the modified li-liang brdf model,” Infrared Phys. Technol. 120, 103992 (2022). [CrossRef]  

29. S. M. Rusinkiewicz, “A new change of variables for efficient brdf representation,” Render. techniques 98, 11–22 (1998). [CrossRef]  

30. K. E. Torrance and E. M. Sparrow, “Theory for off-specular reflection from roughened surfaces,” J. Opt. Soc. Am. 57(9), 1105–1114 (1967). [CrossRef]  

31. A. Wilkie and A. Weidlich, “Polarised light in computer graphics,” in SIGGRAPH Asia 2012 Courses, (2012).

32. T. A. Germer, “Evolution of transmitted depolarization in diffusely scattering media,” J. Opt. Soc. Am. A 37(6), 980–987 (2020). [CrossRef]  

33. E. Heitz, “Understanding the masking-shadowing function in microfacet-based brdfs,” J. Comput. Graphics Techniques 3(2), 32–91 (2014).

34. B. Walter, S. R. Marschner, H. Li, and K. E. Torrance, “Microfacet models for refraction through rough surfaces,” in Proceedings of the 18th Eurographics conference on Rendering Techniques, (2007), pp. 195–206.

35. G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, “Self-normalizing neural networks,” Adv. neural information processing systems 30 (2017).

