
Near-infrared monocular 3D computational polarization imaging of surfaces exhibiting nonuniform reflectance

Open Access

Abstract

This paper presents a near-infrared (NIR) monocular 3D computational polarization imaging method to directly reconstruct the shape of surfaces exhibiting nonuniform reflectance. A reference gradient field is introduced into the weight constraints to globally correct the ambiguity of the surface normal for a target with nonuniform reflectance. We experimentally demonstrate that our method can reconstruct the shape of surfaces exhibiting nonuniform reflectance not only in the near field but also in the far field. Moreover, with the proposed method, the axial resolution can be kept constant even at different object distances, as long as the ratio of the focal length to the object distance is fixed. The simplicity and robustness of the proposed method make it an attractive tool for the fast modeling of 3D scenes.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Three-dimensional imaging with high-precision depth-sensing ability has a wide range of applications, such as face unlocking on phones, autonomous driving, digital documentation of heritage sites, and city modeling [1–3]. Numerous approaches have been proposed for 3D imaging, including binocular stereoscopic vision, time-of-flight, structured light illumination, and LIDAR [4–6]. However, these methods are complicated and environmentally sensitive because of the limitations of the sensors and modulated light sources that are essential elements of their setups. Polarization information has been utilized as a new way to reconstruct the shape of surfaces by effectively identifying the normal vector to the surface [7–10]. Recovering shape from polarization does not require multiple detectors, active patterned illumination, or complex mechanical structures, which are major drawbacks of other methods when performing 3D imaging in a natural scene. Polarization-based shape reconstruction is therefore expected to yield a simpler system with robust results, benefiting applications such as biomedical instruments and portable 3D-scanning terminals.

Unfortunately, the ambiguity of the zenith and azimuth angles of the normal vector is inevitable when a single polarization image is used [11,12]. In this case, the ambiguity is resolved by propagating the true normal of the surface inward from the boundary under the assumption that the object is convex [9,13]. However, these methods cannot produce globally optimal results, and the integrability issue cannot be effectively resolved. Because of these drawbacks, they are inapplicable to targets with visible occlusion boundaries. Other methods avoid these problems using additional cues, such as combining shape from shading (SFS) and photometric stereo, which can provide a coarse depth map along with the normal [10,14–16]. The coarse depth map provides a geometric constraint on the target to correct the ambiguity of the normal vector. These works generally assume that the reflectance of the target surface is uniform and that the lighting direction is known. However, these assumptions are impractical for real-world targets with nonuniform reflectance, such as colored targets, because the reflectance of differently colored surfaces is nonuniform for incident light in the visible band. To avoid the influence of nonuniform regions on the reconstruction results, Microsoft Kinect or multi-view imaging methods have been proposed to obtain coarse 3D geometries by combining 3D cameras [7,17]. These methods also require controlled illumination and complex imaging systems [18]. According to Refs. [19–21], differently colored patches have more stable reflectance in the NIR band than in the visible band, which provides an effective cue for the monocular 3D imaging of colored targets.

In this paper, we propose a NIR monocular 3D computational imaging method based on polarization properties to accurately reconstruct 3D surface shapes. We first address the intensity variation between color patches caused by the nonuniform reflectance of the colored target, using the stable spectral radiative feature of the NIR band. The reference gradient field of the normal is then obtained from the corrected intensity information, from which two reference normal parameters are derived to disambiguate the surface normal. We utilize the corrected normal information to reconstruct a highly accurate 3D shape. The proposed 3D polarization imaging method can recover the shape of surfaces exhibiting nonuniform reflectance from a single view. The reconstruction accuracy of the method is analyzed to confirm the imaging stability. Finally, a series of experiments in a range of challenging scenarios and object distances was conducted to demonstrate the effectiveness of the method.

2. Three-dimensional polarization imaging model in the NIR band

Diffuse reflected light from a target’s surface can be used to reconstruct the 3D shape of the surface based on the polarization reflectance model [22–24]. However, the color-dependent reflectance of the diffuse reflection components is a key factor influencing the reconstruction results [25]. Fortunately, the reflectance in the NIR band exhibits a stable variation trend that differs from the visible spectrum, as shown in Fig. 1. In other words, the reflectance in the NIR band changes much more gradually than in other wavelength ranges. This stable spectral radiative property makes the diffuse reflection components in the NIR band robust cues for inverting the shape of multi-colored areas. Taking a typical checkerboard with six different colors as an example, the variation trend indicates that the reflectance largely maintains a fixed ratio between colors in the NIR band [19]. This provides highly robust information for solving the distortion problem in colored-surface 3D reconstruction. The detailed procedure of 3D polarization imaging based on these NIR properties is introduced below.

Fig. 1. (a) Reflectance spectra of colors; (b) Colored checkerboard.

We used a NIR camera to capture the polarization sub-images. Figure 2 illustrates the proposed method. Based on the Fresnel reflection model [22], the normal vector is determined by the azimuth angle φ and zenith angle θ. The polarization properties can be estimated by capturing a sequence of images in which a polarizer placed in front of the camera is rotated to different angles γj, where j ∈ {1,…, P}, to sample different polarization-azimuth images (P ≥ 3) [26,27]. The NIR intensity for different colors can be expressed as

$$I_{_{col}}^{{\gamma _j}}({\textbf v} )= \frac{{I_{\max }^{col}({\textbf v} )+ I_{\min }^{col}({\textbf v} )}}{2} + \frac{{I_{\max }^{col}({\textbf v} )- I_{\min }^{col}({\textbf v} )}}{2}\cos ({2({{\gamma_j} - \phi ({\textbf v} )} )} )\;,$$
where $I_{_{col}}^{{\gamma _j}}({\textbf v} )$ = $I_{\textrm{NIR}}$·ρcol(v), $I_{\textrm{NIR}}$ is the intensity of the NIR light reflected by a standard white target, and ρcol(v) is the reflectance of the differently colored surfaces, with values ranging from 0 to 1. v = (x, y) is the pixel position in the image. $I_{\max }^{col}({\textbf v})$ and $I_{\min }^{col}({\textbf v})$ are the maximum and minimum pixel brightness values observed during polarizer rotation, respectively. Equation (1) yields the phase angle ϕ(v), which determines the azimuth angle of the surface normal φ(v). Unfortunately, there are two possible solutions when solving for φ(v), which can be simply written as
$$\varphi ({\textbf v} )= \phi ({\textbf v} )\quad \textrm{or}\quad \phi ({\textbf v} )+ \mathrm{\pi }\;.$$
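In practice, Eq. (1) can be fitted per pixel by rewriting it as I = a0 + a1·cos 2γj + a2·sin 2γj and solving a linear least-squares problem over the P captured images. The following Python sketch shows one minimal way to do this; the array layout, angle set, and function names are our own illustrative assumptions rather than the authors' code.

```python
# Minimal sketch: per-pixel least-squares fit of Eq. (1) from P >= 3 polarizer
# angles. Rewriting I = a0 + a1*cos(2g) + a2*sin(2g) makes the fit linear.
import numpy as np

def fit_polarization(images, angles_deg):
    """images: (P, H, W) stack captured at polarizer angles gamma_j (degrees)."""
    g = np.deg2rad(np.asarray(angles_deg, dtype=float))
    A = np.stack([np.ones_like(g), np.cos(2 * g), np.sin(2 * g)], axis=1)  # (P, 3)
    P, H, W = images.shape
    coeffs, *_ = np.linalg.lstsq(A, images.reshape(P, -1), rcond=None)     # (3, H*W)
    a0, a1, a2 = (c.reshape(H, W) for c in coeffs)
    amp = np.hypot(a1, a2)                      # (I_max - I_min) / 2
    i_max, i_min = a0 + amp, a0 - amp
    phi = 0.5 * np.arctan2(a2, a1) % np.pi      # phase angle, pi-ambiguous as in Eq. (2)
    return i_max, i_min, phi
```

For the four-image acquisition used in Section 4, a call such as fit_polarization(stack, [0, 45, 90, 135]) (the specific angles are an assumption) returns the quantities entering Eqs. (3) and (4), with the phase still carrying the π-ambiguity that Section 3 resolves.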

Fig. 2. Three-dimensional reconstruction model based on polarization information in the NIR band.

This leads to errors when estimating the surface normal and further deforms the recovered 3D information. The surface normal direction can be obtained by solving for the azimuth angle φ(v) and zenith angle θ(v), as shown in Fig. 2. In addition, the relationship between the zenith angle θ(v) and the degree of polarization p(v) can be estimated on the basis of the Fresnel equations [27], which can be expressed as

$$p({\textbf v} )= \left|{\frac{{{T_p}({\textbf v} )- {T_s}({\textbf v} )}}{{{T_p}({\textbf v} )+ {T_s}({\textbf v} )}}} \right|\;,$$
where Ts(v) = sin 2θi(v)·sin 2θ(v)/sin²(θi(v)+θ(v)) and Tp(v) = sin 2θi(v)·sin 2θ(v)/(sin²(θi(v)+θ(v))·cos²(θi(v)−θ(v))) are the Fresnel transmittances for light polarized perpendicular and parallel to the plane of incidence, respectively. The zenith angle θ(v) is related to the angle θi(v) through Snell's law for the incident and refractive media, sinθ(v) = n·sinθi(v). For the two images represented by $I_{\max }^{col}({\textbf v})$ and $I_{\min }^{col}({\textbf v})$, the degree of polarization p(v) can be written as
$$\begin{array}{l} p({\textbf v} )= ({I_{\max }^{col}({\textbf v} )- I_{\min }^{col}({\textbf v} )} )/({I_{\max }^{col}({\textbf v} )+ I_{\min }^{col}({\textbf v} )} )\\ \;\;\;\;\;\;\;\; = \frac{{{{(n - 1/n)}^2}{{\sin }^2}\theta ({\textbf v} )}}{{2 + 2{n^2} - {{(n + 1/n)}^2}{{({\sin \theta ({\textbf v} )} )}^2} + 4\cos \theta ({\textbf v} )\sqrt {{n^2} - {{({\sin \theta ({\textbf v} )} )}^2}} }}\;, \end{array}$$
where n is the refractive index, taken in the range of 1.4–1.5, which is typical of natural dielectric surfaces [17,28]. For diffuse reflected light, the ambiguity of the zenith angle in this solution process is effectively avoided, and a unique correspondence between the degree of polarization and the zenith angle is achieved [12]. Having discussed how to obtain the two parameters of the normal vector, we explain the high-precision surface 3D reconstruction process in the next section.
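Since the right-hand side of Eq. (4) increases monotonically with θ on [0, π/2) for diffuse reflection, the inversion from p(v) to θ(v) can be implemented with a dense lookup table and interpolation. The sketch below is one such implementation under our own assumptions (n = 1.5, grid density, function names); it is not the authors' code.

```python
# Hedged sketch: invert the diffuse DOP of Eq. (4) into a zenith-angle map.
import numpy as np

def dop_diffuse(theta, n=1.5):
    """Degree of polarization of diffuse reflection, Eq. (4)."""
    s, c = np.sin(theta), np.cos(theta)
    num = (n - 1.0 / n) ** 2 * s ** 2
    den = (2 + 2 * n ** 2 - (n + 1.0 / n) ** 2 * s ** 2
           + 4 * c * np.sqrt(n ** 2 - s ** 2))
    return num / den

def zenith_from_dop(p_map, n=1.5, samples=4096):
    """p(v) -> theta(v) via a monotonic lookup table on [0, pi/2)."""
    theta_grid = np.linspace(0.0, np.pi / 2 - 1e-3, samples)
    p_grid = dop_diffuse(theta_grid, n)          # strictly increasing in theta
    return np.interp(p_map, p_grid, theta_grid)  # per-pixel zenith angle
```

The monotonicity is what gives the unique correspondence noted above; degrees of polarization outside the physically reachable range are clamped to the nearest grid end by np.interp.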

3. High-precision surface 3D reconstruction

Figure 3 illustrates the 3D reconstruction procedure in detail. Based on the polarization information of the NIR-reflected light, we obtain two parameters, namely the zenith angle θ(v) and azimuth angle φ(v), that constrain the normal-vector direction. However, the ambiguity of the azimuth angle, expressed in Eq. (2), affects the accuracy of the normal vector. Therefore, it is necessary to correct the azimuth angle.

Fig. 3. Schematic of the developed NIR 3D imaging model.

Resolving the azimuth angle ambiguity: Solving the azimuth angle problem in the case of a colored target is challenging without prior information. Here, we use the gradient variation field obtained from the NIR intensity image as a reference gradient field to accurately determine the surface normal. The gradient can be expressed as GN-II = {gN-II(x), gN-II(y)}. The parameters obtained from the NIR intensity information are denoted by the superscript N-II. To obtain the reference gradient field of the target surface, the target intensity information I(v) is expressed as

$$\begin{array}{l} I({\textbf v} )= R({{g^{\textrm{N - II}}}(x ),{g^{\textrm{N - II}}}(y )} )\\ \;\;\;\;\;\;\; = \frac{{1 + {g^{\textrm{N - II}}}(x )g_s^{\textrm{N - II}}(x )+ {g^{\textrm{N - II}}}(y )g_s^{\textrm{N - II}}(y )}}{{\sqrt {1 + {{({{g^{\textrm{N - II}}}(x )} )}^2} + {{({{g^{\textrm{N - II}}}(y )} )}^2}} + \sqrt {1 + {{({g_s^{\textrm{N - II}}(x )} )}^2} + {{({g_s^{\textrm{N - II}}(y )} )}^2}} }}\;\;, \end{array}$$
where gN-II(x) = ∂depthN-II/∂x, gN-II(y) = ∂depthN-II/∂y, and gN-IIs(x) and gN-IIs(y) are constants associated with the imaging system [29]. Applying a Taylor series expansion of the reflectance function R(·) about gN-II(x) = gN-II0(x) and gN-II(y) = gN-II0(y) and ignoring the higher-order terms, Eq. (5) can be written as
$$\begin{array}{l} I({\textbf v}) = R({{g^{\textrm{N - II}}}_0(x ),{g^{\textrm{N - II}}}_0(y )} )+ ({{g^{\textrm{N - II}}}(x )- {g^{\textrm{N - II}}}_0(x )} )\frac{{\partial R}}{{\partial {g^{\textrm{N - II}}}(x )}}({{g^{\textrm{N - II}}}_0(x ),{g^{\textrm{N - II}}}_0(y )} )\\ + ({{g^{\textrm{N - II}}}(y )- {g^{\textrm{N - II}}}_0(y )} )\frac{{\partial R}}{{\partial {g^{\textrm{N - II}}}(y )}}({{g^{\textrm{N - II}}}_0(x ),{g^{\textrm{N - II}}}_0(y )} )\;. \end{array}$$

We take the Fourier transform of both sides of Eq. (6). On the right-hand side of Eq. (6), the first term is a direct-current (DC) term and can therefore be dropped [30]. The reference gradient field is then calculated as follows:

$$\begin{array}{{l}} {{g^{\textrm{N - II}}}(x )\leftrightarrow {F_{depth}}({{\omega_1},{\omega_2}} )({ - i{\omega_1}} )}\\ {{g^{\textrm{N - II}}}(y )\leftrightarrow {F_{depth}}({{\omega_1},{\omega_2}} )({ - i{\omega_2}} )} \end{array}\;\;,$$
where Fdepth is the Fourier transform of the reference height of the surface, which can be calculated using Eq. (8) because the reference height depthN-II is a linear function of the NIR intensity I(v).
$${F_{\textrm{N - II}}} = \eta ({{F_{depth}}({{\omega_1},{\omega_2}} )({ - i{\omega_1}} )g_s^{\textrm{N - II}}(x )+ {F_{depth}}({{\omega_1},{\omega_2}} )({ - i{\omega_2}} )g_s^{\textrm{N - II}}(y )\;} )\;,$$
where FN-II is the Fourier transform of the NIR intensity I(v), and η is a constant ranging from 0 to 1. Using Eqs. (5)–(8), we can obtain the reference gradient field GN-II. The objective of our method is to find an operator Λ that relates Gpolar, the gradient field obtained by polarization, to GN-II. In other words, Gpolar is corrected using the reference gradient field GN-II derived from the NIR intensity information. The binary operator is obtained numerically as follows:
$$\hat{\Lambda } = \mathop {\arg \min }\limits_\Lambda ||{{\textbf W}({{\rho_0}} )\cdot {G^{\textrm{N - II}}} - \Lambda \cdot ({{G^{\textrm{polar}}}} )} ||_2^2,\quad \Lambda \in \{ 0,1\} \;,$$
where Gpolar = {gpolar(x), gpolar(y)}. As shown in Fig. 2, based on the relationship between the two polarization parameters θ and φpolar and the normal vector, the components of the gradient field of the normal in the x and y directions are gpolar(x) = tanθ·cosφpolar and gpolar(y) = tanθ·sinφpolar. W(ρ0) is a matrix of the same size as the target image that contains the weights of the different colored patches. We present an alternative way to obtain W(ρ0) for a target with spatially varying reflectance. According to Eq. (1), when the rotation angle γj is fixed, the intensity of the light received by the NIR detector can be expressed as in Eq. (10).
$${I_{col}}({\textbf v} )= {\textbf L}({\textbf v} )\cdot {\textbf N}({\textbf v} )\;{\rho _{col}}({\textbf v} )\;{I_s}\;,$$
where N is the normal vector of the surface, L expresses the normalized light-direction vector, and Is is the intensity of the incoming NIR light. Because L(vN(v) = cosθ(v), we can expand Eq. (10), such that
$${I_{col}}({\textbf v} )= {\rho _{col}}({\textbf v} )\;{I_s}\cos \theta ({\textbf v} )\;.$$

A standard reflectance ρ0 is selected from among the surface colors to eliminate the difference in the intensities of light reflected by the different colors and to satisfy the assumption underlying the azimuth-angle disambiguation. The weight relating the reflected light intensities of the different colored patches can be written as

$$w({\textbf v} )= {{{{{I_{col}}({\textbf v} )} / {{I_0}}} = \tau ({\textbf v} )\cdot {\rho _{col}}({\textbf v} )} / {{\rho _0}}}\;,$$
where I0 is the intensity of the NIR-reflected light in the standard area with reflectance ρ0, and τ(v) = cosθcol(v)/cosθ0. When τ(v) = 1, w(v) directly measures the ratio ρcol(v)/ρ0, so the difference between the reflectance of a colored area and the standard reflectance can be corrected. The selection of the standard area can be random; here, we selected the color occupying the largest area of the target surface as the standard area. In Eq. (9), once the influence of spatially varying reflectance has been removed from the reference gradient field GN-II, it can be used to correct the gradient field Gpolar. The elimination of the ambiguity of φpolar(v) can be written as
$${\varphi ^{\textrm{cor}}}({\textbf v} )= {\varphi ^{\textrm{polar}}}({\textbf v} )+ (1 - \hat{\Lambda }({\textbf v} )) \cdot \pi \;\;.$$
For φcor(v), there are two cases. When $\hat{\Lambda }$ equals 1, the gradient fields Gpolar and GN-II have the same sign, and φcor(v) = φpolar(v). When $\hat{\Lambda }$ equals 0, the gradient fields have opposite signs, and the azimuth angle obtained from the polarization cues is corrected by replacing φcor(v) = φpolar(v) with φcor(v) = φpolar(v) + π. This corrects the azimuth-angle ambiguity of the colored target; a simplified per-pixel version of this correction is sketched below.
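The following is a minimal sketch of this disambiguation, assuming the binary minimization of Eq. (9) can be decided pixel-wise by a sign-agreement test between the reflectance-weighted reference gradients and the polarization gradients; the array names and the per-pixel test itself are our simplifying assumptions, not the authors' implementation.

```python
# Simplified sketch of Eqs. (9), (12), and (13): flip the polarization azimuth
# by pi wherever it disagrees with the reflectance-weighted reference field.
import numpy as np

def correct_azimuth(phi_polar, theta, gx_ref, gy_ref, weight):
    """phi_polar, theta: polarization angle maps; gx_ref, gy_ref: G^{N-II}; weight: W(rho_0)."""
    gx_pol = np.tan(theta) * np.cos(phi_polar)   # g^polar(x)
    gy_pol = np.tan(theta) * np.sin(phi_polar)   # g^polar(y)
    # Per-pixel stand-in for Eq. (9): Lambda = 1 when the weighted reference
    # field and the polarization field point the same way, else 0.
    agree = weight * (gx_ref * gx_pol + gy_ref * gy_pol)
    lam = (agree >= 0).astype(float)
    return (phi_polar + (1.0 - lam) * np.pi) % (2 * np.pi)   # Eq. (13)
```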

For the colored corner scene [31,32] shown in Fig. 4(a), a nonuniform intensity image of the target is acquired, as shown in Fig. 4(b), using a NIR camera. Figures 4(c) and 4(d) present the directly calculated gradient field components in the x and y directions; Figs. 4(e) and 4(f) show the corresponding reference gradient field components. The difference in the intensity distributions demonstrates how the correction improves the gradient field. Figure 4(k) plots the intensity variations along the vertical red and blue dotted lines shown in Figs. 4(c) and 4(d) versus the vertical pixel position. At the borders between different colors, the values of the gradient field components fluctuate significantly in both the x and y directions, which is disadvantageous to the polarization gradient correction. Nevertheless, the reference gradient fields shown in Figs. 4(e) and 4(f) remain stable along the surface in the vertical direction, which is reasonable considering its geometry, as shown in Fig. 4(l). In comparison, the reference data comply much more closely with the gradient variation along the target surface, indicating that accurate gradient field information can be obtained from the colored shape. This example demonstrates that the proposed method can handle nonuniform reflectance, thus ensuring an accurate gradient field recovery.

Fig. 4. Gradient field illustrations of a commonly used benchmark scene: (a) Experiment target; (b) NIR intensity distribution; (c) and (d) Intensity gradient fields before correction in the x and y directions; (e) and (f) Intensity gradient fields after correction using our method in the x and y directions; (g) and (h) Normal gradient fields by polarization in the x and y directions; (i) and (j) Normal gradient fields after correction by our method in the x and y directions; (k) and (l) Variations in the values of Column 300 in (c) and (d), and in (e) and (f), respectively; (m) Variations in the values of Row 550 in (g), (e), and (i); (n) Variations in the values of Column 700 in (h), (f), and (j).

Figures 4(g) and 4(h) show the gradient fields of the normal obtained from the polarization-retrieved shape in the x and y directions, respectively; Figs. 4(e) and 4(f) again serve as the reference gradient fields obtained from the NIR intensity. Applying the proposed correction converts the ambiguous gradient fields in Figs. 4(g) and 4(h) into the correctly flipped gradient fields of the normals in Figs. 4(i) and 4(j). Figure 4(m) plots the values along the horizontal lines in Figs. 4(g), 4(e), and 4(i) versus the horizontal pixel position. The blue dotted line represents the corrected gradient field in the x direction, with similar values on both sides of the corner and a rapid change at the corner, which is reasonable considering the geometry. In contrast, the gradient field from the raw polarization data in the x direction stays at a value of approximately −0.6. The corrected data thus vary much more realistically with the corner scene’s surface than the raw polarization data. More importantly, the proposed correction is robust: the corrected data are unaffected by the distortion of the reference gradient field near pixel 550 obtained from the NIR intensity information. Similarly, Fig. 4(n) plots the raw polarization data, NIR data, and corrected data in the y direction from Figs. 4(h), 4(f), and 4(j). Compared with the raw polarization data, the corrected data show a more stable gradient trend in the y direction, consistent with the actual situation. This shows that the method can solve the π-ambiguity and ensure a high accuracy of the recovered 3D shape.

Shape from polarization: By utilizing the parameters of the normal vector, the surface can be recovered from a single polarization image after the parameters of a surface with nonuniform reflectance are corrected using the above method. Based on the accurate polarization parameters (θ(v), φcor(v)), the final shape can be expressed as

$$z({\textbf v}) = {F^{ - 1}}\left\{ { - \frac{j}{{2\pi }}\frac{{\frac{u}{M}F\{{\tan \theta ({\textbf v})\cos {\varphi^{cor}}({\textbf v})} \}+ \frac{v}{N}F\{{\tan \theta ({\textbf v})\sin {\varphi^{cor}}({\textbf v})} \}}}{{{{\left( {\frac{u}{M}} \right)}^2} + {{\left( {\frac{v}{N}} \right)}^2}}}} \right\}\;,$$
where F{·} and F−1{·} are the Fourier and inverse Fourier transforms, respectively, and M and N indicate the M × N pixels in the target area. Thus, from Eq. (8), when M and N remain unchanged, the results of Gpolar and GN-II remain the same. This is a valuable clue for long-distance 3D imaging with high accuracy requirements. To this end, Eq. (14) can be rewritten in terms of the focal length f and target distance D, such that
$$z({\textbf v}) = {F^{ - 1}}\left\{ { - \frac{j}{{2\pi }}\frac{{\frac{{uD}}{{\alpha f}}F\{{\tan \theta ({\textbf v})\cos {\varphi^{cor}}({\textbf v})} \}+ \frac{{vD}}{{\beta f}}F\{{\tan \theta ({\textbf v})\sin {\varphi^{cor}}({\textbf v})} \}}}{{{{\left( {\frac{{uD}}{{\alpha f}}} \right)}^2} + {{\left( {\frac{{vD}}{{\beta f}}} \right)}^2}}}} \right\}\;.$$

Here, α = L/μ and β = W/μ are both constants, which represent the ratios of the target length L and the target width W to the pixel size μ, respectively.
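For reference, Eq. (14) amounts to integrating the corrected gradient field with a single Fourier-domain division, in the spirit of the Frankot–Chellappa method. The sketch below shows one possible implementation; the frequency scaling via np.fft.fftfreq, the DC-term handling, and the function name are our assumptions rather than the authors' exact code.

```python
# Hedged sketch of Eq. (14): one-FFT integration of the corrected gradients.
import numpy as np

def depth_from_gradients(theta, phi_cor):
    """theta, phi_cor: (M, N) zenith and corrected azimuth maps -> depth z(v)."""
    gx = np.tan(theta) * np.cos(phi_cor)
    gy = np.tan(theta) * np.sin(phi_cor)
    M, N = gx.shape
    wu = np.fft.fftfreq(N)[None, :]              # normalized frequencies, u/M-type term
    wv = np.fft.fftfreq(M)[:, None]              # normalized frequencies, v/N-type term
    denom = wu ** 2 + wv ** 2
    denom[0, 0] = 1.0                            # guard the division at DC
    Z = -1j / (2 * np.pi) * (wu * np.fft.fft2(gx) + wv * np.fft.fft2(gy)) / denom
    Z[0, 0] = 0.0                                # depth is recovered up to a global offset
    return np.real(np.fft.ifft2(Z))
```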

Our method can perform the 3D recovery of the target with an accuracy of approximately 1 mm (the pixel size of the camera is 5.5 μm × 5.5 μm) even when the target is far from the detector, as long as the D/f ratio is kept unchanged by adjusting the focal length f.
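A quick numeric check of this claim, using the 5.5 μm pixel pitch quoted above: the object-space footprint of one pixel is approximately μ·D/f, so fixing D/f fixes the sampling of the target. The distances and focal lengths below are illustrative values, not the experimental settings.

```python
# Pixel footprint mu*D/f stays constant when D/f is held fixed (here D/f = 20).
mu = 5.5e-6                                           # pixel pitch [m], from the text
for D, f in [(1.0, 0.05), (2.0, 0.10), (4.0, 0.20)]:  # illustrative D [m] and f [m]
    print(f"D = {D:.1f} m, f = {f * 1e3:.0f} mm -> footprint = {mu * D / f * 1e3:.3f} mm")
# All three lines print 0.110 mm: lateral sampling, and with it the recoverable
# detail, is maintained as the object distance grows.
```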

4. Results and analyses

We conducted two experiments for target recovery. First, a colored plaster target (10 cm × 10 cm × 10 cm) was imaged under natural lighting and local highlight conditions to verify the robustness of the proposed method in applications involving complex conditions. Second, we show that a highly precise result can be maintained under varying imaging distances. A Basler acA2040-90umNIR camera (resolution 2048 × 2048, pixel size 5.5 μm × 5.5 μm, frame rate 90 fps, and spectral response 400–1000 nm) was used to capture the images in the NIR band. An 850 nm NIR light source (Thorlabs M850L3) and a cut-off filter (Zolix NL850.8) were used to ensure that the camera captured NIR light. Moreover, four different polarization images were obtained through a Thorlabs SM2V15 polarizer placed in front of the camera. To demonstrate through a quantitative comparison that the recovery accuracy of the proposed method can be maintained under different object distances, suitable error metrics are computed [33,34]:

  • Average relative error (rel): $\frac{1}{{MN}}\sum_{\textbf v}\frac{{|{z({\textbf v}) - {z^{\textrm{other}}}({\textbf v})}|}}{{z({\textbf v})}},$
  • Root-mean-squared error (rms): $\sqrt {\frac{1}{{MN}}\sum_{\textbf v}{{({z({\textbf v}) - {z^{\textrm{other}}}({\textbf v})})}^2}},$
  • Average log10 error (log10): $\frac{1}{{MN}}\sum_{\textbf v}|{{{\log }_{10}}z({\textbf v}) - {{\log }_{10}}{z^{\textrm{other}}}({\textbf v})}|,$
where z(v) represents the 3D result recovered from the set of NIR polarization images closest to the camera, and zother(v) represents the 3D result at another object distance; a direct NumPy transcription of these metrics is sketched below.
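In the transcription, the function name and the small epsilon guard against division by zero are our assumptions.

```python
# Direct transcription of the rel, rms, and log10 error metrics above.
import numpy as np

def error_metrics(z, z_other, eps=1e-12):
    """z, z_other: depth maps of identical shape (M, N)."""
    rel = np.mean(np.abs(z - z_other) / (z + eps))              # average relative error
    rms = np.sqrt(np.mean((z - z_other) ** 2))                  # root-mean-squared error
    log10 = np.mean(np.abs(np.log10(np.abs(z) + eps)
                           - np.log10(np.abs(z_other) + eps)))  # average log10 error
    return rel, rms, log10
```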

4.1. Target surface recovery under complex conditions

Robustness to nonuniform reflectance: Figure 5(a) illustrates the experimental condition and setup. The first selected target was a common plaster surface with a corner scene, making it ideal for demonstrating 3D surface reconstruction. Two polarization images were required, one maximizing the reflected light and the other minimizing it, as shown in Figs. 5(b) and 5(c). The difference between Imax and Imin confirms the polarization signature of the surface-reflected light, on which the proposed model is built. By calculating the azimuth and zenith angles using Eqs. (1) and (4), respectively, we can constrain the normal vector using the two known variables. The depth information of the target is reconstructed by integrating the normal vector of every pixel on the surface. Figures 5(d) and 5(f) illustrate the reconstruction of the 3D shape from the polarization data alone. Unfortunately, the shape cannot be accurately recovered by merely employing polarization data. The model resolves the ambiguous normal by introducing the gradient variation field into the normal solution model, as described in Section 3: it estimates the real gradient variation of the target surface by analyzing the intensity distribution on the surface, and the true azimuth angle is obtained using the reference gradient field to correct the gradient field retrieved from the polarization data. Figures 5(e) and 5(g) show examples of the final reconstructed 3D surface, which exhibit the correct shape.

Fig. 5. (a) NIR 3D polarization imaging system; (b) Imax; (c) Imin; (d) Shape obtained from polarization data; (e) Proposed method; (f) Relative height value of Fig. 5(d); (g) Relative height value of Fig. 5(e).

To visually compare the variation trends of the recovered surface before and after the correction, we extracted the heights along the lines in Figs. 5(f) and 5(g). Figure 6 shows the relative height values before (blue curve) and after (red curve) the correction. The blue profile maintains a fixed slope and fails to distinguish the difference in gradient between the two sides of the corner, which does not match the actual shape of the target. In comparison, the slope of the red profile reverses at approximately pixel 600 in the middle of the target, consistent with the real situation, which clearly demonstrates the effectiveness of the proposed method.

Fig. 6. Contours of lines before and after correction for performance comparison.

Assuming target surface continuity, the proposed method can recover the 3D shape of a surface exhibiting nonuniform reflectance. Figure 7(a) shows the polarization intensity image captured directly by the camera; without correction, the regions with different intensity values are reconstructed as grooves, as shown in Fig. 7(c). This results from the limitation of the uniform-reflectance assumption, under which a higher intensity corresponds to a shorter distance between the target and the camera. In comparison, the proposed method recovers the colored regions more accurately. As shown in Figs. 7(b) and 7(d), the intensity information is corrected to resemble the reflected-light distribution of a single color, and the surface is reconstructed without any unexpected bumps or hollows. Figure 7(g) shows the variations in the values of the 350th column in Figs. 7(c) and 7(d), demonstrating that the proposed method yields significantly improved results. Figures 7(e) and 7(f) further compare the shape reconstructions of the colored regions. The proposed method recovers the smooth regions more accurately than the method without reflectance correction.

Fig. 7. Three-dimensional results for a target without varying the reflectance (top) and with varying the reflectance (bottom): (a) Input intensity information; (b) After correcting for the reflectance; (c) and (d) 3D shapes of (a) and (b); (e) and (f) Relative height values of (c) and (d); (g) Contours of the 350th column shown in (c) and (d).

Moreover, to further demonstrate the performance of the proposed method, we imaged a colored cartoon plaster target (approximately 18 cm × 9.5 cm × 7.5 cm) under outdoor conditions. Unlike the cube-shaped colored plaster target, the colored cartoon target is much more complex in terms of details and has colors at different depths. Figures 8(a) and 8(c) illustrate the reconstruction of the 3D surface without correcting for the reflectance, where the structural information in the reflectance-change region is distorted. Figures 8(b) and 8(d) show the 3D surface reconstructed by the proposed method, which exhibits the correct structural detail and shape. Figure 8(e) shows the relative height variations in the target arm region shown in Figs. 8(a1) and 8(b1); the 3D result affected by nonuniform reflectance shows a bump at the reflectance variation. In comparison, the green line in Fig. 8(e) clearly demonstrates an improvement, following the trend of the true surface shape without the unexpected bump.

Fig. 8. Three-dimensional results for a colored cartoon plaster target: (a) 3D-recovered result without correcting for the reflectance; (b) 3D-recovered result using our method; (c) and (d) Relative height values of (a) and (b); (a1) and (b1) Approximately 10× magnification of the 3D shape in the region of the target arm; (e) Height variations in the pixels of (a1) and (b1).

Robustness to lighting conditions: To evaluate our method in practical applications, we experimented with three different lighting environments. Figure 9 presents several results obtained under complex illumination. For a clear demonstration, we recovered the height value of each pixel of the same target in each scene from the directly captured Imax and Imin images and the final reconstructed 3D shape. Compared with the 3D results of the proposed method, the result constructed using SFS shows a large ridge, and the 3D surface is rough overall, as shown in Fig. 9(d1). As shown in Figs. 9(a1), 9(b1), and 9(c1), the surfaces are reconstructed without any unexpected bumps, and the results follow the same trend as the real target across the different scenes. In comparison, the proposed method recovers the target shape accurately. Figures 9(a2), 9(b2), and 9(c2) further show the shape reconstruction results as depth maps. Under different lighting environments, these depth maps have largely similar values at the same points on the surfaces. Hence, to demonstrate that the proposed method performs effectively for 3D reconstruction under different complex illumination environments, a numerical analysis was performed using the measured height values. As shown in Fig. 9(e), the height curves of the reconstruction results under the different scenes have similar trends and largely coincide, proving the robustness of our method. This technique is expected to support 3D imaging applications in complex environments.

Fig. 9. NIR 3D polarization imaging implemented in a range of lighting conditions (real experiment): (a) Polarization 3D-recovered results under indoor conditions; (b) Polarization 3D-recovered results obtained outdoors on a partly sunny day; (c) Polarization 3D-recovered results obtained under disco lighting (simulation with LED lighting); (d) 3D-recovered result using intensity information directly in an indoor environment; (e) Height statistics of the reconstruction results shown in (a)-(c) for Row 513.

4.2. High-precision recovery under increasing imaging distance

To further demonstrate the stable performance of the proposed method, we imaged a paper cup (bottom, top, and height of approximately 5, 7, and 8 cm, respectively) under NIR source illumination; this object has grooves with a feature depth of <1 mm and suitable high-frequency information for statistics. Unlike conventional 3D methods, our method maintains the same recovery accuracy when targets have the same spatial resolution, as expressed in Eq. (15). A sliding rail was used to extend the distance of the target from the camera, and the other equipment was the same as that described in Section 4.1.

The experiments were conducted at four different object distances. The focal length f was adjusted to keep the ratio D/f as constant as possible with increasing object distance D, to verify that our method can effectively maintain the reconstruction resolution. In this experiment, we obtained a ratio of approximately 1.1 based on the polarization images from the first set of acquisitions (nearest point to the camera). Thereafter, the target was moved away from the camera, and three positions were randomly selected for image acquisition; the ratio was maintained throughout by adjusting the focal length f. For targets with considerable detail, such as that in Fig. 10(a), it is essential to preserve the details as the object distance increases. Figures 10(b)–10(e) show the paper cup reconstructed by the proposed method at the four object distances. The features visualized in the 3D renderings fit well with the original appearance, and some fine features can be easily identified. To quantify the consistency of the heights of the four recovered results, a horizontal line plot for each 3D result in Figs. 10(b)–10(e) was generated by plotting the relative height value versus the horizontal position at a given vertical position. At vertical pixel 320, the horizontal lines pass through the middle of the cup, as indicated by the differently colored lines. Each relative height value of the curve in Fig. 10(b) was then used as a standard, and the height values at the corresponding positions in Figs. 10(c)–10(e) were subtracted from it for comparison. This provides a simple background and features that are easy to observe. Figure 10(f) illustrates the error variation in the relative height. The error curves under the four conditions have nearly identical values close to zero, in good agreement with the expected behavior. Figures 10(b1)–10(e1) further compare the shape reconstructions of the grooved regions. To prevent the curves from overlapping, the relative height values of the reconstructed results in Figs. 10(c1)–10(e1) were offset by subtracting 6, 12, and 18, respectively, allowing a clear comparison of the groove detail reconstruction at the different object distances. We measured the real height and width of the grooves; the average values were 0.3 mm and 1.0 mm, respectively. The details are effectively reconstructed by the proposed method, and Fig. 10(g) illustrates that the ability to reconstruct them is well maintained at increasing object distances.

Fig. 10. (a) and (h) Captured images of paper cups and statue faces under different object distances; (b)-(e) and (i)-(l) Recovered 3D shape at different object distances; (f) Relative height error in the pixels of Row 320 in (c), (d), and (e) with (b); (g) and (n) Relative height variations in the pixels of (b1)-(e1) and (i1)-(l1); (m) Relative height error in the pixels of Row 450 in (j), (k), and (l) with (i); (b1)-(e1) and (i1)-(l1) Approximately 10× magnification of the 3D shape in the region with grooves and mouths.

A plaster statue face (approximately 12 cm × 8 cm × 7.5 cm) was then tested for 3D reconstruction accuracy at four object distances with a fixed D/f ratio. In this experiment, the ratio was kept at approximately 3.05 based on the first set of acquisitions. Figure 10(h) shows the target images, which contain complex structural information. Figures 10(i)–10(l) show the reconstructed 3D shapes at the four distances; clearly, the reconstructed shape fits well with the target appearance, and the resolution of the structural details is consistent. An approach similar to that of Fig. 10(f) was taken to compare the height values at the different target distances; the results, shown in Fig. 10(m), converge to zero. The largest relative height error is approximately 25, occurring on both sides of the nose, which is negligible compared with the overall reconstructed relative height of the target. The shape reconstructions of the mouth region shown in Figs. 10(i1)–(l1) are further compared; the measured average length and height are 1.1 cm and 3 mm, respectively. These curves again have the same values, as expected and as shown in Fig. 10(n). Once again, it is demonstrated that when the ratio is fixed, the ability to reconstruct the details is maintained.

To quantitatively compare the 3D-recovered shapes of the paper cup at the different object distances, the three error metrics were evaluated. The set of reconstructions closest to the camera was used as the standard against which the height values of the other three sets of 3D results were compared. Tables 1 and 2 list the quantitative errors of the two regions at the four object distances. By the definitions of these indices, lower values indicate higher similarity between two 3D-recovered shapes. In Table 1, the values of the three error metrics rel, log10, and rms for the middle region are 0.0065, 0.0028, and 0.0750, respectively. These results are close to zero, numerically confirming the similarity of the reconstructions. The same pattern appears in Table 2: our method consistently maintains high accuracy, even for high-frequency details of <1 mm, which demonstrates the robustness and effectiveness of the proposed system.

Table 1. Error and accuracy metrics for the 3D-recovered results in the middle region

Table 2. Error and accuracy metrics for the 3D-recovered results in the groove region

From Tables 1 and 2, we find that the average error values in the grooved region are slightly higher than those in the middle region. This can be attributed to the groove region producing more volume reflection than the smooth surface, which may introduce some erroneous information. Nevertheless, statistically, there is still a high degree of consistency between the reconstructed results at different object distances in the groove region. In summary, these indices show good stability of the recovered 3D shapes across the four object distances, proving that a highly accurate 3D reconstruction can be achieved even at longer distances.

We further reconstructed the target 3D shape to compare the variation in height as the ratio increases (ratios of 1.1, 2.2, 3.3, and 4.4). Figures 11(a) and 11(b) compare the four estimated and recovered groove heights and the measured and recovered average heights at the same position. In Fig. 11(a), the height values of the four grooves were calculated and compared with the estimated results, showing similar trends with a maximum error of only 0.03 mm. Moreover, the groove height error increases with increasing ratio; the maximum difference is approximately fourfold, as shown in Fig. 11(b). This again confirms the relationship between the ratio and the accuracy expressed in Eq. (15).

Fig. 11. Accuracy comparison under different ratios of D/f: (a) Between the estimated and recovered groove height; (b) Between the measured and recovered average groove height.

5. Conclusions

In this study, we developed a highly accurate monocular 3D target reconstruction method based on NIR polarization information for targets exhibiting nonuniform reflectance. The method makes two major contributions: it avoids the influence of varying reflectance, and it ensures a consistently high-quality reconstructed shape. Introducing weight constraints into the reference gradient field, by exploiting the intensity differences between differently colored areas in the NIR band, accurately unifies the reflectance of the colored areas into a standard reflectance. The unified NIR reference gradient field then provides a globally correct reference for solving the ambiguity of the normal azimuth, enabling the recovery of the 3D shape of colored targets. Moreover, we determined the appropriate ratio of the object distance to the focal length, which is a strong cue for maintaining the 3D resolution of the target shape retrieved from polarization data. The experimental results showed that 3D shapes with micron-level depth resolution can be recovered and that the application scope of 3D polarization imaging can be extended to complex illumination conditions and long detection distances.

Funding

National Natural Science Foundation of China (62005203, 62075175); Key Laboratory of Optical Engineering Foundation of Chinese Academy of Sciences.

Acknowledgments

We would like to thank Yudong Cai and Mingyu Yan from Xidian University for their help in obtaining the experimental data. We also thank the reviewers for their comments on the manuscript.

Disclosures

The authors declare no conflicts of interest.

References

1. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]

2. K. Morimoto, A. Ardelean, M. L. Wu, A. C. Ulku, I. M. Antolovic, C. Bruschini, and E. Charbon, “Megapixel time-gated SPAD image sensor for 2D and 3D imaging application,” Optica 7(4), 346–354 (2020). [CrossRef]  

3. G. Krishnan, R. Joshi, T. O’Connor, F. Pla, and B. Javidi, “Human gesture recognition under degraded environments using 3D-integral imaging and deep learning,” Opt. Express 28(13), 19711–19725 (2020). [CrossRef]  

4. F. Q. Li, H. J. Chen, A. Pediredla, C. Yeh, K. He, A. Veeraraghavan, and O. Cossairt, “CS-ToF: high-resolution compressive time-of-flight imaging,” Opt. Express 25(25), 31096–31110 (2017). [CrossRef]  

5. Z. Song, Z. Song, J. Zhao, and F. Gu, “Micrometer-level 3D measurement techniques in complex scenes based on stripe-structured light and photometric stereo,” Opt. Express 28(22), 32978–33001 (2020). [CrossRef]  

6. G. A. Howland, P. B. Dixon, and J. C. Howell, “Photon-counting compressive sensing laser radar for 3D imaging,” Appl. Opt. 50(31), 5917–5920 (2011). [CrossRef]  

7. Z. Cui, J. Gu, B. Shi, P. Tan, and J. Kautz, “Polarimetric multi-view stereo,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 1558–1567.

8. D. Miyazaki, T. Shigetomi, M. Baba, R. Furukawa, S. Hiura, and N. Asada, “Polarization-based surface normal estimation of black specular objects from multiple viewpoints,” in 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission (IEEE, 2012), pp. 104–111.

9. G. A. Atkinson and E. R. Hancock, “Recovery of surface orientation from diffuse polarization,” IEEE Trans. on Image Process. 15(6), 1653–1664 (2006). [CrossRef]  

10. A. Kadambi, V. Taamazyan, B. Shi, and R. Raskar, “Depth sensing using geometrically constrained polarization normal,” Int J Comput Vis 125(1-3), 34–51 (2017). [CrossRef]  

11. D. Miyazaki, T. Shigetomi, M. Baba, R. Furukawa, S. Hiura, and N. Asada, “Surface normal estimation of black specular objects from multiview polarization images,” Opt. Eng. 56(4), 041303 (2016). [CrossRef]  

12. G. A. Atkinson and E. R. Hancock, “Shape Estimation Using Polarization and Shading from Two Views,” IEEE Trans. Pattern Anal. Machine Intell. 29(11), 2001–2017 (2007). [CrossRef]  

13. D. Miyazaki, R. T. Tan, K. Hara, and K. Ikeuchi, “Polarization-based inverse rendering from a single view,” in Computer Vision (IEEE, 2003), pp. 982.

14. C. Wu, B. Wilburn, Y. Matsushita, and C. Theobalt, “High-quality shape from multi-view stereo and shading under general illumination,” in CVPR (IEEE, 2011), pp. 969–976.

15. F. Langguth, K. Sunkavalli, S. Hadap, and M. Goesele, “Shading-aware multi-view stereo,” in European Conference on Computer Vision (Springer, Cham, 2016), pp. 469–485.

16. G. Oxholm and K. Nishino, “Multiview shape and reflectance from natural illumination,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 2155–2162.

17. G. A. Atkinson and E. R. Hancock, “Surface Reconstruction Using Polarization and Photometric Stereo,” in International conference on computer analysis of images and patterns (Springer, Berlin, Heidelberg, 2007), pp. 466–473.

18. W. A. Smith, R. Ramamoorthi, and S. Tozza, “Linear depth estimation from an uncalibrated, monocular polarisation image,” in European Conference on Computer Vision (Springer, Cham, 2016), pp. 109–125.

19. D. Zhao, Y. Asano, L. Gu, I. Sato, and H. Zhou, “City-Scale Distance Sensing via Bispectral Light Extinction in Bad Weather,” Remote Sens. 12(9), 1401 (2020). [CrossRef]  

20. D. A. Burns and E. W. Ciurczak, Handbook of near-infrared analysis (CRC press, 2007).

21. X. Wang, T. Hu, D. Li, K. Guo, J. Gao, and Z. Guo, “Performances of Polarization-Retrieve Imaging in Stratified Dispersion Media,” Remote Sens. 12(18), 2895 (2020). [CrossRef]  

22. L. B. Wolff and T. E. Boult, “Constraining object features using a polarization reflectance model,” IEEE Trans. Pattern Anal. Machine Intell. 13(7), 635–657 (1991). [CrossRef]  

23. D. Li, K. Guo, Y. Sun, X. Bi, J. Gao, and Z. Guo, “Depolarization Characteristics of Different Reflective Interfaces Indicated by Indices of Polarimetric Purity (IPPs),” Sensors 21(4), 1221 (2021). [CrossRef]  

24. F. Shen, M. Zhang, K. Guo, H. Zhou, Z. Peng, Y. Cui, F. Wang, J. Gao, and Z. Guo, “The depolarization performances of scattering systems based on the Indices of Polarimetric Purity (IPPs),” Opt. Express 27(20), 28337–28349 (2019). [CrossRef]

25. B. Javidi, E. Tajahuerce, and P. Andres, Multi-Dimensional Imaging, chapter “Passive Polarimetric Imaging” (John Wiley & Sons, Ltd., 2014).

26. H. D. Tagare and R. J. P. deFigueuredo, “A theory of photometric stereo for a class of diffuse non-lambertian surface,” IEEE Trans. Pattern Anal. Machine Intell. 13(2), 133–152 (1991). [CrossRef]  

27. F. Liu, P. Han, Y. Wei, K. Yang, S. Huang, X. Li, G. Zhang, L. Bai, and X. Shao, “Deeply seeing through highly turbid water by active polarization imaging,” Opt. Lett. 43(20), 4903–4906 (2018). [CrossRef]  

28. D. Miyazaki, M. Saito, Y. Sato, and K. Ikeuchi, “Determining surface orientations of transparent objects based on polarization degrees in visible and infrared wavelengths,” J. Opt. Soc. Am. A 19(4), 687–694 (2002). [CrossRef]  

29. P. S. Tsai and M. Shah, “A fast linear shape from shading,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1992), pp. 734–736.

30. A. Pentland, “Shape Information from Shading: A Theory about Human Perception,” in Proceedings of the IEEE Conference on Computer Vision (1988), pp. 404–413.

31. M. Gupta, S. K. Nayar, M. B. Hullin, and J. Martin, “Phasor imaging: A generalization of correlation-based time-of-flight imaging,” ACM Trans. Graph. 34(5), 1–18 (2015). [CrossRef]  

32. N. Naik, A. Kadambi, C. Rhemann, S. Izadi, R. Raskar, and S. B. Kang, “A light transport model for mitigating multipath interference in tof sensors,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 73–81.

33. F. Liu, C. Shen, and G. Lin, “Deep convolutional neural fields for depth estimation from a single image,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 5162–5170.

34. C. Godard, O. Mac Aodha, and G. J. Brostow, “Unsupervised monocular depth estimation with left-right consistency,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 270–279.
