
Exploiting redundancy in color-polarization filter array images for dynamic range enhancement

Open Access

Abstract

Color-polarization filter array (CPFA) sensors can capture linear polarization and color information in a single shot. For a scene that contains a high dynamic range of irradiance and polarization signatures, some pixel values approach the saturation and noise levels of the sensor. The most common CPFA configuration is overdetermined, as it contains four different linear polarization analyzers. Assuming that not all pixel responses in the CPFA channels are equally reliable, one can therefore apply a high dynamic range imaging scheme to improve the Stokes estimation from a single CPFA image. Here I present this alternative methodology and show qualitative and quantitative results on real data.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Corrections

22 December 2021: A typographical correction was made to Eqs. (6) and (7) to correct a production error.

Polarization imaging has recently gained interest with the emergence of snapshot devices such as polarization filter array (PFA) [1] cameras and color PFA (CPFA) cameras [2]. The latter combines two filter arrays, a PFA and a color filter array (CFA), one on top of the other. Spatial modulation on the focal-plane array makes it possible to sample the intensities ${I_{\theta ,c}}$ of the light field through several polarizing directions $\theta$ and spectral bandpass filters $c$. The most common CPFA is a 12-channel sensor, which combines four angles of analysis equally distributed between ${0^ \circ}$ and ${180^ \circ}$ ($\theta = {0^ \circ},{45^ \circ},{90^ \circ},{135^ \circ}$) and three color filters ($c = r,g,b$) arranged in a quad-Bayer [3] spatial configuration (see Fig. 1). The Sony IMX250 MYR [2] is one such sensor and is commercially available. A linear polarization imager with four measurements is overdetermined, as only three different polarization states are needed to estimate the first three elements of the Stokes vector [4]. In this Letter, a high dynamic range (HDR)-like scheme is applied to combine pre-processed data from multiple redundant Stokes element estimations. This improves the dynamic range of the Stokes images obtained from a single CPFA image.

A high dynamic range of irradiances and polarization signatures can occur in some scenes, e.g., scenes containing highly specular surfaces [5], diffuse reflection at low zenith angles, and shadow areas [6]. On the one hand, low irradiance combined with a high degree of polarization [7] yields a weak signal and thus a noisy image, degrading the polarization information. On the other hand, high irradiance drives the camera response toward the saturation level, where the sensor operates in its nonlinear regime. These scenarios can be present simultaneously in one snapshot CPFA image and can be classified as follows:

  • High polarization signature and high irradiance (Hp-Hi): the digital values can reach the saturation and noise levels of the camera [see the outlined area in Fig. 4(a)].
  • High polarization and low irradiance (Hp-Li): can lead to digital values below the noise level [see Fig. 4(b)].
  • Low polarization and high irradiance (Lp-Hi): can lead to saturation [see Fig. 4(c)].
  • Low polarization and low irradiance (Lp-Li) [see Fig. 4(d)].

These four scenarios are illustrated in Fig. 2, where the corresponding theoretical camera responses are shown for each polarizing angle of analysis. The dots are the four discrete intensity measurements ${I_\theta}$ that the polarimeter would make in each case. Solid lines are the signals simulated from Malus's law, according to which the intensity transmitted by a polarizer varies sinusoidally with the analyzer angle when the incident beam is polarized. Even with a good auto-exposure algorithm, a polarization state image can exhibit pixels close to the saturation and noise levels. Combining measurements from such a noisy frame propagates the noise into the Stokes elements. If some outputs are still usable (not completely saturated or below the noise level), then it is still possible to estimate the Stokes elements. Care must be taken to minimize the influence of noisy values in the calculation of these elements to prevent the noise from being further amplified, since the degree of linear polarization (DOLP) and the angle of linear polarization (AOLP) are computed from the Stokes elements by nonlinear processing and are thus very sensitive to noise [4]. A simulation sketch of these four cases is given below.
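As a minimal illustration, the following Python sketch simulates the clipped Malus-law responses of Fig. 2. The Stokes parameters, noise floor, and saturation level used here are illustrative assumptions, not values from this Letter.

```python
import numpy as np

def malus_response(s0, dolp, aolp_rad, theta_rad,
                   noise_floor=0.02, saturation=0.98):
    """Ideal analyzer intensity from Malus's law, clipped to the usable
    digital range to mimic the sensor's noise floor and saturation."""
    intensity = 0.5 * s0 * (1.0 + dolp * np.cos(2.0 * (theta_rad - aolp_rad)))
    return np.clip(intensity, noise_floor, saturation)

angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])   # the four CPFA analyzers
scenarios = {                                    # (S0, DOLP): illustrative values
    "Hp-Hi": (1.6, 0.9),
    "Hp-Li": (0.1, 0.9),
    "Lp-Hi": (1.6, 0.1),
    "Lp-Li": (0.1, 0.1),
}
for name, (s0, dolp) in scenarios.items():
    # Hp-Hi hits saturation at 0 degrees and nears the noise floor at 90 degrees.
    print(name, np.round(malus_response(s0, dolp, 0.0, angles), 3))
```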

Fig. 1. CPFA sensor case. Each $2 \times 2$ super-pixel is covered by four linear polarization filters and one spectral filter.

Fig. 2. Simulated theoretical polarization camera responses, with a varying input signal according to the Malus law.

HDR imaging (HDRI) [8,9] is a kind of “hardware augmentation,” where multiple exposure images are fused through a computational imaging algorithm. The inputs are registered images of the same scene (i.e., the same irradiance levels) taken with different exposure times. A weighted average is applied during image fusion, taking into account the contribution of each image to the calculation. It is widely assumed that a pixel value can be judged to lie within a “trustable” or a “dubious” zone of the camera's digital value range. Several weighting functions have been investigated, accounting for dubious pixels near the extremes, the noise model of the camera, the digital output variance, etc. [10]. Noise is most likely encountered in the dubious zone, where undesired degradation may occur during capture, transmission, and/or processing, e.g., photon shot, dark current, or readout noise. I believe that the weighted average of HDRI is applicable to a single CPFA image, from which multiple redundant estimates can be reconstructed. As three polarization states are sufficient to compute the linear Stokes vector, one can compute the Stokes elements in multiple ways and combine the results. I present the proposed pipeline hereafter.

After a snapshot CPFA image acquisition, some pre-processing steps are performed on the raw data, such as linearization, polarimetric calibration [11], and spatial interpolation [12]. Here I assume that the input data ${I_{\theta ,c}}$ have been previously calibrated [11] and spatially interpolated. Only sub-sampling is employed in this Letter, so as not to introduce additional spatial noise (see the sketch below). Following the above discussion, two noise considerations are made: (a) dubious pixels near the extremes should be excluded as much as possible from the Stokes calculation [13], and (b) the image noise structure is signal dependent: the signal-to-noise ratio (SNR) increases with signal intensity [9].
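For concreteness, here is a minimal sketch of the sub-sampling step, assuming a $4 \times 4$ quad-Bayer CPFA super-pixel. The (row, column) offsets below are hypothetical and must be checked against the actual mosaic layout of the sensor.

```python
import numpy as np

# Assumed layout within a 4x4 super-pixel: each color occupies one 2x2
# block (one of the two green blocks is used here), and each 2x2 block
# carries the four analyzer angles.
ANGLE_OFFSETS = {0: (1, 1), 45: (0, 1), 90: (0, 0), 135: (1, 0)}  # within a 2x2 cell
COLOR_OFFSETS = {"r": (0, 0), "g": (0, 2), "b": (2, 2)}            # within a 4x4 cell

def subsample_cpfa(raw):
    """Split a raw CPFA frame into 12 sub-sampled channels I[(theta, c)]."""
    channels = {}
    for c, (cr, cc) in COLOR_OFFSETS.items():
        for theta, (ar, ac) in ANGLE_OFFSETS.items():
            channels[(theta, c)] = raw[cr + ar::4, cc + ac::4]
    return channels

raw = np.random.rand(2048, 2448)  # IMX250 MYR resolution
I = subsample_cpfa(raw)
print(I[(0, "r")].shape)          # -> (512, 612)
```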

These considerations are taken into account by means of a weighting function $\omega ({I_{\theta ,c}})$, applied to each of the 12 camera channels individually. The confidence given to the observed data is expressed by its individual weight. A well-known weighting function from HDRI [14,15] is used: a broad hat function that eliminates dubious pixel values near the extreme levels, combined with the inverse of the camera response function (${\rm CRF}_{\theta ,c}^{- 1}$) and its derivative:

$$\omega ({I_{\theta ,c}}) = {\rm CRF}_{\theta ,c}^{- 1} \times {{\rm CRF}^\prime _{\theta ,c}} \times [1 - {({I_{\theta ,c}} \times 2 - 1)^{12}}].$$

The weighting function (for the channel $\theta = {0^ \circ}$, $c = r$) used for the IMX250 MYR sensor is shown in Fig. 3. The CRF was estimated using the Mitsunaga and Nayar method [9]. Note that only one of the 12 CRFs and weighting functions is shown for convenience, since the curves are very similar for this sensor. Figure 4 visualizes the well-exposedness of all 12 channels in false color. The scene is a plastic tape roll with a uniform, flat black background (Hp-Li zone) and a directional illuminant coming from the right, which generates highlights. Note that the polarization and color channels yield very different well-exposedness, in particular in the bright and dark areas.
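A minimal sketch of Eq. (1) follows. A simple gamma curve stands in for the calibrated CRF, which is an assumption made only for this sketch; in this Letter the actual CRF is estimated with the Mitsunaga and Nayar method [9].

```python
import numpy as np

GAMMA = 2.2  # placeholder CRF: I = E**(1/GAMMA), so CRF^{-1}(I) = I**GAMMA

def crf_inv(I):
    return I ** GAMMA

def crf_inv_deriv(I):
    return GAMMA * I ** (GAMMA - 1.0)

def weight(I):
    """Eq. (1): hat function times the inverse CRF and its derivative."""
    hat = 1.0 - (2.0 * I - 1.0) ** 12   # suppresses values near 0 and 1
    return crf_inv(I) * crf_inv_deriv(I) * hat

I = np.linspace(1e-3, 1.0 - 1e-3, 5)    # normalized digital values
print(np.round(weight(I), 4))           # near-zero weight at both extremes
```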

Fig. 3. Weighting function $\omega ({I_\theta})$ used in this Letter.

Fig. 4. (a)–(d) Sub-sampled polarization RGB images (without color correction) captured by an IMX250 MYR sensor. (e)–(p) Applying the weighting function over the 12 channels.

After applying Eq. (1), I then estimate the Stokes elements ${S_0}$, ${S_1}$, and ${S_2}$. The key is to exploit the redundancy of intensity information that is inherent to PFA sensors, so as to take into account the potentially unequal contributions of noise during Stokes reconstruction. This redundancy has already been exploited in the literature to remove dead pixels [16] and for spatial resolution enhancement [17]. From basic electromagnetic theory, ${S_0}$, ${S_1}$, and ${S_2}$ can each be recovered in several ways:

$$\begin{array}{rcl}{{S_{0,1,c}}}& = &{{I_{0,c}} + {I_{90,c}} ,}\\{{S_{0,2,c}}}& = &{{I_{45,c}} + {I_{135,c}} ,}\\{{S_{1,1,c}}}& = &{{I_{0,c}} - {I_{90,c}} ,}\\{{S_{1,2,c}}}& = &{2{I_{0,c}} - {I_{45,c}} - {I_{135,c}} ,}\\{{S_{1,3,c}}}& = &{{I_{45,c}} - 2{I_{90,c}} + {I_{135,c}} ,}\\{{S_{2,1,c}}}& = &{{I_{45,c}} - {I_{135,c}} ,}\\{{S_{2,2,c}}}& = &{- {I_{0,c}} + 2{I_{45,c}} - {I_{90,c}} ,}\\{{S_{2,3,c}}}& = &{{I_{0,c}} + {I_{90,c}} - 2{I_{135,c}} ,}\end{array}$$
where ${S_{0,m,c}}$ is computed in $M = 2$ ways for each of the three color channels, whereas ${S_{1,n,c}}$ and ${S_{2,n,c}}$ are computed in $N = 3$ ways. Keeping only the calculation ways with the best weighting scores can lead to a sub-optimal selection of the polarization angles: if only three of the four polarization states are used to estimate the Stokes elements, the angle distribution is no longer equally spaced between ${0^ \circ}$ and ${180^ \circ}$. It is known that the analyzer configuration affects the conditioning of the system, and thus the error performance in terms of SNR [4]. Instead, as in HDRI, a weighted-average fusion is preferred, which includes all the calculation ways and thus all four available polarization states. As in Ref. [18], all measurements entering a given computation way are emphasized at once, by taking the product of their weighting scores:
$$\begin{array}{rcl}{{W_{{S_{0,1,c}}}}}& = &{\omega ({I_{0,c}}) \times \omega ({I_{90,c}}) ,}\\{{W_{{S_{0,2,c}}}}}& = &{\omega ({I_{45,c}}) \times \omega ({I_{135,c}}) ,}\\{{W_{{S_{1,1,c}}}}}& = &{\omega ({I_{0,c}}) \times \omega ({I_{90,c}}) ,}\\{{W_{{S_{1,2,c}}}}}& = &{\omega ({I_{0,c}}) \times \omega ({I_{45,c}}) \times \omega ({I_{135,c}}) ,}\\{{W_{{S_{1,3,c}}}}}& = &{\omega ({I_{45,c}}) \times \omega ({I_{90,c}}) \times \omega ({I_{135,c}}) ,}\\{{W_{{S_{2,1,c}}}}}& = &{\omega ({I_{45,c}}) \times \omega ({I_{135,c}}) ,}\\{{W_{{S_{2,2,c}}}}}& = &{\omega ({I_{0,c}}) \times \omega ({I_{45,c}}) \times \omega ({I_{90,c}}) ,}\\{{W_{{S_{2,3,c}}}}}& = &{\omega ({I_{0,c}}) \times \omega ({I_{90,c}}) \times \omega ({I_{135,c}}) .}\end{array}$$
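The following minimal sketch transcribes Eqs. (2) and (3) for one color channel; the array names (I0 … I135, w0 … w135) are assumptions made for illustration, standing for the calibrated channel images and their Eq. (1) weight maps.

```python
def redundant_stokes(I0, I45, I90, I135, w0, w45, w90, w135):
    """Redundant per-channel Stokes estimates (Eq. (2)) and the matching
    multiplicative weight maps (Eq. (3)), for a single color channel."""
    S0 = [I0 + I90, I45 + I135]                                # M = 2 ways
    S1 = [I0 - I90, 2*I0 - I45 - I135, I45 - 2*I90 + I135]     # N = 3 ways
    S2 = [I45 - I135, -I0 + 2*I45 - I90, I0 + I90 - 2*I135]    # N = 3 ways
    W0 = [w0*w90, w45*w135]
    W1 = [w0*w90, w0*w45*w135, w45*w90*w135]
    W2 = [w45*w135, w0*w45*w90, w0*w90*w135]
    return (S0, W0), (S1, W1), (S2, W2)
```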
These weight maps guide the fusion process per color channel. The intensity estimate ${\hat S_{0,c}}$ is then obtained as
$$\begin{array}{*{20}{l}}{{{\hat S}_{0,c}} = \frac{{\sum\nolimits_{m = 1}^M {{W_{{S_{0,m,c}}}}} {S_{0,m,c}}}}{{\sum\nolimits_{m = 1}^M {{W_{{S_{0,m,c}}}}}}} ,}\end{array}$$
where $m$ indexes the ways of computing ${S_0}$.

${\hat S_1}$ and ${\hat S_2}$ are computed with Eq. (5), using a weighted average similar to that used for the intensity. A normalization with respect to ${\hat S_0}$ is applied to keep each term proportional to radiometric energy [19]:

$$\begin{split}{{{\hat S^\prime}_{1,c}} = \frac{{\sum\nolimits_{n = 1}^N {{W_{{S_{1,n,c}}}}} \left[{\frac{{{S_{1,n,c}}}}{{{{\hat S}_{0,c}}}}} \right]}}{{\sum\nolimits_{n = 1}^N {{W_{{S_{1,n,c}}}}}}} ,}\\{{{\hat S^\prime}_{2,c}} = \frac{{\sum\nolimits_{n = 1}^N {{W_{{S_{2,n,c}}}}} \left[{\frac{{{S_{2,n,c}}}}{{{{\hat S}_{0,c}}}}} \right]}}{{\sum\nolimits_{n = 1}^N {{W_{{S_{2,n,c}}}}}}} ,}\end{split}$$
where $n$ indexes the ways of computing ${S_1}$ and ${S_2}$.
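A minimal sketch of the fusion in Eqs. (4) and (5), reusing the lists returned by redundant_stokes() above; the small EPS constant is an added safeguard against all-zero weights and is not part of the equations in this Letter.

```python
EPS = 1e-8  # assumed safeguard, not in Eqs. (4)-(5)

def fuse(estimates, weights):
    """Weighted average over a list of redundant estimates (Eq. (4))."""
    num = sum(w * s for w, s in zip(weights, estimates))
    den = sum(weights)
    return num / (den + EPS)

# Using the lists from redundant_stokes() above:
# S0_hat = fuse(S0, W0)                                  # Eq. (4)
# S1_hat = fuse([s / (S0_hat + EPS) for s in S1], W1)    # Eq. (5), normalized
# S2_hat = fuse([s / (S0_hat + EPS) for s in S2], W2)
```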

Excess noise or nonlinearity in the data can still cause a large variation among the different estimates of Eq. (2). A reasonable assumption made in the literature is that polarization changes only slightly with wavelength in the visible spectrum; thus, ${S_1}$ and ${S_2}$ should vary only slightly over the three spectral bands. This assumption is experimentally highlighted in Refs. [20,21], where the differences in degree of polarization among the spectral channels do not exceed 10%. To verify it on a large dataset, I performed a statistical analysis on an existing set of noise-corrected RGB-polarization images from Ref. [22]. Computing the per-pixel DOLP differences over all observations shows that $\approx 99\%$ of the pixels exhibit DOLP differences below 20% (see the histogram in Fig. S1 in Supplement 1). Therefore, to further reduce the effect of noise in the case of highly variable Stokes estimates, I use a weighted average over the color channels. A threshold $T$ is used, which yields the final estimates of ${S_1}$ and ${S_2}$:

$${\hat S_{1,c}} = \left\{{\begin{array}{*{20}{l}}{{{\hat S^\prime}_{1,c}}}&{{\rm for}\mathop {\max}\limits_{c \in r,g,b} ({{\rm DOLP}_c}) - \mathop {\min}\limits_{c \in r,g,b} ({\rm DOLP}_{c}) \le T}\\{\frac{{\sum\nolimits_{c = r}^{c = b} {\sum\nolimits_{n = 1}^N {\left[{{W_{{S_{1,n,c}}}}\frac{{{S_{1,n,c}}}}{{{{\hat S}_{0,c}}}}} \right]}}}}{{\sum\nolimits_{c = r}^{c = b} {\sum\nolimits_{n = 1}^N {{W_{{S_{1,n,c}}}}}}}}}&{{\rm for}\mathop {\max}\limits_{c \in r,g,b} ({{\rm DOLP}_c}) - \mathop {\min}\limits_{c \in r,g,b} ({{\rm DOLP}_c}) \gt T}\end{array}} \right.,$$
$${\hat S_{2,c}} = \left\{{\begin{array}{*{20}{l}}{{{\hat S^\prime}_{2,c}}}&{{\rm for}\mathop {\max}\limits_{c \in r,g,b} ({{\rm DOLP}_c}) - \mathop {\min}\limits_{c \in r,g,b} ({{\rm DOLP}_c}) \le T}\\{\frac{{\sum\nolimits_{c = r}^{c = b} {\sum\nolimits_{n = 1}^N {\left[{{W_{{S_{2,n,c}}}}\frac{{{S_{2,n,c}}}}{{{{\hat S}_{0,c}}}}} \right]}}}}{{\sum\nolimits_{c = r}^{c = b} {\sum\nolimits_{n = 1}^N {{W_{{S_{2,n,c}}}}}}}}}&{{\rm for}\mathop {\max}\limits_{c \in r,g,b} ({{\rm DOLP}_c}) - \mathop {\min}\limits_{c \in r,g,b} ({{\rm DOLP}_c}) \gt T}\end{array}} \right..$$
The threshold $T$ is initially set to 0.20, but can be adjusted if prior knowledge about the wavelength dependence of the polarization data is available for a specific application. Note that this weighting leads to equal Stokes elements across all the color channels for pixels detected as highly noisy.
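A minimal sketch of the channel-pooling fallback of Eqs. (6) and (7), shown for ${S_1}$ (${S_2}$ is identical in form). The dictionary layout (keys "r", "g", "b", each holding the $N$ redundant estimates) is an assumed data structure, and the per-channel DOLP is computed here from the normalized estimates, e.g., $\sqrt{\hat S^{\prime 2}_{1,c} + \hat S^{\prime 2}_{2,c}}$.

```python
import numpy as np

def fuse_with_threshold(S1_prime, S1, W1, S0_hat, dolp, T=0.20):
    """Eq. (6): keep the per-channel estimate where the DOLP spread over
    color channels is small; otherwise pool all ways and all channels.

    S1_prime[c] : fused, normalized estimate from Eq. (5)
    S1[c], W1[c]: the N redundant estimates and weight maps of channel c
    S0_hat[c]   : fused intensity from Eq. (4)
    dolp[c]     : e.g., np.hypot(S1_prime[c], S2_prime[c])
    """
    channels = ("r", "g", "b")
    spread = (np.max([dolp[c] for c in channels], axis=0)
              - np.min([dolp[c] for c in channels], axis=0))
    num = sum(W1[c][n] * S1[c][n] / S0_hat[c]
              for c in channels for n in range(len(S1[c])))
    den = sum(W1[c][n] for c in channels for n in range(len(S1[c])))
    pooled = num / den
    return {c: np.where(spread <= T, S1_prime[c], pooled) for c in channels}
```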

Visual results of the proposed method versus the standard one are shown in Fig. 5. The total intensity ${S_0}$, ${S_1}$, ${S_2}$, AOLP, and DOLP images for the channel $c = r$ are visualized. The standard Stokes elements are computed from Eq. (2) with $M = N = 1$, which is the most common way to compute Stokes components from four measurements. Figure 5(g) demonstrates the effect of the proposed methodology on an Hp-Hi zone, where highlights are present. The dynamic range in highlight zones has been visually enhanced, especially on the white right side of the tape roll in ${S_0}$ and ${S_1}$. In the DOLP images, the standard version shows more variance on the dark side of the tape roll and in its shadow (Lp-Li). The same remarks can be made for the AOLP results. On the bottom right side of the tape roll (Lp-Hi), the proposed method yields a smoother AOLP image. A complete visualization for all spectral bands is available in Figs. S3, S4, and S5 in Supplement 1. The noise reduction is most noticeable in the blue band, whose pixel values are close to the noise level of the sensor.

Fig. 5. Top row: standard (std) Stokes computations. Bottom row: proposed Stokes computation [Eq. (7) with $T = 20\%$].

A quantitative evaluation is done using reference HDR data. The HDR data are calculated from five exposure times: $\Delta t = 10$, 20, 40, 80, and 160 ms. The middle exposure ($\Delta t = 40\,\,\rm ms$) is used as the test image. Details about the HDR generation are given in Supplement 1. The reference HDR Stokes images are then computed from the HDR data, using Eq. (2) with $M = N = 1$. It is important to note that the HDR Stokes images are generated with the same equations as the standard ones (i.e., without any averaging). Peak signal-to-noise ratio (PSNR) results are shown in Table 1. The proposed method significantly enhances the Stokes components ${S_1}$ and ${S_2}$, especially where low irradiance is present (i.e., for $c = b$). The method does not, however, enhance the ${S_0}$ component for the spectral bands.
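For reference, a minimal sketch of the PSNR metric reported in Table 1; the peak value and the normalization of the Stokes images are assumptions, as this Letter does not specify them.

```python
import numpy as np

def psnr(estimate, reference, peak=1.0):
    """PSNR (dB) of an estimated Stokes image against the HDR reference,
    assuming both are normalized to the same [0, peak] range."""
    mse = np.mean((estimate - reference) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```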

Table 1. PSNR Results for Both Computation Methods

To conclude, future work on this HDR scheme applied to a single CPFA image can be envisaged, with an extended quantitative evaluation and a study of the effect of varying $T$. A validation over a large amount of CPFA HDR data could support further improvement of the methodology. Moreover, to highlight the potential benefits of the proposed weighted average, investigations can be carried out on computer vision applications that involve ground-truth data, such as shape from polarization, illumination direction estimation, or diffuse/specular separation.

Funding

Agence Nationale de la Recherche (ANR-18-CE10-0005).

Disclosures

The author declares no conflicts of interest.

See Supplement 1 for supporting content.

REFERENCES

1. D. M. Rust, “Integrated dual imaging detector,” U.S. patent 5,438,414 (1 August 1995).

2. “Polarsens polarization image sensor,” Technical Report (Sony, 2018).

3. T. Okawa, S. Ooki, H. Yamajo, M. Kawada, M. Tachi, K. Goi, T. Yamasaki, H. Iwashita, M. Nakamizo, T. Ogasahara, Y. Kitano, and K. Tatani, International Electron Devices Meeting (IEEE, 2019), p. 16-3.

4. J. S. Tyo, Appl. Opt. 41, 619 (2002).

5. S. Tominaga, H. Kadoi, K. Hirai, and T. Horiuchi, Proc. SPIE 8652, 86520E (2013).

6. J. S. Tyo, B. M. Ratliff, and A. S. Alenin, Opt. Lett. 41, 4759 (2016).

7. S.-S. Lin, K. M. Yemelyanov, E. N. Pugh, and N. Engheta, Opt. Express 14, 7099 (2006).

8. P. E. Debevec and J. Malik, ACM SIGGRAPH 2008 Classes (2008), pp. 1–10.

9. T. Mitsunaga and S. K. Nayar, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1999), pp. 374–380.

10. M. Granados, B. Ajdin, M. Wand, C. Theobalt, H. Seidel, and H. P. A. Lensch, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 215–222.

11. Y. Gimenez, P.-J. Lapray, A. Foulonneau, and L. Bigué, J. Electron. Imaging 29, 1 (2020).

12. S. Mihoubi, P.-J. Lapray, and L. Bigué, Sensors 18, 3688 (2018).

13. E. Reinhard, W. Heidrich, P. Debevec, S. Pattanaik, G. Ward, and K. Myszkowski, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting (Morgan Kaufmann, 2010).

14. A. O. Akyüz and E. Reinhard, J. Vis. Commun. Image Represent. 18, 366 (2007).

15. F. Banterle, A. Artusi, K. Debattista, and A. Chalmers, Advanced High Dynamic Range Imaging, 2nd ed. (AK Peters/CRC Press, 2017).

16. B. M. Ratliff, J. S. Tyo, J. K. Boger, W. T. Black, D. L. Bowers, and M. P. Fetrow, Opt. Express 15, 7596 (2007).

17. B. M. Ratliff, J. S. Tyo, W. T. Black, and C. F. LaCasse, Proc. SPIE 7461, 74610K (2009).

18. T. Mertens, J. Kautz, and F. Van Reeth, Computer Graphics Forum (Wiley, 2009), pp. 161–171.

19. J. R. Schott, Fundamentals of Polarimetric Remote Sensing (SPIE, 2009), Vol. 81.

20. M. Garcia, C. Edmiston, R. Marinov, A. Vail, and V. Gruev, Optica 4, 1263 (2017).

21. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2001), Vol. 1, p. i.

22. S. Qiu, Q. Fu, C. Wang, and W. Heidrich, Vision, Modeling and Visualization, H.-J. Schulz, M. Teschner, and M. Wimmer, eds. (The Eurographics Association, 2019), pp. 117–124.

Supplementary Material

Supplement 1: Justification of the threshold selection, details about the high dynamic range reference image generation, and visualization of the Stokes image results.
