
Dynamic 3-D shape measurement in an unlimited depth range based on adaptive pixel-by-pixel phase unwrapping

Open Access

Abstract

Pixel-by-pixel phase unwrapping (PPU) has been employed to rapidly achieve three-dimensional (3-D) shape measurement without additional projection patterns. However, the maximum measurement depth range that traditional PPU can handle is within 2π in the phase domain; thus, PPU fails to measure a dynamic object surface when the object moves in a large depth range. In this paper, we propose a novel adaptive pixel-by-pixel phase unwrapping (APPU) approach, which extends PPU to an unlimited depth range. First, with PPU, temporary phase maps of objects are obtained by referring to the absolute phase map of a background plane. Second, we quantify the difference between the image edges of the temporary phase maps and the practical depth edges of the dynamic objects. According to the degree of this edge difference, the temporary phase maps are categorized into two classes: failed phase maps and relative phase maps. Third, by combining a mobile reference phase map with the edge difference quantization technique, the failed phase maps are correspondingly converted into relative phase maps. Finally, the relative phase maps are transformed into absolute phase maps using a new shadow-informed depth estimation method (SDEM). The proposed approach is suitable for high-speed 3-D shape measurement without depth limitations or additional projection patterns.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Three-dimensional (3-D) shape measurement of dynamic objects is a forward-looking sensing technology for 3-D telecommunication, human-computer interaction, intelligent manufacturing, and so on [1–4]. Benefiting from the development of the digital fringe projection (DFP) technique, Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP) have been broadly applied to reconstruct the 3-D surfaces of dynamic objects [5–7]. FTP has an obvious speed advantage because it projects only one fringe pattern, making it highly suitable for dynamic measurement. However, with the limited number of projection patterns, the accuracy of FTP is easily disturbed by uneven surface shapes and uncertain ambient light. In contrast, PSP normally projects at least three fringe patterns onto the surfaces of objects and generates a high-quality phase map that is robust to complex surfaces and ambient illumination [8].

Moreover, to fully exploit the accuracy advantage of PSP, temporal phase unwrapping is commonly applied to absolutely unfold the wrapped phase, which requires additional projection fringes to determine the distribution of the phase order, as in multi-frequency phase unwrapping [9,10] and number-theoretical phase unwrapping [11,12]. However, projecting auxiliary fringes takes extra time, so temporal phase unwrapping cannot meet the requirement of fast 3-D shape recovery. Consequently, the absolute 3-D reconstruction of dynamic objects with PSP remains challenging.

To maximize the efficiency of PSP for the absolute 3-D surface measurement of dynamic objects, stereo phase unwrapping (SPU) was developed by introducing stereo geometric constraints into the DFP system [13,14]. However, because high-frequency phase-shifting fringes are adopted in PSP, conventional SPU has trouble removing phase ambiguities under ambient noise. Although a presupposed depth range [15], composite fringe patterns [16,17], and spatio-temporal correlation [18] improve the robustness of conventional SPU, current SPU still suffers from time-consuming left-right consistency checking. In addition, introducing additional cameras into the DFP system significantly increases the hardware cost and system errors.

Without additional hardware components or auxiliary projection patterns, An et al. [19] analyzed the geometric constraints of a typical DFP system (i.e., one camera and one projector) and created a reference phase map according to a given artificial depth plane. With the geometric constraints and the known reference plane, pixel-by-pixel phase unwrapping (PPU) was presented to accurately acquire the absolute phase maps of objects. Shortly afterward, the PPU method was employed for high-precision and high-efficiency 3-D reconstruction [20,21].

However, PPU only functions well when the maximum measurement depth range is within $2\pi$ in the phase domain. Recently, various improvements have been made to increase the valid depth range of PPU. Jiang et al. [22] used the geometric information associated with surface features to divide the object surfaces into different depth regions; with multiple PPU steps and specific prior knowledge of the surface geometry, absolute 3-D shape retrieval was performed in a large depth range. Dai et al. [23] focused on replacing the artificial depth plane with an object of known size to provide a reference phase map for PPU, thus achieving the absolute phase unwrapping of a dynamic object moving in the depth direction. These approaches extend the measurement depth range of PPU and yield accurate 3-D surface reconstructions without additional projection patterns. Unfortunately, the unique prior knowledge requirements [22] and the restricted movement imposed by a ball of known size [23] may limit automated 3-D shape measurements of free-moving objects.

In our research, a novel adaptive pixel-by-pixel phase unwrapping (APPU) approach is proposed to extend the traditional PPU method to an unlimited depth range for the automated 3-D shape recovery of free-moving objects. An actual background plane provides the absolute reference phase map by using multi-frequency phase unwrapping. We categorize the temporary phase maps unwrapped by the traditional PPU into two classes: failed phase maps and relative phase maps. By combining a mobile reference phase map and the edge difference quantization technique, we convert the failed phase maps into relative phase maps. Based on the object shadow formed on the background plane, the relative phase maps are finally transformed into absolute phase maps, which are employed to reconstruct 3-D models of the measured objects. With the assistance of the 3-step PSP and the typical DFP system, our proposed phase unwrapping approach achieves automated 3-D shape measurements of dynamic objects without depth limitations or additional projection patterns.

Section 2 introduces the principle of PPU and analyzes its limitation for dynamic objects. Section 3 describes the generation and utilization of the mobile reference phase map, the process for converting relative unwrapped phase maps into absolute phase maps in a large depth range, and the image segmentation method for object and background areas. Section 4 explains the whole workflow of the proposed approach in detail. In Section 5, the experimental results demonstrate the effectiveness and accuracy of the proposed approach. Section 6 discusses the strengths and limitations of the approach, and Section 7 summarizes this paper.

2. Pixel-by-pixel phase unwrapping (PPU)

2.1 Principle of PPU for absolute 3-D measurement

An et al. [19] presented the PPU method for absolute phase retrieval using the geometric constraints of a structured light system. Because a projector is the inverse of a camera, the projector and the camera can be described by the same pinhole model. When the projector and the camera are calibrated under the same global coordinate system $({x}^{w}, {y}^{w}, {z}^{w})$, the projection matrices $\mathbf{P}^{c}$ and $\mathbf{P}^{p}$ are obtained. We can establish two sets of equations for the camera and the projector as follows:

$$s^{c}\left[ \begin{matrix} u^{c} & v^{c} & 1 \end{matrix} \right]^{t} = \mathbf{P}^{c}\left[ \begin{matrix} x^{w} & y^{w} & z^{w} & 1 \end{matrix} \right]^{t},$$
$$s^{p}\left[ \begin{matrix} u^{p} & v^{p} & 1 \end{matrix} \right]^{t} = \mathbf{P}^{p}\left[ \begin{matrix} x^{w} & y^{w} & z^{w} & 1 \end{matrix} \right]^{t},$$
where the superscripts $c$ and $p$ denote the camera and the projector of the structured light system, respectively.

For each camera pixel $(u^{c}, v^{c})$, Eqs. (1)–(2) provide 6 equations in 7 unknown parameters, $(s^{c}, s^{p}, u^{p}, v^{p}, x^{w}, y^{w}, z^{w})$, so one additional constraint equation is required for a unique 3-D coordinate $(x^{w}, y^{w}, z^{w})$. If the fringe patterns of the projector vary sinusoidally in the $v^{p}$ direction, the additional constraint equation is given as follows:

$$v^{p} = \Phi \times T / \left( 2\pi \right),$$
where $\Phi$ is the known absolute phase and $T$ is the fringe period at pixel level.
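To make this constraint concrete, the following minimal sketch (our illustration, not code from the paper; NumPy assumed, function name hypothetical) eliminates the scale factors $s^c$ and $s^p$ from Eqs. (1)–(2), appends the projector row selected by the phase constraint of Eq. (3), and solves the resulting $3\times3$ linear system for the world point seen at one camera pixel:

```python
import numpy as np

def reconstruct_point(uc, vc, vp, Pc, Pp):
    """Solve Eqs. (1)-(3) for the 3-D point (x^w, y^w, z^w) seen at camera
    pixel (uc, vc), where vp = Phi * T / (2*pi) comes from the absolute phase.

    Pc, Pp: 3x4 projection matrices of the camera and the projector.
    """
    A = np.array([
        Pc[0, :3] - uc * Pc[2, :3],  # camera u-equation (scale eliminated)
        Pc[1, :3] - vc * Pc[2, :3],  # camera v-equation
        Pp[1, :3] - vp * Pp[2, :3],  # projector v-equation from Eq. (3)
    ])
    b = np.array([
        uc * Pc[2, 3] - Pc[0, 3],
        vc * Pc[2, 3] - Pc[1, 3],
        vp * Pp[2, 3] - Pp[1, 3],
    ])
    return np.linalg.solve(A, b)  # world coordinates (x^w, y^w, z^w)
```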

Therefore, for a virtual reference plane at $z^{w} = z_{max}$ in Fig. 1(a), an absolute reference phase map $\Phi_{max}$ can be correspondingly formed. The phase map $\Phi_{max}$ is a function of $z_{max}$, the fringe period $T$, and the two projection matrices, which can be expressed mathematically as Eq. (4):

$$\Phi_{max}\left( u^{c}, v^{c} \right) = f\left( z_{max}; T, \mathbf{P}^{c}, \mathbf{P}^{p} \right).$$

Fig. 1. The principle of PPU. (a) Spatial distribution of the DFP system and the reference plane. (b) Converting the wrapped phase into the unwrapped phase.

With the above reference phase map, the fringe order $k\left ( x,y \right )$ of the measured object can be calculated directly by Eq. (5):

$$k(x,y) = \mathrm{floor}\left[ \frac{\Phi_{max}\left( x,y \right) - \phi \left( x,y \right)}{2\pi} \right],$$
where $(x,y)$ is an image pixel, and $\mathrm{floor}[\cdot]$ is an operator that returns the nearest lower integer. Here, $\phi(x,y)$ is the wrapped phase value of the measured object.

Therefore, the absolute unwrapped phase can be obtained from the corresponding wrapped phase, as shown in Fig. 1(b). We have

$$\Phi\left( x,y \right) = \phi\left( x,y \right) + 2\pi \times k\left( x,y \right),$$
where $\Phi (x,y)$ is the absolute phase value of the object.
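Eqs. (5)–(6) reduce to two array operations per frame. A minimal sketch (NumPy assumed; the function name is ours):

```python
import numpy as np

def ppu_unwrap(phi, phi_ref):
    """Pixel-by-pixel phase unwrapping, Eqs. (5)-(6): unwrap the wrapped
    phase phi against the absolute reference phase map phi_ref (Phi_max)."""
    k = np.floor((phi_ref - phi) / (2 * np.pi))  # fringe order, Eq. (5)
    return phi + 2 * np.pi * k                   # absolute phase, Eq. (6)
```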

2.2 Limitation of applying PPU to dynamic objects

Based on the above discussion, PPU performs well for fast and accurate phase unwrapping. However, PPU performs poorly when a dynamic object is far from the given reference plane. Assuming that the reference plane is set at $z^{w} = z_{max}$ in Fig. 2(a), PPU can unwrap the wrapped phase within $[z_{min}, z_{max})$ in the global coordinate system, and $\Delta z_{max} = z_{max} - z_{min}$ is the maximum measurement depth range of PPU.

Fig. 2. The depth limitation of PPU. (a) Distribution of the measured hemisphere. (b) Absolute phase map and cross-section unwrapped by PPU. (c) Failed phase map and cross-section preprocessed by IPPU. (d) Relative phase map and cross-section preprocessed by RPPU.

When the hemisphere is completely within the valid depth range, its spatial 3-D surface can be reconstructed completely, as shown in Fig. 2(b). As the hemisphere gradually moves away from the given reference plane, part of it remains within the measurement range while the rest falls outside; PPU then fails to unfold the wrapped phase map, as shown in Fig. 2(c). In this case, PPU is called incorrect pixel-by-pixel phase unwrapping (IPPU), and the unwrapped phase map it produces is deemed a failed phase map. When the moving hemisphere is entirely outside the maximum range, as shown in Fig. 2(a), PPU retrieves only a relative unwrapped phase map rather than the absolute phase map, as shown in Fig. 2(d); in this case, PPU is called relative pixel-by-pixel phase unwrapping (RPPU), and the unwrapped phase map it produces is deemed a relative phase map.

Consequently, PPU is only suitable for objects that lie completely within a $2\pi$ depth range in the phase domain. To quickly reconstruct dynamic objects based on PSP without depth limitations, an APPU method for an unlimited depth range is needed.

3. Adaptive pixel-by-pixel phase unwrapping (APPU)

3.1 Generation and utilization of a mobile reference phase map

In the proposed APPU approach, a mobile reference plane replaces the fixed reference plane to automatically convert the failed phase map into a relative phase map.

The absolute phase map of the mobile reference plane can be automatically obtained by

$$\Phi_{Am}\left( x,y \right) = \Phi_{max}\left( x,y \right) - m \times \Delta\Phi,$$
$$\Phi_{max}\left( x,y \right) \ge \Phi_{Am}\left( x,y \right) > \Phi_{max}\left( x,y \right) - 2\pi,$$
where $\Delta\Phi$ is a constant phase increment, which is set to $0.05\pi$ in our experiments, and the integer $m = 0, 1, 2, \ldots, 39$ is bounded by Eq. (8).

As shown in Fig. 3(a), the mobile absolute phase map obtained from Eq. (7) is a virtual reference phase map for unwrapping the wrapped phase maps of objects.
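As a sketch of Eq. (7) (NumPy assumed; names are ours), the forty mobile reference phase maps can be generated on demand:

```python
import numpy as np

DELTA_PHI = 0.05 * np.pi  # constant phase increment used in our experiments

def mobile_references(phi_max):
    """Yield the mobile reference phase maps of Eq. (7); Eq. (8) restricts
    the total shift to one fringe period, hence m = 0, 1, ..., 39."""
    for m in range(40):
        yield m, phi_max - m * DELTA_PHI
```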

Fig. 3. Generation and utilization of the mobile reference phase map. (a) Generation of the mobile reference phase map according to the given phase map $\Phi_{max}$. (b) Histogram statistics of the edges of the failed phase map.

Although the reference plane defined in Eq. (7) is artificial and mobile, its valid measurement range is consistent with that of PPU. Therefore, beyond the measurement range $\Delta z_{max}$, the mobile reference plane cannot provide absolute phase unwrapping for dynamic objects, but it avoids incorrect phase unwrapping; i.e., the mobile reference plane effectively converts failed phase maps into relative phase maps. It is worth noting that IPPU generates extra height jumps relative to RPPU, which appear as extra edges in the failed phase map.

As Marr’s vision theory [24] suggests, the edges of an object can be used to understand an image. In our research, the edges of phase maps are likewise essential for distinguishing failed phase maps from relative phase maps.

For IPPU in Section 2.2, the failed phase map contains both actual edges and additional edges. The additional edges associated with IPPU are defined as the private edges of the failed phase map. Moreover, the actual edges of the measured object are constant, so we call them common edges.

To quantify the private edges, the additional edges are detected with the Sobel operator, and the number of pixels they occupy is counted. The private edges of the preprocessed phase map change as the mobile reference phase varies in Eq. (7). When the number of pixels on the additional edges is less than $10\%$ of the number of pixels on all edges, as shown in Fig. 3(b), the failed phase map has been successfully converted into a relative phase map. Notably, $10\%$ is the threshold used to distinguish failed phase maps from relative phase maps in our experiments.
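The paper does not prescribe how private edges are separated from common ones in code; the sketch below (NumPy/SciPy assumed, thresholds illustrative) treats edges of the temporary phase map that have no counterpart in the modulation image as private, which is one plausible reading of the edge difference quantization, and iterates over the mobile references of Section 3.1:

```python
import numpy as np
from scipy import ndimage

EDGE_RATIO = 0.10  # the 10% threshold used in our experiments

def sobel_edges(img, thresh):
    """Binary edge map from the Sobel gradient magnitude."""
    g = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    return g > thresh

def private_edge_ratio(phase_tmp, modulation, phase_thresh=np.pi, mod_thresh=5.0):
    """Fraction of edge pixels of the temporary phase map that do not
    coincide with edges of the modulation image; the latter serve as a
    proxy for the constant common edges of the object (our assumption)."""
    phase_edges = sobel_edges(phase_tmp, phase_thresh)
    common = phase_edges & sobel_edges(modulation, mod_thresh)
    n_all = np.count_nonzero(phase_edges)
    return (n_all - np.count_nonzero(common)) / max(n_all, 1)

def to_relative_phase(phi, phi_max, modulation):
    """Re-run PPU against successive mobile references (Eq. (7)) until the
    temporary map qualifies as a relative phase map (< 10% private edges)."""
    for m, phi_ref in mobile_references(phi_max):  # sketch from Section 3.1
        tmp = ppu_unwrap(phi, phi_ref)             # sketch from Section 2.1
        if private_edge_ratio(tmp, modulation) < EDGE_RATIO:
            return tmp, m
    raise RuntimeError("no mobile reference produced a relative phase map")
```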

However, either the combination of IPPU and the mobile reference plane or RPPU alone yields only a relative phase map $\Phi_{r}$, which lies within $\left( \Phi_{max}\left( x,y \right) - 4\pi, \Phi_{max}\left( x,y \right) \right]$ because of Eq. (8).

3.2 Transforming relative phase maps into absolute phase maps

To accurately and robustly transform the relative unwrapped phase map $\Phi_{r}$ into the absolute phase map $\Phi$, a shadow-informed depth estimation method (SDEM) is presented in our research.

For the relative unwrapped phase map $\Phi_{r}$ of a measured object, a series of alternative absolute phase maps is calculated according to Eq. (9):

$$\Phi_{(q)} = \Phi_{r} - 2\pi\left( q - 2 \right),$$
where $q = \left\{ 1, 2, \ldots, (s+1) \right\}$, and the value of $s$ equals the desired magnification of the measurement range.

For a depth range within $6\pi$ in the phase domain, $s = 3$, and we obtain the alternative absolute phase maps $\left\{ \Phi_{(1)}, \Phi_{(2)}, \Phi_{(3)}, \Phi_{(4)} \right\}$; these phase maps correspond to the objects $\left\{ O_{1}, O_{2}, O_{3}, O_{4} \right\}$ in spatial coordinates. Figures 4(a) and 4(b) show the possible spatial distributions for $\Phi_{r} \in \left( \Phi_{max}\left( x,y \right) - 4\pi, \Phi_{max}\left( x,y \right) - 2\pi \right]$ and $\Phi_{r} \in \left( \Phi_{max}\left( x,y \right) - 2\pi, \Phi_{max}\left( x,y \right) \right]$, respectively.
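Generating the candidates of Eq. (9) is a single broadcast per shift (NumPy assumed; the function name is ours):

```python
import numpy as np

def candidate_phases(phi_r, s=3):
    """Alternative absolute phase maps of Eq. (9): shift the relative phase
    map phi_r by whole fringe periods, q = 1, ..., s + 1."""
    return [phi_r - 2 * np.pi * (q - 2) for q in range(1, s + 2)]
```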

Fig. 4. The alternative reconstructed results and the corresponding shadows. (a) The relative phase map $\Phi_{r}$ is within $\left( \Phi_{max}\left( x,y \right) - 4\pi, \Phi_{max}\left( x,y \right) - 2\pi \right]$. (b) The relative phase map $\Phi_{r}$ is within $\left( \Phi_{max}\left( x,y \right) - 2\pi, \Phi_{max}\left( x,y \right) \right]$.

In Fig. 4, the pattern from the projector is projected onto the measured object, which forms shadows on the background plane $z_{max}$. The different spatial positions of the measured objects create the shadows $\left| S_{0}S_{1} \right|$, $\left| S_{0}S_{2} \right|$, and $\left| S_{0}S_{3} \right|$, as shown in Fig. 4(a) or Fig. 4(b). In addition, for an arbitrary absolute phase map, we can utilize a virtual phase map $\Phi_{Vf}\left( x,y \right)$ for fitting:

$$\Phi_{Vf}\left( x,y \right) = \Phi_{max}\left( x,y \right) - \Delta\varphi,$$
where the phase difference between $\Phi_{max}\left( x,y \right)$ and $\Phi_{Vf}\left( x,y \right)$ is $\Delta\varphi$.

Therefore, the mean phase map of two objects in adjacent ranges can be fitted by a virtual phase map $\Phi_{Vf}$ as follows:

$$\Delta\varphi_{w} = \arg\min \left\{ \left[ \Phi\left( O_{w} \right) + \Phi\left( O_{w+1} \right) \right]/2 - \Phi_{Vf}\left( x,y \right) \right\}^{2},$$
where $w = 1, 2, 3$. Based on the phase differences $\Delta\varphi$, the fitted phase maps can be used to separately generate the spatial planes $z_{f1}$, $z_{f2}$, and $z_{f3}$.

According to the distance distributions between these spatial planes and the background plane $z_{max}$ in Fig. 4, the relationship of the phase differences is given by

$$\Delta\varphi_{3} > \Delta\varphi_{2} > \Phi_{max} - \Phi_{r} > \Delta\varphi_{1}.$$

Therefore, combining the relative unwrapped phase map $\Phi_{r}$ and the actual shadow length $\left| S_{0}S \right|$, the absolute phase $\Phi$ of the measured object is determined by Eq. (13):

$$\Phi = \Phi_{r} + c \times 2\pi,$$
where $c$ is determined by
$$c = \begin{cases} -2, & \Phi_{max}(\left| S_{0}S \right|) \ge \Delta\varphi_{3}, \\ -1, & \Delta\varphi_{3} > \Phi_{max}(\left| S_{0}S \right|) \ge \Delta\varphi_{2}, \\ 0, & \Delta\varphi_{2} > \Phi_{max}(\left| S_{0}S \right|) \ge \Delta\varphi_{1}, \\ 1, & \Phi_{max}(\left| S_{0}S \right|) < \Delta\varphi_{1}. \end{cases}$$

Based on the above analysis, the method has been verified for depth ranges up to $6\pi$ in the phase domain (i.e., $s = 3$). Alternatively, by changing the value of $s$ in Eq. (9), the approach is effective for other desired depth ranges, including but not limited to $4\pi$ and $8\pi$.
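Putting Eqs. (11)–(14) together for $s = 3$: because $\Phi_{Vf}$ differs from $\Phi_{max}$ only by a constant, the least-squares fit of Eq. (11) reduces to a masked mean. The sketch below is ours; `shadow_phase` stands for the quantity $\Phi_{max}(\left| S_{0}S \right|)$ derived from the valid shadow length (see Step 3 of Section 4), and `candidate_phases` is the sketch given above:

```python
import numpy as np

def sdem_absolute_phase(phi_r, phi_max, shadow_phase, mask):
    """Shadow-informed depth estimation, Eqs. (11)-(14), for s = 3.

    shadow_phase: the scalar Phi_max(|S0 S|) of Eq. (14).
    mask: boolean object region over which the fit of Eq. (11) is taken.
    """
    cands = candidate_phases(phi_r, s=3)  # O1..O4 from Eq. (9)
    # Eq. (11): fit Phi_Vf = Phi_max - dphi to the mean of adjacent
    # candidates; for a constant offset the least-squares answer is a mean.
    dphi = [np.mean((phi_max - (cands[w] + cands[w + 1]) / 2)[mask])
            for w in range(3)]            # dphi_1, dphi_2, dphi_3
    # Eq. (14): pick the fringe-order correction c from the shadow phase.
    if shadow_phase >= dphi[2]:
        c = -2
    elif shadow_phase >= dphi[1]:
        c = -1
    elif shadow_phase >= dphi[0]:
        c = 0
    else:
        c = 1
    return phi_r + 2 * np.pi * c          # Eq. (13)
```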

3.3 Robust object detection and segmentation

Because an actual reference plane is introduced as the background, the phase map of the measured object is accompanied by that of the background plane. To achieve the 3-D reconstruction of the object independently, we developed a PSP-based object detection method.

With 3-step PSP, the deformed fringe patterns of the measured object are expressed mathematically as follows:

$$I_{n}\left( x,y \right) = A\left( x,y \right) + B\left( x,y \right)\cos\left[ \phi\left( x,y \right) + 2\pi\left( n-1 \right)/N \right],$$
where $n = \left\{ 1, 2, 3 \right\}$, $N = 3$, and $A\left( x,y \right)$ and $B\left( x,y \right)$ are both influenced by the ambient light and surface reflectivity conditions [25]. Here, $\phi\left( x,y \right)$ represents the wrapped phase, which contains the depth information of the measured object.

Additionally, the deformed fringe images on the actual background plane are obtained before measuring the objects and denoted as $I_{n}^{R}\left ( x,y \right )$.

$$I_{n}^{R}\left( x,y \right) = A^{R}\left( x,y \right) + B^{R}\left( x,y \right)\cos\left[ \phi^{R}\left( x,y \right) + 2\pi\left( n-1 \right)/N \right],$$
where the superscript $R$ denotes the fringe parameters of the actual background plane.

Based on the intensity differences between the deformed fringes ${I}_{n}$ and $I_{n}^{R}$, the regions of the dynamic objects and object-induced shadows can be robustly detected. Subsequently, the mask combination of objects and shadows is extracted according to the following equation:

$$I_{diff} = \mathrm{imbw}\left\{ \mathrm{abs}\left[ \sum_{n=1}^{3} \left( I_{n} - I_{n}^{R} \right) \right], \epsilon \right\},$$
where $\mathrm{imbw}(\cdot)$ represents a binarization operator and $\epsilon$ is a predefined intensity threshold.

Because the object blocks light from reaching the shadow area, the mask of the shadow regions is easily determined from the modulation image $B\left( x,y \right)$ in Eq. (15) by intensity segmentation. Therefore, we finally acquire the valid mask of the object without the shadow regions according to

$$I_{mask}\left( x,y \right) = I_{diff}\left( x,y \right) - I_{s}\left( x,y \right),$$
where $I_{s}$ is the mask of the shadow region.
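A compact sketch of Eqs. (17)–(18) (NumPy assumed; float image stacks of shape (3, H, W) and illustrative thresholds, both our assumptions):

```python
import numpy as np

def object_mask(I, I_ref, B, eps=10.0, shadow_thresh=5.0):
    """Segment the object region via Eqs. (17)-(18).

    I, I_ref: the three deformed fringe images with and without the object;
    B: modulation image from Eq. (20). eps and shadow_thresh are
    illustrative intensity thresholds (our choices)."""
    diff = np.abs(np.sum(I - I_ref, axis=0))  # Eq. (17): object + shadow
    I_diff = diff > eps                       # imbw: binarization
    I_s = I_diff & (B < shadow_thresh)        # shadow pixels: low modulation
    return I_diff & ~I_s                      # Eq. (18): object region only
```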

4. Overall workflow of APPU

The workflow of the proposed approach is shown in Fig. 5, and the process mainly includes the following steps:

Fig. 5. The overall workflow of the proposed APPU approach. With the assistance of the dithering defocusing technique and 3-step PSP, the wrapped phase map and the modulation image of the measured object are obtained. When the number of pixels on private edges is greater than $10\%$ of the number of pixels on all edges of the temporary unwrapped phase map, the temporary phase map is classified as a failed phase map; otherwise, it is classified as a relative phase map. Referring to the mobile reference phase map, the failed phase map is converted to a relative phase map once the number of pixels on private edges falls below $10\%$ of the number of pixels on all edges. Using the valid shadow length and the proposed SDEM, the relative phase map is transformed into the final absolute phase map of the measured object.

Step 1: Extraction of the wrapped phase map and the modulation image. With the traditional 3-step PSP, the deformed fringe patterns are captured by the camera. Excluding background regions and shadow areas, the wrapped phase of an object is calculated by

$$\phi\left( x,y \right) = \tan^{-1}\left\{ -\frac{\sum_{n=1}^{N} I_{n}\left( x,y \right)\sin\left[ 2\pi\left( n-1 \right)/N \right]}{\sum_{n=1}^{N} I_{n}\left( x,y \right)\cos\left[ 2\pi\left( n-1 \right)/N \right]} \right\}.$$
Moreover, the intensity modulation image is derived from ${{I}_{n}}$ [26] as:
$$B\left( x,y \right) = \frac{2}{N}\left\{ \left[ \sum_{n=1}^{3} I_{n}\left( x,y \right)\sin\left( \frac{2\pi n}{N} \right) \right]^{2} + \left[ \sum_{n=1}^{3} I_{n}\left( x,y \right)\cos\left( \frac{2\pi n}{N} \right) \right]^{2} \right\}^{0.5}.$$
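Both quantities of Step 1 come from the same two weighted sums. A sketch for the 3-step case (NumPy assumed; `arctan2` realizes the signed $\tan^{-1}$ of Eq. (19), and the magnitude in Eq. (20) is unchanged by the common index shift between $n$ and $n-1$):

```python
import numpy as np

def phase_and_modulation(I):
    """Wrapped phase (Eq. (19)) and modulation image (Eq. (20)) from a
    stack I of shape (3, H, W) holding the captured fringe images."""
    N = 3
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    delta = 2 * np.pi * (n - 1) / N        # phase shifts of the 3 patterns
    num = np.sum(I * np.sin(delta), axis=0)
    den = np.sum(I * np.cos(delta), axis=0)
    phi = np.arctan2(-num, den)            # wrapped phase in (-pi, pi]
    B = (2 / N) * np.hypot(num, den)       # modulation amplitude
    return phi, B
```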

Step 2: Conversion of the failed phase map to a relative phase map. First, by combining PPU with the given background plane $z_{max}$, the wrapped phase in Eq. (19) is preliminarily unwrapped into a temporary phase map. Second, by determining whether the number of pixels on private edges is less than $10\%$ of the total number of pixels on all edges, our approach divides the unwrapped results of PPU into two categories: the failed phase map and the relative phase map.

Finally, to convert the failed phase map into a relative phase map, the mobile reference plane replaces the given background plane $z_{max}$ to generate a new unwrapped phase map of the objects. When the number of pixels on private edges is less than $10\%$ of the total number of pixels on all edges, we successfully obtain a relative phase map from the failed phase map or the wrapped phase map.

Step 3: Calculation of the valid shadow length. In Section 3.3, according to Eq. (17), we obtain the mask combination, which includes object regions and shadow regions. Based on the modulation image, the object region and the shadow region are detected and expressed as $I_{o}$ and $I_{s}$, respectively. Then, we determine the top pixel $S$ of the upper boundary of the shadow region by a linear search. In the column of pixel $S$, the corresponding pixel $S_{0}$ on the upper boundary of the object region is found by searching, as shown in Fig. 6. The valid shadow length is therefore the pixel distance between $S_{0}$ and $S$ along that image column.
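A minimal sketch of this column search (ours; binary masks `I_o` and `I_s` from Section 3.3 assumed):

```python
import numpy as np

def valid_shadow_length(I_o, I_s):
    """Pixel distance |S0 S| between the upper boundaries of the shadow
    and object masks, searched along the column of the shadow's top pixel S."""
    rows_s, cols_s = np.nonzero(I_s)
    if rows_s.size == 0:
        raise ValueError("no shadow region detected")
    top = np.argmin(rows_s)               # top pixel S of the shadow region
    r_S, c_S = rows_s[top], cols_s[top]
    col_obj = np.nonzero(I_o[:, c_S])[0]  # object pixels in that column
    r_S0 = col_obj.min()                  # upper-boundary pixel S0 of the object
    return abs(int(r_S0) - int(r_S))      # valid shadow length in pixels
```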

Step 4: Fast 3-D reconstruction of a dynamic object. By combining the relative phase map with the proposed SDEM, we can independently retrieve the absolute unwrapped phase map of the dynamic object, which is described in Section 3.2 in detail. Moreover, with the aid of the dithering defocusing technique for the projector [27], the dynamic object is rapidly reconstructed without depth limitations or additional projection patterns.

Fig. 6. In the modulation image, the object region $I_{o}$ and the shadow area $I_{s}$ are determined by the detection and segmentation of Section 3.3. The upper boundaries of the object region and the shadow area are separately extracted to calculate the valid shadow length $\left| S_{0}S \right|$.

5. Experiments

We construct a typical 3-D shape measurement system to verify the practical performance of the proposed approach, as shown in Fig. 7. The DFP system includes a camera with a reduced image resolution of $1280\times960$ (Grasshopper3, Point Grey), a digital light processing (DLP) projector with $1140\times912$ resolution (LightCrafter 4500, Texas Instruments), a smooth background plane, and a linear stage. The DLP projector projects dithering binary patterns at 140 Hz, synchronized with the frame rate of the camera. Before measuring objects, the absolute phase map of the background plane is obtained using 3-step PSP and multi-frequency phase unwrapping [10].

Fig. 7. The DFP system includes a DLP projector, a CMOS camera, a background plane, and a linear stage.

5.1 Reliability verification

To test the reliability of the proposed approach over a large depth measurement range, we measure objects at different distances from the background plane. Figure 8(a) shows two isolated statues in front of the background plane. As the two statues move from spatial position $P_{1}$ to spatial position $P_{2}$ and subsequently to spatial position $P_{3}$, the distance between the statues and the background plane $z_{max}$ gradually increases, as shown in Fig. 8(b). Figures 8(c) and 8(d) display the masks and the absolute phase maps of the two statues at one of the three spatial positions, respectively.

Fig. 8. Spatial distribution of the statues and the background plane. (a) Two isolated statues in front of the background plane. (b) The statues at three different spatial positions $P_{1}$, $P_{2}$, and $P_{3}$. (c) The mask of the two statues at one of the three spatial positions. (d) The absolute phase map of the two statues at one of the three spatial positions.

When the two statues remain quasi-stationary at the three spatial positions, the APPU and PPU approaches [19] are applied to retrieve their absolute phase maps. Moreover, multi-frequency phase unwrapping [10] is utilized to measure the surface profiles by projecting auxiliary fringes, and its results are regarded as the ground truth. The reconstructed results of the three approaches are presented in Fig. 9. Figures 9(a)–9(c) show the 3-D shapes of the two isolated statues at the spatial positions $P_{1}$, $P_{2}$, and $P_{3}$ obtained with our proposed APPU. The reconstructed models of PPU are shown in Figs. 9(d)–9(f). Compared with the ground-truth results of Figs. 9(g)–9(i) retrieved by multi-frequency phase unwrapping, APPU and PPU both produce correct and unambiguous surfaces when the statues are at spatial position $P_{1}$. For the statues at spatial position $P_{2}$, APPU presents a complete result while PPU presents a failed result, as described in Fig. 9(b) and Fig. 9(e), respectively. As the statues move further from the background plane and reach spatial position $P_{3}$, APPU achieves the correct 3-D reconstruction in Fig. 9(c), whereas the result of PPU in Fig. 9(f) is ambiguous compared with the ground-truth model in Fig. 9(i). Therefore, the proposed APPU effectively achieves 3-D shape measurement without depth range limitations.

Fig. 9. The reconstructed results of two isolated statues at three spatial positions. (a–c) Reconstructed 3-D surfaces using the proposed APPU approach at spatial positions $P_{1}$, $P_{2}$, and $P_{3}$. (d–f) Reconstructed 3-D surfaces using the traditional PPU approach at spatial positions $P_{1}$, $P_{2}$, and $P_{3}$. (g–i) Reconstructed 3-D surfaces using multi-frequency phase unwrapping at spatial positions $P_{1}$, $P_{2}$, and $P_{3}$.

Different from PPU and multi-frequency phase unwrapping, the APPU approach uses an actual plane as the background. In APPU, the image segmentation of object areas and background areas influences the accuracy of the results, as described in Section 3.3. Table 1 lists the quantitative statistics for Figs. 9(a)–9(c). The error rate and the missing rate represent the ratio of pixels with invalid phase recovery and the ratio of pixels with missing phase recovery, respectively. According to Table 1, the error rate of APPU is less than $1.5\%$.

Table 1. Reliability verification for the proposed APPU approach

5.2 Dynamic object reconstruction

Next, the proposed approach is applied to reconstruct dynamic objects moving within $6\pi$ in the phase domain. Because only three fringe patterns are utilized, the proposed APPU approach is applicable to the high-speed 3-D shape measurement of free-moving objects.

For a free-moving hand, APPU achieves dynamic 3-D shape recovery as the hand moves away from the measurement system. Figure 10 presents the reconstructed results of the hand, and more details are shown in Visualization 1. Although the hand moves over a large depth range, the 3-D surface profiles are correctly and clearly retrieved throughout the process.

Fig. 10. The 3-D reconstruction results of a free-moving hand (see Visualization 1).

For a moving lid with complex edge features, as shown in Fig. 11(a), the image segmentation method works well for separating object areas and background areas. The 3-D reconstruction results are presented in Visualization 2 and Fig. 11(b). The dynamic measurements indicate that our approach is highly suitable for the high-speed 3-D shape measurement of arbitrary free-moving objects over a large depth measurement range.

Fig. 11. The measurement results for a lid. (a) A lid with complex edge features. (b) The reconstructed 3-D models of the free-moving lid (see Visualization 2).

6. Discussion

The proposed absolute phase unwrapping approach, APPU, has the following advantages:

  • Simple measurement system setup and robust depth estimation method. The measurement system setup is simple because the reference phase map and the valid shadow length are directly calculated by means of an actual background plane. Moreover, the valid shadow length is used to estimate the distance between dynamic objects and the background plane, and the proposed SDEM robustly achieves the depth estimation of dynamic objects.
  • Accurate and absolute phase unwrapping in a large depth range. The reference phase map obtained by temporal phase unwrapping provides the fringe order determination for the wrapped phase map of objects. The proposed APPU converts the wrapped phase maps pixel by pixel into relative phase maps, and then into absolute phase maps using the SDEM, so absolute phase unwrapping remains effective in a large depth range.
  • High-speed and automated 3-D shape measurement of free-moving objects. Without auxiliary projection patterns, the absolute 3-D recovery technique is suitable for high-speed 3-D shape measurement based on the traditional 3-step PSP. Unlike other PPU-based approaches, the proposed APPU is very reliable for automatically measuring free-moving objects with arbitrary surface shapes.

However, because the traditional PPU method is used to initially unwrap the wrapped phase in the proposed APPU approach, the depth variation across the measured surface of the dynamic objects must be less than $2\pi$ in the phase domain. Meanwhile, the object-induced shadows on the background plane must remain visible throughout the 3-D measurement process.

7. Conclusion

This paper presents a novel phase unwrapping approach, APPU, which extends the traditional PPU approach to an unlimited depth range for the absolute 3-D shape measurement of dynamic objects. The temporary phase maps unwrapped by PPU are categorized into two classes: failed phase maps and relative phase maps. Referring to the absolute phase map of a mobile reference plane, the failed phase maps are converted into relative phase maps using the edge difference quantization technique. Subsequently, the relative phase maps are transformed into the absolute phase maps of the dynamic objects by the proposed SDEM, which selects the correct phase map from the candidates. Experiments demonstrate that the proposed APPU approach is suitable for high-speed 3-D reconstruction without depth limitations or additional projection patterns.

Funding

Research on the Major Scientific Instrument of National Natural Science Foundation of China (61727809); Anhui Science and Technology Department (201903c08020002).

Disclosures

The authors declare no conflicts of interest.

References

1. S. Van Der Jeught and J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. 87, 18–31 (2016).

2. M. Landmann, S. Heist, P. Dietrich, P. Lutzke, I. Gebhart, J. Templin, P. Kühmstedt, A. Tünnermann, and G. Notni, “High-speed 3d thermography,” Opt. Lasers Eng. 121, 448–455 (2019).

3. Q. Zhou, X. R. Qiao, K. Ni, X. H. Li, and X. H. Wang, “Depth detection in interactive projection system based on one-shot black-and-white stripe pattern,” Opt. Express 25(5), 5341–5351 (2017).

4. B. W. Li and S. Zhang, “Superfast high-resolution absolute 3d recovery of a stabilized flapping flight process,” Opt. Express 25(22), 27270–27282 (2017).

5. S. Zhang, “High-speed 3d shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018).

6. L. Lu, J. T. Xi, Y. G. Yu, and Q. H. Guo, “Improving the accuracy performance of phase-shifting profilometry for the measurement of objects in motion,” Opt. Lett. 39(23), 6715–6718 (2014).

7. L. Lu, Y. K. Yin, Z. L. Su, X. Z. Ren, Y. S. Luan, and J. T. Xi, “General model for phase shifting profilometry with an object in motion,” Appl. Opt. 57(36), 10364–10369 (2018).

8. C. Zuo, S. J. Feng, L. Huang, T. Y. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).

9. C. E. Towers, D. P. Towers, and J. D. Jones, “Optimum frequency selection in multi-frequency interferometry,” Opt. Lett. 28(11), 887–889 (2003).

10. Y. J. Wang and S. Zhang, “Superfast multi-frequency phase-shifting technique with optimal pulse width modulation,” Opt. Express 19(6), 5149–5155 (2011).

11. T. Pribanic, S. Mrvos, and J. Salvi, “Efficient multiple phase shift patterns for dense 3d acquisition in structured light scanning,” Image Vis. Comput. 28(8), 1255–1266 (2010).

12. J. W. Song, D. L. Lau, Y.-S. Ho, and K. Liu, “Automatic look-up table based real-time phase unwrapping for phase measuring profilometry and optimal reference frequency selection,” Opt. Express 27(9), 13357–13371 (2019).

13. D. Scharstein and R. Szeliski, “High-accuracy stereo depth maps using structured light,” in IEEE Conference on Computer Vision and Pattern Recognition (2003), pp. 195–202.

14. T. Weise, B. Leibe, and L. V. Gool, “Fast 3d scanning with automatic motion compensation,” in IEEE Conference on Computer Vision and Pattern Recognition (2007), pp. 1–8.

15. Z. W. Li, K. Zhong, Y. F. Li, X. H. Zhou, and Y. S. Shi, “Multiview phase shifting: a full-resolution and high-speed 3d measurement framework for arbitrary shape dynamic objects,” Opt. Lett. 38(9), 1389–1391 (2013).

16. T. Y. Tao, Q. Chen, J. Da, S. J. Feng, Y. Hu, and C. Zuo, “Real-time 3-d shape measurement with composite phase-shifting fringes and multi-view system,” Opt. Express 24(18), 20253–20269 (2016).

17. W. Yin, S. J. Feng, T. Y. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3d shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019).

18. T. Y. Tao, Q. Chen, S. J. Feng, J. M. Qian, Y. Hu, L. Huang, and C. Zuo, “High-speed real-time 3d shape measurement based on adaptive depth constraint,” Opt. Express 26(17), 22440–22456 (2018).

19. Y. T. An, J.-S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445–18459 (2016).

20. Y. Xing and C. Quan, “Reference-plane-based fast pixel-by-pixel absolute phase retrieval for height measurement,” Appl. Opt. 57(17), 4901–4908 (2018).

21. M. H. Duan, Y. Jin, C. M. Xu, X. B. Xu, C. A. Zhu, and E. H. Chen, “Phase-shifting profilometry for the robust 3-d shape measurement of moving objects,” Opt. Express 27(16), 22100–22115 (2019).

22. C. F. Jiang, B. W. Li, and S. Zhang, “Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers,” Opt. Lasers Eng. 91, 232–241 (2017).

23. J. F. Dai, Y. T. An, and S. Zhang, “Absolute three-dimensional shape measurement with a known object,” Opt. Express 25(9), 10384–10396 (2017).

24. D. Marr and E. Hildreth, “Theory of edge detection,” Proc. Royal Soc. London Ser. B Biol. Sci. 207(1167), 187–217 (1980).

25. C. Zuo, Q. Chen, G. H. Gu, S. J. Feng, and F. X. Y. Feng, “High-speed three-dimensional profilometry for multiple objects with complex shapes,” Opt. Express 20(17), 19493–19510 (2012).

26. K. Liu, Y. C. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Dual-frequency pattern scheme for high-speed 3-d shape measurement,” Opt. Express 18(5), 5229–5244 (2010).

27. Y. J. Wang and S. Zhang, “Three-dimensional shape measurement with binary dithered patterns,” Appl. Opt. 51(27), 6631–6636 (2012).

Supplementary Material (2)

Visualization 1: The 3-D reconstruction results of a free-moving hand.
Visualization 2: The reconstructed 3-D models of a free-moving lid.
