Range-precision improvement of a time-of-flight range sensor using dual reference plane sampling

Open Access

Abstract

This study aimed to achieve range precision on the sub-100 µm order with time-of-flight (TOF) range imaging for 3-D scanners. The precision of a TOF range imager was improved using dual reference plane sampling (DRPS). DRPS uses two short-pulse lasers to reduce the gate driver jitter, which limits the range precision at the sub-100 µm level. A proof-of-concept measurement system implemented with a TOF range imager demonstrated the reduction of the driver jitter, resulting in reduced column-to-column variation in range precision. The developed system also achieved a high precision of 52 µm using a single frame and 27 µm using a 10-frame average.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Three-dimensional (3-D) scanning systems are useful for industrial measurement applications, such as component inspection, reverse design, and reverse engineering, in combination with 3-D printers. Contactless 3-D scanners [1–3] typically use the active triangulation method because of its high accuracy/precision capability. For example, high-accuracy 3-D scanner models [2,3] intended for coordinate measuring machines or robot arms provide sub-10 µm to several tens of µm precision (point-cloud deviation) in the depth direction, with a measurable range of 15–70 mm at a working distance of 100–200 mm. However, the method requires a baseline between the camera and the light source to achieve high range precision. The baseline, e.g., over 140 mm in [3], imposes a fundamental limit on miniaturizing the scanner head and generates occlusions.

An alternative approach is time-of-flight (TOF)-based 3-D measurement. TOF range imagers using CMOS image sensor technology have achieved strong growth since the 2010s [4–14]. Most of this growth has been driven by applications such as gaming and touchless human interfaces based on gesture/object recognition. TOF-based 3-D scanners offer attractive features, such as a coaxial system enabling occlusion-free 3-D measurement and a small head size. The major challenge is the improvement of range precision in the TOF range imaging system. Typical TOF range cameras with pulse modulation have a range precision of a few millimeters, which is inadequate for 3-D scanners for industrial measurement. Recently, frequency-modulated continuous-wave (FMCW) TOF [15,16] and pulsed-coherent TOF systems [17] with sub-10 µm precision have been presented for high-precision 3-D scanners. However, they have only a single or a few receiver channels; thus, high-speed 2-D mechanical scanning is required to build a 3-D scanning system.

For higher range precision in pulse-based TOF range imagers, we proposed a TOF method using an impulse photocurrent response and high-speed lock-in pixels based on a lateral electric field modulator (LEFM) [18], achieving a range precision of 250 µm [19,20]. At such sub-millimeter range precision, clock jitter becomes a critical issue. Thus, we proposed reference plane sampling (RPS) [21] to reduce the jitter of the light source trigger. In this method, a part of the pixel array other than the main pixels is used as reference pixels to measure the reflected light from a reference plane located at a known fixed distance. As the TOF computed at the reference pixels includes the jitter component of the light source, the light source jitter is canceled by taking the difference between the TOFs calculated from the main and reference pixels. With this method, the prototype TOF range sensor achieved a high range precision of 64 µm [21].

The residual jitter, i.e., the gate driver jitter, needs to be reduced to improve the range precision further. In this study, we propose dual reference plane sampling (DRPS) to reduce the gate driver jitter. As the gate drivers are implemented in a column-parallel architecture, the pixels in each column share the same fluctuation owing to the gate driver jitter. In DRPS, an additional laser is used to generate another reference light in the column-parallel reference pixels, in addition to the conventional RPS. The driver jitter component is therefore extracted from the column-parallel reference pixels and reduced by taking the difference. This study discusses the improvement in range precision after gate driver jitter reduction using a proof-of-principle DRPS measurement system. In the implemented system, the same TOF range imager as in [21] was used.

The remainder of this paper is organized as follows. Section 2 describes the TOF range image sensor and the indirect TOF method used in the measurement. In Section 3, the concepts of the conventional RPS and the newly proposed DRPS are presented. Section 4 describes the implemented DRPS measurement system, evaluation results, and discussion. Finally, conclusions are presented in Section 5.

2. TOF range imaging

2.1 TOF range imager and indirect TOF with impulse photocurrent response

Figure 1 shows the TOF range imager and the indirect TOF method using the impulse photocurrent response used in this study. The TOF range imager has 192(H) × 8(V) effective pixels that consist of LEFM-based three-tap lock-in pixels. The gate drivers were implemented in each pixel column. In the indirect TOF method, a short-pulse laser with a pulse width of less than 100 ps was used as the light source, unlike the pulse- or continuous-wave indirect TOF. The very short pulse generates the photocurrent at the photodiode, and its response determines the equivalent pulse width for the TOF calculation. The equivalent pulse width is designed to be short using a three-tap LEFM lock-in pixel with high-speed charge modulation.

Fig. 1. TOF measurement technique and the prototype TOF sensor used in this study. The indirect TOF measurement technique is based on the impulse photocurrent response using three-tap lock-in pixels. Three-tap LEFM with a drain and its potential diagram are shown (G1 = high (2.0 V); G2, G3, and GD = low (−1.0 V)).

The operation of the indirect TOF is as follows: the reflected light pulse, together with the background light, is distributed to G1 and G2, whereas G3 captures only the background light. The signal light accumulated by the two gates (G1 and G2) contributes to the TOF calculation. Between the gate clocks, a non-overlap time (${T_{NOV}}$) is created in the driver circuits. This prevents partitioning of the photogenerated charge and reduces distortion of the modulation characteristics. As the falling edges of each clock switch the integration of photogenerated electrons, only the gate jitter of the G1 falling edge influences the range precision.

2.2 Distance calculation and range precision

When the photocurrent response is approximated by a Gaussian function, the calculated distance ${D_{tof}}$ is expressed as [21]

$${D_{tof}} = \frac{c}{2}\left( {\sqrt {2\pi } \tau X + {T_{ofs}}} \right),$$
where c is the speed of light, $\tau $ is the intrinsic photocurrent response, and ${T_{ofs}}$ is the time offset. X is the ratio of the signal charges, which is calculated by
$$X = \frac{{{N_2} - {N_3}}}{{{N_1} + {N_2} - 2{N_3}}},$$
where ${N_1}$, ${N_2}$, and ${N_3}$ are the numbers of signal electrons obtained by FD1, FD2, and FD3, respectively. When the background light is negligibly small (${N_3} \approx 0$), the range precision, ${\sigma _D}$, can be obtained by applying the error propagation formula to Eq. (1) as follows [21]:
$${\sigma ^2}_D = \frac{{{D^2}_{MAX}}}{{\overline {{N_{SM}}} }}\left[ {X({1 - X} )+ \frac{{2{N^2}_R}}{{{N_{SM}}}}\{{3X({X - 1} )+ 2} \}} \right] + {{\left( {\frac{c}{2}} \right)}^2}{\sigma ^2}_{Tjitter},$$
where ${D_{max}}\; \left( { = \frac{c}{2}\sqrt {2\pi } \tau } \right)$ corresponds to the measurable range, ${N_R}$ is the number of electrons due to dark noise, ${N_{SM}}$ is the sum of the effective number of signal electrons ($\overline {{N_{SM}}} = \overline {{N_1}} + \overline {{N_2}} $), and ${\sigma _{Tjitter}}$ is a jitter component induced in ${T_{ofs}}$. In standard TOF range imagers, the signal shot noise is dominant; the second term is negligibly small compared with the first term. Therefore, the range precision is proportional to the photocurrent response ($\tau $) and is inversely proportional to the square root of the acquired signal electrons ($\overline {{N_{SM}}} $).
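
As a concrete illustration of Eqs. (1)–(3), the following Python sketch computes the distance from the three tap signals and the resulting range precision. The numerical values used in the example (τ = 60 ps, 2 × 10⁴ signal electrons, 10 electrons of dark noise) are assumptions for illustration only, not the parameters of the prototype sensor.

import numpy as np

C = 2.998e8  # speed of light [m/s]

def distance_from_taps(N1, N2, N3, tau, T_ofs):
    """Eqs. (1)-(2): distance from the three tap signals (in electrons)."""
    X = (N2 - N3) / (N1 + N2 - 2.0 * N3)                       # Eq. (2)
    return 0.5 * C * (np.sqrt(2.0 * np.pi) * tau * X + T_ofs)  # Eq. (1)

def range_precision(X, N_SM, N_R, tau, sigma_Tjitter):
    """Eq. (3): shot-noise, dark-noise, and jitter contributions to sigma_D."""
    D_max = 0.5 * C * np.sqrt(2.0 * np.pi) * tau               # measurable range
    shot = (D_max**2 / N_SM) * (X * (1.0 - X)
           + (2.0 * N_R**2 / N_SM) * (3.0 * X * (X - 1.0) + 2.0))
    jitter = (0.5 * C)**2 * sigma_Tjitter**2
    return np.sqrt(shot + jitter)

# Example with assumed values: tau = 60 ps gives D_max of about 23 mm, and the
# shot-noise-limited precision at mid-range (X = 0.5) is roughly 80 um here.
print(range_precision(X=0.5, N_SM=2.0e4, N_R=10.0, tau=60e-12, sigma_Tjitter=0.0))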

3. Improvement technique of range precision

3.1 Reference plane sampling

The range precision is determined by the photocurrent response and the signal shot noise. However, when the range precision is better than the sub-millimeter order, the second term in Eq. (3) becomes non-negligible, and the jitters of the gate clock and the light source trigger become dominant. Therefore, we previously proposed RPS to reduce the jitter of the light source trigger [21]. The concept of RPS is shown in Fig. 2. In this method, the laser beam is split into two paths, toward the main and reference pixels. The light incident on the reference pixels originates from a reference plane fixed at a known distance.

Fig. 2. Conceptual schematic of RPS. The reflected light from the reference plane is incident onto the reference pixels. ${\delta _L}$ and ${\delta _d}(i )$ represent the jitters of the light source and the driver in the i-th column, respectively.

The TOFs calculated from the main and reference pixels in the $i$-th and ${i_R}$-th columns, ${T_{main}}(i )$ and ${T_{Ref}}({{i_R}} )$, are expressed as follows:

$${T_{main}}(i )= {T_{tof}}(i )+ {\delta _{L1}} + {\delta _d}(i ),$$
$${T_{Ref}}({{i_R}} )= {T_{R1}} + {\delta _{L1}} + {\delta _d}({{i_R}} ),$$
where ${T_{tof}}(i )$ and ${T_{R1}}$ are the TOFs corresponding to the distances to the target and reference planes, respectively. In the equation, $\delta $ represents a time fluctuation due to jitters, ${\delta _{L1}}$ is the jitter of the laser, and ${\delta _d}(i )$ is the driver jitter component of the $i$-th column. The TOF obtained by RPS, ${\textrm{T}_{tof,RPS}}$ (i), is expressed as
$${T_{tof,RPS}}(i )= {T_{main}}(i )- {T_{Ref}}({{i_R}} )= {T_{tof}}(i )- {T_{R1}} + ({{\delta_d}(i )- {\delta_d}({{i_R}} )} ).$$

Thus, the light source jitter ${\delta _{L1}}$ is canceled, whereas the driver jitter remains and becomes dominant at high range precision.
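
This cancellation can be checked with a small Monte-Carlo sketch of Eqs. (4)–(6); the TOF values and jitter magnitudes below are assumed for illustration only.

import numpy as np

rng = np.random.default_rng(0)
frames = 10_000
T_tof, T_R1 = 1.8e-9, 1.5e-9                    # true TOFs [s] (assumed)

delta_L1 = rng.normal(0.0, 1.0e-12, frames)     # light source jitter, 1 ps rms (assumed)
delta_d_i = rng.normal(0.0, 0.3e-12, frames)    # driver jitter, main column i (assumed)
delta_d_iR = rng.normal(0.0, 0.3e-12, frames)   # driver jitter, reference column i_R (assumed)

T_main = T_tof + delta_L1 + delta_d_i           # Eq. (4)
T_ref = T_R1 + delta_L1 + delta_d_iR            # Eq. (5)
T_rps = T_main - T_ref                          # Eq. (6): delta_L1 cancels

print("std without RPS:", T_main.std())         # dominated by the light source jitter
print("std with RPS   :", T_rps.std())          # only the residual driver jitter remains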

3.2 Dual reference plane sampling (DRPS)

As described above, RPS reduces the light source jitter but leaves the residual driver jitter ${\delta _d}$, which must be suppressed to improve the range precision further. To suppress the driver jitter, we propose DRPS, which uses additional row-wise pixels as a column reference pixel array on the same chip. Figure 3 shows the concept of DRPS. In addition to the same reference pixels (Ref1) as in RPS, DRPS has additional reference pixels: column reference (ColRef) and reference 2 (Ref2) pixels. These pixels capture the reflected light from reference plane 2, which is also placed at a fixed distance. The TOFs obtained by the ColRef and Ref2 pixels in the $i$-th and ${i_R}$-th columns, ${T_{ColRef}}(i )$ and ${T_{Ref2}}({{i_R}} )$, respectively, are given by

$${T_{ColRef}}(i )= {T_{R2}} + {\delta _{L2}} + {\delta _d}(i ),$$
$${T_{Ref2}}({{i_R}} )= {T_{R2}} + {\delta _{L2}} + {\delta _d}({{i_R}} ),$$
where ${T_{R2}}$ is the TOF associated with the distance to reference plane 2 and ${\delta _{L2}}$ is the jitter of laser 2. The difference between Eqs. (7) and (8) is calculated as
$${T_{ColRef}}(i )- {T_{Ref2}}({{i_R}} )= {\delta _d}(i )- {\delta _d}({{i_R}} ).$$

Thus, the driver jitters, ${\delta _d}(i )$ and ${\delta _d}({{i_R}} )$, are extracted by calculating the TOFs of the ColRef and Ref2 pixels. In the same manner as in RPS, the TOFs of the main and reference pixels are given by Eqs. (4) and (5), respectively. The TOF of DRPS, ${T_{tof,DRPS}}$, is calculated by taking the difference between Eqs. (6) and (9) as follows:

$${T_{\textrm{tof},\textrm{DRPS}}}(i )= {T_{main}}(i )- {T_{Ref}}({{i_R}} )- \{{{T_{ColRef}}(i )- {T_{Ref2}}({{i_R}} )} \}= {T_{tof}}(i )- {T_{R1}}.$$

Consequently, the distance calculated using DRPS is independent of the driver jitter (${\delta _d}(i )\; \textrm{and}\; {\delta _d}({{i_R}} )$) and ${T_{R2}}$, resulting in high precision.

Fig. 3. Conceptual schematic of DRPS. An additional light source (laser 2) is irradiated onto reference plane 2, and the reflected light is captured by column reference (ColRef) and reference 2 (Ref2) pixels. ${\delta _{L1}},$ ${\delta _{L2}}$, and ${\delta _d}(i )$ represent the jitters of laser 1, laser 2, and the driver in the i-th column, respectively. Note that the driver jitters of the main and column reference pixels are identical in each column.

In reality, the measured TOFs (${T_{main}}$, ${T_{ColRef}}$, ${T_{Ref}}$, and ${T_{Ref2}}$) exhibit fluctuations owing to the photon shot noise of each pixel. To reduce the shot-noise influence on the reference terms, the TOFs of the reference pixels are averaged as

$${T_{\textrm{tof},\textrm{DRPS}}}(i )= {T_{main}}(i )- {T_{ColRef}}(i )- \frac{1}{{{N_R}}}\mathop \sum \limits_{{i_R}}^{{N_R}} ({{T_{Ref}}({{i_R}} )- {T_{Ref2}}({{i_R}} )} ),$$
where ${N_R}$ is the number of columns for the reference pixels.
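
A minimal per-column implementation of Eq. (11) could look as follows; the function and variable names are illustrative, and the per-pixel TOFs are assumed to have been computed beforehand from Eqs. (1)–(2).

import numpy as np

def tof_drps(T_main_i, T_colref_i, T_ref, T_ref2):
    """Eq. (11): driver- and light-source-jitter-corrected TOF for column i [s].

    T_main_i, T_colref_i: TOFs of the main and column reference pixels in column i.
    T_ref, T_ref2: arrays of TOFs over the N_R reference columns (Ref1 and Ref2).
    """
    ref_term = np.mean(np.asarray(T_ref) - np.asarray(T_ref2))  # average over the N_R reference columns
    return T_main_i - T_colref_i - ref_term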

In principle, the above correction does not require the second light source of DRPS. The column-parallel (row-wise) reference pixels enable the extraction of the driver jitter, and jitter reduction can be performed using the extracted data. Surface irradiation (two-dimensional illumination), which is typically used in standard TOF cameras, could also be used to create the column-parallel reference pixels. However, the laser module used in this study has limited power, and the light must be shaped into a line to achieve precision on the 100-µm order. Separating and optically masking line-shaped light is difficult or causes a notable loss of light power. Hence, we propose DRPS using an additional light source to demonstrate the proof of concept of driver jitter reduction.

The current TOF sensor using the impulse photocurrent response [21] has a limited measurable range of 20–30 mm. The measurable range can be expanded using a range-shift technique for pulse-based TOF measurements [22]: to capture a scene with large depth, the measurable range is shifted by controlling the laser delay. In principle, the range-shift technique is applicable to DRPS even when the reference planes are placed at fixed distances. For the range-shifting operation with RPS, the delay of the laser trigger (laser 1) or of the gating signals is controlled so that the received light falls within the measurable range, i.e., the falling edge of G1(i) lies inside the photocurrent response. When the distance to the reference plane is fixed, the TOF measurement using the reference pixel array can fall outside the measurable range. If an additional delay is inserted for the gating clocks of the reference pixels, i.e., G1(${i_R}$)–G3(${i_R}$), the measurable range can be adjusted individually for the reference plane. For DRPS, reference plane 2 is virtually shifted by changing the trigger delay of laser 2, which is already available in the existing measurement system, as shown in Fig. 4(a). Thus, the measurable ranges of the main pixels, column reference pixels (ColRef), and reference pixels (Ref1 and Ref2) can be adjusted. The current TOF sensor [21] does not support this range-shifting operation with RPS, but it can be realized by adding a delay control circuit for the gating signals of the reference array, i.e., G1(${i_R}$)–G3(${i_R}$). One concern with this operation is the additional jitter introduced by the delay added to these gating signals; however, this jitter is included in the driver jitter of column ${i_R}$ and is canceled out by the DRPS operation.
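
The virtual shift of the measurable range by a trigger delay amounts to a simple round-trip-time calculation, sketched below. The function name and the notion of a nominal window center are assumptions for illustration, not the control scheme of the actual system.

C = 2.998e8  # speed of light [m/s]

def trigger_delay_shift(d_target_m, d_window_center_m):
    """Additional trigger delay [s] that virtually shifts the measurable window
    from its nominal center distance to d_target (round-trip time difference)."""
    return 2.0 * (d_target_m - d_window_center_m) / C

# Example: shifting a window centered at 270 mm to a target at 500 mm
print(trigger_delay_shift(0.500, 0.270))   # ~1.53 ns of additional delay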

Fig. 4. Schematics of the measurement system. (a) Optical system for DRPS. (b) Top: optical path of laser 1; bottom: optical path of laser 2. The beam splitter splits the emitted light from laser 1 into two directions: the target plane and reference plane 1. The light from laser 2 is emitted onto the diffuser, which is equivalent to reference plane 2. The transmitted light is captured by the column reference pixels in the image sensor.

4. Experimental setup and results

4.1 Measurement setup

The optical system of DRPS is shown in Fig. 4(a), and the optical paths of lasers 1 and 2 are shown at the top and bottom of Fig. 4(b), respectively. The trigger generated in the sensor chip was input to the lasers with arbitrary delays. Both laser beams (lasers 1 and 2) were shaped into lines by line-generator lenses; the line widths correspond to one-pixel lines when in focus. The light emitted from laser 1 was split by a beam splitter toward the target and reference planes. The split beams were masked by optical masks placed in front of the target and the reference plane, respectively. These optical masks were arranged so that the reflected light from the target and from the reference plane was captured separately by the main and reference pixel arrays of the sensor. A margin exists between the main and reference regions to avoid light mixing. The light emitted from laser 2 was irradiated onto the diffuser, and the transmitted light was incident onto the column reference pixels of the sensor. Thus, the diffuser plane is equivalent to reference plane 2.

In the current configuration of DRPS, the beam splitter was used to form a reliable optical path for the references. The beam splitter reduces the laser power irradiated onto the target to a quarter of the total emitted power. If the target distance is long enough, the beam splitter can be removed by placing the sensor and the light source close together. In this case, there are several possible ways to create the reference planes equivalently. One simple method is to place flat marker objects near the target to serve as reference planes 1 and 2. Another method is to use optical fibers: a part of the light from laser 1 is split off and guided directly to the reference pixel array by an optical fiber. In this case, the splitting ratio is not limited to 50%/50% and can be set freely, e.g., 90% (target)/10% (reference). The column reference can be created in the same manner, although additional optics to create line-shaped light are required. In this study, however, we opted to use the beam splitter to prove the concept of DRPS for gate driver jitter reduction.

Figure 5 shows the actual measurement setup of DRPS. Short-pulse lasers (LDB-160, Tama Electric Inc.) were used; the wavelength and pulse width were 473 nm and <80 ps, respectively. Although one of the optical masks was placed at the target plane for simplicity, unlike in Fig. 4(a), this simplification did not influence the proof of concept of DRPS. In the distance measurements presented in Section 4.2, 134(H) × 1(V) main pixels and 32(H) × 1(V) reference pixels were used. Since the required number of reference pixels is independent of the total pixel count, the loss of spatial resolution due to the reference pixels becomes small as the total number of pixels increases. A matte white board (EDU-VS1, Thorlabs) was used as the target. The board has different flatness on its front and back sides, and the back side, which has higher flatness, was used in the experiment. The flatness is within approximately ±2 µm over the measured area, corresponding to around 3 × 3 pixels when the target is placed at 270 mm. In the distance measurements, the target was moved over the range of 260–290 mm in steps of 1 mm.

Fig. 5. Measurement setup for DRPS. The white plate used as a target was moved in steps of 1 mm in the range of 260–290 mm for the measurement.

4.2 Distance measurement

Figures 6(a) and 6(b) show the distance measurement results for two pixels, at columns #37 and #44, with large and small driver jitters, respectively. In Fig. 6, the measured distance (top), nonlinearity error (middle), and range precision (bottom) are shown as functions of the actual distance to the white target. In the same manner as in [21], a fifth-order approximation is used in the distance calculation to compensate for the nonlinearity due to the impulse response of the LEFM. The range precision is defined as the standard deviation calculated over 1,000 frames.
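
A minimal sketch of such a fifth-order compensation is shown below, assuming a calibration sweep of known stage distances against the measured charge ratio X is available; NumPy's polynomial fit is used here for illustration and is not necessarily the fitting procedure of [21].

import numpy as np

def calibrate_fifth_order(X_cal, d_cal_mm):
    """Fit distance d [mm] as a fifth-order polynomial of the charge ratio X."""
    return np.polynomial.Polynomial.fit(X_cal, d_cal_mm, 5)

def distance_mm(poly, X_meas):
    """Apply the calibrated polynomial to measured charge ratios."""
    return poly(X_meas)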

Fig. 6. Measured distance, linearity error, and range precision for two pixels. (a) Pixel at column #37 having a large driver jitter, (b) pixel at column #44 having a small driver jitter.

The nonlinearity was calculated using a 1,000-frame average. Theoretically, RPS and DRPS have no influence, i.e., neither positive nor negative effects, on systematic errors such as nonlinearity. In reality, however, the measured error with DRPS at column #37 is slightly improved, as shown in Fig. 6(a). For column #44, the measured error is improved both with RPS and with DRPS. These apparent improvements occur because the measured errors are as low as the measurement limit determined by the range precision. Note that the range precision without RPS and DRPS is determined by low-frequency noise such as 1/f noise; therefore, averaging over 1,000 frames is less effective in reducing the random error associated with the range precision.

As shown in Fig. 6, the precision improvements with RPS and with DRPS depend on the driver jitter of the column. For instance, column #37, which has a large driver jitter, shows little improvement in range precision with RPS, whereas the precision is much improved with DRPS. On the other hand, column #44, which has a small driver jitter, shows a large improvement even with RPS alone. To show this dependence more clearly, Fig. 7 shows the column-to-column deviation of the range precision over the measurable range without RPS, with RPS, and with DRPS. As in [21], the range precision is improved with RPS, but it exhibits a large column-to-column deviation due to the driver jitter. With DRPS, the column-to-column deviation is notably reduced owing to the reduction in driver jitter. For example, with RPS, 34 pixels (25% of the main pixels) have a range precision worse than 70 µm, whereas with DRPS all pixels achieve a range precision better than 66 µm. The top of Fig. 7 shows the measured distance as a function of the frame number for a typical column (column #44) and a noisy column (column #37). In column #37, notable fluctuations, such as random telegraph noise (RTN), were observed, and this noise was canceled by applying DRPS. As shown in Fig. 6(a), the range precision was improved for column #37 over the entire measurable range. The median range precision over all pixels using DRPS is 54.2 µm within the measurable range of 22 mm. Although DRPS appears to have no effect on pixels with small driver jitter, such as column #44, its effectiveness becomes apparent with frame averaging, as discussed later.

Fig. 7. Dependence of range precision on the column number and distance change in two columns (columns #37 and #44).

Figure 8 shows the distance fluctuation of the pixel at column #37 over 20,000 consecutive frames, corresponding to approximately 20 min with the sensor operated at 16.4 fps. Without RPS, large fluctuations are observed due to drifts, which are introduced by the trigger circuit and the laser diode and act as jitter. RPS cancels these large fluctuations, and the range precision is notably improved. However, the calculated distance after RPS still exhibits large noise around the 1,000th and 16,000th frames. This noise is most likely induced in the column driver and is removed by DRPS.

Fig. 8. Measured distance versus frame when the target is fixed (column #44). Here, 20,000 frames were captured at 16.4 fps, corresponding to a time of approximately 20 min.

Figure 9 shows the range precision with frame averaging of up to 20 frames, where the precision is calculated using 1,000 data points after frame averaging. The range precision without RPS deteriorates with frame averaging. As shown in Fig. 8, the calculated distance without RPS exhibits large fluctuations owing to drifts or low-frequency jitter with a 1/f noise spectrum [23]. As frame averaging increases the equivalent acquisition time, such low-frequency noise deteriorates the range precision. As shown in Fig. 9, even with RPS, frame averaging has little effect. By contrast, the range precision with DRPS is improved by frame averaging owing to the reduction in driver jitter. A range precision of 27 µm (median) was achieved with a 10-frame average, corresponding to a time precision of 180 fs. Frame averaging is equivalent to increasing the number of signal electrons; very high range precision is therefore expected even in a single frame by changing the sensor and system specifications, such as increasing the full-well capacity of the pixels and the laser power. As shown in Fig. 9, the averaging is less effective beyond 10 frames. This is likely caused by imperfect jitter cancellation due to the nonlinearity of the distance calculation, which should be addressed to further improve the range precision.
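
The frame-averaging evaluation described here can be reproduced schematically as follows, using synthetic white-noise frames in place of measured data; the 1/sqrt(M) improvement seen for white noise approximates the behavior with DRPS at small averaging numbers, whereas drifting (1/f-like) data would not improve. Variable names and the noise level are illustrative assumptions.

import numpy as np

def precision_with_averaging(d_frames_mm, M, n_samples=1000):
    """Standard deviation of M-frame-averaged distances for one pixel [mm]."""
    blocks = d_frames_mm[: M * n_samples].reshape(n_samples, M)
    return blocks.mean(axis=1).std()

# Example with synthetic frames (54 um single-frame precision, 20,000 frames)
rng = np.random.default_rng(1)
d = 270.0 + rng.normal(0.0, 0.054, 20_000)   # distances in mm
for M in (1, 5, 10):
    print(M, round(precision_with_averaging(d, M), 4))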

Fig. 9. Range precision versus the number of frames for averaging. Green lines indicate the range precision for all main pixels, and the red line shows its median.

Figure 10 shows sampled point clouds obtained with DRPS (Fig. 10(b)) and RPS (Fig. 10(c)). The object is a white ball with a diameter of 20 mm, as shown in Fig. 10(a). A 1-D mechanical stage was used, with the step set to 0.2 mm. No post-processing filters other than 10-frame averaging were applied to these data. Owing to the reduced column-to-column variation in range precision, a more accurate point cloud was obtained with DRPS.

Fig. 10. Captured point clouds. (a) The target is a white ball with a 20-mm diameter. (b) With DRPS. (c) With RPS. These data were calculated with 10-frame averaging.

5. Conclusion

This paper presented a range-precision enhancement technique for TOF range imagers aimed at 3-D scanning applications. The proposed DRPS reduces the driver jitter, which has limited the range precision in previous work. The range precisions after RPS and DRPS were measured to be 58.1 µm and 54.2 µm, respectively, at a measurable range of 22 mm, corresponding to the time precisions of 388 fs and 362 fs, respectively. The deviation of the range precision between columns was also reduced, indicating that DRPS is useful for reducing column-dependent jitter. With DRPS, the range precision is further improved by frame averaging; a high range precision of 27 µm was achieved with a 10-frame average, corresponding to a time precision of 180 fs.

Funding

Japan Society for the Promotion of Science (18H05240, 19H02194); Center of Innovation Program (JPMJCE1311).

Disclosures

The authors declare no conflicts of interest.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

1. GOM GmbH. ATOS CompactScan. Accessed: Oct. 7, 2021. [Online]. Available: https://www.gom.com/metrology-systems/atos/atos-compactscan.html

2. VECTORON, ApiScan, Accessed: Oct. 7, 2021. [Online]. Available: https://www.tbts.co.jp/e/product/p_detail.html?pdid=299

3. Pulstec Industrial. TDS-L 3D Scanner Series. Accessed: Oct. 7, 2021. [Online]. Available: https://www.pulstec.co.jp/pdf/tds-l.pdf

4. T. Spirig, P. Seitz, O. Vietze, and F. Heitger, “The lock-in CCD-two-dimensional synchronous detection of light,” IEEE J. Quantum Electron. 31(9), 1705–1708 (1995). [CrossRef]  

5. R. Schwarte, Z. Xu, H.-G. Heinol, J. Olk, R. Klein, B. Buxbaum, H. Fischer, and J. Schulte, “New electro-optical mixing and correlating sensor: facilities and applications of the photonic mixer device (PMD),” Proc. SPIE 3100, 245–253 (1997). [CrossRef]

6. S.-M. Han, T. Takasawa, K. Yasutomi, S. Aoyama, K. Kagawa, and S. Kawahito, “A Time-of-Flight Range Image Sensor With Background Canceling Lock-in Pixels Based on Lateral Electric Field Charge Modulation,” IEEE J. Electron Devices Soc. 3(3), 267–275 (2015). [CrossRef]  

7. R. Miyagawa and T. Kanade, “CCD-based range-finding sensor,” IEEE Trans. Electron Devices 44(10), 1648–1652 (1997). [CrossRef]  

8. D. Stoppa, N. Massari, L. Pancheri, M. Malfatti, M. Perenzoni, and L. Gonzo, “A Range Image Sensor Based on 10-µm Lock-In Pixels in 0.18-µm CMOS Imaging Technology,” IEEE J. Solid-State Circuits 46(1), 248–258 (2011). [CrossRef]

9. C. S. Bamji, P. O’Connor, T. Elkhatib, S. Mehta, B. Thompson, L. A. Prather, D. Snow, O. C. Akkaya, A. Daniel, A. D. Payne, T. Perry, M. Fenton, and V.-H. Chen, “A 0.13 µm CMOS System-on-Chip for a 512 × 424 Time-of-Flight Image Sensor With Multi-Frequency Photo-Demodulation up to 130 MHz and 2 GS/s ADC,” IEEE J. Solid-State Circuits 50(1), 303–319 (2015). [CrossRef]

10. C. S. Bamji, S. Mehta, B. Thompson, T. Elkhatib, S. Wurster, O. Akkaya, A. Payne, J. Godbaz, M. Fenton, V. Rajasekaran, L. Prather, S. Nagaraja, V. Mogallapu, D. Snow, R. McCauley, M. Mukadam, I. Agi, S. McCarthy, Z. Xu, T. Perry, W. Qian, V.-H. Chen, P. Adepu, G. Ali, M. Ahmed, A. Mukherjee, S. Nayak, D. Gampell, S. Achrya, L. Kordus, and P. O’Connor, “IMpixel 65 nm BSI 320 MHz demodulated TOF Image sensor with 3µm global shutter pixels and analog binning,” in 2018 IEEE International Solid - State Circuits Conference - (ISSCC), 94–96 (2018).

11. Y. Kato, T. Sano, Y. Moriyama, S. Maeda, T. Yamazaki, A. Nose, K. Shina, Y. Yasu, W. V. D. Tempel, A. Ercan, and Y. Ebiko, “320 × 240 Back-Illuminated 10-µm CAPD Pixels for High-Speed Modulation Time-of-Flight CMOS Image Sensor,” IEEE J. Solid-State Circuits 53(4), 1071–1078 (2018). [CrossRef]

12. S. Lee, D. Park, S. Lee, J. Choi, and S.-J. Kim, “Design of a Time-of-Flight Sensor With Standard Pinned-Photodiode Devices Toward 100-MHz Modulation Frequency,” IEEE Access 7, 130451–130459 (2019). [CrossRef]  

13. M.-S. Keel, Y.-G. Jin, Y. Kim, D. Kim, Y. Kim, M. Bae, B. Chung, S. Son, H. Kim, T. An, S.-H. Choi, T. Jung, Y. Kwon, S. Seo, S.-Y. Kim, K. Bae, S.-C. Shin, M. Ki, S. Yoo, C.-R. Moon, H. Ryu, and J. Kim, “A VGA Indirect Time-of-Flight CMOS Image Sensor With 4-Tap 7-µm Global-Shutter Pixel and Fixed-Pattern Phase Noise Self-Compensation,” IEEE J. Solid-State Circuits 55(4), 889–897 (2020). [CrossRef]

14. D. Kim, S. Lee, D. Park, C. Piao, J. Park, Y. Ahn, K. Cho, J. Shin, S. M. Song, S.-J. Kim, J.-H. Chun, and J. Choi, “Indirect Time-of-Flight CMOS Image Sensor With On-Chip Background Light Cancelling and Pseudo-Four-Tap/Two-Tap Hybrid Imaging for Motion Artifact Suppression,” IEEE J. Solid-State Circuits 55(11), 2849–2865 (2020). [CrossRef]  

15. B. Behroozpour, P. A. M. Sandborn, N. Quack, T.-J. Seok, Y. Matsui, M. C. Wu, and B. E. Boser, “Electronic-Photonic Integrated Circuit for 3D Microimaging,” IEEE J. Solid-State Circuits 52(1), 161–172 (2017). [CrossRef]

16. F. Aflatouni, B. Abiri, A. Rekhi, and A. Hajimiri, “Nanophotonic coherent imager,” Opt. Express 23(4), 5117 (2015). [CrossRef]  

17. L.-Y. Chen, A. K. Vinod, J. McMillan, C. W. Wong, and C.-K. K. Yang, “A 9-µm Precision 5-MSa/s Pulsed-Coherent Lidar System With Subsampling Receiver,” IEEE Solid-State Circuits Lett. 3, 262–265 (2020). [CrossRef]  

18. S. Kawahito, G. Baek, Z. Li, S.-M. Han, K. Yasutomi, and K. Kagawa, “CMOS Lock-in Pixel Image Sensors with Lateral Electric Field Control for Time-Resolved Imaging,” 2013 International Image Sensor Workshop, 361–364 (2013).

19. K. Yasutomi, T. Usui, S.-M. Han, T. Takasawa, K. Kagawa, and S. Kawahito, “An indirect time-of-flight measurement technique with impulse photocurrent response for sub-millimeter range resolved imaging,” Opt. Express 22(16), 18904–18913 (2014). [CrossRef]  

20. K. Yasutomi, T. Usui, S.-M. Han, T. Takasawa, K. Kagawa, and S. Kawahito, “A Submillimeter Range Resolution Time-of-Flight Range Imager With Column-Wise Skew Calibration,” IEEE Trans. Electron Devices 63(1), 182–188 (2016). [CrossRef]  

21. K. Yasutomi, Y. Okura, K. Kagawa, and S. Kawahito, “A Sub-100 µm-Range-Resolution Time-of-Flight Range Image Sensor With Three-Tap Lock-In Pixels, Non-Overlapping Gate Clock, and Reference Plane Sampling,” IEEE J. Solid-State Circuits 54(8), 2291–2303 (2019). [CrossRef]

22. T. Sawada, K. Ito, M. Nakayama, and S. Kawahito, “TOF range image sensor using a range-shift technique,” IEEE Sensors, 1390–1393 (2008). [CrossRef]  

23. T. Usui, K. Yasutomi, H. S. Man, T. Takasawa, K. Kagawa, and S. Kawahito, “An indirect time-of-flight measurement technique for sub-mm range resolution using impulse photocurrent response,” Proc. SPIE 9022, 90220W (2014). [CrossRef]  
