
LIDAR pulse coding for high resolution range imaging at improved refresh rate

Open Access

Abstract

In this study, a light detection and ranging (LIDAR) system was designed that codes pixel location information into its laser pulses using the direct-sequence optical code division multiple access (DS-OCDMA) method in conjunction with a scanning microelectromechanical system (MEMS) mirror. This LIDAR can measure distance continuously, without idle listening time for the return of reflected waves, because its laser pulses include pixel location information encoded by applying the DS-OCDMA. It therefore emits in each bearing direction without waiting for the reflected wave to return. The MEMS mirror is used to deflect and steer the coded laser pulses in the desired bearing direction. The receiver digitizes the received reflected pulses using a low-temperature-grown (LTG) indium gallium arsenide (InGaAs) based photoconductive antenna (PCA) and a time-to-digital converter (TDC) and demodulates them using the DS-OCDMA. When all of the reflected waves corresponding to the pixels forming a range image have been received, the proposed LIDAR generates a point cloud based on the time-of-flight (ToF) of each reflected wave. The results of simulations performed on the proposed LIDAR are compared with simulations of existing LIDARs.

© 2016 Optical Society of America

1. Introduction

A light detection and ranging (LIDAR) system measures absolute distance by illuminating a target with laser light. It is popularly used as a technology for making high resolution maps in airborne, terrestrial, and mobile applications. Airborne LIDAR is attached to an aircraft in flight and creates digital elevation models of the landscape [1]. Terrestrial LIDAR is most common as a survey method for atmospheric monitoring [2], cultural heritage documentation [3], forensics [4], agriculture [5], and 3D object recognition [6]. In mobile LIDAR, one or more LIDARs are attached to a moving vehicle to collect data on the surrounding environment along a path [7–11]. Algorithms for automated classification, manipulation, editing, detection, and recognition work better with high resolution LIDAR data because all of the features are better defined, so feature extraction is resolved more completely and accurately [12–14]. High resolution, high refresh rate, long range LIDARs enable a vast number of additional applications and uses that are out of reach for low resolution, low refresh rate, short range LIDARs [15].

A LIDAR first measures the roundtrip time by emitting a laser and detecting its reflection from an object, and then calculates the distance to the object based on the measured time interval [16–21]. As shown in Eq. (1), LIDARs are classified into the following methods based on how they measure range (d): a pulsed time-of-flight (ToF) method measures the time difference (Δt) by emitting a pulse wave and detecting it [16–21]; an amplitude-modulated continuous wave (AMCW) method uses the phase difference (ϕ) between the moments of emitting and detecting a continuous wave [16,17,21]; and a frequency-modulated continuous wave (FMCW) method uses the frequency difference (Δf) between the moments of emitting and detecting a continuous wave [16,17]. The pulsed ToF, AMCW, and FMCW methods are appropriate for long, middle, and short distance measurements, respectively. However, LIDAR accuracy decreases as the measurement point becomes increasingly distant, regardless of the measurement method. For the pulsed ToF, AMCW, and FMCW methods, the range to the object is calculated as follows [16,17]

d = \frac{c\,\Delta t}{2} = \frac{\phi c}{4\pi f} = \frac{c B T}{4\,\Delta f} \qquad (1)
where c is the speed of light, B is the beat frequency, T is the period of the frequency sweep, and f is the modulation frequency.
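As a sanity check on Eq. (1), the following sketch computes the same 150 m range by all three methods. It is an illustration of ours, and every parameter value in it (modulation frequency, sweep period, sweep excursion) is an assumption, not a value from the paper; for the FMCW case we read B as the measured beat frequency and Δf as the sweep excursion, which is the reading consistent with a triangular sweep.

```python
# Illustrative check of Eq. (1); all parameter values below are assumed
# for the example, not taken from the paper.
import math

C = 299_792_458.0                     # speed of light (m/s)
d_true = 150.0                        # assumed target range (m)

# Pulsed ToF: d = c * dt / 2
dt = 2 * d_true / C                   # round-trip time, ~1 us at 150 m
d_tof = C * dt / 2

# AMCW: d = phi * c / (4 * pi * f), with f the modulation frequency
f = 1.0e6                             # assumed modulation frequency (Hz)
phi = 4 * math.pi * f * d_true / C    # phase difference (rad)
d_amcw = phi * C / (4 * math.pi * f)

# FMCW (triangular sweep): d = c * B * T / (4 * df), reading B as the
# measured beat frequency and df as the sweep excursion (our assumption).
T, df = 1.0e-3, 100.0e6               # assumed sweep period (s) and excursion (Hz)
B = 4 * d_true * df / (C * T)         # beat frequency a 150 m target would give
d_fmcw = C * B * T / (4 * df)

print(d_tof, d_amcw, d_fmcw)          # all ~150.0 m
```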

LIDARs are also classified into scanning type and flash type according to the method by which the laser is emitted in the bearing direction, that is, the direction in which the distance from the LIDAR is measured. Because micropulse LIDARs are manufactured so that the laser output stays within the maximum permissible exposure (MPE), as with class 1 laser products, their maximum laser powers are similar regardless of the emission type [19]. A scanning type emits a laser beam in the bearing direction and measures the distance by analyzing the waveform reflected from an object; this type repeats the emission process in each bearing direction in which a distance measurement is desired, receiving and detecting reflected waves after an idle listening time and analyzing these waves [16,19]. However, because the emission and detection processes are conducted in sequence in each bearing direction, the measurement time increases in proportion to the number of bearing directions in the entire measurable area. Furthermore, the idle listening time required to detect a reflected wave after emitting a laser beam increases in proportion to the maximum measured distance. Long distance measurements can be conducted because a collimated beam is emitted in each bearing direction [20,21].

The flash type uses a laser in a manner similar to that of an optical camera with a flash lamp, which takes an image by turning on the flash lamp in the desired direction. The flash type uses a beam expander to diffuse the laser so that it can be emitted in a wide area in the bearing direction, like the flash from a camera’s flash lamp; like the scanning type, this type then receives the reflected waves after an idle listening time and measures the distance by analyzing the received waves [19–21]. The flash type requires less time to measure the entire area than the scanning LIDAR because the latter must conduct measurements in each direction. However, the flash type can be applied only to a very short measurement distance because it diffuses a laser with similar power to that of the scanning type over the measurement area, thus generating significant laser attenuation at greater measurement distances. In addition, the idle listening time required for the flash type to detect a reflected wave after emitting a laser beam increases in proportion to the maximum measurement distance [22,23].

Many LIDARs combine one of the three methods for measuring time with one of the two methods for emitting a laser. The FMCW scanning LIDAR, the AMCW scanning LIDAR, and the pulsed scanning LIDAR combine the scanning type with the FMCW method, the AMCW method, and the pulsed ToF method, respectively. The flash LIDAR combines the flash type with the pulsed ToF method. Many autonomous vehicles, including those operated by Google, generate a local map that includes information on the drivable areas using the pulsed scanning LIDAR method, which is more effective for long distance measurements. Velodyne’s HDL-64E system [24], a well-known pulsed 3D scanning LIDAR, can simultaneously measure 64 point locations within a distance of 120 m and conduct 360° measurements by rotating in increments of 0.08°; its range imaging refresh rate is 5 Hz. The Peregrine system [25] by Advanced Scientific Concepts, Inc. (ASC), a flash LIDAR, can simultaneously measure 128 × 32 point locations within a distance of 70 m and conduct measurements in the forward direction with a field of view (FoV) of 30° in the horizontal direction and 7.5° in the vertical direction; its range imaging refresh rate is 30 Hz.

Some researchers have developed scanning LIDARs based on a microelectromechanical system (MEMS) mirror. Hofmann et al. [26] developed an omnidirectional 360° LIDAR concept based on the combination of an omnidirectional lens and a biaxial large-aperture MEMS mirror. Stann et al. [27,28] built a short-range LIDAR imager based on a MEMS mirror for use on small unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs). Lee and Wang [29] proposed a method of uniform scanning for a MEMS-based 3D imaging LIDAR system. Niclass et al. [30] and Ito et al. [31] combined a MEMS mirror with a single-photon imager; this system is capable of acquiring full 256 × 64-pixel images at 10 frames/s with a distance range of up to 20 m. In these systems, the MEMS mirror is used because of its advantages, including low power consumption, long lifetime, low cost, and high scanning speed. However, these studies focused on replacing the servo-motorized mirror with a MEMS mirror, leaving the system architecture of the scanning LIDAR unchanged.

In this study, a scanning LIDAR is proposed that consecutively measures 848 × 480 point locations within a distance of 150 m at a refresh rate of 60 Hz. This LIDAR does not require an idle listening time to detect reflected waves because its laser pulses include pixel location information encoded by applying the direct-sequence optical code division multiple access (DS-OCDMA) technique. It therefore emits in each bearing direction without waiting for the reflected wave to return. The laser pulses emitted in each direction also include checksums that are used to detect errors. We replaced the servo-motorized mirror with a MEMS scanning mirror and changed the laser pathway in the scanning LIDAR. The MEMS mirror is used to deflect and steer the coded laser pulses in the desired bearing direction. The receiver digitizes the received reflected pulses using time-to-digital converters (TDCs).

2. Design of 3D scanning LIDAR using coded pulses

Existing scanning and flash LIDARs require idle listening time to detect reflected waves. Because LIDARs measure the distance to an object using the ToF, the idle listening time required to detect reflected waves increases in proportion to the maximum measurement distance. In the pulsed scanning LIDAR, the minimum time required to measure the distance in one direction is determined by the sum of the times required to rotate the laser to the bearing direction, to generate and emit a laser pulse, to idle and listen for the return of the reflected wave, and to process the reflected signals. Among these factors, the idle listening time, which is determined by the maximum measurement distance, has the greatest effect on the minimum time. In the performance of the scanning LIDAR, the 3D range imaging refresh rate, maximum measurement distance, and number of bearing directions are interrelated: when one of the three factors increases, at least one of the others must decrease. For instance, when the maximum measurement distance increases, the minimum time required to measure the distances in all of the bearing directions increases, and thus the number of directions that can be measured per second decreases. Therefore, either the refresh rate or the number of bearing directions must be reduced.
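To make this trade-off concrete, the sketch below (ours, assuming zero rotation, pulse generation, and processing overheads) bounds the refresh rate of a conventional pulsed scanning LIDAR that must wait out the round trip in every one of 848 × 480 bearing directions at a 150 m maximum range.

```python
# Upper bound on the refresh rate of a conventional pulsed scanning LIDAR
# when every bearing direction must wait for the full round trip.
# Rotation, pulse generation, and processing times are assumed zero here.
C = 299_792_458.0           # speed of light (m/s)
d_max = 150.0               # maximum measurement distance (m)
pixels = 848 * 480          # bearing directions per range image

idle = 2 * d_max / C        # idle listening time per direction (~1.0 us)
frame_time = pixels * idle  # directions are measured one after another
print(f"idle per direction: {idle * 1e6:.2f} us")
print(f"max refresh rate:   {1 / frame_time:.2f} Hz")  # ~2.5 Hz, far below 60 Hz
```

This is why eliminating the idle listening time, rather than shortening any other stage, is the lever that matters at long range.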

As illustrated in Fig. 1, the scanning LIDAR proposed in this study emits laser pulses in which the pixel location information for the pulse’s bearing direction is coded using the DS-OCDMA technique. The pulses thus coded and emitted in each direction have the form of a macropulse that combines several fine micropulses, as shown in Fig. 1. Because the laser pulse includes an identification number that can be used to identify its bearing direction, this information can be recovered by demodulating the reflected waves when they are received. The proposed LIDAR calculates the distance to an object that reflected a laser pulse by identifying the direction and time of the emitted pulse based on the coded pixel location. The minimum time required to measure the distance in any one direction is independent of the maximum measurement distance because this LIDAR switches to a new bearing direction, identifying reflected waves by their coded pixel locations, without waiting for the reflected waves to be received after the laser’s emission. Therefore, the maximum measurement distance does not affect this system’s design, and only the 3D range imaging refresh rate and the number of bearing directions are inversely proportional to one another.

Fig. 1 Overall system of the proposed 3D scanning LIDAR

In the proposed LIDAR, the arrival time of a laser pulse emitted in a bearing direction is determined by the distance to the object. Thus, the reflected waves of laser pulses emitted in different bearing directions can be received at the same time. In the proposed system, an emitted laser pulse is coded by the DS-OCDMA technique and becomes orthogonal to the other laser pulses [32–38]. As a result, all of the reflected waves that are received at the same time can be individually identified and demodulated into the original information. The proposed scanning LIDAR consists of a transmitter and a receiver, which operate independently of each other, as shown in Fig. 2. The transmitter codes the pixel location bit stream using the DS-OCDMA technique and generates laser pulses using a laser diode. Subsequently, the transmitter emits the generated laser pulses in a bearing direction using a MEMS scanning mirror. The receiver digitizes the received reflected waves using TDCs and demodulates them using the DS-OCDMA technique. The receiver then calculates the ToFs of the laser pulses and converts them to distances to the relevant objects based on the emission times of the pixel locations.

Fig. 2 Data transmission and reception sequence

We use the system time to record the emission time and arrival time. The system time is a 32-bit count of the number of ticks elapsed since start-up. With a 10 GHz (gigahertz) clock, a single tick occurs every 100 ps, and the 32-bit timer rolls over approximately every 430 ms. The global operations are described in Fig. 3.
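A small sketch (ours, not the authors’ firmware) of the tick arithmetic: at 10 GHz one tick is 100 ps, the 32-bit counter wraps after 2^32 ticks, and differences taken modulo 2^32 remain valid across a single rollover; the helper name elapsed_ticks is hypothetical.

```python
# 32-bit system-time counter at a 10 GHz tick rate.
TICK_PS = 100                    # 1 / 10 GHz = 100 ps per tick
ROLLOVER_TICKS = 2 ** 32

rollover_ms = ROLLOVER_TICKS * TICK_PS * 1e-12 * 1e3
print(f"rollover every {rollover_ms:.1f} ms")      # ~429.5 ms

def elapsed_ticks(start: int, stop: int) -> int:
    """Tick difference that stays correct across a single rollover."""
    return (stop - start) % ROLLOVER_TICKS

# A 150 m round trip (~1 us) spans about 10,000 ticks, even across a wrap.
print(elapsed_ticks(0xFFFF_FFF0, 0x0000_2700))     # 10000
```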

Fig. 3 Global operations in the proposed LIDAR.

2.1. Composition of laser pulse

The proposed LIDAR generates a 13-bit stream with a 10-bit column identification number and a three-bit cyclic redundancy check (CRC) checksum [39,40] from the most significant bit (MSB) to the least significant bit (LSB), as shown in Fig. 4. The 10-bit column identification numbers represent the locations of corresponding pixels in each bearing direction. A range image generated using the proposed LIDAR is based on measurements from 848 × 480 points, and the column identification number used to identify each of the 848 columns requires 10 bits. The transmitter emits a laser pulse and simultaneously records its row identification number, column identification number, and emission time. Through this process, the column identification number included in the received reflected wave can be used to identify the row identification number and emission time.

Fig. 4 13-bit stream consisting of a 10-bit column identification number and 3-bit cyclic redundancy check (CRC) checksum from the most significant bit (MSB) to the least significant bit (LSB).

The CRC-3 checksum is an error-detecting code used to detect accidental changes and validate the received bit stream. The CRC-3 calculation is applied to the 10-bit column identification number to determine the validity of the received bit stream. Any data containing errors are discarded, and only error-free data are processed using their column identification numbers. The generator polynomial of the three-bit CRC is x^3 + x + 1.
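The following is a minimal bitwise sketch of that CRC-3 (our illustration; the paper specifies only the generator x^3 + x + 1, binary 1011, and the 10-bit message length — the helper name and example column ID are ours).

```python
# CRC-3 with generator polynomial x^3 + x + 1 (binary 1011).
# The 10-bit column ID is shifted left by 3 bits and divided modulo 2.
POLY, CRC_BITS, DATA_BITS = 0b1011, 3, 10

def crc3(column_id: int) -> int:
    reg = column_id << CRC_BITS                 # append 3 zero bits
    for bit in range(DATA_BITS + CRC_BITS - 1, CRC_BITS - 1, -1):
        if reg & (1 << bit):                    # leading 1: subtract generator
            reg ^= POLY << (bit - CRC_BITS)
    return reg                                  # 3-bit remainder

col = 0b1101001010                              # example column ID (assumed)
stream13 = (col << CRC_BITS) | crc3(col)        # 13-bit stream, MSB first
assert stream13 >> CRC_BITS == col and stream13 & 0b111 == crc3(col)
```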

2.2. Transmitter

The transmitter of the scanning LIDAR proposed in this study carries out each process following the algorithm shown in Fig. 5. This algorithm generates a 13-bit stream that includes the 10-bit column identification number distinguishing each of the 848 columns of pixels corresponding to each bearing direction, as well as the three-bit checksum. Spread-spectrum multiplexing is applied to the bit stream using a one-dimensional unipolar asynchronous prime sequence code based on the DS-OCDMA technique [32–37] to make the bit stream orthogonal to other laser pulses. This spreading of the transmitted signal over a higher bandwidth makes the resulting wideband signal appear as noise, which provides greater resistance to intentional and unintentional interference with the transmitted signal [41–43]. For the prime sequence code, the overall interference level or white background noise level is always lower than the autocorrelation peak, so once the signal is received and despread with the correct codes, the required data can be extracted. The bit stream is then digitally modulated using the non-return-to-zero on-off keying (NRZ-OOK) method [37,38]. In the unipolar DS-OCDMA technique, a corresponding binary code word is transmitted only when the bit to be coded is ’1’ [34,37]. The digital data are used to generate and emit laser pulses at a chip rate of 400 GHz using a laser diode, and the pulses are deflected and steered in the desired bearing direction using the MEMS mirror.

Fig. 5 Transmission procedure in the transmitter part.

Because the transmitter codes the laser pulses using the asynchronous prime sequence code, the receiver can separate and recognize the different laser pulses when several reflected waves are received at the same time [32–37]. To measure distances up to 150 m, it may receive laser pulses corresponding to up to 25 pixels within 1 μs [34,37]. To identify each of these laser pulses, more than 25 different code words are needed. Therefore, the one-dimensional unipolar asynchronous prime sequence code used in the proposed LIDAR is constructed over the Galois field GF(29), producing 29 different code words, each 841 chips in length and 29 in weight. When the spread spectrum is implemented on a 13-bit stream by multiplying each bit with a binary code word of 841 chips, 10,933 chip signals are generated. When a bit of the 13-bit stream has a value of ’1,’ it is transformed into Ci, the binary code word of 841 chips; when the bit has a value of ’0,’ the 841 chips are set to ’0.’ Each element (si,j) of the prime sequence (Si = (si,0, si,1, . . ., si,j, . . ., si,28)) based on the prime number 29 is determined using si,j = i·j (mod 29) and presented in Table 1.

Table 1. Prime sequences over GF(29) based on the prime number 29
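The construction behind Table 1 is short enough to state as code; this sketch (ours) generates all 29 sequences from s_{i,j} = i·j mod 29.

```python
# Prime sequences over GF(29): s_{i,j} = i * j (mod 29), as in Table 1.
P = 29
S = [[(i * j) % P for j in range(P)] for i in range(P)]

print(S[0][:5])   # [0, 0, 0, 0, 0] -- S_0 is all zeros
print(S[1][:5])   # [0, 1, 2, 3, 4]
print(S[2][:5])   # [0, 2, 4, 6, 8]
```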

Each element (ci,l) of the binary code word (Ci = (ci,0, ci,1, . . ., ci,l, . . ., ci,840)) that maps the prime sequence Si is determined following Eq. (2), and the code words Ci are presented in Table 2.

c_{i,l} = \begin{cases} 1 & \text{if } l = s_{i,j} + 29j \text{ for } j = 0, 1, \ldots, 28 \\ 0 & \text{otherwise} \end{cases} \qquad (2)

As indicated in Tables 1 and 2, every prime sequence Si over GF(29) based on the prime number 29 starts with the element si,0 = 0, so every binary code word Ci starts with ‘1.’ Therefore, a laser pulse coded through the asynchronous prime sequence code always starts with ’1.’ Accordingly, when the reflected wave of a laser pulse coded through the DS-OCDMA technique is received, the start of the received reflected wave can be recognized.

Table 2. Each element of prime sequence Si is mapped into binary code words Ci
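Continuing the sketch, Eq. (2) places one ’1’ in each of the 29 consecutive 29-chip slots of an 841-chip code word; the assertions check the two properties the text relies on, the weight of 29 and the leading ’1’ (since s_{i,0} = 0 for all i).

```python
# Map prime sequences S_i into 841-chip code words C_i per Eq. (2):
# c_{i,l} = 1 iff l = s_{i,j} + 29*j for some j in 0..28.
P = 29
S = [[(i * j) % P for j in range(P)] for i in range(P)]

def codeword(i: int) -> list[int]:
    c = [0] * (P * P)               # 841 chips
    for j in range(P):
        c[S[i][j] + P * j] = 1      # one '1' per 29-chip slot
    return c

C = [codeword(i) for i in range(P)]
assert all(sum(c) == 29 for c in C)   # weight 29
assert all(c[0] == 1 for c in C)      # s_{i,0} = 0, so every C_i starts with '1'
```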

After the spread spectrum is implemented and the data are digitally modulated using the NRZ-OOK method, laser pulses are generated and emitted at 400 GHz using a low-temperature-grown indium gallium arsenide (LTG InGaAs) based large-area photoconductive antenna (PCA) with plasmonic contact electrode gratings [44–51]. A laser pulse emitted in a bearing direction contains 10,933 micropulses comprising up to 377 ’1’s and at least 10,556 ’0’s. Each micropulse has a Gaussian energy distribution with a wavelength of 905 nm, a diameter of 1 mm, and a width of 2.5 ps, and the emitted laser beam exhibits a power at or near the MPE of class 1 laser products. The transmitter saves the row and column identification numbers together with the time the laser beam is emitted for use in calculating the ToF of the received reflected wave.
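A hedged sketch of the macropulse assembly: each ’1’ bit of the 13-bit stream is replaced by the assigned 841-chip code word and each ’0’ bit by 841 zero chips, giving 13 × 841 = 10,933 chips; which code word is assigned to a given pulse is our assumption for illustration. The all-ones stream reproduces the worst-case count of 377 ’1’ micropulses quoted above.

```python
# Spread a 13-bit stream with an assigned 841-chip prime code word,
# then NRZ-OOK modulate it (chip '1' -> light on, chip '0' -> off).
P = 29
S = [[(i * j) % P for j in range(P)] for i in range(P)]

def codeword(i: int) -> list[int]:
    c = [0] * (P * P)
    for j in range(P):
        c[S[i][j] + P * j] = 1
    return c

def spread(bits13: list[int], code: list[int]) -> list[int]:
    chips = []
    for b in bits13:                          # MSB first
        chips += code if b == 1 else [0] * len(code)
    return chips                              # 13 * 841 = 10,933 chips

macro = spread([1] * 13, codeword(3))         # worst case: all 13 bits are '1'
assert len(macro) == 10_933
assert sum(macro) == 13 * 29                  # 377 '1' micropulses
```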

Existing 3D scanning LIDARs are designed to be coaxial because a single mirror is used both to transmit and to receive the laser [16,24,52]. For this reason, in these LIDARs, the mirror cannot be rotated to the next direction until the reflected waves have returned from the maximum measurement distance. In contrast, the LIDAR proposed in this study uses a MEMS mirror with freedom on two axes, instead of a motorized mirror, to deflect and steer a laser pulse in the desired bearing direction [53,54]. In this LIDAR, the transmitter and receiver operate independently, and the MEMS mirror is used only to transmit laser pulses, not to receive them. Because the MEMS mirror is used to transmit a collimated beam, the mirror has the minimum size required to reflect a collimated beam 1 mm in diameter. The MEMS mirror, manufactured by MicroVision [54], is 1.1 mm × 1.2 mm in size and has freedom on two axes, a horizontal field of view (HFoV) of 43.2°, and a vertical field of view (VFoV) of 24.3°. The HFoV can be adjusted in 848 increments, with an angular change of 0.0509° per increment. The VFoV can be adjusted in 480 increments, with an angular change of 0.0506° per increment. Thus, this MEMS mirror enables a range image with a resolution of 848 × 480 point locations to be measured 60 times per second, and the mirror remains at each pixel for approximately 40 ns. Therefore, the laser pulse must be emitted within 40 ns so that the MEMS mirror can deflect and steer it in the desired bearing direction. When the laser diode generates and emits a laser pulse 1 mm in diameter and 2.5 ps in chip width, 27.4875 ns are required to reflect a laser pulse that includes 10,933 chips. In other words, sufficient time is available to use the MEMS mirror. Following the algorithm shown in Fig. 5, once the row and column identification numbers are determined according to the bearing direction, the LIDAR proposed in this study adjusts the angle of the MEMS mirror based on these numbers, emits a laser pulse on which the spread spectrum has been implemented through the DS-OCDMA technique, and deflects and steers the pulse in the desired bearing direction using the MEMS mirror.
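Checking the timing budget with a back-of-the-envelope sketch (ours): 848 × 480 pixels refreshed 60 times per second leaves about 41 ns of mirror dwell per pixel, while 10,933 chips at a 2.5 ps chip width occupy roughly 27 ns, so the macropulse fits within one dwell.

```python
# Per-pixel dwell time of the MEMS mirror vs. macropulse duration.
H_PIX, V_PIX, FPS = 848, 480, 60
CHIPS, CHIP_S = 10_933, 2.5e-12         # chips per macropulse, chip width (s)

dwell = 1.0 / (H_PIX * V_PIX * FPS)     # ~40.9 ns per bearing direction
pulse = CHIPS * CHIP_S                  # ~27.3 ns macropulse
print(f"dwell {dwell * 1e9:.1f} ns, pulse {pulse * 1e9:.1f} ns")
assert pulse < dwell                    # the pulse fits within one dwell

# Angular increments of the mirror: HFoV 43.2 deg / 848, VFoV 24.3 deg / 480
print(43.2 / 848, 24.3 / 480)           # ~0.0509 deg, ~0.0506 deg
```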

2.3. Receiver

The receiver carries out its function following the algorithm shown in Fig. 6. The receiver is equipped with a lens that receives the reflected wave, which is then converted to a photocurrent using an LTG InGaAs based PCA with plasmonic contact electrode gratings [44–51]. The PCA is connected to a transimpedance amplifier (TIA) followed by a lock-in amplifier (LIA) [55–60]. The resulting photocurrent of the PCA is preamplified and converted into a voltage signal by the TIA. The voltage signal is then rectified by the subsequent LIA, which relies on phase-sensitive detection to extract a weak signal buried in a noisy background [61–65]. The receiver is thus sensitive enough to receive laser pulses reflected from a point location up to 150 m away on a perfectly diffuse reflecting white surface, corresponding to the definition of 100 % object remission [66–70]. The LIA is coupled with 1150 TDCs and has very high sensitivity and picosecond timing accuracy [71–74]. The 10-bit asynchronous pipelined TDC [71], with a 300 MS/s (mega samples per second) conversion speed, 2.5 ps timing resolution, 2.56 ns conversion range, and 3.3 ns dead-time, measures the time interval between the START and STOP signals. The detection chain is split into independent parallel chains, thus avoiding congestion distortion. The LIA generates a STOP signal upon a photon detection, and this signal is routed directly to the TDCs. Every time the LIA generates a STOP signal, one of the TDCs starts to compute the time difference, while the others remain ready to start other conversions. In the worst case, 1149 pulses can arrive within the dead-time of one TDC. By using 1150 TDCs, 1150 pulses can be processed correctly at the same time, reaching conversion rates of up to 345 Gcps (giga chips per second) on a single detection stage. The 10 GHz clock tick is fed into the START signal of every TDC. After a complete conversion, the TDC calculates the arrival time by adding the time difference to the system time and enqueues this time in the memory queue.
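The sketch below models, in deliberately simplified form, how a bank of TDCs can absorb STOP events spaced far more closely than one converter’s dead-time: each event is handed to any idle TDC, and the arrival time is the coarse system time plus the fine interval. The structure and names are ours, not the authors’ hardware design, and the fine interval is stubbed out.

```python
# Simplified model of a bank of TDCs absorbing closely spaced STOP events.
import heapq

N_TDC, DEAD_TIME = 1150, 3.3e-9     # number of converters, dead-time (s)

free_at = [0.0] * N_TDC             # time at which each TDC is idle again
heapq.heapify(free_at)

def on_stop(event_time: float) -> float | None:
    """Hand a STOP event to an idle TDC; return the arrival time, or None."""
    if free_at[0] > event_time:
        return None                           # every TDC busy: event lost
    heapq.heapreplace(free_at, event_time + DEAD_TIME)
    fine = 0.0                                # fine START-STOP interval goes here
    return event_time + fine                  # coarse system time + fine interval

# 1150 events 2.5 ps apart all land inside one dead-time, yet none is lost.
assert all(on_stop(i * 2.5e-12) is not None for i in range(1150))
```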

Fig. 6 Reception procedure in the receiver part.

The first signal of a laser pulse emitted through the DS-OCDMA method is always ’1.’ Thus, when a signal regarded as ’1’ is received, the receiver dequeues the arrival times from the memory queue and converts the signal to data of 10,933 chips at 400 GHz using the NRZ-OOK demodulation method. Among the converted chip data, the receiver detects the data treated with the spread spectrum using an autocorrelation function and a crosscorrelation function [35–37], shown in Eqs. (3) and (4). With Ci(n) = 0 for n < 0 and n ≥ 841 for all i, Ci,j(l) is the aperiodic correlation function and Θi,j(l) is the periodic correlation function; these are autocorrelation functions when i = j and crosscorrelation functions when i ≠ j. The aperiodic correlation function gives the matched filter output for an isolated ’1’ in the incoming data stream, while the periodic correlation function applies to adjacent ’1’s. The autocorrelation peak for a code is equal to the number of ’1’s it contains, 29 in the case of our system; that is, Θi,i(0) = Ci,i(0) = 29 for all i. The maximum number of coincidences of ’1’s between two distinct prime codes Ci and Cj is 2, so all periodic crosscorrelation functions are bounded by Θi,j(l) ≤ 2 for all l and all i, j such that i ≠ j. In the presence of up to (P − 1)/2 = (29 − 1)/2 = 14 interfering signals or background noise, error-free transmission is possible because the autocorrelation peak is always higher than the overall interference and background noise level. The despread-spectrum process decodes the chip data using the unipolar asynchronous prime sequence code and converts them to a 13-bit stream.

C_{i,j}(l) = \sum_{n=0}^{840} C_i(n)\, C_j(n+l) \qquad (3)

\Theta_{i,j}(l) = \sum_{n=0}^{840} C_i(n)\, C_j\big((n+l) \bmod 841\big) = C_{i,j}(l \bmod 841) + C_{i,j}\big((l \bmod 841) - 841\big) \qquad (4)
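A sketch (ours) that verifies the two correlation properties just cited for the prime code words: the periodic autocorrelation peak equals the code weight of 29, and periodic crosscorrelations between distinct code words never exceed 2.

```python
# Periodic correlation of the 841-chip prime code words, per Eq. (4):
# Theta_{i,j}(l) = sum_n C_i(n) * C_j((n + l) mod 841).
P, L = 29, 841
S = [[(i * j) % P for j in range(P)] for i in range(P)]
C = []
for i in range(P):
    c = [0] * L
    for j in range(P):
        c[S[i][j] + P * j] = 1
    C.append(c)

def theta(i: int, j: int, l: int) -> int:
    return sum(C[i][n] * C[j][(n + l) % L] for n in range(L))

assert theta(5, 5, 0) == 29                          # autocorrelation peak = weight
assert max(theta(1, 2, l) for l in range(L)) <= 2    # crosscorrelation bound
```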

The receiver generates a CRC through the three-bit CRC algorithm using the column identification number included in the decoded 13-bit stream and compares the CRC thus derived with the one included in the stream. If the two CRCs differ, the receiver discards the 13-bit stream because it contains an error. If the two CRCs match, the receiver uses the column identification number to look up the row number and the time at which the received pulse was emitted, and then calculates the ToF, and hence the distance between the LIDAR and the object, from the emission time and the time at which the leading ’1’ of the chip data was received. The receiver records the row and column identification numbers as well as the distance between the LIDAR and the object. When these processes have been conducted for the entire set of 848 × 480 directions, a point cloud image is formed [75].
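A final receiver-side sketch (ours, reusing the crc3 helper assumed in the transmitter sketch): recompute the CRC-3 over the received column ID, discard the stream on a mismatch, and otherwise turn the interval between the recorded emission time and the leading-’1’ arrival time into a range via Eq. (1).

```python
# Receiver-side validate-and-range sketch; names are illustrative.
C_LIGHT = 299_792_458.0
POLY, CRC_BITS = 0b1011, 3

def crc3(column_id: int) -> int:
    reg = column_id << CRC_BITS
    for bit in range(12, CRC_BITS - 1, -1):
        if reg & (1 << bit):
            reg ^= POLY << (bit - CRC_BITS)
    return reg

def range_from_stream(stream13: int, t_emit: float, t_arrive: float) -> float | None:
    col = stream13 >> CRC_BITS
    if crc3(col) != (stream13 & 0b111):
        return None                      # CRC mismatch: discard the stream
    tof = t_arrive - t_emit              # leading-'1' arrival minus emission
    return C_LIGHT * tof / 2             # pulsed ToF form of Eq. (1)

col = 0x2A5                              # example 10-bit column ID (assumed)
s13 = (col << CRC_BITS) | crc3(col)
print(range_from_stream(s13, 0.0, 1.0e-6))   # ~149.9 m for a 1 us round trip
```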

3. Simulation

Simulations were conducted to verify the performance of the proposed LIDAR in comparison to existing LIDARs. The LIDAR models selected for comparison were Velodyne’s HDL-64E LIDAR, which is most frequently used among existing 3D scanning LIDARs, and ASC’s Peregrine LIDAR, which is most frequently used among existing flash LIDARs. To conduct a simulation for static scenarios, we programmed the operation in our simulator to observe the functionality accomplished by the three LIDAR models, as shown in Fig. 7 [76].

Fig. 7 Flow chart of the simulation steps for the static scenario. Green dotted lines: Velodyne’s HDL-64E; blue dotted lines: ASC’s Peregrine; red dotted lines: the proposed LIDAR.

The two green blocks show the source of laser pulses associated with each detector and the additive white Gaussian noise included in the simulation. The upper blue block is the step used to calculate the cumulative probability that each detector will fire as a function of time. The lower blue block is the step used to calculate the effects of detector timing jitter, represented by a temporal response function. The red block uses a Monte Carlo technique to yield the time at which each detector fires on a laser pulse. The two green blocks are related to the emitter, the upper blue block is related to the transmission channel in which the laser pulse travels, and the lower blue block and red block are related to the detector. A straight-through processing of all these steps represents the whole simulation of an emitter-detector pair. Velodyne’s HDL-64E LIDAR [24] features 64 independent pairs of emitters and detectors. These emitter and detector pairs are precisely aligned to provide maximum sensitivity while minimizing cross-talk. At each bearing direction, a straight-through processing of the flowchart is repeated for the 64 emitter-detector pairs. ASC’s Peregrine LIDAR [25] has one laser flash emitter and 4096 independent detectors; in this case, we repeat the lower blue block and red block 4096 times. Our proposed LIDAR has one emitter and one detector, and they operate independently: we process the green blocks and the upper blue block for the transmitter part, and the lower blue block and red block for the receiver part. The parameters used to compare the LIDAR performances in simulation are presented in Table 3.
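As a minimal sketch of the red block under assumed parameters (the detection probability and jitter values are ours, not the simulator’s), a Monte Carlo draw of one detector firing: fire with probability p_detect, then smear the true arrival with Gaussian timing jitter standing in for the temporal response function.

```python
# Monte Carlo draw of a detector firing time: fire with probability
# p_detect, then add Gaussian timing jitter to the true arrival time.
import random

def fire_time(t_true: float, p_detect: float = 0.9,
              jitter_rms: float = 100e-12) -> float | None:
    if random.random() > p_detect:
        return None                      # the detector did not fire
    return random.gauss(t_true, jitter_rms)

samples = [fire_time(1.0e-6) for _ in range(10_000)]
hits = [t for t in samples if t is not None]
print(len(hits) / len(samples))          # ~0.9 empirical detection rate
```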

Table 3. Simulation parameters

In the simulations, a 3D model of a vehicle 4.7 m in length, 1.42 m in height, and 1.82 m in width was placed ahead of the three LIDARs at four distances of 30 m, 50 m, 70 m, and 120 m, and point clouds and corresponding 3D range images were formed for comparison.

As shown in Fig. 8, when a 3D vehicle model was placed 30 m ahead of the LIDARs in the simulation, the range images generated by the three LIDARs clearly represent the shape of the measured target. Based on this shape, the target object is easily recognized as a vehicle. In the range images generated by the Peregrine LIDAR, the tires of the vehicle are partially invisible because of the system’s limited VFoV.

Fig. 8 Simulation results of distance measured by each LIDAR for a target object stopped at 30 m.

Figure 9 shows the range images generated by the three LIDARs when the 3D vehicle model was placed 50 m ahead of these systems in the simulation. These images exhibit degraded accuracy compared to those generated when the 3D vehicle model was 30 m ahead. Nevertheless, the shape of the target object is clearly recognized as that of a vehicle in these images. Even the range image generated by the Peregrine LIDAR clearly shows the entire vehicle, whereas at 30 m this system produced an image in which the vehicle was partially invisible because of its limited VFoV. As the distances between the LIDARs and the vehicle model increase, the gaps between measurement points also increase in accordance with each system’s angular resolution.

Fig. 9 Simulation results of distance measured by each LIDAR for a target object stopped at 50 m.

Figure 10 shows the range images generated by the three LIDARs when the 3D vehicle model was placed 70 m ahead of these systems in the simulation. As shown in this figure, the shape of the target object is less well defined compared to the previous simulations because of the degraded accuracy at this greater distance. In the range image generated by the HDL-64E LIDAR, the shape of the target object is unlikely to be recognized as a vehicle.

Fig. 10 Simulation results of distance measured by each LIDAR for a target object stopped at 70 m.

Figure 11 shows the range images generated by the three LIDARs when the 3D vehicle model was placed 120 m ahead of these systems in the simulation. The Peregrine LIDAR shows no result under this condition because its maximum measurement distance of 70 m was exceeded. The accuracy of the remaining images is significantly degraded compared to the range images obtained when the 3D vehicle model was 70 m ahead of the LIDARs. The shape of the target object is unlikely to be recognized as a vehicle in the range image generated by the HDL-64E LIDAR, whereas the shape can still be recognized in the image generated by the LIDAR proposed in this study.

Fig. 11 Simulation results of distance measured by each LIDAR for a target object stopped at 120 m.

Table 4 compares the number of points in the point clouds measured and generated by each LIDAR according to the distance between these systems and the 3D vehicle model. Higher numbers of points in the point clouds coincide with greater measurement accuracy. As can be seen from the table, the LIDAR proposed in this study generates roughly 10 times more measurement points than the HDL-64E LIDAR and roughly 20 times more than the Peregrine LIDAR.

Table 4. Number of points in point cloud as a summary of simulation results

4. Conclusions

In this study, a 3D scanning LIDAR has been proposed that can sequentially measure the distance in each bearing direction without waiting for each signal to return, and simulations were conducted to compare the performances of the proposed and existing LIDARs.

The LIDAR proposed in this study emits laser pulses in the form of a macropulse in each bearing direction by applying the DS-OCDMA technique to the data, introducing column identification numbers that each correspond to a particular bearing direction. The proposed system transmits laser pulses using a MEMS mirror that operates at high speed, instead of a slower motorized mirror, to deflect and steer these pulses in the desired bearing direction. Because the emission time can be identified from the column identification number on the received reflected wave, this system does not require an idle listening time to receive the reflected waves after emitting a laser. Based on these characteristics, the proposed LIDAR can measure a range image with a resolution of 848 × 480 point locations at a refresh rate of 60 times per second.

Our proposed LIDAR is well suited to terrestrial and mobile applications. As a survey method, it can be used for cultural heritage documentation, forensics, agriculture, and 3D object recognition. Autonomous vehicles and autonomous walking robots use 3D scanning LIDARs to recognize the surrounding environment and the areas in which they can move. Because the range image resolution and refresh rate of the proposed LIDAR are high, the surrounding environment can be recognized more easily in these applications without cameras.

The proposed LIDAR system can generate range images only for targets in front of the system because laser pulses are transmitted using the MEMS mirror. However, range images of the entire area surrounding the LIDAR are required for autonomous driving. Therefore, the system proposed in this study should be enhanced so that the LIDAR can measure distances over the entire surrounding area, at angles of up to 360° around the LIDAR position. Our LIDAR system also relies heavily on the transmission and reception of half-terahertz laser pulses. It would be desirable to lower the required pulse rate; to do this, a suitable method of adopting an optical bipolar synchronous code is needed.

Funding

Ministry of Education (MOE) and the National Research Foundation of Korea (NRF) (2013H1B8A2031879); MOE and the NRF (NRF-2014R1A1A2055988); Ministry of Science, ICT and Future Planning (MSIP) (IITP-2016-R2718-16-0035)

References and links

1. M. Jaboyedoff, T. Oppikofer, A. Abellán, M. Derron, A. Loye, R. Metzger, and A. Pedrazzini, “Use of LIDAR in landslide investigation: a review,” Nat. Hazards 61(1), 5–28 (2012). [CrossRef]  

2. C. Weitkamp, Lidar: Range-Resolved Optical Remote Sensing of the Atmosphere (Springer, 2012).

3. S. G. Barsanti, F. Remondino, B. J. Fenández-Palacios, and D. Visintini, “Critical factors and guidelines for 3D surveying and modelling in cultural heritage,” Int. J. Herit. Digit. Era 3(1), 141–158 (2014). [CrossRef]  

4. K. Kianka, “3D documentation and printing in forensics,” in Proceedings of Forensic Engineering 2015: Performance of the Built Environment (ASCE, 2015).

5. U. Weiss and P. Biber, “Plant detection and mapping for agricultural robots using a 3D LIDAR sensor,” Robot Auton. Syst. 59(5), 265–273 (2011). [CrossRef]  

6. Y. Guo, M. Bennamoun, F. Sohel, M. Lu, and J. Wan, “3D object recognition in cluttered scenes with local surface features: A survey,” IEEE Trans. Pattern Anal. Mach. Intell. 36(11), 2270–2287 (2014). [CrossRef]  

7. C. K. Toth, “R&D of mobile LIDAR mapping and future trends,” in Proceedings of ASPRS Annual Conference (ASPRS, 2009).

8. S. Thrun, “Toward robotic cars,” Commun. ACM 53(4), 99–106 (2010). [CrossRef]  

9. W. Burgard, D. Fox, and S. Thrun, “Probabilistic state estimation techniques for autonomous and decision support systems,” Informatik-Spektrum 34(5), 455–461 (2011). [CrossRef]  

10. J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, S. Kammel, J. Z. Kolter, D. Langer, O. Pink, V. Pratt, M. Sokolsky, G. Stanek, D. Stavens, A. Teichman, M. Werling, and S. Thrun, “Towards fully autonomous driving: Systems and algorithms,” in Proceedings of IEEE Intelligent Vehicles Symposium (IEEE, 2011), pp. 163–168.

11. A. Wright, “Automotive autonomy,” Commun. ACM 54(7), 16–18 (2011). [CrossRef]  

12. X. Liu, “Airborne LiDAR for DEM generation: some critical issues,” Prog. in Phys. Geogr. 32(1), 31–49 (2008). [CrossRef]  

13. S. Gould, P. Baumstarck, M. Quigley, A. Y. Ng, and D. Koller, “Integration visual and range data for robotic object detection,” in Proceedings of Workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications, (2008), vol. 1, pp. 1–4.

14. A. Bulyshev, M. Vanek, F. Amzajerdian, D. Pierrottet, G. Hines, and R. Reisse, “A super-resolution algorithm for enhancement of flash LIDAR data,” Proc. SPIE 7873, 78730F (2011). [CrossRef]  

15. J. W. Young, “Advantages of high resolution LiDAR,” LiDAR News Magazine 4(6) (2014).

16. J. Hancock, “Laser intensity-based obstacle detection and tracking,” Ph.D. thesis, Robotics Institute, Carnegie Mellon University (1999).

17. M. Amann, T. Bosch, M. Lescure, R. Myllylä, and M. Rioux, “Laser ranging: a critical review of usual techniques for distance measurement,” Opt. Eng. 40(1), 10–19 (2001). [CrossRef]  

18. R. D. Richmond and S. C. Cain, Direct–Detection LADAR Systems (SPIE, 2010).

19. P. F. McManamon, “Review of LADAR: A historic, yet emerging, sensor technology with rich phenomenology,” Opt. Eng. 51(6), 060901 (2012). [CrossRef]  

20. P. F. McManamon, Field Guide to Lidar (SPIE, 2015). [CrossRef]  

21. A. Süss, V. Rochus, M. Rosmeulen, and X. Rottenberg, “Benchmarking time–of–flight based depth measurement techniques,” Proc. SPIE 9751, 975118 (2016). [CrossRef]  

22. S. Á. Guðmundsson, H. Aanæs, and R. Larsen, “Environmental effects on measurement uncertainties of time–of–flight cameras,” in Proceedings of IEEE International Symposium on Signals, Circuits and Systems (IEEE, 2007), vol. 1, pp. 1–4.

23. F. Remondino and D. Stoppa, ToF Range–Imaging Cameras (Springer, 2013). [CrossRef]  

24. HDL-64E S3 Users’ Manual and Programming Guide (Velodyne, 2013).

25. Peregrine 3D Flash LIDAR Vision System (ASC, 2014). http://www.advancedscientificconcepts.com/products/peregrine.html.

26. U. Hofmann, M. Aikio, J. Janes, F. Senger, V. Stenchly, J. Hagge, H. J. Quenzer, M. Weiss, T. von Wantoch, C. Mallas, B. Wagner, and W. Benecke, “Resonant biaxial 7-mm MEMS mirror for omnidirectional scanning,” J. Micro-Nanolith. Mem. 13(1), 011103 (2014). [CrossRef]  

27. B. L. Stann, J. F. Dammann, M. Del Giorno, C. DiBerardino, M. M. Giza, M. A. Powers, and N. Uzunovic, “Integration and demonstration of MEMS–scanned Ladar for robotic navigation,” Proc. SPIE 9084, 90840J (2014). [CrossRef]  

28. B. L. Stann, J. F. Dammann, and M. M. Giza, “Progress on MEMS–scanned Ladar,” Proc. SPIE 9832, 98320L (2016). [CrossRef]  

29. X. Lee and C. Wang, “Optical design for uniform scanning in MEMS–based 3D imaging Lidar,” Appl. Opt. 54(9), 2219–2223 (2015). [CrossRef]   [PubMed]  

30. C. Niclass, K. Ito, M. Soga, H. Matsubara, I. Aoyagi, S. Kato, and M. Kagami, “Design and characterization of a 256 × 64-pixel single-photon imager in CMOS for a MEMS-based laser scanning time-of-flight sensor,” Opt. Express 20(11), 11863–11881 (2012). [CrossRef]   [PubMed]  

31. K. Ito, C. Niclass, I. Aoyagi, H. Matsubara, M. Soga, S. Kato, M. Maeda, and M. Kagami, “System design and performance characterization of a MEMS-based laser scanning time-of-flight sensor based on a 256 × 64-pixel single-photon imager,” IEEE Photonics J. 5(2), 6800114 (2013). [CrossRef]  

32. F. R. K. Chung, J. A. Salehi, and V. K. Wei, “Optical orthogonal codes: Design, analysis and applications,” IEEE Trans. Inform. Theory 35(3), 595–604 (1989). [CrossRef]  

33. A. S. Holmes and R. R. A. Syms, “All-optical CDMA using “Quasi-Prime” codes,” J. Lightwave Technol. 10(2), 279–286 (1992). [CrossRef]  

34. G.-C. Yang and W. C. Kwong, Prime Codes with Applications to CDMA Optical and Wireless Networks (Artech House, 2002).

35. C. Goursaud-Brugeaud, A. Julien-Vergonjanne, and J.-P. Cances, “Prime code efficiency in DS–OCDMA systems using parallel interference cancellation,” J. Commun. 2(3), 51–57 (2007). [CrossRef]  

36. A. M. Weiner, Z. Jiang, and D. E. Leaird, “Spectrally phase-coded O–CDMA,” J. Opt. Netw. 6(6), 728–755 (2007). [CrossRef]  

37. W. C. Kwong and G.-C. Yang, Optical Coding Theory with Prime (CRC Press, 2013).

38. P. Y. Ma, M. P. Fok, B. J. Shastri, B. Wu, and P. R. Prucnal, “Gigabit ethernet signal transmission using asynchronous optical code division multiple access,” Opt. Lett. 40(24), 5854–5857 (2015). [CrossRef]   [PubMed]  

39. P. Koopman and T. Chakravarty, “Cyclic redundancy code (CRC) polynomial selection for embedded networks,” in Proceedings of IEEE International Conference on Dependable Systems and Networks (IEEE, 2004), pp. 145–154.

40. J. Ho and E.-H. Yang, “Designing optimal multiresolution quantizers with error detecting codes,” IEEE Trans. Wirel. Commun. 12(7), 3588–3599 (2013). [CrossRef]  

41. R. A. Scholtz, “The origins of spread-spectrum communications,” IEEE Trans. Commun. 30(5), 822–854 (1982). [CrossRef]  

42. T. S. Rappaport, Wireless Communications: Principles and Practice (Prentice Hall, 2002).

43. S. Haykin, Communication Systems (Wiley, 2009).

44. P. H. Siegel, “Terahertz Technology,” IEEE Trans. Microw. Theory Techn. 50(3), 910–928 (2002). [CrossRef]  

45. C. Jansen, S. Wietzke, O. Peters, M. Scheller, N. Vieweg, M. Salhi, N. Krumbholz, C. Jördens, T. Hochrein, and M. Koch, “Terahertz imaging: applications and perspectives,” Appl. Opt. 49(19), E48–E57 (2010).

46. A. Krotkus, “Semiconductors for terahertz photonics applications,” J. Phys. D: Appl. Phys. 43(27), 273001 (2010). [CrossRef]  

47. C. M. O’Sullivan and J. A. Murphy, Field Guide to Terahertz Sources, Detectors, and Optics (SPIE, 2012).

48. C. W. Berry, N. Wang, M. R. Hashemi, M. Unlu, and M. Jarrahi, “Significant performance enhancement in photoconductive terahertz optoelectronics by incorporating plasmonic contact electrodes,” Nat. Commun. 4, 1622 (2013). [CrossRef]   [PubMed]  

49. M. Jarrahi, “Advanced Photoconductive Terahertz Optoelectronics Based on nano-Antennas and Nano-Plasmonic Light Concentrators,” IEEE Trans. THz Sci. Technol. 5(3), 391–397 (2015). [CrossRef]  

50. Y. Ko, S. Sengupta, S. Tomasulo, P. Dutta, and I. Wilke, “Emission of terahertz-frequency electromagnetic radiation from bulk GaxIn1−xAs crystals,” Phys. Rev. B 78(3), 035201 (2008). [CrossRef]  

51. S. Winnerl, “Scalable microstructured photoconductive terahertz emitters,” J. Infrared Milli. Terahz. Waves 33(4), 431–454 (2012). [CrossRef]  

52. H. Dong, S. Anderson, and T. D. Barfoot, “Two–axis scanning Lidar geometric calibration using intensity imagery and distortion mapping,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 2013), pp. 3672–3678.

53. S. T. Holmström, U. Baran, and H. Urey, “MEMS laser scanners: A review,” J. Microelectromech. S. 23(2), 259–275 (2014). [CrossRef]  

54. M. Freeman, M. Champion, and S. Madhavan, “Scanned laser pico–projectors: Seeing the big picture (with a small device),” Opt. Photon. News 20(5), 28–34 (2009). [CrossRef]  

55. L. Möller, J. Federici, A. Sinyukov, C. Xie, H. C. Lim, and R. C. Giles, “Data encoding on terahertz signals for communication and sensing,” Opt. Lett. 33(4), 393–395 (2008). [CrossRef]   [PubMed]  

56. J. Federici and L. Möller, “Review of terahertz and subterahertz wireless communications,” J. Appl. Phys. 107(11), 111101 (2010).

57. F. Peter, S. Winnerl, S. Nitsche, A. Dreyhaupt, H. Schneider, and M. Helm, “Coherent terahertz detection with a large-area photoconductive antenna,” Appl. Phys. Lett. 91(8), 081109 (2007). [CrossRef]  

58. M. Xu, M. Mittendorff, R. J. B. Dietz, H. Künzel, B. Sartorius, T. Göbel, H. Schneider, M. Helm, and S. Winnerl, “Terahertz generation and detection with InGaAs-based large-area photoconductive devices excited at 1.55 μm,” Appl. Phys. Lett. 103(25), 251114 (2013). [CrossRef]  

59. T. Nagatsuma, S. Hisatake, and H. H. N. Pham, “Photonics for millimeter-wave and terahertz sensing and measurement,” IEICE Trans. Electron. 99(2), 173–180 (2016). [CrossRef]  

60. T. Nagatsuma, S. Hisatake, M. Fujita, H. H. N. Pham, K. Tsuruda, S. Kuwano, and J. Terada, “Millimeter-wave and terahertz-wave applications enabled by photonics,” IEEE J. Quantum Electron. 52(1), 0660912 (2016). [CrossRef]  

61. S. Kono, M. Tani, P. Gu, and K. Sakai, “Detection of up to 20 THz with a low-temperature-grown GaAs photoconductive antenna gated with 15 fs light pulses,” Appl. Phys. Lett. 77(25), 4104–4106 (2000). [CrossRef]  

62. C. Baker, I. S. Gregory, M. J. Evans, and W. R. Tribe, “All-optoelectronic terahertz system using low-temperature-grown InGaAs photomixers,” Opt. Express 13(23), 9639–9644 (2005). [CrossRef]   [PubMed]  

63. B. Pradarutti, R. Müller, W. Freese, G. Matthäus, S. Riehemann, G. Notni, S. Nolte, and A. Tünnermann, “Terahertz line detection by a microlens array coupled photoconductive antenna array,” Opt. Express 16(22), 18443–18450 (2008). [CrossRef]   [PubMed]  

64. B. Sartorius, M. Schlak, D. Stanze, H. Roehle, H. Künzel, D. Schmidt, H.-G. Bach, R. Kunkel, and M. Schell, “Continuous wave terahertz systems exploiting 1.5 μm telecom technologies,” Opt. Express 17(17), 15001–15007 (2009). [CrossRef]   [PubMed]  

65. A. Hu and V. P. Chodavarapu, “CMOS optoelectronic lock-in amplifier with integrated phototransistor array,” IEEE Trans. Biomed. Circuits Syst. 4(5), 274–280 (2010). [CrossRef]  

66. F. Grum and T. E. Wightman, “Absolute reflectance of Eastman white reflectance standard,” Appl. Opt. 16, 2775–2776 (1977). [CrossRef]   [PubMed]  

67. S. Tominaga and B. A. Wandell, “Standard surface-reflectance model and illuminant estimation,” J. Opt. Soc. Am. A 6(4), 576–584 (1989). [CrossRef]  

68. F. K. Knight, D. I. Klick, D. P. Ryan-Howard, and J. R. Theriault Jr., “Visible laser radar: range tomography and angle-angle-range detection,” Opt. Eng. 30(1), 55–65 (1991). [CrossRef]  

69. A. Springsteen, “Standards for the measurement of diffuse reflectance – an overview of available materials and measurement laboratories,” Anal. Chim. Acta 380(2), 379–390 (1999). [CrossRef]  

70. R. Sabatini and M. A. Richardson, Airborne Laser Systems Testing and Analysis (RTO of NATO, 2010).

71. J.-S. Kim, Y.-H. Seo, Y. Suh, H.-J. Park, and J.-Y. Sim, “A 300–MS/s, 1.76–ps–resolution, 10–b asynchronous pipelined time–to–digital converter with on–chip digital background calibration in 0.13–μm CMOS,” IEEE J. Solid-St. Circ. 48(2), 516–526 (2013). [CrossRef]  

72. C. Liu and Y. Wang, “A 128–channel, 710 m samples/second, and less than 10 ps RMS resolution time–to–digital converter implemented in a Kintex–7 FPGA,” IEEE Trans. Nucl. Sci. 62(3), 773–783 (2015). [CrossRef]  

73. A. Pifferi, D. Contini, A. D. Mora, A. Farina, L. Spinelli, and A. Torricelli, “New frontiers in time-domain diffuse optics, a review,” J. Biomed. Opt. 21(9), 091310 (2016). [CrossRef]   [PubMed]  

74. Z. Cheng, X. Zheng, M. J. Deen, and H. Peng, “Recent developments and design challenges of high–performance ring oscillator CMOS time–to–digital converters,” IEEE Trans. Electron Dev. 63(1), 235–251 (2016). [CrossRef]  

75. J. Shan and C. K. Toth, Topographic Laser Ranging and Scanning: Principles and Processing (CRC Press, 2008). [CrossRef]  

76. M. E. O’Brien and D. G. Fouche, “Simulation of 3D laser Radar systems,” Lincoln Laboratory Journal 15(1), 37–60 (2005).


