
Vehicle positioning scheme based on visible light communication using a CMOS camera


Abstract

As an application of visible light communication (VLC), visible light positioning (VLP) technology has great potential for vehicle positioning due to its immunity to electromagnetic interference, low cost, and high positioning accuracy. In addition, the light emitting diodes (LEDs) in street lights, traffic lights, and vehicle lighting systems make this positioning solution attractive for vehicular applications. However, the modulated LED signal produces blooming effects in the images captured by a complementary metal oxide semiconductor (CMOS) camera, which degrades the positioning performance. Moreover, positioning errors occur when the CMOS camera is tilted. In this paper, a vehicle positioning scheme based on VLC is proposed and experimentally demonstrated. It uses an LED street light as the transmitter and a CMOS camera as the receiver. To mitigate the blooming effect in CMOS-camera-based VLC, a bit length estimation (BLE) based sampling scheme is proposed to recover the reference location information from the captured images. In addition, a novel angle compensation scheme combined with a particle filter is proposed to improve the accuracy of vehicle positioning when the CMOS camera is tilted. The experiments are performed at moving speeds of 40 to 80 cm/s and measured distances of 80 to 115 cm. Assuming the performance of the proposed demonstrator is preserved when upscaling to a real scenario (such as speeds of 4 to 8 m/s and distances of several meters between the LED and the camera), the proposed vehicle positioning scheme based on VLC achieves positioning accuracies of 0.128 m and 0.13 m at a vehicle speed of 8 m/s for tilt angles of 9° and 15.5°, respectively.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Accurate vehicle positioning is indispensable for Intelligent Transportation Systems (ITS), and it is essential to support the safe driving, assisted parking, and autonomous navigation envisioned by the Internet of Vehicles (IoV) [1–3]. The traditional global positioning system (GPS) is the most popular vehicle positioning technique because of its wide coverage and low price [4]. However, due to the multipath effect and the blocking of satellite signals by buildings, GPS has insufficient reliability in indoor environments, tunnels, and metropolitan areas, and its positioning accuracy is about 10 meters [5]. Other vehicle positioning techniques such as light detection and ranging (LiDAR) and radio detection and ranging (Radar) can provide better positioning accuracy, but at high implementation cost and with extra complex equipment [6]. As an emerging technique, VLP can reuse urban lighting facilities and vehicle lighting systems for vehicle positioning without extra installations, which reduces the implementation cost [7]. For underground parking areas or other places without GPS signals, VLP can provide navigation services for vehicles by receiving the parking information sent from LED ceiling lights [8]. VLP is also suitable for city areas where radio-frequency-based positioning techniques degrade due to signal interference from the growing number of vehicles. In addition, high positioning accuracy and reliability can be obtained in vehicle positioning by collecting traffic and vehicular information through vehicle-to-infrastructure (V2I)-VLC and vehicle-to-vehicle (V2V)-VLC links [9].

For vehicle VLP based on VLC, the receiver can be a photodetector (PD) or a CMOS camera. Under strong direct solar radiation, a PD is easily saturated by intense optical power, so it is generally used for indoor positioning [10]. In contrast, the spatial separability of a CMOS camera allows it to effectively separate interference such as sunlight from the LED sources, making it suitable for both indoor and outdoor positioning [11–14]. In addition, the field of view (FOV) of a CMOS camera is wider than that of a PD, so a wider positioning range and higher accuracy can be achieved [15]. Therefore, vehicle VLP based on VLC with a CMOS camera can receive location information as a unique identification (ID) through the optical wireless links of infrastructure, as illustrated in Fig. 1, and it is attracting attention in outdoor vehicle positioning [16–21].

Fig. 1. Vehicular positioning system based on VLC.

When GPS signals are blocked, a V2I- and V2V-VLC based VLP method using tunnel ceiling lights and vehicle tail lights can be applied in tunnel environments; it achieves sub-meter positioning accuracy in simulation [16]. However, at least three LEDs and an image sensor with more than 1700 pixels per row are required. A V2V-VLC based vehicle VLP scheme is proposed and simulated in [17]. It requires only one LED taillight to determine the distance between the front and rear vehicles, but it uses two cameras, which increases the system cost. In addition, a positioning system based on V2I-VLC is proposed and experimentally demonstrated in [18]. The location of the vehicle is obtained by receiving the identification (ID) information from LED street lights and processing it with two cameras mounted on the vehicle; it needs the ID information of at least six LEDs simultaneously, and high computational complexity is required. A V2V-communication-based VLP scheme using stereo vision and neural networks is proposed to achieve long-range vehicle positioning [19]. However, its positioning accuracy is tens of meters in simulation, and it is not practicable when the CMOS camera is tilted.

For VLC-based vehicle positioning methods, LED street lights or car lights transmit the reference location signal, which is received by the CMOS camera. However, the modulated LED signal produces blooming effects in the images captured by the CMOS camera, which makes the recovery of the reference location challenging and seriously degrades the positioning performance. In addition, positioning errors occur when the CMOS camera is tilted.

In this paper, a high-accuracy vehicle positioning scheme based on VLC is proposed and experimentally demonstrated. At the transmitter, the LED street lights transmit their location data. At the receiver, a vehicle with one CMOS camera receives the location data. To mitigate the blooming effect in CMOS-camera-based VLC, a sampling scheme based on bit length estimation (BLE) is proposed to effectively recover the location data from the LED street lights. For vehicle positioning, the distance between the LED street lights and the CMOS camera is calculated using photogrammetry and combined with the received data from the LED street lights to obtain the vehicle position. Moreover, considering that the accuracy of vehicle positioning is degraded by a tilted CMOS camera, an angle compensation (AC) positioning scheme with a particle filter (PF) is proposed. Furthermore, the decoding accuracy rate and positioning error are experimentally investigated under different vehicle moving speeds and camera tilt angles.

2. Proposed vehicle positioning scheme

Figure 2 shows the proposed vehicle positioning scheme based on VLC using LED street lights and a CMOS camera. At the transmitter, each LED street light on both sides of the road broadcasts a signal with its location data. Because of its unique geographical location, each LED street light carries unique location data. At first, the location data for each LED street light is packaged into data packets. Each data packet consists of an 8-bit header and a 12-bit payload. The 8-bit header is inserted at the beginning of each data packet and carries the information for detecting the start of the packet, and the 12-bit payload contains the location data. Subsequently, by changing the intensity of the light, each LED street light connected to the drive circuit transmits the data packets with On-Off Keying (OOK) modulation, as sketched in the example below.
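As an illustration, a minimal Python sketch of this packet framing is given below. The exact 8-bit header pattern is not specified in the paper, so the pattern used here is a hypothetical placeholder.

```python
# Minimal sketch of the OOK packet framing: 8-bit header + 12-bit payload.
HEADER = [1, 1, 1, 0, 0, 0, 1, 0]  # hypothetical header pattern (assumption)

def build_packet(location_id: int) -> list[int]:
    """Frame a 12-bit location ID into one data packet."""
    assert 0 <= location_id < 2 ** 12, "payload is limited to 12 bits"
    payload = [(location_id >> (11 - i)) & 1 for i in range(12)]
    return HEADER + payload

# Example: an LED street light broadcasting location ID 0x2A5.
packet = build_packet(0x2A5)  # 20 bits, sent as OOK light-intensity levels
```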

Fig. 2. The proposed positioning scheme based on VLC.

After transmission over the free-space channel, the signal with location data sent from the LED street lights is captured by the CMOS camera mounted on the vehicle and processed by a Central Processing Unit (CPU). Due to the rolling shutter of the CMOS camera, its pixels are exposed sequentially by row or column [22]. Thus, the received LED signal in the captured images appears as a pattern of black and white stripes. After LED detection, digital signal processing techniques are used for decoding. In this way, the location data for each LED street light can be recovered. In addition, the distance between the LED street lights and the CMOS camera is calculated using photogrammetric analysis. Combining the location data from the LED street lights with this distance yields the vehicle position.

3. Digital signal processing for decoding

In this paper, after LED detection, the digital signal processing scheme shown in Fig. 3 is used to decode the received LED signal composed of black and white stripes and to recover the location data of the corresponding LED street light. It includes Grayscale conversion, Column matrix selection, low-pass filter (LPF) smoothing, Threshold decision, Header location, and BLE based sampling. The decoding process is as follows. Firstly, Grayscale conversion discards the chrominance component of the image to reduce the computational cost. Secondly, Column matrix selection picks the gray values of all pixels in one column of the LED image for decoding, converting the information in the black and white stripes into a discrete sequence of gray values between 0 and 255. Thirdly, to reduce the influence of noise and sharp pulses, an LPF is applied to smooth the discrete gray-value sequence. Fourthly, a third-order fit is applied, and its output is used as a threshold to distinguish "0" and "1" in the sequence: values greater than the threshold are regarded as "1", and the others as "0". In this way, the discrete gray-value sequence is converted into a binary data stream. A compact sketch of these steps is given after Fig. 3.

Fig. 3. The decoding processing schemes.
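As an illustration, here is a minimal Python/NumPy sketch of the decoding chain for one image column, under the assumption of a simple moving-average LPF (the paper does not specify the filter type):

```python
import numpy as np

def decode_column(image_rgb: np.ndarray, col: int, kernel: int = 5) -> np.ndarray:
    """Grayscale -> column selection -> LPF smoothing -> third-order-fit
    threshold -> binary stream, for one pixel column of the LED image."""
    # Grayscale conversion (ITU-R BT.601 luma weights), dropping chrominance
    gray = image_rgb[..., :3] @ np.array([0.299, 0.587, 0.114])
    # Column matrix selection: gray values of all pixels in one column
    column = gray[:, col].astype(float)
    # LPF smoothing with a moving-average kernel (assumed filter type)
    smoothed = np.convolve(column, np.ones(kernel) / kernel, mode="same")
    # Third-order polynomial fit over the rows, used as the decision threshold
    rows = np.arange(smoothed.size)
    threshold = np.polyval(np.polyfit(rows, smoothed, 3), rows)
    # Threshold decision: above the fitted curve -> "1", otherwise -> "0"
    return (smoothed > threshold).astype(int)
```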

Then Header location is used to find the packet header in the binary data stream, and the clock information carried by the header is extracted to facilitate down-sampling. For down-sampling, the traditional clock recovery (CR) based sampling scheme estimates the bit length from the clock information in the packet header and uses it as a fixed sampling interval [23–25]. Due to the blooming effect [26], some bits of "0" turn into "1" in the binary data stream after the threshold decision. Thus, the bit length estimated from the packet header is inaccurate and generally non-integer. However, the binary data stream is a discrete sequence, and the sampling interval needs to be an integer. The traditional CR scheme therefore induces a sampling frequency offset (SFO) that affects the recovery of the LED location data.

In this paper, a BLE based sampling scheme is proposed to estimate the number of bits within the black and white stripes. Unlike the CR based sampling scheme, since the estimated bit length is not accurate, the BLE based sampling scheme uses it only as auxiliary information and dynamically changes the length of the sampling interval according to the lengths of the black and white stripes in the image, in order to determine the number of bits each stripe represents. It sets a dynamic range for the number of bits within each stripe, which effectively reduces the impact of the SFO caused by inaccurate clock recovery. The flowchart of the BLE based sampling scheme is shown in Fig. 4. The discrete binary data sequence contains N short sequences of continuous "1" or "0" bits, and the length of each short sequence represents the width of the corresponding black or white stripe. At first, the bit lengths BL of "1" and "0" are estimated from the packet header and denoted as BL1 and BL0, respectively. Then, the length SLi of the i-th short sequence Si in the binary data sequence is calculated. For example, if the first short sequence S1 is "0 0 0 0 0 0", its length SL1 is 6. Ideally, for a continuous "1" sequence the number of bits ni in the short sequence is SLi/BL1, and for a continuous "0" sequence it is SLi/BL0, with ni an integer. However, due to the blooming effect, the black and white stripes are distorted; SLi, BL1, and BL0 are usually not accurate, and the calculated bit number ni is a non-integer, which is obviously incorrect. Thus, to accurately estimate the number of bits as a main part Ii plus a rest part Di for each short sequence, a FIX operation is first adopted to obtain the main part Ii of the bit number ni. It is given by

$$I_{i} = FIX(\frac{{SL_{i}}}{{BL}}) = \left\lfloor {\frac{{SL_{i}}}{{BL}}} \right\rfloor$$
where $\lfloor{x^{\prime}} \rfloor$ denotes the largest integer less than or equal to x′. BL equals BL1 or BL0 according to whether Si is a continuous "1" sequence or a continuous "0" sequence.

Fig. 4. The flowchart of the BLE based sampling scheme.

Subsequently, a MOD operation and G function are used to obtain the rest part Di of bit number ni. The MOD operation and G function are given by

$$x = MOD({SL_{i},BL} )= SL_{i}\%BL$$
$$D_{i} = G(x) = \left\{ {\begin{array}{{cc}} {1,}&{if\textrm{ }x \ge th \times BL}\\ {0,}&{\textrm{else}} \end{array}} \right.$$
where a%b gives the remainder of a divided by b, and th equals 0.85 or 0.45 for a continuous "1" sequence or a continuous "0" sequence, respectively.

Finally, new short sequences whose lengths equal the estimated bit numbers are obtained, and the location data sent from the LED street lights is recovered. A compact sketch of this procedure is given below.
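The following Python sketch implements Eqs. (1)–(3) for a decoded binary stream, using the header-estimated bit lengths BL1 and BL0 as auxiliary information:

```python
from itertools import groupby

def ble_sample(binary_stream, bl1: float, bl0: float) -> list[int]:
    """BLE based sampling: estimate the number of bits in each run of
    identical values (stripe). th = 0.85 for '1' runs and th = 0.45
    for '0' runs, as in Eq. (3)."""
    bits = []
    for value, run in groupby(binary_stream):
        sl = len(list(run))                        # stripe length SL_i
        bl, th = (bl1, 0.85) if value == 1 else (bl0, 0.45)
        main = int(sl // bl)                       # FIX operation, Eq. (1)
        rest = 1 if (sl % bl) >= th * bl else 0    # MOD and G, Eqs. (2)-(3)
        bits.extend([value] * (main + rest))       # n_i = I_i + D_i bits
    return bits

# Example: a stream whose stripes are distorted by blooming.
recovered = ble_sample([0] * 6 + [1] * 7 + [0] * 5, bl1=3.4, bl0=2.9)
```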

4. Vehicle positioning based on LED street light and photogrammetry

According to the recovered location data from two LED street lights and the centroid coordinates of the LEDs in the image, the position of the vehicle can be obtained. A vehicle-to-vehicle positioning scheme using two LEDs has been demonstrated [27], but it does not take the tilt of the CMOS camera into account. In this paper, based on the LED location data received via VLC, a vehicle positioning scheme with angle compensation (AC) is proposed. The geometric distance between the LED street lights and the CMOS camera is calculated based on photogrammetry. Subsequently, the vehicle position is obtained by combining the locations of the LED street lights with the corresponding geometric distance.

4.1 Vehicle positioning scheme with angle compensation

Figure 5 shows the scene of the vehicle positioning scheme based on VLC. In the world coordinate system, the X axis is parallel to the road direction, the Y axis is perpendicular to the road direction, and the Z axis is perpendicular to the road surface. The recovered world coordinates of the four LED street lights are (XL1, YL1, ZL1), (XL2, YL2, ZL2), (XL3, YL3, ZL3), and (XL4, YL4, ZL4), respectively. The centroid coordinates of the four LED street lights in the image are (xL1, yL1), (xL2, yL2), (xL3, yL3), and (xL4, yL4), respectively. D1 and D2 are the distances between LED1 and LED2 and between LED3 and LED4 in the world coordinate system, respectively; d1 and d2 are the corresponding distances in the image coordinate system. The center coordinates of the image are (xmid, ymid).

Fig. 5. The scene of vehicle positioning scheme based on VLC.

For the proposed vehicle positioning scheme with angle compensation, the location data from two LED street lights is required. It is assumed that the two front LEDs, LED1 and LED2 in Fig. 5, are chosen. Based on photogrammetry, the CMOS camera can be modeled as a pinhole camera. The position (X, Y, Z) of the CMOS camera, and hence of the vehicle, in the world coordinate system can then be obtained from the triangle similarity principle. It is expressed as:

$$X = X_{L1} + \frac{{D_{1}}}{{d_{1}}} \times (x_{mid} - x_{L1}) = X_{L2} - \frac{{D_{1}}}{{d_{1}}} \times (x_{L2} - x_{mid})$$

D1 and d1 are calculated as:

$$D_{1} = \sqrt {{{(X_{L1} - X_{L2})}^2} + {{(Y_{L1} - Y_{L2})}^2}} $$
$$d_{1} = \sqrt {{{[dx \times (x_{L1} - x_{L2})]}^2} + {{[dy \times (y_{L1} - y_{L2})]}^2}} $$
where dx and dy are the horizontal and vertical size of each pixel in the CMOS camera.
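As a worked sketch, the following Python function computes the lateral coordinate X from Eqs. (4)–(6); the variable names mirror the symbols above, and converting the pixel offset in Eq. (4) to physical units with the pixel size dx is an implementation assumption of this sketch:

```python
import math

def lateral_position(XL1, YL1, XL2, YL2,     # world coordinates of LED1, LED2
                     xL1, yL1, xL2, yL2,     # image centroids of LED1, LED2
                     x_mid, dx, dy):         # image centre and pixel sizes
    """X coordinate of the camera (vehicle) via triangle similarity."""
    D1 = math.hypot(XL1 - XL2, YL1 - YL2)                 # Eq. (5)
    d1 = math.hypot(dx * (xL1 - xL2), dy * (yL1 - yL2))   # Eq. (6)
    # Eq. (4); the pixel offset is scaled by dx to physical units (assumption)
    return XL1 + (D1 / d1) * dx * (x_mid - xL1)
```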

Two cases are investigated when the camera is tilted back at an angle, as shown in Fig. 6. LED1 in the world coordinate system projects onto the image plane as LED1′. In case 1, LED1′ is located on the lower half of the image plane, as shown in Fig. 6(a); in case 2, LED1′ is located on the upper half of the image plane, as shown in Fig. 6(b). It should be noted that there are two street lights, LED1 and LED2; since they are parallel to each other and LED2 is blocked by LED1 in this view, only LED1 is shown in the YOZ plane of the world coordinate system. M is the center of the CMOS camera, and O′ is the center of the image plane. The distance O′M is the focal length f of the CMOS camera, and CM′ is the projection of f onto the Y axis. l is the physical distance between O′ and LED1′ in the y-direction of the image coordinate system, and PC is the projection of l onto the Y axis. According to geometric principles, when the camera is tilted back and the projected LED1 is on the lower half of the image plane, Y is expressed as:

$$Y = Y_{L1} - \frac{{D_{1}}}{{d_{1}}} \times (l \times \sin \alpha + f \times \cos \alpha ) = Y_{L2} - \frac{{D_{1}}}{{d_{1}}} \times (l \times \sin \alpha + f \times \cos \alpha )$$

When the camera is tilted back and the projected LED1 is on the upper half of the image plane, Y is given by:

$$Y = Y_{L1} - \frac{{D_{1}}}{{d_{1}}} \times (f \times \cos \alpha - l \times \sin \alpha ) = Y_{L2} - \frac{{D_{1}}}{{d_{1}}} \times (f \times \cos \alpha - l \times \sin \alpha )$$

Fig. 6. (a) Case 1: LED1′ on the lower half of the image plane; (b) Case 2: LED1′ on the upper half of the image plane.

l can be calculated as:

$$l = E[\begin{array}{{cc}} {abs(y_{mid} - y_{L1})}&{abs(y_{mid} - y_{L2})} \end{array}]$$
where E[s] means the average of s, and abs(s) is the absolute value of s.

Based on the vehicle size, Z is given by:

$$Z = h$$
where h is the vertical height of the vehicle.

According to the proposed vehicle positioning scheme with angle compensation, the world coordinates (X, Y, Z) of the vehicle are thus obtained; a compact sketch is given below.
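The following is a minimal Python sketch of the Y computation with angle compensation, combining Eqs. (7)–(9). Converting the pixel offsets of Eq. (9) to a physical length with the pixel size dy is an assumption of this sketch:

```python
import math

def depth_with_angle_compensation(YL1, yL1, yL2, y_mid, D1, d1,
                                  f, dy, alpha, lower_half: bool):
    """Y coordinate of the tilted camera; alpha is the backward tilt angle
    in radians, lower_half selects case 1 (LED1' on the lower half of the
    image plane)."""
    # l: mean y-offset of the two LED centroids from the image centre,
    # Eq. (9), scaled to physical units with dy (assumption)
    l = dy * (abs(y_mid - yL1) + abs(y_mid - yL2)) / 2.0
    if lower_half:
        proj = l * math.sin(alpha) + f * math.cos(alpha)   # Eq. (7)
    else:
        proj = f * math.cos(alpha) - l * math.sin(alpha)   # Eq. (8)
    return YL1 - (D1 / d1) * proj
```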

4.2 Particle filter aided vehicle positioning scheme

To improve the accuracy of vehicle positioning based on VLC, a vehicle positioning scheme aided by a sampling importance resampling (SIR) based particle filter is proposed to reduce the random error. It is assumed that the vehicle is moving at a variable speed and T0 is the sampling period. The vehicle state at sampling time kT0 is expressed as ${x_k} = {\left[ {\begin{array}{cccc} {px(k)}&{py(k)}&{vx(k)}&{vy(k)} \end{array}} \right]^\textrm{T}}$, where (px(k), py(k)) is the real vehicle position and (vx(k), vy(k)) are the vehicle speeds along the x and y coordinate axes. The vehicle state evolves as:

$${x_{k + 1}} = {\boldsymbol \phi }{x_k} + {\boldsymbol \varphi }{a_k} + {\omega _k}$$
where ${\boldsymbol \phi } = \left[ {\begin{array}{cccc} 1&0&{{T_0}}&0\\ 0&1&0&{{T_0}}\\ 0&0&1&0\\ 0&0&0&1 \end{array}} \right]$, ${\boldsymbol \varphi } = \left[ {\begin{array}{cc} {0.5T_0^2}&0\\ 0&{0.5T_0^2}\\ {{T_0}}&0\\ 0&{{T_0}} \end{array}} \right]$, ${a_k}$ is the acceleration of the vehicle, and ${\omega _k}$ is the process noise.

The observation value zk is obtained by the proposed vehicle positioning scheme with angle compensation, and it is expressed as:

$${z_k} = {\boldsymbol \psi }({x_k}) + {v_k}$$
where ${\boldsymbol \psi }({x_k}) = \left[ {\begin{array}{cc} {px(k)}&{py(k)} \end{array}} \right]$, and vk is the observation noise.

Let N be the total number of particles and $w_{k - 1}^i$ the weight of particle $x_{k - 1}^i$. After obtaining the particle set $\{ x_{k - 1}^i,w_{k - 1}^i\} _{i = 1}^N$ at sampling time (k−1)T0 and the observation value zk at sampling time kT0, the SIR based particle filter proceeds as follows. Firstly, based on the prior probability density, the predicted particles $\{ x_k^i\} _{i = 1}^N$ at sampling time kT0 are obtained. It is expressed as:

$$x_k^i\sim p({x_k}|x_{k - 1}^i)$$

Secondly, for i = 1, 2…N, the weight value of each particle is calculated according to the likelihood function. It is given by:

$$w_k^i = w_{k - 1}^ip({z_k}|x_k^i)$$

Thirdly, for i = 1, 2…N, the weight value of each particle is normalized. It is given by:

$$w_k^i = w_k^i\textrm{/}\sum\limits_{i = 1}^N {w_k^i}$$

Fourthly, from the particle set $\{ x_k^i,w_k^i\} _{i = 1}^N$, resampling is applied to generate a new particle set $\{ \tilde{x}_k^i,\tilde{w}_k^i\} _{i = 1}^N$ according to the weight values, and the weight of each particle is reset to 1/N. Finally, the state at sampling time kT0 is estimated to obtain the optimal vehicle position. It is expressed as:

$$\hat{x} = \sum\limits_{i = 1}^N {\tilde{x}_k^i\tilde{w}_k^i}$$

Due to the various noise disturbances, many errors exist in the discrete positioning results of the angle compensation scheme, and they are generally not Gaussian distributed. The particle filter can handle both Gaussian and non-Gaussian noise, making it an effective technique for enhancing the accuracy of vehicle positioning based on VLC. A compact sketch of one SIR step is given below.
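The following Python/NumPy sketch implements one SIR step following Eqs. (11)–(16). The Gaussian process and observation noise levels (q_std, r_std) are assumptions for illustration:

```python
import numpy as np

def sir_step(particles, weights, z, T0, accel, q_std, r_std, rng):
    """One SIR particle-filter step for the state [px, py, vx, vy]^T.
    particles: (N, 4) array; weights: (N,) array; z: observed (px, py);
    accel: (ax, ay) acceleration input; rng: np.random.Generator."""
    N = particles.shape[0]
    phi = np.array([[1, 0, T0, 0],
                    [0, 1, 0, T0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1]], dtype=float)
    varphi = np.array([[0.5 * T0**2, 0],
                       [0, 0.5 * T0**2],
                       [T0, 0],
                       [0, T0]], dtype=float)
    # Prediction: propagate each particle through the motion model, Eq. (11)
    particles = (particles @ phi.T + np.asarray(accel) @ varphi.T
                 + rng.normal(0.0, q_std, size=(N, 4)))
    # Update: weight each particle by the observation likelihood, Eq. (14),
    # using the observation model of Eq. (12) with assumed Gaussian noise
    residual = np.asarray(z) - particles[:, :2]
    likelihood = np.exp(-0.5 * np.sum((residual / r_std) ** 2, axis=1))
    weights = weights * likelihood
    weights = weights / weights.sum()          # normalization, Eq. (15)
    # Resampling: draw N particles in proportion to their weights,
    # then reset every weight to 1/N
    idx = rng.choice(N, size=N, p=weights)
    particles, weights = particles[idx], np.full(N, 1.0 / N)
    estimate = particles.T @ weights           # state estimate, Eq. (16)
    return particles, weights, estimate
```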

5. Experimental setup and results discussion

To evaluate the proposed vehicle positioning scheme based on VLC, a mobile platform at a 1:10 scale of the real scene is built, as shown in Fig. 7. Four LEDs with 5 cm × 5 cm lampshades, acting as LED street lights, are fixed on both sides of the mobile platform; they transmit the LED location data, which is generated offline by a Personal Computer (PC). LED4 is regarded as the reference LED. A Field Programmable Gate Array (FPGA) converts the data into the OOK signal, and the LED driving circuit controls the illuminance of the light for signal transmission. After transmission over free space, the OOK signals are received by the CMOS camera on the mobile platform, which can move along the Y axis. The distance between LED1 and LED2 is set to 60 cm, as is the distance between LED3 and LED4, and the distance between LED1 and LED3 is set to 100 cm. Table 1 lists the key parameters of the experiment.

Fig. 7. Experimental setup.

Table 1. The key parameters in the experiment

By varying the moving speed of the CMOS camera from 4 m/s to 8 m/s and adjusting its tilt angle to 0°, 9°, and 15.5°, the decoding accuracy rate (DAR) for the LED location data and the mean positioning error are investigated using the proposed vehicle positioning scheme. Tilt angles of 9° and 15.5° are selected for two reasons: a small tilt angle has essentially no effect on the positioning accuracy, while the LED signal source must remain within the camera's FOV, which limits the maximum usable tilt angle.

5.1 DAR performance for vehicle positioning scheme based on VLC

The DAR performance of the proposed vehicle positioning scheme based on VLC is measured at different distances between the reference LED and the CMOS camera, as shown in Fig. 8. The DAR is the ratio of the number of received pictures from which the reference LED position data can be successfully decoded to the total number of received pictures; a high DAR corresponds to a low bit error rate for communication. Figure 8 compares the DAR performance at different distances for the traditional CR based sampling scheme and the proposed BLE based sampling scheme at the receiver. When the distance between the reference LED and the CMOS camera is 6.9 m, the DAR reaches 0.9 with the BLE based sampling scheme, versus 0.75 with the traditional CR based sampling scheme. The reason is that the proposed BLE based sampling scheme effectively resists the SFO caused by the blooming effect. In addition, as the distance increases, the contrast between the black and white stripes weakens, which decreases the DAR.

Fig. 8. DAR performance of the proposed vehicle positioning scheme.

5.2 Mean positioning error for vehicle positioning scheme based on VLC

The mean positioning error is defined as:

$$\textrm{Mean positioning error = }\frac{1}{K}\sum\limits_{i = 1}^K {\sqrt {{{(dm_{x}(i) - da_{x}(i))}^2} + {{(dm_{y}(i) - da_{y}(i))}^2}} }$$
where the measured distance (dmx, dmy) is obtained using the proposed vehicle positioning scheme, the actual distance (dax, day) between the reference LED and the CMOS camera along the X and Y axes is calculated from the speed of the vehicle and the frame rate of the camera, and K is the number of samples. When the CMOS camera is not tilted, the mean positioning errors of the proposed vehicle positioning scheme based on VLC at moving speeds of 4 m/s, 5 m/s, 6 m/s, 7 m/s, and 8 m/s are shown in Fig. 9(a). At a moving speed of 8 m/s, the mean positioning error using the proposed positioning scheme with AC and PF is reduced to 2.30 cm. Noise in the movement and changes of speed contribute to the positioning errors, and the AC and PF scheme filters out this noise and interference to improve the positioning accuracy. As the vehicle moves from 8.03 m to 11.25 m at a speed of 4 m/s, the percentage of measurement error relative to the actual distance is shown in Fig. 9(b). It remains within 0.9% using the proposed positioning scheme with AC for most distances; combined with the PF, the positioning accuracy is further enhanced, and a measurement error within 0.8% is achieved for most distances.
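For completeness, a short NumPy sketch of Eq. (17), with dm and da as (K, 2) arrays of measured and actual (x, y) distances:

```python
import numpy as np

def mean_positioning_error(dm: np.ndarray, da: np.ndarray) -> float:
    """Mean Euclidean positioning error over K samples, Eq. (17)."""
    return float(np.mean(np.linalg.norm(dm - da, axis=1)))
```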

Fig. 9. (a) The mean positioning error when the CMOS camera is not tilted; (b) the measurement positioning error when the CMOS camera is not tilted.

When the CMOS camera is tilted at 9°, the mean positioning error of the proposed vehicle positioning scheme based on VLC at moving speeds of 4 m/s, 5 m/s, 6 m/s, 7 m/s, and 8 m/s is shown in Fig. 10(a), comparing the conventional positioning scheme [27], the proposed positioning scheme with AC, and the proposed positioning scheme with AC and PF. At a moving speed of 8 m/s, the mean positioning error using the proposed positioning scheme with AC is reduced to 0.19 m, since the scheme compensates for the positioning errors caused by the tilted camera. After the PF is applied to obtain the optimal location estimate of the vehicle, the mean positioning error is further reduced to 0.128 m, and the positioning accuracy is improved.

Fig. 10. (a) The mean positioning error with the camera tilted at 9°; (b) the mean positioning error with the camera tilted at 15.5°.

In addition, when the CMOS camera is tilted at 15.5°, the mean positioning error of the proposed vehicle positioning scheme based on VLC at the different moving speeds is shown in Fig. 10(b). At a moving speed of 8 m/s, the mean positioning error is decreased to 0.33 m by the proposed positioning scheme with AC, and further reduced to 0.13 m with AC and PF. From Figs. 10(a) and (b), it can be seen that, compared with the conventional positioning scheme, the proposed positioning scheme with AC achieves better positioning performance when the CMOS camera is tilted. Moreover, the PF is an effective noise-reduction technique that further improves the positioning accuracy.

Figure 11 compares several vehicle positioning schemes based on VLC with a CMOS camera; the x axis is the distance between the vehicle and the LEDs, and the y axis is the positioning error. At a distance of 0.65 m between the LED street lights and the CMOS camera, a positioning error of 0.009 m is reported in [21], which uses spatial two-phase-shift keying (S2-PSK) modulation to realize the signal transmission between the LEDs and the vehicle through simulations. At a distance of 2.1 m between the LED lights and the CMOS camera, a positioning error of 0.07 m is achieved experimentally [27]. In addition, a positioning error of 0.5 m is experimentally demonstrated in [18]; however, the signal transmission between the LED lights and the CMOS camera is not taken into account. In this paper, with the proposed positioning scheme, a positioning error of 0.17 m is obtained at a distance of 11 m between the LED street lights and the CMOS camera, with OOK modulation used to transmit the LED location data at a data rate of 1.44 kb/s.

Fig. 11. The comparison of some vehicle positioning schemes based on VLC with CMOS camera.

6. Conclusion

In this paper, a vehicle positioning scheme based on VLC using a CMOS camera is proposed and experimentally demonstrated. The LED street light transmits an OOK signal carrying its location data, and the CMOS camera installed on the moving vehicle receives and recovers the location data of the LED street lights. By calculating the distance between the LED street lights and the CMOS camera and combining it with the location information from the LED street lights, the position of the moving vehicle is obtained. To reduce the impact of the SFO caused by the blooming effect, a BLE based sampling scheme is proposed. Meanwhile, a particle-filter-aided angle compensation positioning scheme is proposed to improve the positioning accuracy when the CMOS camera is tilted. Experimental results show that, at a moving vehicle speed of 8 m/s, the proposed positioning scheme achieves a positioning accuracy of 0.13 m for a camera tilt angle of 15.5°.

Funding

Natural Science Foundation of Hunan Province (2020JJ4210); National Natural Science Foundation of China (61775054).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. F. Tang, Y. Kawamoto, N. Kato, and J. Liu, “Future Intelligent and Secure Vehicular Network toward 6G: Machine-Learning Approaches,” in Proceedings of the IEEE (2020), pp. 292–307.

2. F. Yang, S. Wang, J. Li, Z. Liu, and Q. Sun, “An Overview of Internet of Vehicles,” China Commun. 11(10), 1–15 (2014). [CrossRef]  

3. A. Bazzi, B. M. Masini, A. Zanella, and A. Calisti, “Visible Light Communications as a Complementary Technology for the Internet of Vehicles,” Computer Communications 93, 39–51 (2016). [CrossRef]  

4. Y. Zhuang, L. Hua, L. N. Qi, J. Yang, P. Cao, Y. Cao, Y. P. Wu, J. Thompson, and H. Haas, "A Survey of Positioning Systems Using Visible LED Lights," IEEE Commun. Surv. Tutorials 20(3), 1963–1988 (2018). [CrossRef]

5. S. Savasta, M. Pini, and G. Marfia, “Performance Assessment of a Commercial GPS Receiver for Networking Applications,” in IEEE Consumer Communications and Networking Conference (2008), pp. 613–617.

6. T. Kim, J. Lee, and T. Park, “Fusing Lidar, Radar, and Camera Using Extended Kalman Filter for Estimating the Forward Position of Vehicles,” in IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM) (2019), pp. 374–379.

7. D. Trong-Hop and Y. Myungsik, “An in-Depth Survey of Visible Light Communication Based Positioning Systems,” Sensors 16(5), 678 (2016). [CrossRef]  

8. J. He, Z. Jiang, J. Shi, and Q. Tang, “An Effective Mapping Scheme for Visible Light Communication with Smartphone Camera,” IEEE Photonics Technol. Lett. 32(10), 557–560 (2020). [CrossRef]  

9. A. Căilean and M. Dimian, “Current Challenges for Visible Light Communications Usage in Vehicle Applications: A Survey,” IEEE Communications Surveys & Tutorials 19(4), 2681–2703 (2017). [CrossRef]  

10. W. Xu, J. Wang, H. Shen, H. Zhang, and X. You, “Indoor Positioning for Multiphotodiode Device Using Visible-Light Communications,” IEEE Photonics J. 8(1), 1–11 (2016). [CrossRef]  

11. N. Saha, M. S. Ifthekhar, N. T. Le, and Y. M. Jang, “Survey on Optical Camera communications: Challenges and Opportunities,” IET Optoelectron. 9(5), 172–183 (2015). [CrossRef]  

12. X. Liu, X. Wei, and L. Guo, “DIMLOC: Enabling High-Precision Visible Light Localization Under Dimmable LEDs in Smart Buildings,” IEEE Internet Things J. 6(2), 3912–3924 (2019). [CrossRef]  

13. Y. Li, Z. Ghassemlooy, X. Tang, B. Lin, and Y. Zhang, “A VLC Smartphone Camera Based Indoor Positioning System,” IEEE Photonics Technol. Lett. 30(13), 1171–1174 (2018). [CrossRef]  

14. J. Lain, L. Chen, and S. Lin, “Indoor Localization Using K-Pairwise Light Emitting Diode Image-Sensor-Based Visible Light Positioning,” IEEE Photonics J. 10(6), 1–9 (2018). [CrossRef]  

15. X. Zhao and J. M. Lin, “Maximum Likelihood Estimation of Vehicle Position for Outdoor Image Sensor-based Visible Light Positioning System,” Opt. Eng. 55(4), 043104 (2016). [CrossRef]  

16. B. W. Kim and S. Jung, “Vehicle Positioning Scheme Using V2V and V2I Visible Light Communications,” in IEEE 83rd Vehicular Technology Conference (VTC) (2016), pp. 1–5.

17. V. T. B. Tram and M. Yoo, “Vehicle-to-Vehicle Distance Estimation Using a Low-Resolution Camera Based on Visible Light Communications,” IEEE Access 6, 4521–4527 (2018). [CrossRef]  

18. T. H. Do and M. Yoo, “Visible light Communication Based Vehicle Positioning Using LED Street Light and Rolling Shutter CMOS Sensors,” Opt. Commun. 407, 112–126 (2018). [CrossRef]  

19. M. S. Ifthekhar, N. Saha, and Y. M. Jang, “Stereo-Vision-Based Cooperative-Vehicle Positioning Using OCC and Neural Networks,” Opt. Commun. 352, 166–180 (2015). [CrossRef]  

20. T. Do and M. Yoo, “Visible Light Communication-Based Vehicle-to-Vehicle Tracking Using CMOS Camera,” IEEE Access 7, 7218–7227 (2019). [CrossRef]  

21. M. T. Hossan, M. Z. Chowdhury, M. K. Hasan, M. Shahjalal, T. Nguyen, N. T. Le, and Y. M. Jang, “A New Vehicle Localization Scheme Based on Combined Optical Camera Communication and Photogrammetry,” Mobile Information Systems 2018, 8501898 (2018).

22. J. Shi, J. He, J. He, Z. W. Jiang, Y. D. Zhou, and Y. Q. Xiao, “Enabling user mobility for optical camera communication using mobile phone,” Opt. Express 26(17), 21762–21767 (2018). [CrossRef]  

23. J. He, Y. Zhou, R. Deng, J. Shi, J. He, Z. Jiang, and Q. Tang, “Efficient Sampling Scheme Based on Length Estimation for Optical Camera Communication,” IEEE Photonics Technol. Lett. 31(11), 841–844 (2019). [CrossRef]  

24. J. He, Z. Jiang, J. Shi, Y. Zhou, and J. He, “A Novel Column Matrix Selection Scheme for VLC System with Mobile Phone Camera,” IEEE Photonics Technol. Lett. 31(2), 149–152 (2019). [CrossRef]  

25. K. Yu, J. He, and Z. Huang, “Decoding scheme based on CNN for mobile optical camera communication,” Appl. Opt. 59(23), 7109–7113 (2020). [CrossRef]  

26. C. W. Chow, C. Y. Chen, and S. H. Chen, “Visible Light Communication Using Mobile-Phone Camera with Data Rate Higher than Frame Rate,” Opt. Express 23(20), 26080–26085 (2015). [CrossRef]  

27. J. He, K. Tang, J. He, and J. Shi, “Effective Vehicle-to-Vehicle Positioning Method Using Monocular Camera Based on VLC,” Opt. Express 28(4), 4433–4443 (2020). [CrossRef]  
