Optica Publishing Group

Effective vehicle-to-vehicle positioning method using monocular camera based on VLC

Open Access

Abstract

In this paper, an effective vehicle-to-vehicle (V2V) positioning method using a monocular camera based on visible light communication (VLC) is proposed and experimentally demonstrated. One of the key factors affecting the accuracy of monocular positioning is the baseline, which is usually not fixed. To improve the accuracy of monocular positioning, the known distance between the taillights is used as a fixed baseline. Moreover, a Kalman filter (KF) is applied to reduce random errors and enhance the accuracy of the estimated vehicle position. In addition, to verify the feasibility of the method, a controllable mobile platform is built. By varying the distance between the estimating vehicle and the target vehicle, as well as the relative speed of the two vehicles, the performance of the proposed VLC-based positioning method is investigated. The experimental results show that the proposed V2V VLC-based positioning method achieves centimeter-level accuracy.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Autonomous vehicles have attracted a lot of attention because they provide comfortable driving and proper vehicle control [1]. To ensure the safety of autonomous vehicles, vehicle-to-vehicle (V2V) positioning is an essential research topic. Due to its low cost and easy accessibility, the global positioning system (GPS) has been a popular solution. However, GPS suffers from inadequate accuracy due to signal blockage by buildings and the multipath effect [2]. Besides GPS, RADAR and LIDAR can achieve quite high accuracy, but at a high cost [3]. Recently, visible light communication (VLC) based positioning has become an attractive alternative due to its low cost, immunity to electromagnetic interference, and low power consumption [4–6]. A VLC-based positioning method provides three functionalities: illumination, communication, and positioning. The VLC system is composed of a transmitter and a receiver. Light emitting diodes (LEDs) serve as the transmitter, conveying information for communication and positioning, while a camera placed on each car serves as the receiver. A CPU (central processing unit) on the estimating vehicle processes and decodes the signal. Thus, the VLC-based positioning scheme can be utilized to measure the distance between the estimating vehicle and the target vehicle. The V2V positioning system based on VLC is shown in Fig. 1.

Fig. 1. V2V positioning system based on VLC.

Positioning methods based on VLC are mainly divided into two categories: binocular positioning [7] and monocular positioning [8–10]. Binocular positioning can achieve higher accuracy in long-distance measurement. However, its computational complexity, such as calibration and matching between two cameras, makes it difficult to achieve real-time performance, which is vital for V2V positioning. Therefore, due to its low cost and real-time performance [8], monocular positioning is more attractive for automotive applications. Usually, a monocular positioning method measures the distance between the estimating vehicle and the target vehicle through a baseline. Shadows underneath the vehicle, license plates, and lane markings have been used as baselines [9,10]. However, due to the lack of a fixed baseline and dedicated lighting equipment, the inevitable scale drift makes it difficult to obtain high-accuracy positioning results [11], and it is not easy to measure the distance in a dark environment. Moreover, random errors can severely degrade the positioning performance.

In this paper, an effective monocular positioning method based on VLC is proposed and experimentally demonstrated. To address the issue of scale drift and to account for dark environments, the known fixed distance between the taillights of the target vehicle is used as the baseline. To enhance the positioning accuracy, a Kalman filter (KF) is adopted to reduce the error variance of the position estimate. In addition, the impacts of distance, the KF, and different speeds on the positioning accuracy are evaluated.

2. Principles

2.1 The pinhole model

For positioning methods using a camera based on VLC, the pinhole model is used [12]. As shown in Fig. 2, it involves three coordinate systems: the world coordinate system, the camera coordinate system, and the image coordinate system.

Fig. 2. Schematic diagram of the pinhole model.

In the pinhole model, each point A (XW, YW, ZW) in the world coordinate system can be mapped to a point A′ (x, y) in the image coordinate system. The mapping is given by:

$${Z_W}\left[ {\begin{array}{c} x\\ y\\ 1 \end{array}} \right] = K \cdot \left[ {\begin{array}{cc} R&T\\ 0&1 \end{array}} \right] \cdot \left[ {\begin{array}{c} {{X_W}}\\ {{Y_W}}\\ {{Z_W}}\\ 1 \end{array}} \right]$$
where the matrix K (3${\times} $4) is the camera intrinsic matrix, R (3${\times} $3) is the rotation matrix and T (3${\times} $1) is the translation vector.

The camera intrinsic matrix K contains the intrinsic parameters of the camera, such as the focal length and pixel size, and can be expressed as:

$$K = \left[ {\begin{array}{cccc} {{f_x}}&0&{{u_0}}&0 \\ 0&{{f_y}}& {{v_0}}&0 \\ 0&0& {\; 1}&{\; 0} \end{array}} \right]$$
where ${f_x}$ is the ratio of the focal length to the width of a pixel, ${f_y}$ is the ratio of the focal length to the height of a pixel, and $({u_0},{v_0})$ is the center point (principal point) of the image plane.

The rotation matrix R and translation vector T transform a point from the world coordinate system into the camera coordinate system. In this paper, the camera coordinate system and the world coordinate system coincide, so R and T can be written as:

$$\textrm{R} = \left[ {\begin{array}{ccc} 1&0&0\\ 0&1&0\\ 0&0&1 \end{array}} \right]$$
$$\textrm{T} = \left[ {\begin{array}{ccc} 0&0&0 \end{array}} \right]$$
In this way, a point in the world coordinate system is mapped into the image coordinate system. However, because the value of ZW is unknown, a point in the world coordinate system cannot be recovered from a known point in the image coordinate system with one camera. Thus, in this paper, an effective method is proposed to address this issue, and it is experimentally demonstrated in a V2V positioning system using a monocular camera based on VLC.
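The forward mapping of Eqs. (1)–(4) can be sketched in a few lines. A minimal example follows; the intrinsic values are hypothetical, not those of the camera used in the paper:

```python
import numpy as np

# Hypothetical intrinsics: fx, fy in pixels, (u0, v0) principal point
fx, fy, u0, v0 = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, u0, 0.0],
              [0.0, fy, v0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])          # Eq. (2)

# Camera and world frames coincide, so R = I and T = 0 (Eqs. 3-4)
Rt = np.eye(4)

def project(Xw, Yw, Zw):
    """Map a world point to image coordinates via Eq. (1)."""
    p = K @ Rt @ np.array([Xw, Yw, Zw, 1.0])
    return p[0] / p[2], p[1] / p[2]           # divide by Zw

x, y = project(0.5, 0.2, 10.0)                # -> (360.0, 256.0)
```

As the text notes, this mapping cannot be inverted from a single image alone, which is why the fixed taillight baseline is introduced in Section 2.2.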

2.2 The proposed V2V positioning method

For monocular positioning based on VLC, an unfixed baseline degrades the positioning accuracy. In [9,10], shadows, license plates, or lane markings serve as the baseline for monocular V2V positioning, all of which are susceptible to lighting conditions. In a dark environment, such as an unlit street at night, the positioning accuracy is difficult to guarantee. In this paper, the distance between the taillights in the world coordinate system is used as the baseline; it is always fixed and is not affected by lighting. The proposed V2V positioning method using a monocular camera based on VLC is shown in Fig. 3. A monocular camera is placed on the estimating vehicle and captures images of the two LED taillights on the target vehicle. Then, a CPU on the estimating vehicle processes the captured image and calculates the distance between the estimating vehicle and the target vehicle.

Fig. 3. The proposed V2V positioning method using a monocular camera based on VLC.

Figure 3 includes two coordinate systems: the world coordinate system and the image coordinate system. The world coordinate system consists of the X, Y and Z axes, the image coordinate system consists of the X′ and Y′ axes, and O and O′ are their respective origins. L1 and L2 represent the two LED taillights on the target vehicle, and L′1 and L′2 are their corresponding mappings in the image coordinate system. The distance between L1 and L2 is denoted DL, and the distance between L′1 and L′2 is denoted DL′; that is, DL and DL′ are the distances between the two taillights in the world and image coordinate systems, respectively. O is the center of the camera installed on the estimating vehicle. It is assumed that DL, the focal length DOO′ and the image sensor resolution are given; the focal length and resolution are obtained from the camera specifications. The proposed V2V positioning method using a monocular camera based on VLC can be described as follows.

Firstly, M′ is the midpoint of the segment with end points L′1 and L′2, and M is the midpoint of the segment with end points L1 and L2. Assuming that the target vehicle is in front of the estimating vehicle, the similar-triangle property in triangle OL1L2 gives:

$$\frac{{{D_{L^{\prime}}}}}{{{D_L}}} = \frac{{{D_{OM^{\prime}}}}}{{{D_{OM}}}}$$
Subsequently, in triangle OMN:
$$\frac{{{D_{OM^{\prime}}}}}{{{D_{OM}}}} = \frac{{{D_{OO^{\prime}}}}}{{{D_{ON}}}}$$
where N is the foot of the perpendicular between ON and NM. In Fig. 3, DON is the distance between the estimating vehicle and the target vehicle, and it is given by:
$${D_{ON}} = \frac{{{D_L}{D_{oo^{\prime}}}}}{{{D_{L^{\prime}}}}}$$
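Eq. (7) reduces to a one-line computation once DL′ is measured. A small sketch with illustrative numbers follows; the focal length and image-plane separation are assumptions, not measured values from the experiment:

```python
def distance_to_target(d_led_world, focal_length, d_led_image):
    """Eq. (7): D_ON = D_L * D_OO' / D_L'.
    All three lengths must share the same unit (e.g., meters)."""
    return d_led_world * focal_length / d_led_image

# 0.06 m LED baseline (as in the experiment), hypothetical 4 mm focal
# length and 0.2 mm separation of the LED images on the sensor
d_on = distance_to_target(0.06, 0.004, 0.0002)   # ≈ 1.2 m
```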
To obtain the value of DL’, the image processing scheme is utilized in the VLC based monocular positioning method.

The image processing scheme is shown in Fig. 4. At the receiver, the camera installed on the estimating vehicle captures the two LEDs on the target vehicle. Assuming that the luminance of the two LEDs is much higher than the background, grayscale conversion and contrast enhancement are applied first, yielding the image in Fig. 4(a). Then, a global gray threshold is chosen to extract the LEDs. Since some image noise remains, a combination of morphological open and close operations is used to eliminate it. Subsequently, the image coordinates of the two LED centers are obtained through contour extraction and center point extraction; they are marked as L′1 (x1, y1) and L′2 (x2, y2) in Fig. 4(b). The distance DL′ between L′1 and L′2 is given by:

$${D_{L^{\prime}}} = \sqrt {{{({dx({{x_1} - {x_2}} )} )}^2} + {{(dy({{y_1} - {y_2}} ))}^2}} $$
where dx and dy are the horizontal and vertical size of each pixel on the image sensor, respectively. Based on Eq. (8), DL′ is obtained, and the position of the target vehicle relative to the estimating vehicle can then be calculated via Eq. (7).
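Eq. (8) converts the pixel coordinates of the two LED centroids into a physical distance on the sensor. A minimal sketch, with a unit pixel size for illustration:

```python
import math

def image_distance(c1, c2, dx, dy):
    """Eq. (8): distance D_L' between the two LED centers on the image
    sensor, from their pixel coordinates and the pixel sizes dx, dy."""
    (x1, y1), (x2, y2) = c1, c2
    return math.hypot(dx * (x1 - x2), dy * (y1 - y2))

# With unit pixel size, a 3-4-5 triangle gives 5 units of separation
d_lp = image_distance((0, 0), (3, 4), 1.0, 1.0)    # -> 5.0
```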

Fig. 4. The schematic of the image processing scheme: (a) the frame after contrast enhancement, (b) the frame after contour extraction and center point extraction.

2.3 Kalman filter aided positioning scheme

The flowchart of the Kalman filter aided V2V positioning scheme based on VLC is shown in Fig. 5. At the receiver, the video frame containing the two LEDs of the target vehicle is first captured by the monocular camera. Secondly, the frame is processed to extract the image coordinates of the two LED centers. Then, the positioning result is calculated using the proposed VLC-based positioning method. To reduce the random errors in the positioning result, a KF is applied.

Fig. 5. The flowchart of the Kalman filter aided V2V positioning scheme based on VLC.

In Fig. 5, the initial state of the KF is set to the state of the target vehicle. Subsequently, the current vehicle state is predicted from the previous state, and the predicted position together with the camera-based positioning result is fed to the KF to obtain the optimal state estimate. This estimate then serves as the prior vehicle state for the next iteration of the loop until positioning ends. Each iteration outputs the optimal state of the target vehicle.

In the Kalman filter aided V2V positioning scheme, the state of the target vehicle at time k includes its position and velocity. It can be written as:

$${x_k} = \left[ {\begin{array}{{c}} {{p_k}}\\ {{v_k}} \end{array}} \right]$$
where ${p_k}$ is the position of the target vehicle, and ${v_k}$ is the relative velocity between the target vehicle and the estimating vehicle.

The KF is implemented in two stages [13]. The first is the “prediction” stage, in which the predicted state of the target vehicle is denoted by ${\hat{x}_{k|k - 1}}$ and its covariance matrix by ${P_{k|k - 1}}$. The prediction model is expressed as:

$${{\hat{x}}_{k|k - 1}} = F{{\hat{x}}_{k - 1}}$$
$${P_{k|k - 1}} = F{P_{k - 1}}{F^T} + Q$$
where F is the prediction (state transition) matrix:
$$F = \left[ {\begin{array}{cc} 1&{\varDelta t}\\ 0&1 \end{array}} \right]$$
Here ${\hat{x}_{k - 1}}$ and ${P_{k - 1}}$ are the optimal state estimate and its covariance matrix obtained by the KF at time k-1, Q is the process noise covariance, and $\Delta t$ is the interval between two image frames.

The second is the “update” stage, in which the camera-based positioning result is denoted by ${z_k}$ and the covariance of the positioning error at the current state of the target vehicle is denoted by ${R_k}$. The Kalman gain is given by:

$${K_k} = {P_{k|k - 1}}{H^T}{{({H{P_{k|k - 1}}{H^T} + {R_k}} )}^{ - 1}}$$
where H is the matrix mapping the state ${x_k}$ to the measurement ${z_k}$:
$$H = \left[ {\begin{array}{cc} 1&0 \end{array}} \right]$$
Then, the Kalman gain is used to calculate the optimal state estimate of the target vehicle:
$${{\hat{x}}_k} = {{\hat{x}}_{k|k - 1}} + {K_k}({{z_k} - H{{\hat{x}}_{k|k - 1}}} )$$
The Kalman gain is also used to update the covariance matrix of the optimal state estimate:
$${P_k} = {P_{k|k - 1}} - {K_k}H{P_{k|k - 1}}$$
Then ${\hat{x}_k}$ and ${P_k}$ are used at time k + 1.
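The prediction and update stages above can be sketched as a small filter class. This is a minimal sketch: the process-noise and measurement-noise values q and r are illustrative assumptions, not the values used in the experiment:

```python
import numpy as np

class PositionKF:
    """1-D constant-velocity Kalman filter for the V2V range estimate.
    State x = [position, relative velocity]^T, as in the text."""

    def __init__(self, p0, v0, dt=1/60, q=1e-3, r=1e-2):
        self.x = np.array([[p0], [v0]])       # initial state
        self.P = np.eye(2)                    # initial covariance
        self.F = np.array([[1.0, dt],
                           [0.0, 1.0]])       # prediction matrix F
        self.H = np.array([[1.0, 0.0]])       # state-to-measurement map H
        self.Q = q * np.eye(2)                # process noise covariance
        self.R = np.array([[r]])              # measurement noise covariance

    def step(self, z):
        # Prediction stage
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update stage
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([[z]]) - self.H @ self.x)
        self.P = self.P - K @ self.H @ self.P
        return float(self.x[0, 0])            # filtered position

kf = PositionKF(p0=1.2, v0=0.0)
smoothed = [kf.step(z) for z in (1.21, 1.18, 1.22, 1.19)]
```

Each camera-based positioning result is pulled toward the constant-velocity prediction, which is what suppresses the frame-to-frame random error observed in the experiments.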

3. Experimental setup and results

3.1 Experimental setup

The experimental setup of the proposed monocular positioning scheme based on VLC is shown in Fig. 6. To emulate a real scenario, the experimental setup is scaled 1:25 relative to the real scene. Assuming the distance between the two taillights is 1.5 m, the distance between the two LEDs in the experiment is set to 0.06 m according to the 1:25 ratio. The other parameters, such as the distance and the relative speed between the target vehicle and the estimating vehicle, are also scaled by the same ratio.

Fig. 6. Experimental setup of the proposed monocular positioning scheme based on VLC.

In Fig. 6, LED1 and LED2 serve as the transmitter of the target vehicle; they are fixed. The monocular camera (iPhone 7) serves as the receiver on the estimating vehicle; it moves at a constant velocity on the mobile platform, and this velocity can be controlled. Thus, a controllable mobile platform for V2V monocular positioning based on VLC is built. The distance between LED1 and LED2, denoted DL, is set to 0.06 m. The distance between transmitter and receiver, denoted Ltotal, consists of two parts. One part is the length of the mobile platform, denoted L, which is fixed at 0.6 m. The other part is the distance between the transmitter and the bottom of the mobile platform, denoted L′, which can be changed. To emulate positioning at different distances, L′ = 0.6 m corresponds to short distance positioning (SDP), and L′ = 1.5 m corresponds to long distance positioning (LDP). The length of Ltotal is given by:

$${L_{total}} = L + L^{\prime}$$
In this way, Ltotal is 1.2 m for SDP and 2.1 m for LDP.
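The 1:25 scaling and Eq. (17) can be checked with a couple of lines; a sketch, where the constant and helper names are ours:

```python
SCALE = 25  # experiment : real scene = 1 : 25

def to_experiment(real_length):
    """Scale a real-scene length (or speed) down to the platform."""
    return real_length / SCALE

def l_total(L, L_prime):
    """Eq. (17): total transmitter-receiver distance."""
    return L + L_prime

led_gap = to_experiment(1.5)       # 1.5 m taillight gap -> 0.06 m
sdp = l_total(0.6, 0.6)            # short distance positioning: 1.2 m
ldp = l_total(0.6, 1.5)            # long distance positioning: 2.1 m
```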

According to the ratio of 1:25, the experimental parameters are given in Table 1.

Table 1. The experimental parameters

3.2 Experimental results and discussions

As the receiver moves on the mobile platform, the positioning range equals the length L. In the experiment, the impacts of the KF and of the relative speed between transmitter and receiver are evaluated. Five measurements are performed at each speed in both SDP and LDP. For each measurement, video frames are obtained, processed, and used to calculate the positioning information. To evaluate the positioning accuracy, the speed of the receiver, denoted v, is known and controllable. Thus, the real distance, denoted Dreal, is obtained as follows:

$${D_{real}} = v\ast \; \varDelta t$$
where Δt is the interval between two image frames, equal to 1/60 s. The positioning accuracy is obtained by comparing the positioning results with the real distance Dreal. To evaluate the positioning error, the mean square error (MSE) and average position error (APE) are adopted.
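The ground truth of Eq. (18) and the two error metrics can be computed per frame; a minimal sketch, where the function names are ours:

```python
def real_distance(v, dt=1/60):
    """Eq. (18): ground-truth displacement per frame interval."""
    return v * dt

def ape_and_mse(estimates, truths):
    """Average position error (APE) and mean square error (MSE)
    between per-frame positioning results and ground-truth distances."""
    errors = [e - t for e, t in zip(estimates, truths)]
    ape = sum(abs(err) for err in errors) / len(errors)
    mse = sum(err ** 2 for err in errors) / len(errors)
    return ape, mse

# Toy example: two frames with 5 cm and 0 cm error
ape, mse = ape_and_mse([1.05, 1.00], [1.00, 1.00])   # ≈ (0.025, 0.00125)
```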

In Figs. 7 and 8, the positioning error is the average over the five measurements. Figure 7 shows the positioning error for SDP at four different relative speeds. Figures 7(a)–7(d) compare the positioning error with and without the KF in SDP, at speeds of 10.5 m/s, 13.3 m/s, 18.9 m/s and 21.7 m/s, respectively. Without the KF, the positioning error in SDP fluctuates between −0.2 and 0.2 m; with the KF, the positioning-error curve is almost a horizontal line near zero.

Fig. 7. The positioning error in the case of SDP at speeds of 10.5 m/s (a), 13.3 m/s (b), 18.9 m/s (c) and 21.7 m/s (d), respectively.

Fig. 8. The positioning error in the case of LDP at speeds of 10.5 m/s (a), 13.3 m/s (b), 18.9 m/s (c) and 21.7 m/s (d), respectively.

Figure 8 shows the positioning error for LDP at four different relative speeds. Figures 8(a)–8(d) compare the positioning error with and without the KF in LDP, at speeds of 10.5 m/s, 13.3 m/s, 18.9 m/s and 21.7 m/s, respectively. Compared with SDP, the positioning error variance increases noticeably in LDP, because the random error grows as the distance increases. To reduce this random error, the KF is applied in the monocular positioning scheme. The results indicate that the proposed VLC-based positioning method with the KF greatly improves the accuracy in LDP for V2V positioning at different relative speeds.

From Figs. 7 and 8, the proposed VLC-based positioning method can estimate the distance between the receiver and the transmitter precisely in both SDP and LDP, owing to the fixed baseline for monocular positioning, which guarantees the effectiveness of the proposed method. Nevertheless, random errors from the image processing and the vibration of the mobile platform affect the performance; the KF effectively improves the accuracy of the proposed positioning method.

Furthermore, the impact of different relative speeds is also investigated for the proposed monocular positioning scheme. The APE and MSE performance at relative speeds of 10.5, 13.3, 18.9 and 21.7 m/s are shown in Fig. 9 and Fig. 10, respectively. The smaller the MSE and APE, the more accurate the positioning. From Fig. 10, in both LDP and SDP, the MSE is less than 0.03 at all speeds, indicating that different speeds have little effect on the MSE thanks to the fixed baseline. From Fig. 9, the proposed VLC-based monocular positioning method achieves centimeter-level accuracy in SDP; with the KF, centimeter-level accuracy is achieved in both SDP and LDP, and the positioning performance in LDP is improved. Thus, the KF can effectively suppress the random noise.

Fig. 9. The APE performance of the positioning scheme under different speeds.

Fig. 10. The MSE performance of the positioning scheme under different speeds.

4. Conclusion

In this paper, an effective V2V positioning method using a monocular camera based on VLC was proposed and experimentally demonstrated. The proposed method provides a fixed baseline and can work in dark environments. The impacts of different speeds and distances on the positioning method were investigated. Moreover, the KF was used to suppress the random error in the positioning process and improve the positioning accuracy. The experimental results showed that the VLC-based positioning achieved centimeter-level accuracy in SDP, and centimeter-level accuracy could also be achieved in LDP when the KF was applied. In addition, the proposed method is robust to different relative speeds between the two vehicles.

Funding

National Natural Science Foundation of China (61775054); Hunan Provincial Science and Technology Department (2016GK2011).

Disclosures

The authors declare no conflicts of interest.

References

1. G. Xie, Y. Chen, Y. Liu, R. Li, and K. Li, “Minimizing Development Cost With Reliability Goal for Automotive Functional Safety During Design Phase,” IEEE Trans. Reliab. 67(1), 196–211 (2018). [CrossRef]  

2. G. M. Djuknic and R. E. Richton, “Geolocation and assisted GPS,” Computer 34(3), 123–125 (2001). [CrossRef]  

3. D. Vivet, F. Gérossier, P. Checchin, L. Trassoudaine, and R. Chapuis, “Mobile ground-based radar sensor for localization and mapping: An evaluation of two approaches,” Int. J. Adv. Robot. Syst. 10(8), 307–318 (2013). [CrossRef]  

4. J. He, Z. Li, J. He, and J. Shi, “Visible Laser Light Communication based on LDPC-Coded Multiband CAP and Adaptive Modulation,” J. Lightwave Technol. 37(4), 1207–1213 (2019). [CrossRef]  

5. J. He, Z. Jiang, J. Shi, Y. Zhou, and J. He, “A Novel Column Matrix Selection Scheme for VLC System with Mobile Phone Camera,” IEEE Photonics Technol. Lett. 31(2), 149–152 (2019). [CrossRef]  

6. J. Shi, J. He, J. He, Z. Jiang, Y. Zhou, and Y. Xiao, “Enabling user mobility for optical camera communication using mobile phone,” Opt. Express 26(17), 21762–21767 (2018). [CrossRef]  

7. V. T. B. Tram and M. Yoo, “Vehicle-to-vehicle distance estimation using a low-resolution camera based on visible light communications,” IEEE Access 6, 4521–4527 (2018). [CrossRef]  

8. S. Liu, Z. Li, Y. Zhang, and X. Cheng, “Introduction of key problems in long-distance learning and training,” Mobile Netw. Appl. 24(1), 1–4 (2019). [CrossRef]  

9. J. Wang, F. Zou, M. Zhang, and Y. Li, “A monocular ranging algorithm for detecting illegal vehicle jumping,” 2017 International Conference on Green Informatics, pp. 25–29, 2017.

10. J. Xue, S. Xu, and S. Wang, “Research of vehicle monocular measurement system based on computer vision,” Proceedings of the 2013 International Conference on Machine Learning and Cybernetics, pp. 957–961, 2013.

11. S. Song, M. Chandraker, and C. C. Guest, “High Accuracy Monocular SFM and Scale Correction for Autonomous Driving,” IEEE Transactions on Pattern Analysis and Machine Intelligence 38(4), 730–743 (2016). [CrossRef]  

12. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge Univ. Press, Cambridge, U.K., 2004).

13. T.-H. Do and M. Yoo, “Visible Light Communication-Based Vehicle-to-Vehicle Tracking Using CMOS Camera,” IEEE Access 7, 7218–7227 (2019). [CrossRef]  
