
Using lighting design tool to simplify the visible light positioning plan and reduce the deep learning loading

Open Access

Abstract

We propose transforming commercially available lighting design software into an indoor visible light positioning (VLP) design tool. The proposed scheme works well with different deep learning methods, reducing the burden of training data set collection. The indoor VLP models under evaluation include second order regression, fully-connected neural network (FC-NN), and convolutional neural network (CNN). Experimental results show that similar positioning accuracy is obtained whether the indoor VLP models are trained with an experimentally acquired data set or with a data set obtained from the software. Hence, the proposed method reduces the training load for indoor VLP.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The recent advances in solid-state design and fabrication technologies allow the development of light-emitting diodes (LEDs) with low power consumption, low cost, low heat dissipation and long life expectancy. As LED lighting is more environmentally friendly, incandescent and fluorescent light sources are gradually being phased out and replaced by LEDs. Besides lighting, LEDs can provide wireless data transmission, known as visible light communication (VLC) or optical wireless communication (OWC). VLC and OWC offer many transmission merits. They provide abundant communication spectrum, roughly 2000 times wider than the traditional radio-frequency (RF) communication spectrum. VLC uses the unlicensed visible spectrum (400 nm to 700 nm), which does not interfere with nearby RF devices. Hence, VLC is positioned as a complementary wireless access technology to RF techniques, particularly in indoor areas. The high directionality and spatial confinement of visible light enable VLC to provide high data rates and secure transmission. Recently, different techniques have been proposed to further increase the data rate of VLC systems, such as employing equalizer circuits [1–3]; applying advanced modulations (e.g., orthogonal frequency division multiplexing (OFDM)) [4–6]; implementing wavelength division multiplexing [5,7] or spatial multiplexing [8,9]; and developing high speed transmitters (e.g., laser-diode [10,11] or micro-LED light sources [12,13]).

Besides providing lighting and communication, LEDs can also provide high accuracy indoor positioning, where the Global Positioning System (GPS) cannot work efficiently. Such schemes are known as visible light positioning (VLP). Several indoor positioning schemes have been proposed, including Bluetooth, Wireless Fidelity (WiFi) and hybrid VLP [14]. The proximity based VLP system [15] is simple; however, its positioning error is unsatisfactory for high precision applications. Time-of-arrival (TOA) [16] and time-difference-of-arrival (TDOA) [17] based VLP systems have also been proposed; however, strict synchronization between the transmitter (Tx) and receiver (Rx) is required. An angle-of-arrival (AOA) based VLP system [18] has been demonstrated; however, an angular diversified Rx is needed, which has a large footprint. To achieve centimeter-level positioning accuracy, received-signal-strength (RSS) based VLP systems using trilateration [19,20] have also been realized. These systems rely on analyzing the optical powers received from different light sources to estimate the distances between the LED Txs and the Rx.

VLP can provide many unique applications. As discussed in [21], possible future applications of VLP include asset tracking, mobile robot navigation, and aids for the visually impaired. The VLP accuracy is significantly influenced by the room dimensions, the LED emission field-of-view (FOV), the LED arrangement, etc. It would therefore be valuable to have a simple, low cost and efficient VLP simulation tool for designers and engineers to establish an accurate VLP system. In this work, we transform the free-of-charge lighting design software DIALux into an indoor VLP design tool. For the first time, we show that the proposed scheme works well with different deep learning methods, reducing the burden of training data set collection. The indoor VLP models under evaluation include second order regression, fully-connected neural network (FC-NN), and convolutional neural network (CNN). Experimental results show that similar positioning accuracy is obtained whether the indoor VLP models are trained with an experimentally acquired data set or with a data set obtained from the DIALux simulation software. Hence, the proposed scheme can serve as an indoor VLP design tool and reduce the training load of indoor VLP systems. It is worth pointing out that there are many optical design and simulation software packages on the market, such as AGI32 and Relux. Reference [22] evaluates different artificial lighting simulation tools against a virtual building reference. It illustrates that AGI32, Relux and DIALux are all capable of modeling 3D environments, and all can accept luminaire definitions provided by manufacturers. Comparing the output features of the programs, all three support viewing the working plane with iso-illuminance contours and false color in camera view [22]. Only Relux supports the virtual reality markup language, and only AGI32 supports walkthrough animation. Moreover, both Relux and DIALux are free-of-charge. Besides, the calculated illuminance values are within acceptable precision for all three simulation packages in the case of simple geometric descriptions and direct lighting.

2. Experiment and algorithms

Collecting the RSS data from the LEDs at exact indoor coordinates for a VLP system is very time-consuming. Besides, owing to aging or to replacement with different LED brands, the RSS data will change and the data set must be updated. As mentioned above, the DIALux software is used [23,24]. In this work, two cases are studied: a room without and with an obstacle. Figures 1(a) and (b) show photos of a room for the VLP with unit cell dimensions of about 300 cm × 220 cm × 155 cm, and an obstacle located at coordinates (89 cm, 91 cm) with dimensions of 46 cm × 37 cm × 131 cm. Figure 1(c) shows the experimental setup. We use commercially available LEDs (TOA LDL030C) with 13 W output power. The four LEDs are modulated by home-made driving circuits with suitable direct-current (DC) biases at specified frequencies of 47, 59, 83 and 101 kHz. These frequencies are selected to avoid spectral overlap of their harmonics. It is worth noting that synchronization among the different LEDs is not required. The Rx side is a silicon-based photodiode (PD) attached to a real-time oscilloscope (RTO, PicoTechnology 5243D). The vertical distance between the PD and the ground is 30 cm. The RTO is used as an analog-to-digital converter (ADC), converting the analog signal waveforms captured from the PD into the digital domain; the digitized data are then stored in a computer. The RTO has an analog bandwidth of 100 MHz and a sampling rate of 500 MS/s. In this work, DIALux evo version 9.2 is employed, installed on a laptop computer (Acer Aspire A515-51G) running Windows 10. During the simulation, the calculation surface is defined in the DIALux program with the same location precision as the experimental condition (i.e., the training and testing locations of the real experimental unit cell). The DIALux simulation results are exported as a PDF file, and we collect and import these data into the subsequent machine learning and deep learning programs for VLP location prediction.
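Because each LED is modulated at its own frequency, the per-LED RSS values can be separated at the Rx by taking the spectral magnitude of the captured waveform at 47, 59, 83 and 101 kHz. A minimal sketch of this extraction step (single-bin DFT, a Goertzel-style evaluation; function names are illustrative, not from the paper):

```python
import cmath
import math

# Modulation frequencies of the four LEDs (from the experimental setup).
LED_FREQS_HZ = [47e3, 59e3, 83e3, 101e3]

def tone_magnitude(samples, fs, f0):
    """Amplitude estimate of the tone at f0 via a single-frequency DFT bin."""
    n = len(samples)
    acc = sum(x * cmath.exp(-2j * math.pi * f0 * k / fs)
              for k, x in enumerate(samples))
    return 2.0 * abs(acc) / n

def extract_rss(samples, fs):
    """Return [p1, p2, p3, p4]: RSS features for the four LED tones."""
    return [tone_magnitude(samples, fs, f) for f in LED_FREQS_HZ]
```

With a record length containing an integer number of cycles of each tone, the four bins are orthogonal and each magnitude reflects only its own LED.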


Fig. 1. Photos of (a) a room for the VLP with the unit cell dimensions of about 300 cm × 220 cm × 155 cm, (b) an obstacle. (c) Experimental setup of the VLP system. PD: photodiode; RTO: real-time oscilloscope.


Figures 2(a) and (b) show top-views of the rooms, with dimensions of about 220 cm × 155 cm, without and with an obstacle. The training points, testing points and LED positions are also shown in the figures. The LEDs in the actual room are arranged in an irregular rectangle to emulate a non-uniform LED arrangement inside a room. Figure 3 shows the flow diagram of the DIALux simulation. We first build a virtual room in DIALux with the same conditions as the experimental room. Then, we set up a calculation plane matching the experiment in height and dimensions to collect data, and decide how many calculation points are needed on the plane. After this, the LED lamp emission profile file is imported into the DIALux software; the Illuminating Engineering Society (IES) emission profile can be provided by the LED lamp manufacturers. Then, we collect and record the power data of all the calculation points on the plane. Because the DIALux program cannot include noise in the simulation, data augmentation (i.e., increasing the number of data for the machine learning and deep learning models) is performed by adding Gaussian random noise to the DIALux simulation data, which also emulates the noise introduced by the LED and PD; no noise is added to the experimental data. The Gaussian noise has mean equal to the DIALux output value and standard deviation equal to 2.
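The augmentation step above can be sketched as follows: each simulated RSS row is replicated several times with additive Gaussian noise of standard deviation 2 (the number of copies per location is a free parameter; names are illustrative, not from the paper):

```python
import random

def augment(dialux_rss, copies=20, sigma=2.0, rng=random):
    """dialux_rss: list of [p1, p2, p3, p4] rows from the DIALux simulation.
    Returns `copies` noisy rows per simulated row, i.e. samples drawn from
    a Gaussian with mean = DIALux value and standard deviation = sigma."""
    out = []
    for row in dialux_rss:
        for _ in range(copies):
            out.append([rng.gauss(p, sigma) for p in row])
    return out
```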


Fig. 2. Top-views of the rooms (a) without and (b) with an obstacle. The training points, testing points, and LED positions are illustrated.



Fig. 3. Flow diagram of DIALux simulation.


Then, the second order regression machine learning, FC-NN and CNN algorithms are applied. In this work, we would like to show that the proposed scheme is applicable to several popular machine learning and deep learning models. Regression is a simple machine learning method; FC-NN and CNN are commonly used neural network and deep learning algorithms. These models are trained with the training data set collected in the virtual room built in DIALux, and also with the experimental training data set collected in the real room as discussed above. Then, both kinds of models are evaluated with the experimental testing data set to analyze the positioning accuracy.

To verify the VLP model obtained via DIALux, we first apply the second order regression model [25] shown in Eq. (1). F is the coordinate matrix, Ф is the RSS design matrix, WML is the weight matrix, and D is the dimension (as 4 LEDs are used here, D = 4). In Eq. (1), i and j are the LED indexes, and pi and pj are the RSS values of the ith and jth LEDs respectively. Since there are 4 LEDs in this study, i, j = 1, …, 4.

$${\mathbf F} = {w^{(0)}} + \sum\limits_{i = 1}^D {{w^{(i)}}{p_i}} + \sum\limits_{i = 1}^D {\sum\limits_{j = 1}^D {{w^{(i,j)}}{p_i}} } {p_j} = {\mathbf \Phi }{{\mathbf W}_{\textrm{ML}}}$$

The RSS design matrix Ф contains the linear regression terms p1, p2, p3, p4, which are the fast Fourier transform (FFT) magnitudes at 47, 59, 83 and 101 kHz of the signals from the 4 LEDs respectively, as well as the cross terms, as illustrated in Eq. (2).

$$\scalebox{0.9}{$\displaystyle {\mathbf \Phi } = {[{\phi _p}\textrm{(1)},{\phi _p}\textrm{(2)},\ldots ,{\phi _p}\textrm{(}N\textrm{)}]^\textrm{T}}\textrm{; }{\phi _p}(n) = [1,{p_1}(n),{p_2}(n),{p_3}(n),{p_4}(n),\ldots ,p_3^2(n),{p_3}(n){p_4}(n),p_4^2(n)]$}$$

In Eq. (2), n is the training data index; pk(n) is the RSS value of the kth LED lamp in the nth training sample, where n = 1, …, N, and N is the number of training samples. In this study, we select 68 training locations inside a unit cell and measure each location 20 times; hence, N = 68 × 20 = 1,360.
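A single row φ_p(n) of Eq. (2) can be constructed as below: a constant term, the four RSS values, and all unique second-order products p_i p_j with i ≤ j (10 terms), giving 15 entries per row (the helper name is illustrative):

```python
from itertools import combinations_with_replacement

def phi_row(p):
    """p = [p1, p2, p3, p4] -> [1, p1..p4, p1^2, p1*p2, ..., p3^2, p3*p4, p4^2]."""
    row = [1.0] + list(p)
    # Unique second-order products, ending with p3^2, p3*p4, p4^2 as in Eq. (2).
    row += [p[i] * p[j] for i, j in combinations_with_replacement(range(4), 2)]
    return row
```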

Since F is the coordinate matrix, it can be expanded into the x- and y-coordinates separately, as shown in Eq. (3).

$$\left\{ {\begin{array}{@{}c@{}} {{F_x} = w_x^{(0)} + w_x^{(1)}{p_1} + w_x^{(2)}{p_2} + w_x^{(3)}{p_3} + w_x^{(4)}{p_4} + w_x^{(11)}p_1^2 + \cdots + w_x^{(33)}p_3^2 + w_x^{(34)}{p_3}{p_4} + w_x^{(44)}p_4^2}\\ {{F_y} = w_y^{(0)} + w_y^{(1)}{p_1} + w_y^{(2)}{p_2} + w_y^{(3)}{p_3} + w_y^{(4)}{p_4} + w_y^{(11)}p_1^2 + \cdots + w_y^{(33)}p_3^2 + w_y^{(34)}{p_3}{p_4} + w_y^{(44)}p_4^2} \end{array}} \right.$$

The RSS matrix Φ thus consists of the first order terms, the second order terms and the cross terms of the RSS values, as shown in Eq. (2). Equation (4) shows the target vector t, built from the x- and y-coordinates of the training locations.

$${\mathbf t} = {\left[ \begin{array}{l} {x_1},{x_2}, \cdots ,{x_{N - 1}},{x_N}\\ {y_1},{y_2}, \cdots ,{y_{N - 1}},{y_N} \end{array} \right]^T}$$

WML can be obtained from Eq. (5). After WML is determined in the training phase, we can predict the Rx location using Eq. (1).

$${{\mathbf W}_{{\mathbf ML}}}\textrm{ = (}{{\mathbf \Phi }^\textrm{T}}{\mathbf \Phi }{)^{ - 1}}{{\mathbf \Phi }^\textrm{T}}{\mathbf t}$$
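The training and prediction steps of Eqs. (1) and (5) can be sketched as follows, solving the normal equation via a numerically safer least-squares call; `phi_row` is the 15-term feature map of Eq. (2) (an illustrative helper, not code from the paper):

```python
import numpy as np

def phi_row(p):
    """15-term feature map of Eq. (2): constant, linear and second-order terms."""
    row = [1.0] + list(p)
    row += [p[i] * p[j] for i in range(4) for j in range(i, 4)]
    return row

def fit_wml(rss_rows, coords):
    """rss_rows: N x 4 RSS samples; coords: N x 2 (x, y) training targets.
    Returns W_ML (15 x 2), equivalent to (Phi^T Phi)^(-1) Phi^T t of Eq. (5)."""
    Phi = np.array([phi_row(p) for p in rss_rows])   # N x 15 design matrix
    t = np.asarray(coords, dtype=float)              # N x 2 target matrix
    W, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    return W

def predict(W, p):
    """Estimate the (x, y) location of a new RSS sample via Eq. (1)."""
    return np.array(phi_row(p)) @ W
```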

We also apply FC-NN and CNN models. Figure 4 shows the architecture of the FC-NN model [26]. There are five layers in the model. The first layer is the input layer with 14 nodes; the inputs are the four z-score normalized RSS power values p1, p2, p3, p4 together with their cross terms. The three hidden layers have 32, 16 and 8 nodes respectively. Finally, the output layer has one node, outputting the x- or y-coordinate.
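A shape-level sketch of this FC-NN: 14 inputs (4 RSS values plus 10 cross/second-order terms), hidden layers of 32, 16 and 8 nodes, and a single output node (the two coordinates are predicted by separate networks, as discussed in Section 3). The weights below are random placeholders, and ReLU on the hidden layers is our assumption rather than a detail stated in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
LAYER_SIZES = [14, 32, 16, 8, 1]   # input, three hidden layers, output

# Placeholder parameters; in practice these are learned during training.
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

def forward(x):
    """x: batch of 14-dim feature vectors -> batch of scalar coordinate estimates."""
    h = np.asarray(x, dtype=float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = h @ W + b
        if i < len(weights) - 1:   # assumed ReLU on hidden layers only
            h = np.maximum(h, 0.0)
    return h
```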


Fig. 4. Architecture of the FC-NN model.


Figures 5(a) and (b) show the architecture and the flow diagram of the CNN model. The input is the four RSS values after z-score normalization. The first convolutional layer has 8 filters with a kernel size of 3 and stride of 1; the first max-pooling layer has a size of 2 and stride of 1. The second convolutional layer has 32 filters, and the second max-pooling layer again has a size of 2 and stride of 1. After the flatten layer, there are three fully connected layers with 64, 16 and 1 nodes respectively, as shown in Fig. 5(a). The x- or y-coordinate is obtained at the output.
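A shape walkthrough of this CNN, assuming 'same' padding in the convolution layers (our assumption: with only 4 input samples, a kernel of 3 without padding would shrink the signal below what the second convolution layer needs):

```python
def same_conv_len(n):
    """Output length of a conv layer with kernel 3, stride 1, 'same' padding."""
    return n

def pool_len(n, size=2, stride=1):
    """Output length of a max-pooling layer."""
    return (n - size) // stride + 1

n = 4                            # four z-score normalized RSS inputs
n = pool_len(same_conv_len(n))   # conv1 (8 filters) + max-pool -> length 3
n = pool_len(same_conv_len(n))   # conv2 (32 filters) + max-pool -> length 2
flat = n * 32                    # flattened size fed to the 64-16-1 dense layers
```

Under this assumption the flattened feature vector has 2 × 32 = 64 entries, consistent with the 64-node first dense layer.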


Fig. 5. (a) Architecture and (b) flow diagram of CNN model.


It is worth noting that second order regression and the second order term input FC-NN are utilized since their positioning errors are significantly reduced compared with the first order cases. The CNN is a deep learning algorithm with a multi-layered neural network; the first and second order term input CNN models show similar accuracy, so the second order term input CNN is not necessary. For the FC-NN structure, we tried 1 to 5 hidden layers and found that 3 hidden layers give the best accuracy. For the CNN structure, we tried different combinations of convolution and pooling layers and found that two repetitions of one convolution layer followed by one pooling layer are sufficient for the accuracy.

3. Results and discussion

Figures 6(a) and (b) illustrate the virtual rooms built in DIALux without and with the obstacle when all four LEDs are switched on. The virtual room in DIALux is based on the practical room shown in Fig. 1(a). Figure 7(a) illustrates the room error distribution of the model trained with the experimentally measured training data and assessed with the experimentally measured testing data using second order regression. The red dots and black circles are the testing locations and maximum positioning errors respectively. The training locations are shown in Figs. 2(a) and 2(b). The average positioning error is 8.9 cm. Then, we train the model with DIALux simulation data and test it with the experimentally measured testing data; the average positioning error is 9.62 cm, as shown in Fig. 7(b). These results match those of the model built from the experimental data set. Then, we also apply the FC-NN and CNN models. Figures 7(c) and (d) illustrate the room error distributions trained and assessed with experimentally measured data, and trained with the DIALux data and assessed with experimentally measured data, respectively, using the FC-NN model. The average position errors are 7.57 cm and 8.93 cm respectively. Figures 7(e) and (f) show the corresponding error distributions using the CNN model; the average position errors are 8.82 cm and 8.66 cm respectively. With the CNN model, the deviation between using the experimentally measured training data and the DIALux simulation training data is only 1.8%. Figure 8 shows the light intensity distribution profile against horizontal distance obtained in the actual experimental scene and in the DIALux simulation. The curve obtained in the experimental scene is similar to that obtained in the DIALux simulation. Hence, the predictions of the VLP model trained with experimental data and of the VLP model trained with DIALux data largely match each other.
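The average positioning error reported above is simply the mean Euclidean distance between predicted and true testing locations, which can be sketched as (function name illustrative):

```python
import math

def mean_positioning_error(pred, true):
    """pred, true: lists of (x, y) locations in cm.
    Returns the mean Euclidean positioning error in cm."""
    errs = [math.hypot(px - tx, py - ty)
            for (px, py), (tx, ty) in zip(pred, true)]
    return sum(errs) / len(errs)
```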


Fig. 6. Virtual rooms built in DIALux (a) without and (b) with the obstacle.



Fig. 7. Measured room error distributions without obstacle using: (a), (b) second order regression trained by experimentally measured training data or by DIALux data. (c), (d) FC-NN trained by experimentally measured training data or by DIALux data. (e), (f) CNN trained by experimentally measured training data or by DIALux data.



Fig. 8. Light intensity distribution profile against horizontal distance obtained in actual experimental scene and in DIALux simulation.


Compared with Figs. 7(a), (c), and (e) respectively, the models using DIALux data in Figs. 7(b), (d), and (f) seem to perform less accurately at the center of the unit cell (i.e., x ≈ 100 cm and y ≈ 150 cm). As discussed above, Gaussian noise is added to the DIALux simulation data for data augmentation. We can observe in Fig. 8 that the two curves are similar except at horizontal distances between 100 and 150 cm. Since the models in Figs. 7(a), (c), and (e) are trained with the experimental data while those in Figs. 7(b), (d), and (f) are trained with the DIALux simulation data, the highest mismatch is therefore observed at the center of the unit cell (i.e., x ≈ 100 cm and y ≈ 150 cm), where the two intensity profiles deviate most.

In this work, we use two simple rooms, with and without an obstacle, as proof-of-concept illustrations of the feasibility of the proposed scheme. As the positioning accuracy depends on the received optical powers (i.e., the RSS values) emitted by the LEDs, we can observe in Fig. 8 that the curve obtained in the experimental scene is similar to that obtained in the DIALux simulation. In addition, Ref. [27] employs a luminance meter driven by the SkyWatcher Virtuoso goniometric system to measure the luminance values of a complicated classroom and compares them with the results obtained via DIALux simulation. It shows that the results obtained via DIALux are within an acceptable range (i.e., within 10%) of those measured with the SkyWatcher system. Based on the above discussion, we believe the proposed scheme can be applied in realistic rooms.

In this work, we predict the x- and y-coordinates separately using the FC-NN and CNN models. Although the total training time for separated prediction (i.e., 1 output node at a time in the model) is about twice that of the combined prediction of x- and y-coordinates (i.e., 2 output nodes in the model), the average errors are significantly reduced. Taking the FC-NN and CNN models trained with the experimental data without obstacle as examples, the results are shown in Table 1. For both the FC-NN model and the CNN model, the mean errors along the x- and y-axes are much lower with separated prediction. Besides, the root mean square error is reduced from 11.50 cm to 7.57 cm (a 34.2% reduction) using the FC-NN model, and from 11.58 cm to 8.82 cm (a 23.8% reduction) using the CNN model. The higher error of the combined prediction arises because a loss function that takes the errors of both axes into consideration causes a higher probability of error in the backpropagation process, such as wrong gradient updates of the weights and gradient vanishing. Hence, separated prediction is utilized in this work.


Table 1. Comparison of positioning errors when using separated or combined predictions in FC-NN and CNN models

Figures 9(a) and (b) reveal the measured positioning error cumulative distribution function (CDF) curves of the second order regression model trained and assessed with experimentally measured data, and trained with DIALux data and assessed with experimentally measured data; 80% of the testing points have a positioning error < 15 cm in both scenarios. Figures 9(c) and (d) show the corresponding CDF curves using the FC-NN model, and Figs. 9(e) and (f) those using the CNN model. With both the FC-NN and CNN models, 80% of the testing points have a position error < 14 cm with similar trends, whether the VLP model is built with experimental data or with DIALux data. Generally speaking, second order regression is the simplest VLP model and takes the least time to train; however, its positioning accuracy is poorer than that of the other models. FC-NN and CNN show similar performance and complexity; they need more time to train but provide better accuracy than second order regression. Our results show that the performance of FC-NN and CNN is similar; however, some literature shows that the CNN model performs better [28], since the CNN can take advantage of the local spatial coherence of the room via its convolution layers.
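Statements such as "80% of the testing points have a position error < 14 cm" are read directly off the empirical CDF, which can be sketched as (function name illustrative):

```python
def error_cdf(errors, threshold):
    """Empirical CDF value: fraction of testing-point positioning errors
    strictly below `threshold` (in cm)."""
    return sum(e < threshold for e in errors) / len(errors)
```

For example, `error_cdf(errs, 14.0) >= 0.8` checks the 80%-within-14-cm condition for a list of testing-point errors `errs`.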


Fig. 9. Measured positioning error CDF of room without obstacle using: (a), (b) second order regression trained by experimentally measured training data or by DIALux data. (c), (d) FC-NN trained by experimentally measured training data or by DIALux data. (e), (f) CNN trained by experimentally measured training data or by DIALux data.


We also analyze the room with the obstacle discussed above. Figures 10(a) and (b) illustrate the room error distributions trained and assessed with experimentally measured data, and trained with the DIALux data and assessed with experimentally measured data, respectively, using the second order regression model. The average position errors are 9.70 cm and 10.60 cm respectively. Figures 10(c) and (d) illustrate the corresponding error distributions using the FC-NN model; the average position errors are 8.85 cm and 10.78 cm respectively. Figures 10(e) and (f) illustrate those using the CNN model; the average position errors are 10.0 cm and 10.61 cm respectively. The error distributions and average position errors of the models trained via experimental measurement and via DIALux data are similar to each other. The average position error with the obstacle is larger since some optical signals are blocked by the obstacle during the training process. With the CNN model and the obstacle, the deviation between the two training methods is 6.1%. Compared with Figs. 10(a), (c), and (e) respectively, the models using DIALux data in Figs. 10(b), (d), and (f) seem to perform less accurately at the center of the unit cell (i.e., x ≈ 100 cm and y ≈ 150 cm). The reason is the same as explained for Fig. 7.


Fig. 10. Measured room error distributions with obstacle using: (a), (b) second order regression trained by experimentally measured training data or by DIALux data. (c), (d) FC-NN trained by experimentally measured training data or by DIALux data. (e), (f) CNN trained by experimentally measured training data or by DIALux data.


Figures 11(a) and (b) reveal the measured positioning error CDF curves in the room with the obstacle, trained and assessed with experimentally measured data, and trained with DIALux data and assessed with the experimentally measured data set; 80% of the testing points have a position error < 16 cm in both scenarios. Figures 11(c) and (d) reveal the corresponding CDF curves using the FC-NN model, and Figs. 11(e) and (f) those using the CNN model. In both the FC-NN and CNN models, 80% of the testing points have a position error < 17 cm with similar trends in the room with the obstacle.


Fig. 11. Measured positioning error CDF of room with obstacle using: (a), (b) second order regression trained by experimentally measured training data or by DIALux data. (c), (d) FC-NN trained by experimentally measured training data or by DIALux data. (e), (f) CNN trained by experimentally measured training data or by DIALux data.


4. Conclusion

VLP can enable many possible future applications, including asset tracking, mobile robot navigation, and aids for the visually impaired. Here, we transform the commercially available lighting design software DIALux into a VLP design tool. We show that this scheme works well with different deep learning methods, reducing the load of training data set collection. The indoor VLP models under evaluation include second order regression, FC-NN, and CNN. Experimental results show that similar and high positioning accuracy is obtained whether the indoor VLP models are trained with an experimentally acquired data set or with a software-generated data set. In a practical room with unit cell dimensions of about 300 cm × 220 cm × 155 cm, the average positioning errors of the CNN VLP model trained and assessed with the experimentally measured data set, and trained with the DIALux data and assessed with the experimental data, are 8.82 cm and 8.66 cm respectively; the deviation is 1.8%. When an obstacle is present in the room, the average position errors are 10.0 cm and 10.61 cm respectively; the deviation increases to 6.1% since some optical signals are blocked by the obstacle during the training process.

Funding

Ministry of Science and Technology, Taiwan (MOST-109-2221-E-009-155-MY3, MOST-110-2221-E-A49-057-MY3).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. H. L. Minh, D. O’Brien, G. Faulkner, L. Zeng, K. Lee, D. Jung, Y. J. Oh, and E. T. Won, “100-Mb/s NRZ visible light communications using a post-equalized white LED,” IEEE Photonics Technol. Lett. 21(15), 1063–1065 (2009). [CrossRef]  

2. C. W. Chow, C. H. Yeh, Y. F. Liu, and Y. Liu, “Improved modulation speed of LED visible light communication system integrated to main electricity network,” Electron. Lett. 47(15), 867–868 (2011). [CrossRef]  

3. C. H. Yeh, Y. L. Liu, and C. W. Chow, “Real-time white-light phosphor-LED visible light communication (VLC) with compact size,” Opt. Express 21(22), 26192–26197 (2013). [CrossRef]  

4. J. Vučić, C. Kottke, S. Nerreter, K. D. Langer, and J. W. Walewski, “513 Mbit/s visible light communications link based on DMT-modulation of a white LED,” J. Lightwave Technol. 28(24), 3512–3518 (2010). [CrossRef]  

5. J. Y. Sung, C. W. Chow, and C. H. Yeh, “Dimming-discrete-multi-tone (DMT) for simultaneous color control and high speed visible light communication,” Opt. Express 22(7), 7538–7543 (2014). [CrossRef]  

6. X. Huang, S. Chen, Z. Wang, J. Shi, Y. Wang, J. Xiao, and N. Chi, “2.0-Gb/s Visible light link based on adaptive bit allocation OFDM of a single phosphorescent white LED,” IEEE Photonics J. 7(5), 1–8 (2015). [CrossRef]  

7. B. Janjua, H. M. Oubei, J. R. Durán Retamal, T. K. Ng, C. T. Tsai, H. Y. Wang, Y. C. Chi, H. C. Kuo, G. R. Lin, J. H. He, and B. S. Ooi, “Going beyond 4 Gbps data rate by employing RGB laser diodes for visible light communication,” Opt. Express 23(14), 18746–18753 (2015). [CrossRef]  

8. H. H. Lu, Y. P. Lin, P. Y. Wu, C. Y. Chen, M. C. Chen, and T. W. Jhang, “A multiple-input-multiple-output visible light communication system based on VCSELs and spatial light modulators,” Opt. Express 22(3), 3468–3474 (2014). [CrossRef]  

9. C. H. Hsu, C. W. Chow, I. C. Lu, Y. L. Liu, C. H. Yeh, and Y. Liu, “High speed imaging 3 × 3 MIMO phosphor white-light LED based visible light communication system,” IEEE Photonics J. 8(6), 1–6 (2016). [CrossRef]  

10. C. L. Ying, H. H. Lu, C. Y. Li, C. J. Cheng, P. C. Peng, and W. J. Ho, “20-Gbps optical LiFi transport system,” Opt. Lett. 40(14), 3276–3279 (2015).

11. L. Y. Wei, C. W. Chow, G. H. Chen, Y. Liu, C. H. Yeh, and C. W. Hsu, “Tricolor visible-light laser diodes based visible light communication operated at 40.665 Gbit/s and 2 m free-space transmission,” Opt. Express 27(18), 25072–25077 (2019).

12. H. Y. Lan, I. C. Tseng, Y. H. Lin, G. R. Lin, D. W. Huang, and C. H. Wu, “High-speed integrated micro-LED array for visible light communication,” Opt. Lett. 45(8), 2203–2206 (2020).

13. S. W. H. Chen, Y. M. Huang, Y. H. Chang, Y. Lin, F. J. Liou, Y. C. Hsu, J. Song, J. Choi, C. W. Chow, C. C. Lin, R. H. Horng, Z. Chen, J. Han, T. Wu, and H. C. Kuo, “High-bandwidth green semipolar (20–21) InGaN/GaN micro light-emitting diodes for visible light communication,” ACS Photonics 7(8), 2228–2235 (2020).

14. J. Luo, L. Fan, and H. Li, “Indoor positioning systems based on visible light communication: state of the art,” IEEE Commun. Surv. Tutorials 19(4), 2871–2893 (2017).

15. C. Xie, W. Guan, Y. Wu, L. Fang, and Y. Cai, “The LED-ID detection and recognition method based on visible light positioning using proximity method,” IEEE Photonics J. 10(2), 1–16 (2018).

16. X. Yang, Z. Jiang, X. You, J. Chen, C. Yu, Y. Li, M. Gao, and G. Shen, “A 3D visible light positioning and orienteering scheme using two LEDs and a pair of photo-detectors,” Proc. ACP (2021), paper W2B.5.

17. P. F. Du, S. Zhang, C. Chen, A. Alphones, and W. D. Zhong, “Demonstration of a low-complexity indoor visible light positioning system using an enhanced TDOA scheme,” IEEE Photonics J. 10(4), 1–10 (2018).

18. C. Y. Hong, Y. C. Wu, Y. Liu, C. W. Chow, C. H. Yeh, K. L. Hsu, D. C. Lin, X. L. Liao, K. H. Lin, and Y. Y. Chen, “Angle-of-arrival (AOA) visible light positioning (VLP) system using solar cells with third-order regression and ridge regression algorithms,” IEEE Photonics J. 12(3), 1–5 (2020).

19. H. S. Kim, D. R. Kim, S. H. Yang, Y. H. Son, and S. K. Han, “An indoor visible light communication positioning system using a RF carrier allocation technique,” J. Lightwave Technol. 31(1), 134–144 (2013).

20. C. W. Hsu, J. T. Wu, H. Y. Wang, C. W. Chow, C. H. Lee, M. T. Chu, and C. H. Yeh, “Visible light positioning and lighting based on identity positioning and RF carrier allocation technique using a solar cell receiver,” IEEE Photonics J. 8(4), 1–7 (2016).

21. J. Armstrong, Y. A. Sekercioglu, and A. Neild, “Visible light positioning: a roadmap for international standardization,” IEEE Commun. Mag. 51(12), 68–73 (2013).

22. S. H. Shikder, A. D. F. Price, and M. Mourshed, “Evaluation of four artificial lighting simulation tools with virtual building reference,” European Simulation and Modelling Conference (ESM 2009), 77–82 (2009).

23. S. H. Song, D. C. Lin, Y. H. Chang, Y. S. Lin, C. W. Chow, Y. Liu, C. H. Yeh, K. H. Lin, Y. C. Wang, and Y. Y. Chen, “Using DIALux and regression-based machine learning algorithm for designing indoor visible light positioning (VLP) and reducing training data collection,” Proc. OFC (2021), paper Tu5E.3.

24. S. H. Song, D. C. Lin, Y. Liu, C. W. Chow, Y. H. Chang, K. H. Lin, Y. C. Wang, and Y. Y. Chen, “Employing DIALux to relieve machine-learning training data collection when designing indoor positioning systems,” Opt. Express 29(11), 16887–16892 (2021).

25. Y. C. Chuang, Z. Q. Li, C. W. Hsu, Y. Liu, and C. W. Chow, “Visible light communication and positioning using positioning cells and machine learning algorithms,” Opt. Express 27(11), 16377–16383 (2019).

26. J. He, C. Hsu, Q. Zhou, M. Tang, S. Fu, D. Liu, L. Deng, and G. Chang, “Demonstration of high precision 3D indoor positioning system based on two-layer ANN machine learning technique,” Proc. OFC (2019), paper Th3I.2.

27. M. Sielachowska, D. Tyniecki, and M. Zajkowski, “Measurements of the luminance distribution in the classroom using the SkyWatcher type system,” Lighting Conference of the Visegrad Countries (Lumen V4), 1–5 (2018).

28. X. Hao, W. Xudong, and W. Nan, “Indoor visible light fingerprint positioning scheme based on convolution neural network,” Laser & Optoelectronics Prog. 58(17), 1706008 (2021).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (11)

Fig. 1. Photos of (a) a room for the VLP with unit cell dimensions of about 300 cm × 220 cm × 155 cm and (b) an obstacle. (c) Experimental setup of the VLP system. PD: photodiode; RTO: real-time oscilloscope.
Fig. 2. Top views of the rooms (a) without and (b) with an obstacle. The training points, testing points, and LED positions are illustrated.
Fig. 3. Flow diagram of the DIALux simulation.
Fig. 4. Architecture of the FC-NN model.
Fig. 5. (a) Architecture and (b) flow diagram of the CNN model.
Fig. 6. Virtual rooms built in DIALux (a) without and (b) with the obstacle.
Fig. 7. Measured positioning-error distributions in the room without the obstacle using: (a), (b) second-order regression trained with experimentally measured data or with DIALux data; (c), (d) FC-NN trained with experimentally measured data or with DIALux data; (e), (f) CNN trained with experimentally measured data or with DIALux data.
Fig. 8. Light intensity distribution profile against horizontal distance obtained in the actual experimental scene and in the DIALux simulation.
Fig. 9. Measured positioning-error CDFs for the room without the obstacle using: (a), (b) second-order regression trained with experimentally measured data or with DIALux data; (c), (d) FC-NN trained with experimentally measured data or with DIALux data; (e), (f) CNN trained with experimentally measured data or with DIALux data.
Fig. 10. Measured positioning-error distributions in the room with the obstacle using: (a), (b) second-order regression trained with experimentally measured data or with DIALux data; (c), (d) FC-NN trained with experimentally measured data or with DIALux data; (e), (f) CNN trained with experimentally measured data or with DIALux data.
Fig. 11. Measured positioning-error CDFs for the room with the obstacle using: (a), (b) second-order regression trained with experimentally measured data or with DIALux data; (c), (d) FC-NN trained with experimentally measured data or with DIALux data; (e), (f) CNN trained with experimentally measured data or with DIALux data.

Tables (1)

Table 1. Comparison of positioning errors when using separated or combined predictions in FC-NN and CNN models

Equations (5)

$${\mathbf F} = w^{(0)} + \sum\limits_{i = 1}^D w^{(i)} p_i + \sum\limits_{i = 1}^D \sum\limits_{j = 1}^D w^{(i,j)} p_i p_j = {\mathbf \Phi}{\mathbf W}_{\mathrm{ML}}$$
$${\mathbf \Phi} = [\phi_p(1), \phi_p(2), \ldots, \phi_p(N)]^{\mathrm T};\quad \phi_p(n) = [1, p_1(n), p_2(n), p_3(n), p_4(n), \ldots, p_3^2(n), p_3(n)p_4(n), p_4^2(n)]$$
$$\left\{ \begin{aligned} F_x &= w_x^{(0)} + w_x^{(1)}p_1 + w_x^{(2)}p_2 + w_x^{(3)}p_3 + w_x^{(4)}p_4 + w_x^{(11)}p_1^2 + \cdots + w_x^{(33)}p_3^2 + w_x^{(34)}p_3 p_4 + w_x^{(44)}p_4^2 \\ F_y &= w_y^{(0)} + w_y^{(1)}p_1 + w_y^{(2)}p_2 + w_y^{(3)}p_3 + w_y^{(4)}p_4 + w_y^{(11)}p_1^2 + \cdots + w_y^{(33)}p_3^2 + w_y^{(34)}p_3 p_4 + w_y^{(44)}p_4^2 \end{aligned} \right.$$
$${\mathbf t} = \begin{bmatrix} x_1 & x_2 & \cdots & x_{N-1} & x_N \\ y_1 & y_2 & \cdots & y_{N-1} & y_N \end{bmatrix}^{\mathrm T}$$
$${\mathbf W}_{\mathrm{ML}} = ({\mathbf \Phi}^{\mathrm T}{\mathbf \Phi})^{-1}{\mathbf \Phi}^{\mathrm T}{\mathbf t}$$
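The second-order regression in the equations above can be sketched in a few lines of NumPy. The room geometry, LED coordinates, and inverse-square channel model below are illustrative stand-ins for the paper's measured (or DIALux-simulated) light intensities, not the authors' actual data:

```python
import numpy as np

def design_matrix(P):
    """Build the design matrix Phi: each row is [1, p1..p4, all second-order
    products p_i*p_j for i <= j], following the basis phi_p(n) above.
    P has shape (N, 4): received powers from the four LEDs."""
    N, D = P.shape
    cols = [np.ones(N)]
    cols += [P[:, i] for i in range(D)]
    cols += [P[:, i] * P[:, j] for i in range(D) for j in range(i, D)]
    return np.stack(cols, axis=1)

# Synthetic training set (hypothetical numbers for illustration only)
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 3.0, size=(100, 2))       # ground-truth (x, y) positions (the target matrix t)
led = np.array([[0.5, 0.5], [0.5, 2.5],
                [2.5, 0.5], [2.5, 2.5]])        # assumed LED (x, y) locations
# Simple inverse-square falloff as a stand-in for the measured RSS channel
P = 1.0 / (0.3 + ((t[:, None, :] - led[None, :, :]) ** 2).sum(-1))

Phi = design_matrix(P)                          # (100, 15): 1 + 4 linear + 10 quadratic terms
# Least-squares weights W_ML = (Phi^T Phi)^(-1) Phi^T t
W_ML, *_ = np.linalg.lstsq(Phi, t, rcond=None)

est = Phi @ W_ML                                # predicted positions F = Phi W_ML
err = np.linalg.norm(est - t, axis=1)           # per-point Euclidean positioning error
print(f"mean training positioning error: {err.mean():.4f} m")
```

Using `np.linalg.lstsq` rather than forming the explicit pseudo-inverse gives the same maximum-likelihood solution while staying numerically stable when Φ is ill-conditioned.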