
Identifying structured light modes in a desert environment using machine learning algorithms

Open Access

Abstract

The unique orthogonal shapes of structured light beams have attracted researchers to use them as information carriers. Structured-light-based free space optical communication is subject to atmospheric propagation effects such as rain, fog, and dust, which complicate the mode demultiplexing process using conventional technology. In this context, we experimentally investigate the detection of Laguerre Gaussian and Hermite Gaussian beams under dust storm conditions using machine learning algorithms. Different algorithms are employed to detect various structured light encoding schemes, including a convolutional neural network (CNN), a support vector machine, and a k-nearest neighbor classifier. We report an identification accuracy of 99% at a visibility level of 9 m. The CNN approach is further used to estimate the visibility range of a dusty communication channel.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Free space optics (FSO) is an unlicensed optical wireless communication technology that has recently attracted considerable attention for a wide range of applications. In particular, FSO is suggested as a practical solution to the “last mile” connectivity gap in optical communication networks, mainly where installing new optical fibers is costly or not possible [1]. FSO can equally be used to set up secure communications between buildings and cities and to back up optical fiber connections. Wireless optical communication can guarantee long-range, high-throughput, line-of-sight transmission at minimum cost [2]. Importantly, FSO is a potential technology for alleviating bandwidth challenges in next-generation communication networks [3].

FSO signals in outdoor environments are subject to various propagation effects. For instance, airborne particles associated with different weather conditions, including rain, fog, and dust, scatter optical signals. This effect is severe when the particle size is comparable to the signal wavelength [4]. In particular, dust particles have an average radius that is inversely proportional to their height above the ground, ranging from 8 $\mu$m at a height of 21 m to 19 $\mu$m at 1 m [5]. Therefore, the scattering such particles introduce on signals at the 1550 nm wavelength is high compared with the attenuation introduced by larger particles such as raindrops. Moreover, dust particles contain minerals that scatter light more strongly than the water droplets of fog [6]. Investigating the effect of dust on FSO signals is therefore essential, especially for cities located in desert areas where dust storms are likely to happen. We note that regions with a desert climate represent 14.2% of the Earth’s land area. While there are many studies in the literature on optical signal performance under fog, scintillation, and rain conditions [4], light propagation through dust storms remains largely unexplored.

Recently, optical wireless communication has been conducted using complex structures of light beams rather than the standard Gaussian waveform [7]. These include modes from the Laguerre Gaussian (LG) [8], Bessel Gaussian (BG) [9], and Hermite Gaussian (HG) [10] mode bases. This helps to overcome the bandwidth bottleneck challenges in optical networks by using space as an extra degree of freedom for data multiplexing. The different patterns of spatial modes can also be utilized as information carriers and used to build M-ary pattern coding systems.

Despite the advantages of using structured light modes in FSO, atmospheric conditions significantly affect the phase-fronts of propagating beams, which complicates the detection of the initially encoded signals at the receiver side. One way to cope with the effects of turbulence is to use adaptive optics (AO) to compensate for beam distortions [11]. This is typically achieved by sequentially modulating a spatial light modulator (SLM) or a deformable mirror until an objective function is minimized and the originally transmitted beams are reconstructed, which increases the implementation complexity at the receiver side. Additionally, the optimization is performed in cycles [12], which limits the usage of AO-based approaches under rapid environmental changes. Alternatively, digital signal processing (DSP), such as multiple-input-multiple-output (MIMO) equalization, can correct channel impairments [13]; however, this approach becomes more complicated as the number of transmitted spatial modes increases.

To correctly identify spatial modes in turbulent channels, machine learning techniques can also be exploited without the need for AO or DSP equalization algorithms [14]. Machine learning is a powerful tool that can be used as a “regressor” or a “classifier”, and has been applied extensively to modulation format classification and impairment monitoring in optical networks [15,16]. Using the mode patterns recorded on a camera, Krenn et al. [17] employed an artificial neural network to distinguish between 16 LG modes after a real-world 3 km free-space transmission, without the need for complicated receiver hardware or any modal decomposition process. A similar approach was adopted to identify LG modes after 143 km propagation between two islands [18]. The authors of [19] proposed a convolutional neural network (CNN) to recognize orbital angular momentum (OAM) modes in turbulent FSO links. Different classifiers were tested for the demodulation of OAM beams under various atmospheric regimes in [20]. Similarly, the authors of [21] proposed a CNN-based algorithm for joint turbulence impairment detection and mode demodulation. Zhao et al. further demonstrated the potential of a CNN method for the detection of OAM modes subject to turbulence and misalignment, using simulated data [22]. A turbulence-regression CNN is reported in [23], where the estimated turbulence impairment is fed back to the transmitter in order to achieve impairment-free transmission of OAM modes. A CNN classifier was used in [24] to detect 21 laboratory-generated HG modes with different input beam parameters.

To the best of our knowledge, no work has been reported in the literature on using machine learning as a tool to predict structured light patterns under the effect of dust storms in FSO links. Here, we experimentally investigate the impact of a dusty channel on 32 different modes from the LG and HG mode bases. The 32 modes comprise 8 LG modes, 8 superpositions of opposite-topological-charge LG modes (denoted Mux-LG), and 16 HG modes. A dust chamber is exploited to emulate the effect of a dusty environment. The identification accuracies of 8-ary, 16-ary, and 32-ary mode encoding schemes are investigated using three different machine learning techniques: a CNN, a support vector machine (SVM), and a $k$-nearest neighbor (KNN) classifier. Furthermore, we utilize the sensed mode patterns and CNN regression to predict the visibility of the dusty channel.

2. Spatial mode bases background

The idea of structured light pattern encoding consists of using a particular beam shape, among a set of possible shapes, as an information carrier without any signal processing operations. Here, we propose coding schemes that are based on the shapes of the LG and HG modes. Both mode sets are solutions to the paraxial wave equation [25,26]. Each LG mode possesses two indices, $\ell$ and $p$. The former represents the topological charge, which defines the twist of the helical phase-front, and the latter indicates the radial order. In cylindrical coordinates, with a position vector $(r,\phi ,z)$, the electric field of an LG$_{p\ell}$ mode is defined as [25]:

$$\begin{aligned}E^{LG}_{(p,\ell)}(r,\phi,z)&=\frac{1}{\omega(z)}\sqrt{\frac{2p!}{\pi({\mid}\ell\mid{+}p)!}}\exp\left[i(2p+{\mid}\ell\mid{+}1)\Phi(z)\right]\left(\frac{\sqrt{2}r}{\omega(z)}\right)^{{\mid}\ell\mid}\\&\quad\times L_{p}^{{\mid}\ell\mid}\left(\frac{2r^{2}}{\omega(z)^{2}}\right)\exp\left(-\frac{ikr^2}{2R(z)}-\frac{r^2}{\omega(z)^2}\right)\exp\left(i\ell\phi\right), \end{aligned}$$
where $\omega (z)=\omega _{0}\sqrt {1+(z/z_{R})^2}$ is the beam spot size as a function of $z$, with $\omega _{0}$ the beam waist and $z_{R}=\pi \omega _{0}^2/\lambda$ the Rayleigh range, $\lambda$ being the optical wavelength. $\Phi (z)=\arctan (z/z_{R})$ denotes the Gouy phase, $R(z)=z[1+(z_{R}/z)^2]$ is the wavefront radius of curvature, and $L_{p}^{\mid \ell \mid }(\cdot)$ are the generalized Laguerre polynomials.
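For illustration, the following is a minimal sketch that evaluates Eq. (1) at the waist plane ($z=0$), where the Gouy and curvature terms vanish, to render an LG intensity profile. It is purely illustrative (not the authors' SLM or hologram code), and the waist and grid parameters are arbitrary assumptions.

```python
# Evaluate the LG_{p,ell} field of Eq. (1) at z = 0 (purely illustrative).
import numpy as np
from scipy.special import genlaguerre, factorial

def lg_field(p, ell, w0=1e-3, grid=256, extent=4e-3):
    x = np.linspace(-extent, extent, grid)
    X, Y = np.meshgrid(x, x)
    r, phi = np.hypot(X, Y), np.arctan2(Y, X)
    # At z = 0: omega(z) = w0, Phi(z) = 0, and 1/R(z) = 0.
    norm = np.sqrt(2 * factorial(p) / (np.pi * factorial(p + abs(ell)))) / w0
    radial = ((np.sqrt(2) * r / w0) ** abs(ell)
              * genlaguerre(p, abs(ell))(2 * r**2 / w0**2)
              * np.exp(-(r / w0) ** 2))
    return norm * radial * np.exp(1j * ell * phi)

intensity = np.abs(lg_field(p=0, ell=3)) ** 2   # donut-shaped profile, as in Fig. 1
```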

On the other hand, each HG$_{nm}$ mode is characterized by two indices, $n$ and $m$, which indicate the number of nodes along the horizontal and vertical axes, respectively. In a Cartesian coordinate system, the electric field of a Hermite Gaussian beam can be written as [26]:

$$\begin{aligned}E_{(m,n)}^{HG}(x,y,z)&= \sqrt{\frac{2}{\pi n!m!}}2^{-\frac{n+m}{2}}\exp\left[{-}i\frac{k(x^2+y^2)}{2R(z)}\right]\exp[{-}i(n+m+1)\Phi(z)]\\&\quad\times \exp\left(-\frac{x^2+y^2}{\omega(z)^2}\right) H_{n}\left(\frac{\sqrt{2}x}{\omega(z)}\right)H_{m}\left(\frac{\sqrt{2}y}{\omega(z)}\right), \end{aligned}$$
where $H_{n}(.)$ and $H_{m}(.)$ are Hermite polynomials of order $n$ and $m$, respectively. A set of laboratory-generated single and multiplexed LG and HG modes is depicted in Fig. 1.


Fig. 1. A set of measured transverse intensity profiles of LG$_{p\ell}$ modes ($p = 0$, $\ell = 1$ to $8$), Mux-LG modes ($p = 0$, $\ell = \pm 1$ to $\pm 8$), and HG$_{nm}$ modes ($n = 0$ to $3$, $m = 0$ to $3$).


3. Methods

3.1 Experimental methodology

The experimental setup is shown schematically in Fig. 2(a), where a TeraXion laser diode (laser1) generates continuous wave (CW) light of 1 kHz linewidth at a 1550 nm operating wavelength. The CW light is then amplified using an Amonics Erbium-doped fiber amplifier (EDFA), whose output is coupled into a standard single-mode fiber (SMF). The output beam from the SMF is collimated using an FC/PC fiber collimation package (Thorlabs, F230FC-1550). The collimated beam is directed towards a half-wave plate (HWP), which is rotated to maximize the output intensity of a subsequent polarizer that selects optical waves polarized perpendicular to the optical table. The polarized light is reflected by a mirror towards the liquid crystal display of an SLM (Hamamatsu, model X13138-08), whose phase modulation axis is perpendicular to the optical table and aligned with the polarization direction of the incident light. Using a computer (PC1), we program the SLM with predetermined holograms such that it converts the incident Gaussian-shaped beam into a reflected LG, Mux-LG, or HG mode.


Fig. 2. (a) Schematic of the used experimental setup. SMF: single-mode fiber; HWP: half-wave plate; C: collimator; P: polarizer; M: mirror; BS: beam splitter; PD: photodetector; PC: computer; L: lens. (b) Photograph of the dusty-weather emulation chamber.


Performing the measurements in an outdoor environment would be more representative, as the channel would be a real one. However, using a controlled environment to mimic outdoor conditions has several advantages. First, it allows the measurements to be performed without waiting a long time for a dust storm to happen. Second, it allows the measurements to be repeated under the same conditions for reliability. Third, it facilitates controlling the density and type of dust particles. Note that such controlled environments have been used in many studies in the literature for emulating fog [27], scintillation [28], rain [29], and dust [30]. In order to mimic the impact of a dusty communication channel on the quality of the transmitted spatial modes, we design a $90\times 40\times 40$ cm$^{3}$ controlled-environment chamber in which the dust particles are homogeneously distributed using fans installed at the bottom of the chamber (see Fig. 2(b)). This allows emulating light, moderate, and severe dust conditions. The dust particles used within the chamber were collected during a real dust storm, and their average diameter is measured to be 17.3 $\mu$m, as characterized using a SALD-2300 particle size analyzer. The generated light beams enter and exit the dust chamber through transparent windows to minimize power loss sources other than the dust particles.

The visibility range can be tuned by changing the amount of dust blown by the fans: the lower the visibility range, the higher the concentration of dust particles within the chamber. In order to quantify the visibility range, we establish another, visible-light link within the dust chamber (Fig. 2(a)). In particular, a green light beam emitted from a laser diode (laser2, 520 nm wavelength) is transmitted through the dust chamber. The output green light from the chamber is received by a photodetector (PD2), which is connected to a power meter to acquire the signal power. By measuring the signal power before and after attenuation by the dust, the visibility range can be calculated, as clarified in our previously published work [6].
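As a rough illustration of this power-to-visibility conversion, the sketch below assumes Beer–Lambert attenuation across the chamber and the standard 2% contrast-threshold (Koschmieder) definition of visibility; the exact procedure of Ref. [6] may differ.

```python
import numpy as np

def visibility_range(p_in_dbm, p_out_dbm, length_m=0.9, threshold=0.02):
    """Visibility estimated from powers measured before/after the dust channel.

    Assumes Beer-Lambert attenuation, P_out = P_in * exp(-sigma * L), and the
    2% contrast-threshold definition V = -ln(threshold)/sigma ~= 3.912/sigma.
    """
    loss_db = p_in_dbm - p_out_dbm                   # dust-induced loss [dB]
    sigma = loss_db * np.log(10) / (10 * length_m)   # attenuation coefficient [1/m]
    return -np.log(threshold) / sigma                # visibility range [m]

# Example: a 3 dB loss over the 0.9 m chamber gives a visibility of ~5 m.
print(visibility_range(p_in_dbm=0.0, p_out_dbm=-3.0))
```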

The 1550 nm beam transmitted through the dust chamber is refocused using an aspheric lens with a focal length of 10 cm and directed towards a 50:50 beam splitter (BS). The beam transmitted through the BS is detected by a charge-coupled device (CCD) camera (Ophir Spiricon, model LBP2-IR2). As we change the visibility range, a parameter that defines the severity of a dust storm [27], the CCD camera captures the intensity profiles of the individually transmitted LG, Mux-LG, or HG modes. The captured profiles are later used to train the machine learning algorithms to identify the transmitted beams. The beam reflected by the BS, on the other hand, is detected using a photodetector (PD1), which measures the power of the different received modes. The CCD, PD1, and PD2 are controlled using another computer (PC2).

3.2 Dataset generation

In the experimental setup shown in Fig. 2(a), the CCD camera recorded 10,000 frames for each mode, continuously for $\sim$17 minutes at a rate of 10 frames/s. This created datasets of 80,000, 160,000, and 320,000 frames for the 8-ary (LG or Mux-LG), 16-ary (LG+Mux-LG or HG), and 32-ary (LG+Mux-LG+HG) mode schemes, respectively. Simultaneously, the powers of the 1550 and 520 nm lasers were acquired, as seen in Fig. 2(a). Figure 3 shows the temporal received power averaged over the 32 modes (left y-axis), while the right y-axis corresponds to the temporal visibility deduced from the recorded power of the green laser. It is clear from Fig. 3 that the received power changes quickly at the beginning of the experiment, then slowly tends to saturate as the amount of dust in the chamber reduces. In Fig. 4, we show the received beam profiles at three different received power levels of −4, 0, and 4 dBm. At −4 dBm received power, most of the mode profiles for LG, Mux-LG, and HG are very similar to each other and cannot easily be distinguished visually. When the received power improves to 0 dBm, the higher-order modes are still not clear. However, when the received power reaches 4 dBm, all modes become distinguishable. Since the average received power almost saturates after 5 minutes, the datasets for the 8-ary, 16-ary, and 32-ary schemes are reduced to 24,000, 48,000, and 96,000 images, respectively. With the generated datasets, different machine learning algorithms are used to classify the modes, where 70% of each dataset is used for training and the remaining 30% for testing (a sketch of this assembly follows). It is worth noting that reducing the training set to 60% maintains the same recognition quality; however, we choose a 70% training set, following common practice in the machine learning literature.
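The following is a minimal sketch of this dataset assembly, assuming hypothetical per-mode frame files; the 3000 retained frames per mode correspond to the first 5 minutes at 10 frames/s, before the received power saturates.

```python
import numpy as np
from sklearn.model_selection import train_test_split

FPS = 10
keep = FPS * 60 * 5                          # first 5 minutes -> 3000 frames per mode

frames, labels = [], []
for mode_id in range(8):                     # e.g., the 8-ary LG scheme
    x = np.load(f"mode_{mode_id}.npy")       # [10000, H, W] frames (hypothetical files)
    frames.append(x[:keep])                  # received power saturates afterwards
    labels.append(np.full(keep, mode_id))

X = np.concatenate(frames)                   # 8 x 3000 = 24,000 images
y = np.concatenate(labels)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0)    # 70%/30% train/test split
```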


Fig. 3. Average received power [dBm] and the average visibility [m] as a function of time [min.].



Fig. 4. Different measured beam profiles at (a) −4 dBm, (b) 0 dBm, and (c) 4 dBm received power.


3.3 Machine learning algorithms

The CNN classifier is a multilayer network that belongs to the family of deep neural network architectures. It comprises three types of layers, known as convolutional, pooling, and fully connected layers, as shown in Fig. 5. The advantage of this technique for our work is that it allows direct processing of two-dimensional input signals such as images. To reduce the computational complexity, the recorded color images are first converted to grayscale and then resized to $128\times 128$ pixels. The pre-processed images are convolved with kernel filters of size $5\times 5$ to produce the output feature maps; this constitutes the first convolutional layer. The resulting feature maps are then downsampled by a factor of 2 in the pooling layer, yielding $64\times 64$ pixel images. A second pair of convolutional and pooling layers is applied, such that the final flattened layer holds 1024 features fully connected to M output nodes for the case of M-ary mode identification. Additionally, we utilize other machine learning algorithms, namely SVM and KNN, to compare their performance with that of the CNN (a sketch of both is given after Fig. 5). The KNN relies on the majority vote of the $k$ nearest neighbors (we consider $k=5$ in this work), which are determined by calculating the distance between the test point and all dataset points. The SVM, in contrast, relies on finding the optimal hyperplane that separates the different classes.
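For concreteness, below is a minimal sketch of the described CNN, written in TensorFlow/Keras (the framework used by the authors is not stated); the layer sizes follow Fig. 5, while the optimizer and loss are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(num_classes: int) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(128, 128, 1)),                         # grayscale mode patterns
        layers.Conv2D(16, 5, padding="same", activation="relu"),   # Conv 1: sixteen 5x5 kernels
        layers.MaxPooling2D(2),                                    # Pool 1: 128 -> 64
        layers.Conv2D(32, 5, padding="same", activation="relu"),   # Conv 2: thirty-two 5x5 kernels
        layers.MaxPooling2D(2),                                    # Pool 2: 64 -> 32
        layers.Flatten(),
        layers.Dense(1024, activation="relu"),                     # 1024-feature fully connected layer
        layers.Dense(num_classes, activation="softmax"),           # M output nodes (8, 16, or 32)
    ])
    model.compile(optimizer="adam",                                # assumed optimizer/loss
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn(num_classes=32)                                  # 32-ary LG+Mux-LG+HG scheme
```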


Fig. 5. A schematic illustration of the convolutional neural network (CNN) architecture. The network hyperparameters include: Input layer: mode pattern images of $128\times 128$ pixels. Convolutional 1 layer: sixteen $128\times 128$ feature maps generated using sixteen $5\times 5$ kernels. Pooling 1 layer: sixteen $64\times 64$ feature maps obtained after $2\times 2$ downsampling. Convolutional 2 layer: thirty-two $64\times 64$ feature maps generated using thirty-two $5\times 5$ kernels. Pooling 2 layer: thirty-two $32\times 32$ feature maps obtained after $2\times 2$ downsampling. Output layer: 8, 16, or 32 nodes for the LG and Mux-LG, HG and LG+Mux-LG, and LG+Mux-LG+HG modes, respectively. The activation function used is the rectified linear unit (ReLU).

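As referenced above, the KNN and SVM baselines admit a compact scikit-learn sketch. The flattened-pixel features and the RBF kernel below are assumptions, as the paper does not detail the feature representation or kernel choice; the data split is taken from the Section 3.2 sketch.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# X_train, X_test, y_train, y_test come from the Section 3.2 sketch.
Xtr = X_train.reshape(len(X_train), -1) / 255.0   # one feature vector per image
Xte = X_test.reshape(len(X_test), -1) / 255.0

for clf in (KNeighborsClassifier(n_neighbors=5),  # majority vote of the 5 nearest neighbors
            SVC(kernel="rbf")):                   # maximum-margin separating hyperplane
    clf.fit(Xtr, y_train)
    print(type(clf).__name__, "accuracy:", clf.score(Xte, y_test))
```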

4. Results and discussion

4.1 Classification accuracy

First, we consider the average classification accuracy over an experimental period of 5 minutes, during which the average received mode power changes from −8 dBm to 2.2 dBm. Figure 6 compares the identification accuracy of the KNN, SVM, and CNN algorithms for the three considered pattern modulation schemes (8, 16, and 32 modes). The KNN algorithm provides an accuracy of 90% for the 8-ary (Mux-LG) scheme and 85% for the 8-ary (LG) and 16-ary (LG+Mux-LG) schemes. The identification accuracy reduces to 75% for the 16-ary (HG) and 32-ary pattern coding schemes. In contrast, both SVM and CNN show an average accuracy of 99% for the various mode coding schemes. It is worth noting that classifying one pattern in the testing phase took 7.5 ms, 25 ms, and 1.67 s for the CNN, SVM, and KNN, respectively, on a machine equipped with an Intel Xeon E5-2620 processor. As the CNN outperforms the SVM, notably in classification speed, the following analysis focuses on the CNN results.

In Fig. 7, we show the confusion matrices for pattern classification using the CNN technique. The diagonal entries show the robustness of the CNN in correctly classifying the different mode patterns. For the 8-ary LG scheme, the LG$_{07}$ mode is most often confused with the LG$_{04}$, LG$_{05}$, LG$_{06}$, and LG$_{08}$ modes, whereas the 8 patterns of the 8-ary Mux-LG coding scheme are identified with minimal confusion. For the patterns of the 16-ary HG scheme, the HG$_{22}$ mode is confused, with low probability, with all modes except the HG$_{00}$ and HG$_{01}$ modes. By virtue of the confusion matrices depicted in Fig. 7 and the fact that each pattern carries 3 bits (8-ary LG), 3 bits (8-ary Mux-LG), or 4 bits (16-ary HG), the identification accuracies can be translated into bit error rate (BER) values of 0.005, 0.0025, and 0.00531, respectively (see the sketch following this discussion).

The results demonstrated so far are based on a CNN model trained and tested without taking the time-varying behavior of the dust into account. In what follows, we consider this behavior by dividing the dataset images in time into 10 visibility regions of 100 frames each, so that each region spans 10 seconds. Figure 8 shows the identification accuracy versus the classification region. Both the 8-ary (Mux-LG) and 16-ary (HG) schemes achieve recognition accuracies of 92% in region II and 99% in region III, which correspond to visibility ranges of 7 and 9 m, respectively. The 8-ary LG scheme, on the other hand, requires reaching region VII, with a relatively larger visibility range, to achieve the same 99% recognition accuracy. This is because the confusion between LG modes is high, especially for modes with high $\ell$ indices (LG$_{\geq 04}$), since all such beams have the same donut shape, as illustrated in Fig. 1.

For further investigation, Fig. 9 shows the confusion matrices at the second classification region (visibility range of 7 m) for the different pattern schemes. For the 8-ary LG scheme, only the LG$_{01}$ mode, which has a high power intensity, is identified correctly; the other LG modes have faded powers, which creates high confusion with the neighboring modes. For the 8-ary Mux-LG scheme, all modes exceed 90% correct classification except the LG$_{0\pm 6}$ and LG$_{0\pm 7}$ modes, which are confused with their neighbors. For the 16-ary (LG+Mux-LG) patterns, LG$_{0\pm 7}$ is confused with LG$_{0\pm 8}$ due to shape similarity; LG$_{04}$ to LG$_{08}$ are also confused with the nearest LG modes. For the 16-ary HG scheme, most modes achieve an accuracy above 90%; however, some modes, such as HG$_{22}$ and HG$_{33}$, fall below 90%.
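As noted above, these BER values follow from the confusion matrices once a bit labeling is fixed. The sketch below assumes a natural binary labeling, which the paper does not specify.

```python
import numpy as np

def ber_from_confusion(conf: np.ndarray, bits: np.ndarray) -> float:
    """Average bit error probability implied by a confusion matrix,
    assuming class i carries the bit pattern bits[i] (shape [M, log2(M)])."""
    p = conf / conf.sum()                      # joint P(sent = i, decided = j)
    M = len(conf)
    errors = sum(p[i, j] * np.count_nonzero(bits[i] != bits[j])
                 for i in range(M) for j in range(M))
    return errors / bits.shape[1]              # bit errors per transmitted bit

# Natural binary labeling for the 8-ary schemes (3 bits per pattern).
bits8 = np.array([[int(b) for b in format(i, "03b")] for i in range(8)])
```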


Fig. 6. Identification accuracy of CNN, SVM, and KNN algorithms for different M-ary modes.



Fig. 7. CNN confusion matrix for (a) 8-ary pattern (LG), (b) 8-ary pattern (Mux-LG), and (c) 16-ary pattern (HG).



Fig. 8. Classification results at different visibility conditions.



Fig. 9. Confusion matrices at classification region II for (a) 8-ary LG, (b) 8-ary Mux-LG, (c) 16-ary LG+Mux-LG, and (d) 16-ary HG modes.


4.2 Visibility prediction

One additional potential application of structured light pattern encoding is the sensing of atmospheric weather parameters. In this section, we use the different pattern schemes to predict the weather visibility using the CNN model as a regression tool. The network hyperparameters of the CNN-based visibility predictor are kept the same as those of the CNN-based classifier; in the visibility predictor, however, the output layer contains a single node, which represents the predicted visibility value. Note that the regressor labels are the visibility ranges measured using the visible link (laser2) in Fig. 2(a) and shown in Fig. 3 (right y-axis). Using the dataset described in Section 3.2, the CNN regressor is trained and tested on a sample space of 80,000 mode patterns for the 8-ary LG and 8-ary Mux-LG schemes and 160,000 patterns for the 16-ary HG scheme. This corresponds to a recording duration of about 17 minutes and a visibility range from 7 to 80 m. 70% of the sample space was used to train the regressor, and the remaining 30% to test it. The normalized correlation coefficient ($\rho$) between the actual and predicted visibility is used as an assessment metric and is given as follows [31]:

$$\rho=1-\frac{\sum_{i=1}^{N}(x_i-\hat{x_i})^2}{\sum_{i=1}^{N}(x_i-\bar{x})^2},$$
where $N$ denotes the total number of test samples, $x_i$ is the actual data (i.e., the ground truth), $\hat {x_i}$ is the predicted data, and $\bar {x}$ is the mean of the actual data. Figure 10 shows the prediction accuracy of the visibility measurements. Using the 8-ary LG, 8-ary Mux-LG, and 16-ary HG coding schemes, the correlation coefficients are 0.984, 0.987, and 0.976, respectively. From Fig. 10, it can be observed that the prediction variance increases as the visibility increases. This is intuitively not surprising: with reference to Fig. 3, the rate of change of visibility increases exponentially with time, which leaves a smaller number of correlated beam profiles for a given observation period and hence a higher variability in the visibility prediction.
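A minimal sketch of the regression variant and the metric of Eq. (3) follows, again assuming TensorFlow/Keras as in the Section 3.3 classifier sketch; the mean-squared-error training loss is an assumption.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_regressor() -> tf.keras.Model:
    """Same backbone as the classifier sketch, with a single-node output."""
    model = models.Sequential([
        layers.Input(shape=(128, 128, 1)),
        layers.Conv2D(16, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(1024, activation="relu"),
        layers.Dense(1),                       # single node: predicted visibility [m]
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def correlation_coefficient(x, x_hat):
    """Eq. (3): rho = 1 - sum((x - x_hat)^2) / sum((x - mean(x))^2)."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return 1.0 - np.sum((x - x_hat) ** 2) / np.sum((x - np.mean(x)) ** 2)
```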

 figure: Fig. 10.

Fig. 10. Visibility prediction accuracy using (a) LG, (b) Mux-LG, and (c) HG modes.

Download Full Size | PDF

5. Conclusion

In this paper, we investigated the impact of dusty weather on the propagation of LG, Mux-LG, and HG modes. We studied the potential of KNN, SVM, and CNN classifiers to detect light patterns under the effect of a lab-emulated desert environment. The highest classification accuracy of 99% is reached by the CNN and SVM classifiers, and the 8-ary Mux-LG and 16-ary HG schemes are the strongest candidates under severe dust conditions. Furthermore, the regression results show the potential of structured light mode pattern coding schemes for atmospheric visibility measurement applications.

Funding

Deanship of Scientific Research, King Saud University (grant no. RG-1440-112); King Abdullah University of Science and Technology (KKI2 special initiative).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. H. A. Willebrand and B. S. Ghuman, “Fiber optics without fiber,” IEEE Spectrum 38(8), 40–45 (2001). [CrossRef]  

2. M. A. Khalighi and M. Uysal, “Survey on free space optical communication: A communication theory perspective,” IEEE Commun. Surv. Tutorials 16(4), 2231–2258 (2014). [CrossRef]  

3. M. Jaber, M. A. Imran, R. Tafazolli, and A. Tukmanov, “5G backhaul challenges and emerging research directions: A survey,” IEEE Access 4, 1743–1766 (2016). [CrossRef]  

4. H. Kaushal and G. Kaddoum, “Optical communication in space: Challenges and mitigation techniques,” IEEE Commun. Surv. Tutorials 19(1), 57–96 (2017). [CrossRef]  

5. A. S. Ahmed, A. A. Ali, and M. A. Alhaider, “Airborne dust size analysis for tropospheric propagation of millimetric waves into dust storms,” IEEE Trans. Geosci. Electron. GE-25(5), 593–599 (1987). [CrossRef]  

6. M. A. Esmail, H. Fathallah, and M. Alouini, “An experimental study of FSO link performance in desert environment,” IEEE Commun. Lett. 20(9), 1888–1891 (2016). [CrossRef]  

7. J. Wang, J.-Y. Yang, I. M. Fazal, N. Ahmed, Y. Yan, H. Huang, Y. Ren, Y. Yue, S. Dolinar, M. Tur, and A. E. Willner, “Terabit free-space data transmission employing orbital angular momentum multiplexing,” Nat. Photonics 6(7), 488–496 (2012). [CrossRef]

8. A. Trichili, C. Rosales-Guzmán, A. Dudley, B. Ndagano, A. B. Salem, M. Zghal, and A. Forbes, “Optical communication beyond orbital angular momentum,” Sci. Rep. 6(1), 27674 (2016). [CrossRef]  

9. N. Ahmed, M. P. J. Lavery, H. Huang, G. Xie, Y. Ren, Y. Yan, and A. E. Willner, “Experimental demonstration of obstruction-tolerant free-space transmission of two 50-Gbaud QPSK data channels using Bessel beams carrying orbital angular momentum,” in 2014 The European Conference on Optical Communication (ECOC), (2014), pp. 1–3.

10. M. A. Cox, L. Cheng, C. Rosales-Guzmán, and A. Forbes, “Modal diversity for robust free-space optical communications,” Phys. Rev. Appl. 10(2), 024020 (2018). [CrossRef]  

11. Y. Ren, G. Xie, H. Huang, N. Ahmed, Y. Yan, L. Li, C. Bao, M. P. J. Lavery, M. Tur, M. A. Neifeld, R. W. Boyd, J. H. Shapiro, and A. E. Willner, “Adaptive-optics-based simultaneous pre- and post-turbulence compensation of multiple orbital-angular-momentum beams in a bidirectional free-space optical link,” Optica 1(6), 376–382 (2014). [CrossRef]

12. T. Qiu, I. Ashry, A. Wang, and Y. Xu, “Adaptive mode control in 4- and 17-mode fibers,” IEEE Photonics Technol. Lett. 30(11), 1036–1039 (2018). [CrossRef]  

13. H. Huang, Y. Cao, G. Xie, Y. Ren, Y. Yan, C. Bao, N. Ahmed, M. A. Neifeld, S. J. Dolinar, and A. E. Willner, “Crosstalk mitigation in a free-space orbital angular momentum multiplexed communication link using 4× 4 MIMO equalization,” Opt. Lett. 39(15), 4360–4363 (2014). [CrossRef]  

14. A. Trichili, K. Park, M. Zghal, B. S. Ooi, and M. Alouini, “Communicating using spatial mode multiplexing: Potentials, challenges, and perspectives,” IEEE Commun. Surv. Tutorials 21(4), 3175–3203 (2019). [CrossRef]  

15. F. N. Khan, Q. Fan, C. Lu, and A. P. T. Lau, “An optical communication’s perspective on machine learning and its applications,” J. Lightwave Technol. 37(2), 493–516 (2019). [CrossRef]  

16. F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini, and M. Tornatore, “An overview on application of machine learning techniques in optical networks,” IEEE Communications Surveys Tutorials 21(2), 1383–1408 (2019). [CrossRef]  

17. M. Krenn, R. Fickler, M. Fink, J. Handsteiner, M. Malik, T. Scheidl, R. Ursin, and A. Zeilinger, “Communication with spatially modulated light through turbulent air across Vienna,” New J. Phys. 16(11), 113028 (2014). [CrossRef]  

18. M. Krenn, J. Handsteiner, M. Fink, R. Fickler, R. Ursin, M. Malik, and A. Zeilinger, “Twisted light transmission over 143 km,” Proc. Natl. Acad. Sci. 113(48), 13648–13653 (2016). [CrossRef]  

19. T. Doster and A. T. Watnik, “Machine learning approach to OAM beam demultiplexing via convolutional neural networks,” Appl. Opt. 56(12), 3386–3396 (2017). [CrossRef]  

20. Z. Wang, M. I. Dedo, K. Guo, K. Zhou, F. Shen, Y. Sun, S. Liu, and Z. Guo, “Efficient recognition of the propagated orbital angular momentum modes in turbulences with the convolutional neural network,” IEEE Photonics J. 11(3), 1–14 (2019). [CrossRef]  

21. J. Li, M. Zhang, D. Wang, S. Wu, and Y. Zhan, “Joint atmospheric turbulence detection and adaptive demodulation technique using the CNN for the OAM-FSO communication,” Opt. Express 26(8), 10494–10508 (2018). [CrossRef]  

22. Q. Zhao, S. Hao, Y. Wang, L. Wang, X. Wan, and C. Xu, “Mode detection of misaligned orbital angular momentum beams based on convolutional neural network,” Appl. Opt. 57(35), 10152–10158 (2018). [CrossRef]  

23. S. Lohani and R. T. Glasser, “Turbulence correction with artificial neural networks,” Opt. Lett. 43(11), 2611–2614 (2018). [CrossRef]  

24. L. R. Hofer, L. W. Jones, J. L. Goedert, and R. V. Dragone, “Hermite–Gaussian mode detection via convolution neural networks,” J. Opt. Soc. Am. A 36(6), 936–943 (2019). [CrossRef]

25. L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, “Orbital angular momentum of light and the transformation of Laguerre-Gaussian laser modes,” Phys. Rev. A 45(11), 8185–8189 (1992). [CrossRef]

26. A. E. Siegman, Lasers (University Science Books, 1986).

27. M. Ijaz, Z. Ghassemlooy, J. Pesek, O. Fiser, H. L. Minh, and E. Bentley, “Modeling of fog and smoke attenuation in free space optical communications link under controlled laboratory conditions,” J. Lightwave Technol. 31(11), 1720–1726 (2013). [CrossRef]  

28. J. Fang, M. Bi, S. Xiao, G. Yang, C. Li, L. Liu, Y. Zhang, T. Huang, and W. Hu, “Performance investigation of the polar coded FSO communication system over turbulence channel,” Appl. Opt. 57(25), 7378–7384 (2018). [CrossRef]  

29. G. G. Soni, A. Tripathi, A. Mandloi, and S. Gupta, “Compensating rain induced impairments in terrestrial FSO links using aperture averaging and receiver diversity,” Opt. Quantum Electron. 51(7), 244 (2019). [CrossRef]  

30. J. Libich, J. Perez, S. Zvanovec, Z. Ghassemlooy, R. Nebuloni, and C. Capsoni, “Combined effect of turbulence and aerosol on free-space optical links,” Appl. Opt. 56(2), 336–341 (2017). [CrossRef]  

31. C. Huber-Carol, N. Balakrishnan, M. Nikulin, and M. Mesbah, Goodness-of-fit tests and model validity (Springer Science & Business Media, 2012).
