
Fundamental probing limit on the high-order orbital angular momentum of light

Open Access

Abstract

The orbital angular momentum (OAM) of light, possessing an infinite-dimensional degree of freedom, holds significant potential to enhance the capacity of optical communication and information processing in both classical and quantum regimes. Despite various methods developed to accurately measure OAM modes, the probing limit of the highest-order OAM remains an open question. Here, we report accurate recognition of superhigh-order OAM using a convolutional neural network with an improved ResNeXt architecture, based on conjugated interference patterns. A type of hybrid beam carrying double OAM modes is utilized to provide more controllable degrees of freedom for more robust recognition of the OAM modes. Our contribution advances the OAM recognition limit from manual counting to machine learning. Results demonstrate that, within our optical system, the maximum recognizable OAM modes exceed l = ±690 with an accuracy surpassing 99.93%, the highest achieved by a spatial light modulator to date. Enlarging the active area of the CCD sensor extends the number of recognizable OAM modes to 1300, constrained only by the CCD resolution limit. Additionally, we explore the identification of fractional high-order OAM modes with a resolution of 0.1 from l = ±600.0 to l = ±600.9, achieving a high accuracy of 97.86%.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Since its initial discovery by Allen et al. in 1992, orbital angular momentum (OAM) has attracted extensive interest [1–5]. OAM is carried by a vortex beam with a helical wavefront, which can be expressed as a spatial phase structure exp(ilφ), where φ is the azimuthal angle and l is the topological charge. With the rapid development of massive data transmission, cloud computing, and artificial intelligence, channel capacity urgently needs to be increased [6,7]. To address this challenge, innovative communication techniques based on OAM have been proposed [8–14]. Utilizing OAM in communication presents significant advantages. Since vortex beams have an infinite number of eigenstates, the information capacity can be greatly enhanced by transmitting higher-order OAM modes along the same channel. Moreover, the inherent orthogonality of different OAM modes allows information to be modulated independently on different vortex beams. These advantages have made OAM a focus of research on information transmission. To achieve larger-capacity information processing and communication, future development trends point not only to higher-dimensional forms of structured light [15,16] but also to high-order OAM modes. Alongside high-order modes, it is also possible to incorporate more OAM states using fractional OAM [17–19]. For example, if the spacing between neighboring topological charges is 0.1 (i.e., the resolution of the OAM), fractional OAM modes are ten times more numerous than integer OAM modes. Therefore, accurate measurement of high-order integer and even fractional OAM modes is very beneficial for OAM-based free-space optical communications.

To date, researchers have employed various techniques to measure the OAM mode as accurately as possible. However, the predominant method involves human counting of results obtained from specific light-field stripes [20–25]. In 2016, Zhou et al. used π/2 mode conversion to detect high-order OAM modes and obtained a maximum of l = 100 [20]. In 2022, Li et al. theoretically proposed that the far-field diffraction of the product of a vortex beam and a petal-like zone plate is a Hermitian-like form of the vortex beam. This allows the topological charge to be determined by analyzing the numbers and directions of dark lines in the Hermitian-like pattern, with measured OAM values reaching up to 100 [21]. Later, A. M. Dezfouli et al. demonstrated detection of optical vortices with OAM topological charge up to 150 using a reflective phase-only liquid crystal on silicon spatial light modulator [22]. Yang et al. recognized topological charges over ±160 based on the combination of an annular phase grating and auxiliary beams [23]. Also in 2022, identification of high-order OAM states over ±270 orders was implemented with an annular phase grating (APG) and Gaussian beams of different wavelengths [24]. Pinnell et al. successfully detected an extremely high-order optical vortex mode by extrapolating the topological charge, reaching l = 600, the highest value attainable using a spatial light modulator (SLM) [25]. While low-order modes can be easily discerned in far-field diffraction patterns through manual counting, distinguishing modes with high OAM values becomes challenging as the OAM increases. Deep learning, renowned for its exceptional feature extraction capabilities, has significantly improved OAM recognition and communication [26–29]. To address the growing need for precise differentiation among modes with substantial OAM values, we adopt deep learning in place of manual counting.

This study investigates a fundamental probing limit for high-order OAM modes, focusing on the two primary implementation stages that constrain the maximum recognizable OAM value: generation and detection. As in the bucket effect, the maximum recognizable OAM mode is set by the shorter of the two staves. Although advanced laboratory equipment, such as a high-resolution SLM, has significantly improved the generation of the highest-order OAM modes, achieving accurate mode detection remains a challenge; detection and recognition are therefore the key points. Probing the limit of OAM modes thus prompts a shift from manual counting to machine learning. For the distinctive interference patterns, 2|l| evenly distributed stripes arranged along an annulus, a CNN is the prime candidate for greatly enhancing recognition. Employing a high-resolution CCD to capture the intricate stripes around the entire interference annulus, we utilize an improved ResNeXt 50 model. The resulting maximum OAM mode of l = ±690, recognized with an accuracy exceeding 99.93%, signifies a breakthrough in the detection of multiplexed OAM states. Expanding the sensor's active area could push the recognizable OAM limit to 1300, constrained only by the CCD resolution. Furthermore, we explore the identification of fractional high-order OAM modes at a resolution of 0.1, spanning from l = ±600.0 to l = ±600.9; our methodology successfully identifies these fractional modes with a high accuracy of 97.86%.

2. Designs and methods

2.1 Experimental setup

Figure 1 shows the experimental setup. A stabilized HeNe laser at 632.991 nm is used as the light source. After collimation and expansion, the Gaussian beam hits the screen of the SLM and is modulated by the multiplexed hologram. As shown in Fig. 1(b)-(d), the holograms of ±100, ±200, and ±300 contain densely distributed phase information. For the Gaussian beam to be properly modulated by the loaded hologram, the SLM needs both high resolution and proper calibration, so a phase-only reflective liquid crystal device (4160 × 2464 pixels, 3.74 µm pixel pitch) was chosen. For the calibration, a color look-up table (also called a “gamma table”) applies a specific pulse code to the pixels of the SLM for each gray level of the HDMI signal, as shown in Fig. 1(a), creating a linear phase response over the gray levels. Based on the calibration software in the manufacturer's package, for a given wavelength, the SLM phase modulation is linear across the range [0, 2π] in correspondence with the loaded grayscale hologram. In addition, Fig. 1(e), (f), and (g) show the theoretical simulations of OAM superposition for l = ±100, ±200, and ±300. As the topological charge increases, the diameter of the circular interference pattern increases. So that both the whole circular pattern and the stripes distributed around it can be completely recorded by the CCD (1920 × 1200 pixels, 5.86 µm pixel pitch, 11.34 × 7.13 mm sensor active area), we place a plano-convex lens L3 (f = 100 mm) in front of the CCD. A shorter focal length (e.g., f = 75 mm) is not used in our optical system because it would over-compress the interference annulus and destroy the stripe features inside. The detected intensity patterns are then transmitted to a computer and used to train the ResNeXt CNN model (GPU: NVIDIA RTX 3070; CPU: Intel i7-10700).
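As a minimal sketch of this calibration step (assuming an 8-bit gray range and an ideal linear response; the actual look-up table is produced by the manufacturer's software), the mapping between gray levels and phase, and its inverse used when quantizing a hologram, can be written in Python as:

```python
import numpy as np

# Assumed ideal calibration: 8-bit gray levels map linearly onto [0, 2*pi).
gray_levels = np.arange(256)                       # HDMI gray levels 0..255
phase_response = gray_levels / 256.0 * 2 * np.pi   # phase at each gray level

def phase_to_gray(phi):
    """Quantize a desired hologram phase to the nearest 8-bit gray level."""
    phi = np.mod(phi, 2 * np.pi)                   # wrap phase into [0, 2*pi)
    return np.round(phi / (2 * np.pi) * 256).astype(np.uint8) % 256
```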


Fig. 1. Schematic diagram and experimental setup for recognition of high-order OAM modes. ISO: optical isolator; L1, L2: lenses; M: mirror; HWP: half-wave plate; BS: beam splitter; SLM: spatial light modulator; L3: lens; CCD: charge-coupled device; (a) color look-up table of the SLM; (b) (c) (d): the holograms loaded on the SLM for l = ±100, l = ±200, and l = ±300; (e) (f) (g): the theoretical simulations of OAM superposition for l = ±100, ±200, and ±300, respectively.


2.2 Method of multiplexed OAM

For multiplexed OAM, it is necessary to generate multiple OAM beams. We use a modulated Gaussian beam to generate the optical vortex by carrying a helical wavefront. The light field distribution of the OAM beam in cylindrical coordinates can be written as:

$$E_{+l}(r,\varphi) = A \exp\left(-\frac{r^2}{\omega_0^2}\right)\exp(il\varphi)\tag{1}$$
where $(r,\varphi)$ denotes the cylindrical coordinates, and $A$ and $\omega_0$ are the complex amplitude and the waist of the incident Gaussian beam, respectively. The conjugate wave of this vortex beam can be expressed as:
$$E_{-l}(r,\varphi) = A \exp\left(-\frac{r^2}{\omega_0^2}\right)\exp(-il\varphi)\tag{2}$$

The superposition of higher-order ±l OAM modes can be expressed as:

$$|E_{\pm l}\rangle = \frac{1}{\sqrt{2}}\left(|E_{+l}\rangle + |E_{-l}\rangle\right)\tag{3}$$
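To make the resulting pattern concrete, the following Python sketch simulates Eqs. (1)-(3): the superposed intensity is proportional to cos²(lφ), producing 2|l| bright stripes around an annulus. The LG-like radial factor (r/ω₀)^|l| is an illustrative assumption (the experimental beams are SLM-modulated Gaussians), and a modest l is used to keep the plot legible.

```python
import numpy as np
import matplotlib.pyplot as plt

# Superposition of a vortex beam with its conjugate, Eqs. (1)-(3):
# |exp(i*l*phi) + exp(-i*l*phi)|^2 = 4*cos^2(l*phi) -> 2|l| bright petals.
l, w0, N = 20, 1.0, 1024
x = np.linspace(-3, 3, N)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
amp = (r / w0) ** abs(l) * np.exp(-r**2 / w0**2)  # assumed ring-shaped envelope
intensity = (2 * amp * np.cos(l * phi)) ** 2      # 2|l| = 40 stripes on the ring
plt.imshow(intensity, cmap="gray")
plt.axis("off")
plt.show()
```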

2.3 Framework of residual convolutional networks

In this work, our deep learning is based on a CNN with a highly modularized network architecture [30] (ResNeXt-50), chosen for its powerful feature acquisition capability. As a variant of the original residual network (ResNet) [31], ResNeXt effectively deepens and widens the network by establishing parallel connections and a grouped convolution mechanism [32] between the input and residual signals, in each convolutional block and each unit block, to improve the classification accuracy. In Fig. 2, the input image is a 224 × 224 patch randomly cropped from a resized image using scale and aspect ratio augmentation. The ResNeXt-50 model consists of Conv 1, a max-pooling layer, Conv 2, Conv 3, Conv 4, Conv 5, an average pooling layer, a fully connected layer, and a softmax, in turn. Except for Conv 1, which is a basic convolutional layer, Conv 2, Conv 3, Conv 4, and Conv 5 are each formed by a stack of residual blocks with the same topology, displayed separately in Fig. 2(a)-(d). Taking Fig. 2(a) as an example, inside the orange dotted box is the shape of a residual block; “Grouped = 32” indicates grouped convolutions with 32 groups. These blocks with grouped convolutional layers form a wider but more sparsely connected module than the original bottleneck residual block in ResNet. In the training process, we used the Adam method for optimization [33]. As an important evaluation criterion of the training result, we chose the categorical cross-entropy as the loss function, which can be expressed as:

$$\mathrm{Loss} = -\sum_{i=1}^{m} y_i \cdot \log \hat{y}_i\tag{4}$$
where $m$ is the output size, $\hat{y}_i$ is the predicted output, and $y_i$ is the ideal output.
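A minimal PyTorch sketch of one such grouped-convolution residual block, together with the cross-entropy loss of Eq. (4) and the Adam optimizer, is shown below. The channel widths follow the standard ResNeXt-50 Conv 2 stage, and the classifier head is a placeholder for a 10-class task; neither is the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """One bottleneck block as in Fig. 2(a): 1x1 reduce, 3x3 grouped conv
    (32 groups), 1x1 expand, plus an identity shortcut."""
    def __init__(self, channels=256, width=128, groups=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 1, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1, groups=groups, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))  # residual (shortcut) connection

# Placeholder head for a 10-class OAM set (e.g., l = +/-600 ... +/-690).
model = nn.Sequential(ResNeXtBlock(), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(256, 10))
criterion = nn.CrossEntropyLoss()   # categorical cross-entropy of Eq. (4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```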


Fig. 2. Image recognition of high-order OAM modes based on the improved residual convolutional network (ResNeXt 50). (a) building blocks in Conv2 of ResNeXt 50; (b) building blocks in Conv3 of ResNeXt 50; (c) building blocks in Conv4 of ResNeXt 50; (d) building blocks in Conv5 of ResNeXt 50. A layer is denoted as (filter size, output channels).


3. Results and discussion

3.1 Recognition of high-order integer OAM

To determine the probing limit of high-order OAM using deep learning, we train separate models for various ranges of l: the 100s (l = ±100 to ±190), 200s (l = ±200 to ±290), 300s (l = ±300 to ±390), 400s (l = ±400 to ±490), 500s (l = ±500 to ±590), and 600s (l = ±600 to ±690), with 10 modes in each interval. We then train and test each model on 7000 images, divided randomly into a training set, a validation set, and a test set in a 6:2:2 ratio.
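Such a split can be reproduced in a few lines of PyTorch; the dataset directory and preprocessing below are hypothetical placeholders, with the 224 × 224 random crop matching Section 2.3.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Hypothetical loading of the 7000 captured pattern images (700 per l value).
tfm = transforms.Compose([transforms.RandomResizedCrop(224),
                          transforms.ToTensor()])
data = datasets.ImageFolder("oam_patterns/", transform=tfm)  # assumed layout

n = len(data)
n_train, n_val = int(0.6 * n), int(0.2 * n)       # 6:2:2 split
train_set, val_set, test_set = random_split(
    data, [n_train, n_val, n - n_train - n_val],
    generator=torch.Generator().manual_seed(0))   # reproducible shuffling
```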

Figure 3 shows the detected light distribution for high-order OAM modes in the ±200s: Fig. 3(a) l = ±200 and (b) l = ±290, as captured by the CCD camera. The circular interference pattern is visible in the background of both Fig. 3(a) and (b), with local details, the stripes uniformly distributed along the circle, shown within the white frame at the center. Although there may be slight differences in the shape of adjacent stripes due to imperfect mode creation at the SLM and unmodulated photons, these imperfections do not impact the machine learning training process, as the stripes remain clear with stable identifying features.


Fig. 3. Image recognition of high-order OAM modes of ±200s. (a) the whole circle image of the light intensity distribution of l = ±200, and the local details in the middle; (b) the whole circle image of the light intensity distribution of l = ±290, and the local details in the middle; (c) accuracy and loss function curves of recognition of l = ±200s; (d) confusion matrix for high-order OAM mode recognition from ±200 to ±290 with an accuracy of 99.64%.


Performance evaluation of the superposition of l = ±200s is shown in Fig. 3(c) for the accuracy and loss function curves, and in Fig. 3(d) for the confusion matrix. Both the training and validation accuracies increase quickly in the first 5 epochs and stabilize at around 10 epochs, reaching almost 100%. The training and validation curves are closely aligned, indicating a good fit. The loss function curves also converge after 10 epochs. Finally, the confusion matrix in Fig. 3(d) demonstrates a test accuracy of 99.64%, with the OAM modes detected by ResNeXt on the vertical axis and the sent OAM modes on the horizontal axis.

Figure 4 depicts the recognition of high-order OAM modes of l = ±400s. Details of the dense stripes for l = ±400 and l = ±490 are visible in Fig. 4(a) and (b). As the topological charge increases, the circular pattern displays a growing number of stripes: 800 in Fig. 4(a) and 980 in Fig. 4(b). However, the size of the white viewfinder frame remains the same despite the increase in the radius of the circle. Figure 4(b) shows a denser distribution of stripes, with individual stripes becoming thinner and the light intensity significantly reduced compared to Fig. 3(a), (b) and Fig. 4(a). This complexity increases the difficulty of model training. The performance evaluation of the superposition of l = ±400s is presented in Fig. 4(c) through accuracy and loss function curves. Both training and validation accuracies stabilize within 20 epochs, while the loss curves converge after 15 epochs of training with no significant difference between the final losses. Although both results are good, the validation curves in Fig. 4(c) show more fluctuation before reaching stationarity. Figure 4(d) shows the confusion matrix, with a test accuracy of 99.86%. Two images of l = ±440 in the test set are incorrectly recognized as the adjacent class l = ±450.


Fig. 4. Image recognition of high-order OAM modes of l = ±400s. (a) the whole circle image of the light intensity distribution of l = ±400, and the local details in the middle; (b) the whole circle image of the light intensity distribution of l = ±490, and the local details in the middle; (c) accuracy and loss function curves of recognition of l = ±400s; (d) confusion matrix for high-order OAM mode recognition from ±400 to ±490 with an accuracy of 99.86%.


The results displayed in Fig. 5 show the successful recognition of high-order OAM modes of l = ±600s. It is apparent from Fig. 5(a) and (b) that the CCD camera captures the oversized circle with pinstripes for l = ±600 and l = ±690 in their entirety, thanks to the ultra-high-resolution SLM and CCD camera. The details of the stripes, shown in the white box at the center of Fig. 5(a) and (b), remain clear even though the stripes are tightly packed. The accuracy and loss function curves in Fig. 5(c) demonstrate that the model was well trained to extract the features in the images of each OAM. The training and validation accuracy curves reach nearly 100% and stabilize at around 25 epochs. The training and validation loss curves converge and approach 0 after 20 epochs. However, the validation curves in Fig. 5(c) show larger fluctuations and larger validation loss values in the first several epochs compared to Fig. 3(c) and Fig. 4(c), indicating that recognizing the 600s is a tougher task. The confusion matrix in Fig. 5(d) displays only one misrecognition, at l = ±640, giving a test accuracy as high as 99.93%.


Fig. 5. Image recognition of high-order OAM modes of l = ±600s. (a) the whole circle image of the light intensity distribution of l = ±600, and the local details in the middle; (b) the whole circle image of the light intensity distribution of l = ±690, and the local details in the middle; (c) accuracy and loss function curves of recognition of l = ±600s; (d) confusion matrix for high-order OAM mode recognition from ±600 to ±690 with an accuracy of 99.93%.


The dependence of the training performance on the different sets of OAM modes is shown in Fig. 6(a) for training accuracy and Fig. 6(b) for training loss. In Fig. 6(a), for each set of OAM modes, the epoch at which the accuracy curve flattens out is recorded together with the training accuracy at that point; likewise, Fig. 6(b) records the epoch of convergence and the corresponding training loss. The dark blue dots, projected on the x-z plane, show the training accuracy of the different OAM sets. All accuracies exceed 99.6% and change little as the topological charge increases, fluctuating only slightly and randomly. In Fig. 6(b), the dark blue dots on the x-z plane show the training loss of the different OAM sets, all smaller than 0.02. The small loss values reflect the minimal deviation between the predicted and actual results and the robustness of the model. The light blue dots on the x-y plane show the number of epochs required for the training accuracy to stabilize in Fig. 6(a) and for the training loss to reach its minimum and converge in Fig. 6(b). As l increases, the number of epochs required to train the model increases for both accuracy and loss, meaning that sets with larger OAM modes converge more slowly, that is, they require longer training. This is most likely because the higher the OAM mode, the denser the stripes on the ring, the harder it is to extract image features, and the longer it takes to train the model. Combined with the different oscillation levels of the validation curves in Fig. 3(c), Fig. 4(c), and Fig. 5(c), it can be concluded that as l increases, the pattern becomes more complex and more epochs are needed to train a model.

The test accuracy of the different sets of high-order OAM modes is shown in Fig. 7. The test accuracy of all sets is excellent; even the superhigh-order OAM modes of l = ±600 to ±690 are recognized with an accuracy of 99.93%. We have also trained a single model to recognize the wide range of l from ±100 to ±690. Maintaining the original amount of data, 700 images per l value, a total of 42,000 images is fed into the model, and this single-model OAM recognition still achieves a good test accuracy of 99.91%. All these results indicate that our deep learning network is reliable and can successfully recognize superhigh-order OAM modes.


Fig. 6. The dependence of the training performance on the different sets of OAM modes, from the 100s to the 600s. (a) the epoch at which the training accuracy curve flattens out, together with the training accuracy at that point; (b) the epoch at which the training loss curve begins to converge, together with the training loss at that point.



Fig. 7. Test accuracy of different sets of high-order OAM modes.


Regarding the identification results reported here, with a maximum OAM mode of 690 set by the limited sensor active area of the CCD, the ResNeXt network performs well in OAM identification. It is important to note, however, that simply increasing the sensor active area of the CCD would not raise the recognizable OAM limit indefinitely, because the CCD resolution ultimately constrains the identification. In our optical system, we explored the relationship between the inner diameter ρ of the interference ring of the multiplexed vortex beams and the OAM value l, as illustrated in Fig. 8(a) with black dots. The fitting result, represented by the red line, can be expressed as:

$$\rho = 1221.37 + 16.858\,l - 0.03\,l^2 + (4\times 10^{-5})\,l^3 - (2\times 10^{-8})\,l^4\tag{5}$$

From the schematic diagram in Fig. 8(b), the spacing d between adjacent interference stripes can be written as:

$$d = \rho \sin\left(\frac{\pi}{2l}\right)\tag{6}$$

Discerning interference stripes becomes unfeasible when the separation d between two adjacent stripes is less than the CCD pixel pitch of 5.86 µm (the resolution of the CCD). This makes counting them impossible for both human observation and machine learning techniques. Substituting Eq. (5) into Eq. (6), we can determine the maximum measurable OAM value in this optical system; the resulting limit is 1300. To successfully recognize OAM modes in the 1300s, the model must undergo approximately 58 training epochs.
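This limit can be reproduced numerically by scanning l, as in the Python sketch below. The units of ρ are assumed to be micrometers, and with the rounded fit coefficients of Eq. (5) the scan lands near, though not exactly at, the reported value of 1300.

```python
import numpy as np

def rho(l):
    """Inner diameter of the interference ring, Eq. (5) (assumed in um)."""
    return (1221.37 + 16.858 * l - 0.03 * l**2
            + 4e-5 * l**3 - 2e-8 * l**4)

def stripe_spacing(l):
    """Spacing between adjacent stripes, Eq. (6)."""
    return rho(l) * np.sin(np.pi / (2 * l))

pixel_pitch = 5.86                        # CCD pixel pitch in um
ls = np.arange(100, 2000)
l_max = ls[stripe_spacing(ls) >= pixel_pitch].max()
print(f"maximum measurable |l| with this fit: {l_max}")
```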


Fig. 8. (a) Inner diameter ρ of the interference ring of the OAM beam plotted versus topological charge l. The black dots are the experimental results and the red line is the fitting curve. (b) Schematic diagram for calculating the distance between adjacent stripes on the interference ring.


3.2 Recognition of high-order fractional OAM with 0.1 resolution

In addition, we further study the identification of fractional high-order OAM modes. We generate fractional vortex modes, each interfering with its conjugate, at 0.1 resolution from l = ±600.0 to l = ±600.9. Even in the case of superimposed fractional OAMs, the interference light distribution exhibits distinct structures. For fractional OAM model training, we again collected 7000 images with whole interference rings detected, as shown in Fig. 9(c), and divided them into a training set, validation set, and test set in a 6:2:2 ratio. To provide more intricate detail, we have enlarged the upper left corner of the image within the white frame of Fig. 9(c), as the radius of the OAM interference ring is too large to observe otherwise. Figure 10 displays local details at a resolution of 0.1. The first and third rows show the experimental optical intensity distribution, while the second and fourth rows show the simulated normalized optical intensity distribution. At l = ±600.0 in Fig. 10, uniform stripes of equal thickness and length are distributed along the interference ring. As l increases in steps of 0.1, the stripe in the middle of the image gradually becomes shorter and darker, which is most evident at l = ±600.4. From l = ±600.5 to ±600.9, the middle stripes grow longer and brighter again. Finally, at l = ±601.0, all the stripes are once more uniform in thickness and length along the interference ring. The simulated results agree with the experiment.
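The origin of this evolving middle stripe can be seen from the azimuthal profile alone: for fractional l, the superposed intensity cos²(lφ) is no longer 2π-periodic, so the stripes near the phase seam at φ = ±π fall out of step with their neighbors. A short illustrative Python sketch (using a small fractional charge for visibility, not the experimental l ≈ 600) follows:

```python
import numpy as np
import matplotlib.pyplot as plt

# Azimuthal intensity of the conjugate superposition: I(phi) ~ cos^2(l*phi).
# For fractional l the profile is not 2*pi-periodic, so the stripes near
# the seam at phi = +/-pi mismatch, mimicking the distorted middle stripe.
phi = np.linspace(-np.pi, np.pi, 4000)
for l in (20.0, 20.4):                     # integer vs fractional charge
    plt.plot(phi, np.cos(l * phi) ** 2, label=f"l = {l}")
plt.xlabel("azimuthal angle phi (rad)")
plt.ylabel("normalized intensity")
plt.legend()
plt.show()
```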


Fig. 9. Image recognition of high-order fractional OAM modes from ±600.0 to ±600.9 at 0.1 resolution. (a) Accuracy and loss function curves; (b) the confusion matrix with an accuracy of 97.86%; (c) the whole circle image of the light intensity distribution of l=±600.5, and the enlarged local details of the upper left corner shown in the middle.


In Fig. 9(a), the blue solid and dashed lines are the training and validation accuracy curves, respectively, while the yellow solid and dashed lines are the training and validation loss curves. Notably, both accuracy curves stabilize at around 30 epochs, while both loss curves converge at around 20 epochs, indicating that our model can effectively perform superhigh-order OAM recognition in fractional form. Additionally, the confusion matrix in Fig. 9(b) demonstrates the distinguishability of OAM mode superpositions ranging from l = ±600.0 to l = ±600.9, with an accuracy of 97.86%, as expected. However, compared with the integer OAM recognition from l = ±600 to l = ±690, the subtler feature distinctions in fractional OAM superpositions do increase the recognition difficulty, as indicated by the lower accuracy and by the fact that misrecognized l values in the confusion matrix are no longer confined to adjacent values. For instance, two images of l = ±600.5 are misrecognized as l = ±600.3. Nevertheless, the model can still accurately recognize high-order fractional OAM modes.


Fig. 10. Enlarged upper-left parts of the interference patterns of fractional vortex modes with their conjugates at 0.1 resolution. EXP: experimental optical intensity distribution; THE: theoretical optical intensity distribution.


4. Conclusions

In this work, we have studied the probing limit of high-order OAM recognition, focusing on the two primary constraints: generation and detection. A high-resolution, calibrated SLM with a pixel pitch of 3.74 µm is employed for generation, effectively modulating the Gaussian beam with the loaded phase information. For probing and recognition, a high-resolution CCD camera, augmented by a plano-convex lens, is used to observe the complete interference annulus and record the internal stripes. Despite the limited sensor active area of the CCD, our optical system achieves recognition of OAM modes up to 690 with an accuracy exceeding 99.93% through the application of the ResNeXt model. The network performance of the ResNeXt model imposes no limitation on OAM identification, implying that an increase in CCD active area would permit larger recognizable OAM modes. The ultimate limits, however, are determined by the resolution of the SLM and CCD. In our system, the pixel pitch of the CCD, being larger than that of the SLM, emerges as the main factor determining the limit; simulation results indicate a recognizable OAM limit of 1300. Moreover, we explore the identification of fractional high-order OAM modes with 0.1 resolution from l = ±600.0 to l = ±600.9, achieving a remarkable accuracy of 97.86%. These promising results broaden the potential applications for larger-capacity, higher-security information processing and communication [8].

Funding

National Natural Science Foundation of China (12174115, 91836103, 11834003).

Disclosures

The authors declare no conflicts of interest.

Data availability

The data and code that support the findings of this study are available from the corresponding authors on reasonable request.

References

1. L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, et al., “Orbital angular momentum of light and the transformation of Laguerre-Gaussian laser modes,” Phys. Rev. A 45(11), 8185–8189 (1992). [CrossRef]

2. J. P. Yin, W. J. Gao, and Y. F. Zhu, “Generation of dark hollow beams and their applications,” Prog. Opt. 45(11), 119–204 (2003). [CrossRef]  

3. A. M. Yao and M. J. Padgett, “Orbital angular momentum: origins, behavior and applications,” Adv. Opt. Photonics 3(2), 161–204 (2011). [CrossRef]  

4. A. Forbes, A. Dudley, and M. McLaren, “Creation and detection of optical modes with spatial light modulators,” Adv. Opt. Photonics 8(2), 200–227 (2016). [CrossRef]  

5. Y. J. Shen, X. J. Wang, Z. W. Xie, et al., “Optical vortices 30 years on: OAM manipulation from topological charge to multiple singularities,” Light: Sci. Appl. 8(1), 90 (2019). [CrossRef]  

6. A. E. Willner, H. Huang, Y. Yan, et al., “Optical communications using orbital angular momentum beams,” Adv. Opt. Photonics 7(1), 66–106 (2015). [CrossRef]  

7. H. Rubinsztein-Dunlop, A. Forbes, M. V. Berry, et al., “Roadmap on structured light,” J. Opt. 19(1), 013001 (2017). [CrossRef]  

8. A. E. Willner, Z. Zhao, C. Liu, et al., “Perspectives on advances in high-capacity, free-space communications using multiplexing of orbital-angular-momentum beams,” APL Photonics 6(3), 030901 (2021). [CrossRef]  

9. S. Lohani, E. M. Knutson, and R. T. Glasser, “Generative machine learning for robust free-space communication,” Commun. Phys. 3(1), 177 (2020). [CrossRef]  

10. J. Wang, J. Y. Yang, I. M. Fazal, et al., “Terabit free-space data transmission employing orbital angular momentum multiplexing,” Nat. Photonics 6(7), 488–496 (2012). [CrossRef]  

11. N. Bozinovic, Y. Yue, Y. X. Ren, et al., “Terabit-scale orbital angular momentum mode division multiplexing in fibers,” Science 340(6140), 1545–1548 (2013). [CrossRef]  

12. G. Vallone, V. D’Ambrosio, A. Sponselli, et al., “Free-space quantum key distribution by rotation-invariant twisted photons,” Phys. Rev. Lett. 113(6), 060503 (2014). [CrossRef]  

13. M. Krenn, R. Fickler, M. Fink, et al., “Communication with spatially modulated light through turbulent air across Vienna,” New J. Phys. 16(11), 113028 (2014). [CrossRef]  

14. M. Krenn, J. Handsteiner, M. Fink, et al., “Twisted light transmission over 143 km,” Proc. Natl. Acad. Sci. 113(48), 13648–13653 (2016). [CrossRef]  

15. C. He, Y. J. Shen, and A. Forbes, “Towards higher-dimensional structured light,” Light: Sci. Appl. 11(1), 205 (2022). [CrossRef]  

16. Z. S. Wan, H. Wang, Q. Liu, et al., “Ultra-degree-of-freedom structured light for ultracapacity information carriers,” ACS Photonics 10(7), 2149–2164 (2023). [CrossRef]  

17. H. Zhang, J. Zeng, X. Y. Lu, et al., “Review on fractional vortex beam,” Nanophotonics 11(2), 241–273 (2022). [CrossRef]  

18. Z. W. Liu, S. Yan, H. G. Liu, et al., “Superhigh-resolution recognition of optical vortex modes assisted by a deep-learning method,” Phys. Rev. Lett. 123(18), 183902 (2019). [CrossRef]  

19. Y. B. Na and D. K. Ko, “Adaptive demodulation by deep-learning-based identification of fractional orbital angular momentum modes with structural distortion due to atmospheric turbulence,” Sci. Rep. 11(1), 23505 (2021). [CrossRef]

20. J. Zhou, W. H. Zhang, and L. X. Chen, “Experimental detection of high-order or fractional orbital angular momentum of light based on a robust mode converter,” Appl. Phys. Lett. 108(11), 111108 (2016). [CrossRef]  

21. F. J. Li, H. Ding, Z. Meng, et al., “Measuring high-order optical orbital angular momentum with a petal-like zone plate,” IEEE Photonics Technol. Lett. 34(2), 125–128 (2022). [CrossRef]  

22. A. M. Dezfouli, D. Abramović, M. Rakić, et al., “Detection of the orbital angular momentum state of light using sinusoidally shaped phase grating,” Appl. Phys. Lett. 120(19), 191106 (2022). [CrossRef]  

23. C. Y. Yang, R. Liu, W. J. Ni, et al., “High-order OAM states unwrapping in multiplexed optical links,” APL Photonics 8(5), 056110 (2023). [CrossRef]  

24. W. J. Ni, R. Liu, C. Y. Yang, et al., “Annular phase grating-assisted recording of an ultrahigh-order optical orbital angular momentum,” Opt. Express 30(21), 37526–37535 (2022). [CrossRef]  

25. J. Pinnell, V. Rodríguez-Fajardo, and A. Forbes, “Probing the limits of orbital angular momentum generation and detection with spatial light modulators,” J. Opt. 23(1), 015602 (2021). [CrossRef]  

26. B. P. da Silva, B. A. D. Marques, R. B. Rodrigues, et al., “Machine-learning recognition of light orbital-angular-momentum superpositions,” Phys. Rev. A 103(6), 063704 (2021). [CrossRef]  

27. B. L. Li, H. T. Luan, K. Y. Li, et al., “Orbital angular momentum optical communications enhanced by artificial intelligence,” J. Opt. 24(9), 094003 (2022). [CrossRef]  

28. H. Wang, X. L. Yang, Z. Q. Liu, et al., “Deep-learning-based recognition of multi-singularity structured light,” Nanophotonics 11(4), 779–786 (2022). [CrossRef]  

29. H. Wang, Z. Y. Zhan, Y. J. Shen, et al., “Deep-learning-assisted communication capacity enhancement by non-orthogonal state recognition of structured light,” Opt. Express 30(16), 29781–29795 (2022). [CrossRef]  

30. S. N. Xie, R. Girshick, P. Dollár, et al., “Aggregated Residual Transformations for Deep Neural Networks,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 5987–5995 (2017).

31. K. M. He, X. Y. Zhang, S. Q. Ren, et al., “Deep residual learning for image recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 770–778 (2016).

32. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60(6), 84–90 (2017). [CrossRef]  

33. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv, arXiv:1412.6980 (2014). [CrossRef]
