
M2 factor estimation in few-mode fibers based on a shallow neural network


Abstract

A high-accuracy, high-speed, and low-cost M2 factor estimation method for few-mode fibers based on a shallow neural network is presented in this work. Benefiting from a dimensionality reduction technique, which transforms the two-dimensional near-field image into a one-dimensional vector, a neural network with only two hidden layers can estimate the M2 factor directly. In simulations, the mean estimation error is smaller than 3% even when the mode number increases to 10. The estimation time for 10000 simulated test samples is around 0.16 s, which indicates high potential for real-time applications. The experimental results for 50 samples from a 3-mode fiber show a mean estimation error of 0.86%. The strategies involved in this method can be easily extended to other applications related to laser characterization.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Beam quality is one of the core parameters for characterizing fiber lasers and is closely related to the directionality of the laser. Research on beam quality is both fundamental and practical because of its important impact on the application of fiber lasers, especially in laser communication, laser manufacturing, and laser surgery. There are many criteria to quantify laser beam quality: the beam quality factor (or M2 factor), far-field divergence angle, times-diffraction-limited factor, Strehl ratio, power-in-the-bucket, etc., each with its unique advantages and limitations [1–5]. The M2 factor is a universal criterion, defined as the ratio of the beam parameter product (the product of the beam width and the divergence half-angle) of an actual beam to that of an ideal beam, the fundamental-mode Gaussian beam (TEM00), at the same wavelength [3,6]. The M2 factor of the TEM00 mode is 1, and the farther M2 deviates from 1, the worse the laser beam quality.

The International Organization for Standardization (ISO) provides an experimental test method for the M2 factor [7], a typical variable-distance method. The beam intensity in multiple planes along the propagation direction is used to evaluate beam widths and then calculate the M2 factor. Generally, a movable charge-coupled device (CCD) is used for image collection. It is difficult for this scheme to work for fiber lasers with fast dynamics, unquantified instability, or real-time demands. Another important class of methods is the variable-focus method, which scans laser beams using a liquid lens with a controllable focal length or a spatial light modulator (SLM) as a programmable lens [8–10], providing another path to fast estimation of the M2 factor. As an improvement, some works replace the multiple caustic measurements at different propagation distances with one or two measurements plus complex amplitude reconstruction: the complete field of the laser beam is recovered from interferometric or CCD measurements at a single location, and the fields at other positions are then calculated through virtual propagation guided by diffraction theory [11–13]. Using scattered light imaging, one-shot measurement can also reduce the complexity of the experimental operation [14]. Studies show that M2 factor measurement can be achieved with virtual propagation based on modal analysis [15]. Some numerical research, such as Yoda's theory, provides a direct calculation formula from modal coefficients to the M2 factor in a step-index fiber [16]. To obtain modal coefficients in practical applications, mode decomposition (MD), an essential technique for spatial analysis, is needed. MD aims to find the modal coefficient of each mode, which is, to some extent, equivalent to reconstructing the laser field on a group of given bases (the eigenmodes in the fiber) [17–20]. MD has been applied to a wide range of areas, such as mode property characterization [21,22], fiber laser beam characterization [23–26], and the measurement of mode-related processes [27–29]. Successful examples of real-time M2 estimation with MD for Nd:YAG lasers and fiber lasers can be found in Refs. [30,31].

Recently, deep learning, a hot topic in machine learning, has shown its power in mining the mappings hidden in data based on multiple simple but nonlinear transformation modules that can represent data at a higher abstraction level [32]. In fiber laser research, deep learning as an emerging technology has brought about breakthroughs in ultrafast nonlinear dynamics prediction [33–35], mode-locked laser control [36,37], coherent beam combination [38–45], and so on [46]. In fiber laser beam spatial analysis and characterization, An et al. introduced deep learning schemes for MD [47–49] and M2 factor estimation [50]. In Ref. [50], a deep convolutional neural network (CNN), modified from the mature VGG-16 architecture [51], is trained to estimate the M2 factor from the near-field image in a few-mode fiber for the first time.

High-accuracy and high-speed methods to implement M2 factor estimation for few-mode fibers are still sorely needed. The performance of machine learning methods depends heavily on the applied data representation (or features), since different data representations expose different explanatory factors behind the data [52]. Dimensionality reduction helps to remove the redundant information of initial features that are highly correlated, making it easier and faster for machine learning algorithms to analyze and process the data. Principal component analysis (PCA) is an unsupervised dimensionality reduction (DR) technique based on linear feature transformations: it projects high-dimensional initial features into fewer dimensions called principal components, which are linear combinations (mixtures) of the initial features. PCA simplifies the complexity of raw data while retaining trends and patterns as much as possible [53]. The principal components, as new features and a good data representation, can serve as the input of the classifier or predictor in the next stage to improve the machine learning result.

This paper presents an M2 factor estimation method for few-mode fibers based on a shallow neural network. We first apply PCA to transform the two-dimensional near-field image into a one-dimensional principal component vector. Then a supervised predictor, a neural network with two hidden layers, trained with the principal components of the near-field image and the corresponding M2 factor, can estimate the M2 factor for few-mode fibers. Compared with the CNN used in Ref. [50], the complexity of the neural network for M2 factor estimation is reduced dramatically, leading to a shorter training time, lower requirements for high-performance training hardware, and a dramatic improvement in estimation speed. Simulation and experimental results demonstrate the feasibility and effectiveness of the proposed scheme. The related technical concepts and strategies can easily be extended to other fiber laser applications.

2. Method

Our method aims to estimate the M2 factor for few-mode fibers solely from a single near-field image using a shallow neural network called M2-Net. The scheme of our method is illustrated in Fig. 1, which includes data generation, M2-Net training, and M2 factor estimation. In the data generation stage, we produce plenty of samples for M2-Net training and testing. A collection of samples is called a dataset, D = {(xi, yi) | i = 1, 2, …, M}, where xi is a sample feature vector, yi is the sample label, and M is the size of the dataset. The network is expected to learn the mapping from feature vector to label during training on a training dataset. In our work, the feature vector is the DR result of a near-field image, and its label is the corresponding M2 factor. Specifically, the M2 factors along the two orthogonal directions are marked as $\textrm{M}_x^2$ and $\textrm{M}_y^2$. When the near-field image resolution is n×n and the size of the vector after DR is m, our method consists of two mappings, ${\mathrm{{\cal F}}_{\textrm{DR}}}: {{\mathbb R}^{n \times n}} \to {{\mathbb R}^m}$ and ${\mathrm{{\cal F}}_{{\textrm{M}^2}\textrm{-Net}}}: {{\mathbb R}^m} \to {{\mathbb R}^2}$. The generalization ability of the trained M2-Net is tested on a test dataset, which is independent of the training dataset and unknown to the trained network before the test. In the M2 factor estimation stage, the DR results are used as input to the M2-Net, and the output is the estimated M2 factor. The accuracy of M2-Net can be described by the difference between its output and the test label.
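For concreteness, the two mappings amount to a simple composition at estimation time. The following minimal Python sketch (not the authors' code; the names `pca` and `net` are hypothetical placeholders for an already-fitted dimensionality reducer and trained shallow network) illustrates how an estimate is obtained from one near-field image:

```python
import numpy as np

def estimate_m2(image, pca, net):
    """Compose F_DR and F_M2-Net: image (n x n) -> (Mx^2, My^2).

    `pca` and `net` stand in for a fitted dimensionality reducer and a
    trained shallow network (placeholders, see the sketches below).
    """
    x = pca.transform(image.reshape(1, -1))   # F_DR: R^(n*n) -> R^m
    mx2, my2 = net.predict(x)[0]              # F_M2-Net: R^m -> R^2
    return mx2, my2
```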

Fig. 1. Scheme of M2 factor estimation based on a shallow neural network.

We now provide more details about the data generation. In this work, we consider a step-index fiber as an example. Assuming the number of supported modes of the fiber is N, the propagating optical field can be mathematically represented as a superposition of modes with complex modal coefficients, ${\mathbf E} = \sum\nolimits_{i = 1}^N {{c_i}{\psi _i}}$, where ψi is the electric field of the ith eigenmode in the fiber, which satisfies the orthogonality relation, and the modal coefficient ci is a complex value, normalized according to $\sum\nolimits_{i = 1}^N {|{c_i}{|^2}} = 1$. With known fiber parameters and working laser wavelength, ψi can be calculated [5]. The near-field image can be represented as:

$$I = |{\mathbf E}|^2 = \left| \sum\nolimits_{i = 1}^N {c_i}{\psi_i} \right|^2. $$
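As a concrete illustration, the following sketch generates one synthetic sample of Eq. (1) under stated assumptions: the eigenmode fields ψi are assumed to be precomputed and stacked in an array `modes` (here filled with random values purely as a stand-in for a real fiber eigenmode solver):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 3, 128                      # number of modes, image resolution

# Stand-in for precomputed eigenmode fields psi_i, shape (N, n, n); in
# practice these come from solving the fiber eigenvalue problem [5].
modes = rng.standard_normal((N, n, n))

# Random complex modal coefficients, normalized so that sum |c_i|^2 = 1.
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)
c /= np.linalg.norm(c)

E = np.tensordot(c, modes, axes=1)  # E = sum_i c_i * psi_i, shape (n, n)
I = np.abs(E) ** 2                  # near-field intensity image, Eq. (1)
```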

According to Yoda's theory [16], the M2 factors along the two orthogonal directions, $\textrm{M}_x^2$ and $\textrm{M}_y^2$, can be calculated as

$$M_k^2 = \sqrt{4 B_k \sigma_k^2(z_0) + A_k^2}\,, \quad k = x, y, $$
and the effective M2 factor, $M_{eff}^2$, is calculated by
$$M_{eff}^2 = \sqrt{M_x^2 \times M_y^2}, $$
where
$$\sigma_k^2(z_0) = \iint \left(k - \langle k \rangle(z_0)\right)^2 \left|{\mathbf E}(x,y,z_0)\right|^2 dx\,dy, $$
$$A_k = \iint \left(k - \langle k \rangle(z_0)\right)\left({\mathbf E}(x,y,z_0) \times \frac{\partial {\mathbf E}^{\ast}}{\partial k}(x,y,z_0) - c.c.\right) dx\,dy, $$
$$B_k = \iint \left|\frac{\partial {\mathbf E}^{\ast}}{\partial k}(x,y,z_0)\right|^2 dx\,dy + \frac{1}{4}\iint \left({\mathbf E}(x,y,z_0) \times \frac{\partial {\mathbf E}^{\ast}}{\partial k}(x,y,z_0) - c.c.\right) dx\,dy, $$
$\langle k \rangle$ is the centroid coordinate along the k axis, ${\mathbf E}^{\ast}$ is the complex conjugate field of ${\mathbf E}(x,y,z_0)$, and c.c. denotes the complex conjugate of the preceding term.
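As a minimal numerical sketch (not the authors' code), the centroid and second-moment term of Eq. (4) can be evaluated directly on a sampled grid; the $A_k$ and $B_k$ integrals of Eqs. (5) and (6) follow the same quadrature pattern, with $\partial {\mathbf E}^{\ast}/\partial k$ approximated by, e.g., numpy.gradient:

```python
import numpy as np

def second_moment_x(E, x, y):
    """Centroid <x>(z0) and second moment sigma_x^2(z0) of a sampled field.

    E : complex field on a grid, shape (len(y), len(x))
    x, y : 1D coordinate axes. The total power is divided out, so prior
    normalization of E is not required.
    """
    X, _ = np.meshgrid(x, y)
    I = np.abs(E) ** 2
    dx, dy = x[1] - x[0], y[1] - y[0]
    power = I.sum() * dx * dy
    x_c = (X * I).sum() * dx * dy / power              # <x>(z0)
    sigma2 = ((X - x_c) ** 2 * I).sum() * dx * dy / power
    return x_c, sigma2

# The derivative terms in Eqs. (5)-(6) can be built analogously, e.g.:
# dE_dx = np.gradient(E, x, axis=1)
```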

The training dataset can be generated from simulations, and the main steps are listed as follows:

  • 1) Calculating eigenmodes according to the fiber and laser setup;
  • 2) Generating a large number of random modal coefficients and then calculating near-field images through the superposition of eigenmodes and modal coefficients;
  • 3) Applying PCA to the near-field images and using the principal components as the sample features;
  • 4) Calculating the M2 factors along the two orthogonal directions according to Yoda's theory as the sample labels.

It should be noted that the point of dimensionality reduction is to trade a little accuracy for simplicity while preserving as much information as possible. Principal components are ranked by their percentage of variance (POV), i.e., the fraction of the total variance that each principal component explains. In this work, the sample feature size, m, is the number of retained principal components, chosen as those with the largest cumulative POV.
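A minimal sketch of this selection using scikit-learn (an assumption: the paper does not name its PCA implementation, and the 99% threshold below is the one reported in Section 3.1) could look as follows:

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder training images, shape (M, n, n), flattened to vectors.
images = np.random.rand(200, 128, 128)
X = images.reshape(len(images), -1)              # (M, n*n)

pca = PCA().fit(X)
pov = pca.explained_variance_ratio_              # POV of each component

# Keep the smallest m whose cumulative POV exceeds 99%.
m = int(np.searchsorted(np.cumsum(pov), 0.99)) + 1
features = PCA(n_components=m).fit_transform(X)  # F_DR output, shape (M, m)
```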

The test dataset for the M2 factor estimation test can also come from simulation, like the training dataset. In the experimental test, the near-field image and its label can be collected from the CCD and from a commercial device that adopts the ISO method, respectively.

3. Results and analysis

The M2-Net is designed to have two hidden layers with hi (i = 1, 2) nodes, each followed by a tan-sigmoid activation function, and the output layer has a linear activation function. All networks presented in this work are trained and tested on a computer with an Intel Core i9-10900KF central processing unit (CPU). We use the mean squared error (MSE) as the loss function for network training and Bayesian regularization backpropagation [54] as the training algorithm. The accuracy of the trained networks is evaluated by the estimation error of the test samples,

$$e = |{\textrm{M}_{eff,p}^2 - \textrm{M}_{eff,l}^2} |/\textrm{M}_{eff,l}^2, $$
where the subscript ‘p’ denotes the estimated value and ‘l’ denotes the label value (ground truth).
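For readers who want to reproduce the idea, here is a hedged sketch in Python using scikit-learn's MLPRegressor. Note two stand-ins relative to the paper: `activation='tanh'` matches the tan-sigmoid, but sklearn's L2 penalty (`alpha`) replaces Bayesian regularization backpropagation [54], and the data below are random placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training data: PCA feature vectors (M, m) and labels (M, 2).
rng = np.random.default_rng(1)
features = rng.random((1000, 6))
labels = 1.0 + 2.0 * rng.random((1000, 2))        # stand-ins for (Mx^2, My^2)

# 6-30-30-2 structure: two hidden layers of 30 tanh units (tan-sigmoid)
# and a linear output layer.
net = MLPRegressor(hidden_layer_sizes=(30, 30), activation='tanh',
                   alpha=1e-4, max_iter=2000).fit(features, labels)

pred = net.predict(features[:5])                  # estimated (Mx^2, My^2)
m2_eff_p = np.sqrt(pred[:, 0] * pred[:, 1])       # effective M^2, Eq. (3)
m2_eff_l = np.sqrt(labels[:5, 0] * labels[:5, 1])
e = np.abs(m2_eff_p - m2_eff_l) / m2_eff_l        # estimation error, Eq. (7)
```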

3.1 Simulation-based analysis

In the simulations, a typical large-mode-area fiber (a core diameter of 25 µm and an NA of 0.08) works at a wavelength of 1064 nm and can support up to 10 eigenmodes, arranged in the order LP01, LP11e, LP11o, LP21e, LP21o, LP02, LP31e, LP31o, LP12e, and LP12o. We consider 5 possible mode-combination cases propagating in this setup: the first 3, 5, 6, 8, and 10 modes, respectively. The resolution of the near-field image is set to 128×128 (n = 128). In the 3-mode case, after ranking the POV explained by the principal components of the near-field images, it is found that the top six components explain over 99% of the total variance. Therefore, we select a sample feature size of six (m = 6), which means the near-field image with resolution 128×128 is transformed into a 1×6 one-dimensional vector.

The M2-Net structure is designed as 6-30-30-2, where h1 = h2 = 30. The loss evolution during training is shown in Fig. 2. N1 and N2 denote the networks trained with 22500 and 45000 samples, respectively. As the training dataset size increases, the MSE after 100 training epochs decreases from ∼10−7 (N1) to ∼10−8 (N2). However, the training time becomes longer: 71.82 s for N1 and 136.18 s for N2.

Fig. 2. In the 3-mode case, the evolution of the loss over training epochs for different numbers of training samples.

We use the same 10000 test samples to test the trained N1 and N2. The estimation time of N1 and N2 is around 0.16 s for 10000 samples, i.e., around 0.016 ms per sample, demonstrating a dramatic estimation speed. Figure 3 shows the estimation error distribution of N2, where the mean estimation error is 0.0055% and the maximum value is 0.0559%. The samples with estimation errors less than 0.005% and 0.01% account for 56.02% and 86.26% of the total, respectively. In more detail, the 10000 test samples are marked as S1, S2, …, and S10000 in descending order of estimation error, and Table 1 gathers the results of six samples: S1, S2000, S4000, S6000, S8000, and S10000.

Fig. 3. The estimation error distribution of 10000 test samples.

Table 1. Six test examples of the 3-mode case.

To analyze the performance scaling of the proposed scheme, we apply it to the 5-mode, 6-mode, 8-mode, and 10-mode cases. As in the 3-mode case, m is determined by the number of components that explain more than 99% of the total variance in the PCA, giving m = 12, 15, 20, and 26 for the 5-mode, 6-mode, 8-mode, and 10-mode cases, respectively. For each case, we also choose h1 = h2 = 30, and the corresponding network structures are 12-30-30-2, 15-30-30-2, 20-30-30-2, and 26-30-30-2. After 200 epochs of training with 45000 samples, we test these networks using 10000 samples. The mean estimation errors are 1.51%, 1.62%, 2.63%, and 2.98% for the four cases, respectively. The mean estimation error grows with the mode number because more modes enlarge the variation range of the M2 factor and increase the learning difficulty. An optimized network structure with more trainable parameters and a higher image resolution may help to improve the estimation accuracy further.

We also investigate the robustness of our method. We take the networks trained above on one-dimensional vectors from clean near-field images and test them with vectors from near-field images corrupted by additive Gaussian noise, calculating the mean estimation error under different noise conditions. Figure 4 shows the relationship between the mean estimation error of the 10000 test samples and the signal-to-noise ratio (SNR) in the 6-mode and 10-mode cases. When the SNR is -15 dB, the mean estimation error is 4.58% and 5.37% for the 6-mode and 10-mode cases, respectively, indicating that the method remains valid.
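As a small sketch of how such a test can be set up (the paper does not give its noise code; the SNR convention below, signal power over noise power in dB, is an assumption):

```python
import numpy as np

def add_gaussian_noise(image, snr_db, rng=np.random.default_rng()):
    """Corrupt a near-field intensity image with Gaussian noise at a given SNR (dB)."""
    signal_power = np.mean(image ** 2)
    noise_power = signal_power / 10 ** (snr_db / 10)
    return image + rng.normal(0.0, np.sqrt(noise_power), image.shape)

# Example: -15 dB SNR, i.e., noise power about 30x the signal power.
# noisy_image = add_gaussian_noise(I, snr_db=-15)
```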

Fig. 4. M2 factor estimation test under noise conditions.

Reference [50] uses a CNN to achieve high-accuracy M2 factor estimation from the near-field image. The CNN model with multiple convolutional modules can represent data at a higher abstraction level, which helps discover the knowledge hidden in the data. In detail, the convolutional blocks in the front part of the network effectively extract features from the near-field image, and the fully connected layers at the end of the network then map these features to the M2 factor. The CNN model in Ref. [50] has 23,105,218 trainable parameters (occupying 88.14 MB) without counting the batch normalization layers. Training such a deep network requires substantial computation resources, such as a graphics processing unit (GPU), and massive data. After a 2 h training with 500000 samples, its mean estimation error in simulations reaches 0.4%, 1.3%, 1.6%, 1.8%, and 2.0% for the 3-mode, 5-mode, 6-mode, 8-mode, and 10-mode cases, with an estimation time of around 5 ms. The M2-Net in this work features low network complexity and contains only 1202, 1502, 1592, 1742, and 1922 trainable parameters in the 5 cases (occupying ∼1 MB), so it can work on a CPU and be trained with less data. The low network complexity also means a short estimation time: the estimation time per sample of M2-Net improves by two orders of magnitude over the CNN model. In terms of mean estimation error, the M2-Net performs better than the CNN in the 3-mode case and slightly worse in the other cases, but it still achieves high accuracy. This comparison shows that even a network with very low complexity is enough for M2 factor estimation when the DR technique is adopted to obtain a good data representation in advance.

3.2 Experiment-based analysis

As an example, a 3-mode case is considered in the experiment. At a working wavelength of 1080 nm, a few-mode fiber with a core diameter of 20 µm and an NA of 0.06 supports 3 eigenmodes: LP01, LP11e, and LP11o. The experimental setup is shown in Fig. 5. A laser source with a 20 µm pigtail fiber is fusion spliced to the few-mode fiber, and the output passes through a collimator followed by a beam splitter (BS) with a 50:50 splitting ratio. One output of the BS goes into a laser quality monitor (LQM, PRIMES HighPower-LaserQualityMonitor II with an optional fiber adapter), and a CCD camera (Point Grey CM3-U3-28S4M-CS) collects the other beam after a focusing lens. The near-field images from the CCD are sent to a computer where the trained network runs. In the experiment, various near-field images can be obtained by manually adjusting the fusion splice (e.g., introducing offset and tilt) between the laser pigtail fiber and the few-mode fiber.

Fig. 5. Scheme of the experimental setup. LQM, laser quality monitor; CCD, charge-coupled device.

We first train the M2-Net with a structure of 6-30-30-2 on 45000 simulation samples and then test the trained network on the experimental data. To analyze the accuracy of our method in the experiment, we use the M2 factor values measured by the LQM as the ground truth for comparison. The LQM utilizes the ISO method, collecting images at 21 planes along the propagation direction to fit the M2 factor.

The test results of 50 samples using these two methods are shown in Fig. 6. The effective M2 factor measured by the LQM is between 1.156 and 1.384, covering the common situations of few-mode fibers in the experiment. Except for several samples, the effective M2 factor estimated by the trained shallow network has a small estimation error compared with the measurement results of the LQM; the estimation error of 34 samples is smaller than 1%. The maximum estimation error is 5.17%, and the mean value is 0.86%. Two experimental samples have estimation errors greater than 4%, possibly due to experimental noise, beam displacement, and deformation (the beam pattern in simulations is always centered in the near-field image, but the beam collected in experiments may lie near the edge of the image). In our scheme, the estimation time (including beam pattern image pre-processing, such as image resolution adjustment) for the 50 experimental samples is 0.15 s. This time is similar to that for the 10000 samples in the simulation-based analysis above, which means that in the test, the per-sample computation time for 10000 or 50 samples is negligible compared to the network loading time. The measurement time of the LQM is around 2 min each time. Since the calculation of the LQM is not real-time, neither is this comparative test. However, the estimation speed of our scheme indicates a strong potential for real-time applications.
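A hedged sketch of such a test-time pipeline (the resizing library and all function names are illustrative choices, not the authors' code; `pca` and `net` are the models fitted on simulation data as in the sketches above):

```python
import numpy as np
from skimage.transform import resize

def preprocess_and_estimate(ccd_image, pca, net, n=128):
    """Resize a CCD frame to the training resolution, then estimate M^2."""
    img = resize(ccd_image.astype(float), (n, n))  # resolution adjustment
    img /= img.max()                               # simple normalization
    x = pca.transform(img.reshape(1, -1))          # F_DR
    mx2, my2 = net.predict(x)[0]                   # F_M2-Net
    return float(np.sqrt(mx2 * my2))               # effective M^2, Eq. (3)
```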

Fig. 6. Experimental M2 factor estimation error of 50 samples.

4. Conclusion

To achieve high-speed and high-accuracy M2 factor estimation, we have provided a scheme based on a shallow neural network. The tests on simulation samples show its advantages: low computation hardware requirements, a small amount of training data, short training time, fast estimation speed, high estimation accuracy, and strong robustness. The experimental results also prove the superior performance of the M2 factor characterization, which, to the best of our knowledge, is the fastest method reported. Furthermore, the method is expected to work well for real-time applications, especially in cases with unstable beam quality.

Funding

Natural Science Foundation of Hunan Province (2019JJ10005); Hunan Provincial Innovation Construct Project (2019RS3017); Training Program for Excellent Young Innovators of Changsha (KQ2106005); National Natural Science Foundation of China (62061136013, 62075242).

Disclosures

The authors declare no conflict of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Alda, “Laser and Gaussian Beam Propagation and Transformation,” in Encyclopedia of Optical and Photonic Engineering, Second Edition (CRC Press, 2015), pp. 1–15.

2. P. Zhou, Z. Liu, X. Xu, Z. Chen, and X. Wang, “Beam quality factor for coherently combined fiber laser beams,” Opt. Laser Technol. 41(3), 268–271 (2009). [CrossRef]  

3. A. E. Siegman, “Defining, measuring, and optimizing laser beam quality,” in Proc. SPIE, A. Bhowmik, ed. (1993), pp. 2–12.

4. T. S. Ross, Laser Beam Quality Metrics (SPIE Press, 2013).

5. A. W. Snyder and J. D. Love, Optical Waveguide Theory (Springer US, 1984).

6. A. E. Siegman, “New developments in laser resonators,” in Proc. SPIE 1224, Optical Resonators, D. A. Holmes, ed. (1990), p. 2.

7. International Organization for Standardization, “Lasers and laser-related equipment — Test methods for laser beam widths, divergence angles and beam propagation ratios,” ISO 11146, parts 1–3 (2005).

8. R. D. Niederriter, J. T. Gopinath, and M. E. Siemens, “Measurement of the M2 beam propagation factor using a focus-tunable liquid lens,” Appl. Opt. 52(8), 1591–1598 (2013). [CrossRef]  

9. C. Schulze, D. Flamm, M. Duparré, and A. Forbes, “Beam-quality measurements using a spatial light modulator,” Opt. Lett. 37(22), 4687–4689 (2012). [CrossRef]  

10. M. Sheikh and N. A. Riza, “Motion-free hybrid design laser beam propagation analyzer using a digital micromirror device and a variable focus liquid lens,” Appl. Opt. 49(16), D6–D11 (2010). [CrossRef]  

11. S. Pan, J. Ma, R. Zhu, T. Ba, C. Zuo, F. Chen, J. Dou, C. Wei, and W. Zhou, “Real-time complex amplitude reconstruction method for beam quality M2 factor measurement,” Opt. Express 25(17), 20142–20155 (2017). [CrossRef]  

12. Y. Du, Y. Fu, and L. Zheng, “Complex amplitude reconstruction for dynamic beam quality M2 factor measurement with self-referencing interferometer wavefront sensor,” Appl. Opt. 55(36), 10180–10186 (2016). [CrossRef]  

13. Z.-G. Han, L.-Q. Meng, Z.-Q. Huang, H. Shen, L. Chen, and R.-H. Zhu, “Determination of the laser beam quality factor (M2) by stitching quadriwave lateral shearing interferograms with different exposures,” Appl. Opt. 56(27), 7596–7603 (2017). [CrossRef]  

14. K. C. Jorge, R. Riva, N. A. S. Rodrigues, J. M. S. Sakamoto, and M. G. Destro, “Scattered light imaging method (SLIM) for characterization of arbitrary laser beam intensity profiles,” Appl. Opt. 53(20), 4555–4564 (2014). [CrossRef]  

15. D. Flamm, C. Schulze, R. Brüning, O. A. Schmidt, T. Kaiser, S. Schröter, and M. Duparré, “Fast M2 measurement for fiber beams based on modal analysis,” Appl. Opt. 51(7), 987–993 (2012). [CrossRef]  

16. H. Yoda, P. Polynkin, and M. Mansuripur, “Beam quality factor of higher order modes in a step-index fiber,” J. Lightwave Technol. 24(3), 1350–1355 (2006). [CrossRef]  

17. D. N. Schimpf, R. A. Barankov, and S. Ramachandran, “Cross-correlated (C2) imaging of fiber and waveguide modes,” Opt. Express 19(14), 13008–13019 (2011). [CrossRef]  

18. R. Bruning, P. Gelszinnis, C. Schulze, D. Flamm, and M. Duparre, “Comparative analysis of numerical methods for the mode analysis of laser beams,” Appl. Opt. 52(32), 7769–7777 (2013). [CrossRef]  

19. L. Huang, J. Leng, P. Zhou, S. Guo, H. Lü, and X. Cheng, “Adaptive mode control of a few-mode fiber by real-time mode decomposition,” Opt. Express 23(21), 28082–28090 (2015). [CrossRef]  

20. K. Choi and C. Jun, “Sub-sampled modal decomposition in few-mode fibers,” Opt. Express 29(20), 32670–32681 (2021). [CrossRef]  

21. C. Schulze, A. Lorenz, D. Flamm, A. Hartung, S. Schröter, H. Bartelt, and M. Duparré, “Mode resolved bend loss in few-mode optical fibers,” Opt. Express 21(3), 3170–3181 (2013). [CrossRef]  

22. C. Schulze, R. Brüning, S. Schröter, and M. Duparré, “Mode Coupling in Few-Mode Fibers Induced by Mechanical Stress,” J. Lightwave Technol. 33(21), 4488–4496 (2015). [CrossRef]  

23. D. S. Kharenko, M. D. Gervaziev, A. G. Kuznetsov, E. V. Podivilov, S. Wabnitz, and S. A. Babin, “Mode-resolved analysis of pump and Stokes beams in LD-pumped GRIN fiber Raman lasers,” Opt. Lett. 47(5), 1222–1225 (2022). [CrossRef]  

24. S. Fu, Y. Zhai, J. Zhang, X. Liu, R. Song, H. Zhou, and C. Gao, “Universal orbital angular momentum spectrum analyzer for beams,” PhotoniX 1(1), 1–12 (2020). [CrossRef]  

25. C. Schulze, A. Dudley, D. Flamm, M. Duparré, and A. Forbes, “Measurement of the orbital angular momentum density of light by modal decomposition,” New J. Phys. 15(7), 073025 (2013). [CrossRef]  

26. Y. Ding, Y. Ren, T. Liu, S. Qiu, C. Wang, Z. Li, and Z. Liu, “Analysis of misaligned optical rotational Doppler effect by modal decomposition,” Opt. Express 29(10), 15288–15299 (2021). [CrossRef]  

27. C. Jollivet, A. Mafi, D. Flamm, M. Duparré, K. Schuster, S. Grimm, and A. Schülzgen, “Mode-resolved gain analysis and lasing in multi-supermode multi-core fiber laser,” Opt. Express 22(24), 30377–30386 (2014). [CrossRef]  

28. F. Stutzki, H.-J. Otto, F. Jansen, C. Gaida, C. Jauregui, J. Limpert, and A. Tünnermann, “High-speed modal decomposition of mode instabilities in high-power fiber lasers,” Opt. Lett. 36(23), 4572–4574 (2011). [CrossRef]  

29. D. Flamm, O. A. Schmidt, C. Schulze, J. Borchardt, T. Kaiser, S. Schröter, and M. Duparré, “Measuring the spatial polarization distribution of multimode beams emerging from passive step-index large-mode-area fibers,” Opt. Lett. 35(20), 3429–3431 (2010). [CrossRef]  

30. O. A. Schmidt, C. Schulze, D. Flamm, R. Brüning, T. Kaiser, S. Schröter, and M. Duparré, “Real-time determination of laser beam quality by modal decomposition,” Opt. Express 19(7), 6741–6748 (2011). [CrossRef]  

31. L. Huang, S. Guo, J. Leng, H. Lü, P. Zhou, and X. Cheng, “Real-time mode decomposition for few-mode fiber based on numerical method,” Opt. Express 23(4), 4620–4629 (2015). [CrossRef]  

32. Y. Lecun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

33. L. Salmela, N. Tsipinakis, A. Foi, C. Billet, J. M. Dudley, and G. Genty, “Predicting ultrafast nonlinear dynamics in fibre optics with a recurrent neural network,” Nat. Mach. Intell. 3(4), 344–354 (2021). [CrossRef]  

34. U. Teğin, B. Rahmani, E. Kakkava, N. Borhani, C. Moser, and D. Psaltis, “Controlling spatiotemporal nonlinearities in multimode fibers with deep neural networks,” APL Photonics 5(3), 030804 (2020). [CrossRef]  

35. H. Sui, H. Zhu, L. Cheng, B. Luo, S. Taccheo, X. Zou, and L. Yan, “Deep learning based pulse prediction of nonlinear dynamics in fiber optics,” Opt. Express 29(26), 44080–44091 (2021). [CrossRef]  

36. Q. Yan, Q. Deng, J. Zhang, Y. Zhu, K. Yin, T. Li, D. Wu, and T. Jiang, “Low-latency deep-reinforcement learning algorithm for ultrafast fiber lasers,” Photonics Res. 9(8), 1493–1501 (2021). [CrossRef]  

37. T. Baumeister, S. L. Brunton, and J. Nathan Kutz, “Deep learning and model predictive control for self-tuning mode-locked lasers,” J. Opt. Soc. Am. B 35(3), 617–626 (2018). [CrossRef]  

38. H. Tünnermann and A. Shirakawa, “Deep reinforcement learning for coherent beam combining applications,” Opt. Express 27(17), 24223–24230 (2019). [CrossRef]  

39. T. Hou, Y. An, Q. Chang, P. Ma, J. Li, D. Zhi, L. Huang, R. Su, J. Wu, Y. Ma, and P. Zhou, “Deep-learning-based phase control method for tiled aperture coherent beam combining systems,” High Power Laser Sci. Eng. 7, e59 (2019). [CrossRef]  

40. Q. Chang, Y. An, T. Hou, R. Su, P. Ma, and P. Zhou, “Phase-locking system in fiber laser array through deep learning with diffusers,” in 2020 Asia Communications and Photonics Conference (ACP) and International Conference on Information Photonics and Optical Communications (IPOC) (2020), pp. 7–9.

41. R. Liu, C. Peng, X. Liang, and R. Li, “Coherent beam combination far-field measuring method based on amplitude modulation and deep learning,” Chin. Opt. Lett. 18(4), 041402 (2020). [CrossRef]  

42. T. Hou, Y. An, Q. Chang, P. Ma, J. Li, L. Huang, D. Zhi, J. Wu, R. Su, Y. Ma, and P. Zhou, “Deep-learning-assisted, two-stage phase control method for high-power mode-programmable orbital angular momentum beam generation,” Photonics Res. 8(5), 715–722 (2020). [CrossRef]  

43. H. Tünnermann and A. Shirakawa, “Deep reinforcement learning for tiled aperture beam combining in a simulated environment,” J. Phys. Photonics 3(1), 015004 (2021). [CrossRef]  

44. X. Zhang, P. Li, Y. Zhu, C. Li, C. Yao, L. Wang, X. Dong, and S. Li, “Coherent beam combination based on Q-learning algorithm,” Opt. Commun. 490(February), 126930 (2021). [CrossRef]  

45. M. Shpakovych, G. Maulion, V. Kermene, A. Boju, P. Armand, A. Desfarges-Berthelemot, and A. Barthélemy, “Experimental phase control of a 100 laser beam array with quasi-reinforcement learning of a neural network in an error reduction loop,” Opt. Express 29(8), 12307–12318 (2021). [CrossRef]  

46. M. Jiang, H. Wu, Y. An, T. Hou, Q. Chang, L. Huang, J. Li, R. Su, and P. Zhou, “Fiber laser development enabled by machine learning: review and prospect,” PhotoniX, to be published (2022).

47. Y. An, L. Huang, J. Li, J. Leng, L. Yang, and P. Zhou, “Deep Learning-Based Real-Time Mode Decomposition for Multimode Fibers,” IEEE J. Sel. Top. Quantum Electron. 26(4), 1–6 (2020). [CrossRef]  

48. Y. An, L. Huang, J. Li, J. Leng, L. Yang, and P. Zhou, “Learning to decompose the modes in few-mode fibers with deep convolutional neural network,” Opt. Express 27(7), 10127–10137 (2019). [CrossRef]  

49. Y. An, J. Li, L. Huang, L. Li, J. Leng, L. Yang, and P. Zhou, “Numerical mode decomposition for multimode fiber: From multi-variable optimization to deep learning,” Opt. Fiber Technol. 52(June), 101960 (2019). [CrossRef]  

50. Y. An, J. Li, L. Huang, J. Leng, L. Yang, and P. Zhou, “Deep learning enabled superfast and accurate M2 evaluation for fiber beams,” Opt. Express 27(13), 18683–18694 (2019). [CrossRef]  

51. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in 3rd International Conference on Learning Representations (ICLR 2015), Conference Track Proceedings, pp. 1–14 (2014).

52. Y. Bengio, A. Courville, and P. Vincent, “Representation Learning: A Review and New Perspectives,” IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1798–1828 (2013). [CrossRef]  

53. J. Lever, M. Krzywinski, and N. Altman, “Points of Significance: Principal component analysis,” Nat. Methods 14(7), 641–642 (2017). [CrossRef]  

54. D. J. C. MacKay, “Bayesian Interpolation,” Neural Computation 4(3), 415–447 (1992). [CrossRef]  

