
Prediction technique of aberration coefficients of interference fringes and phase diagrams based on convolutional neural network

Open Access

Abstract

In this study, we present a new way to predict the Zernike coefficients of an optical system. We predict the Zernike coefficients through the image-recognition capability of a neural network, which can reduce the mathematical operations commonly used in interferometers and improve measurement accuracy. We use the phase difference and the interference fringe as inputs to the neural network to predict the coefficients and compare the performance of the two models. Python and optical simulation software are used to confirm the overall effect. As a result, all root-mean-square errors (RMSE) are less than 0.09, which means that interference fringes or phase differences can be converted directly into coefficients. Not only can the calculation steps be reduced, but the overall efficiency can be improved and the calculation time shortened. For example, the method could be used to check the performance of camera lenses.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In designing or analyzing the quality of optical systems, aberration is an important criterion. It is especially important in imaging optical systems [1] and directly affects the imaging performance of the entire system, so measuring aberration is essential. Methods of measuring aberration can be divided into observation types [2] and measurement types. Measurement-type methods mainly use a wavefront sensor [3] or an interferometer [4]. The wavefront sensor uses a lens array to focus the light and uses the deviation of each light spot to estimate the wavefront and thus obtain information about the aberrations. The interferometer, on the other hand, exploits the interference of light, using the fringes generated by the phase difference between two beams to obtain the aberration information in the beams.

Interference fringes are an optical phenomenon [5]: when two beams overlap, fringes are generated according to the phase difference between them. Compared with traditional inspection methods, interference can reveal much subtler differences [6].

Observing the performance of optical components through interference fringes has a long history, but fringes alone cannot directly quantify aberrations. Therefore, we turn to two systems that can quantify aberrations: Seidel aberrations [7] and Zernike polynomials [8]. Both quantify the magnitude of aberrations. The Zernike polynomials are a series of polynomials orthogonal over the unit circle; today they are mainly used to represent the wavefront after passing through an optical system or the shape of a lens surface.

To obtain Zernike coefficients with an interferometer, the interference fringe is usually converted into a wavefront or phase difference through conversion methods [9] such as the phase-shift method or the Fourier-transform method. Surface fitting with the Zernike polynomials is then used to calculate the coefficients [10]. This conversion requires many mathematical steps and is quite complicated. Here, we propose an artificial intelligence (AI) architecture to simplify the processing and predict the Zernike coefficients directly from interference fringes or phase differences.

In the development of AI, deep learning has expanded into many fields with the rise of the Convolutional Neural Network (CNN). CNNs excel at image recognition, so many researchers have introduced AI into aberration analysis. Some apply AI to the wavefront sensor [11,12]: the wavefront of an object can be represented by the corresponding focal-spot distribution on the sensor, and this distribution is used for aberration analysis to good effect. Compared with traditional methods, this not only improves accuracy but also shortens judgment time and may enable real-time operation. Other studies use AI to analyze the distribution of light sources on an analysis surface [13,14] and use these distribution shapes to determine the magnitude of aberrations; this approach needs no instruments and applies AI to aberration analysis directly. All of these methods bring the image-recognition ability of AI to the calculation of aberrations [15,16], reducing mathematical operations and calculation time. In these studies, AI effectively increases the efficiency of the entire system.

Since no existing AI study calculates the Zernike polynomials directly from interference fringes, this study uses the phase difference or the interference fringe as the input of a CNN to obtain the Zernike coefficients. Both inputs are used for comparison: although they are related, the resulting images are quite different, and we want to compare the prediction ability of neural networks trained on each. This method can reduce the mathematical operations and time cost, and increase the accuracy of the interferometer when measuring camera lenses.

2. Methods

2.1. Datasets for training and testing model

A neural network needs a large amount of data to learn before it can predict the coefficients. In this study, we train two neural networks on two datasets: phase differences and interference fringes. In the training phase, we use Python to generate the raw data of both datasets. First, the phase difference is represented by the Zernike polynomials [17], as shown in Eq. (1).

$$\delta (r,\theta ) = \sum_{n = 1}^{N} a_n z_n(r,\theta ),$$
In Eq. (1), $\delta$ represents the phase difference and $a_n$ is the nth Zernike coefficient. The range of each coefficient is set according to the situation, and its value is drawn randomly from that range. $z_n$ is the nth term of the polynomial, r is the radius (0≤r≤1), and $\theta$ runs from 0 to 2π. Each term of the Zernike polynomial represents a different aberration and has a different appearance. A phase map and its Zernike coefficients are shown in Fig. 1.

Fig. 1. The phase difference and the corresponding Zernike coefficients. (a) The phase-difference image. (b) The Zernike coefficients of the phase difference.
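To make the dataset-generation step concrete, the following is a minimal Python sketch (not the authors' code) of sampling random coefficients and building the phase map of Eq. (1); the helper zernike_term is a hypothetical stand-in that covers only the first four Zernike modes.

import numpy as np

def zernike_term(n, r, theta):
    """Return the n-th Zernike mode on the unit disk (first four modes only)."""
    modes = {
        1: np.ones_like(r),              # piston
        2: 2 * r * np.cos(theta),        # tilt x
        3: 2 * r * np.sin(theta),        # tilt y
        4: np.sqrt(3) * (2 * r**2 - 1),  # defocus
    }
    return modes[n]

def random_phase_map(size=256, n_terms=4, coeff_range=0.5, rng=None):
    """Sample a_n uniformly in [-coeff_range, coeff_range] and evaluate
    delta(r, theta) = sum_n a_n * z_n(r, theta) on a size x size grid."""
    rng = rng or np.random.default_rng()
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    coeffs = rng.uniform(-coeff_range, coeff_range, n_terms)
    delta = sum(a * zernike_term(i + 1, r, theta) for i, a in enumerate(coeffs))
    delta[r > 1] = 0.0  # keep the phase inside the unit pupil
    return delta, coeffs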

Interference fringes can be expressed mathematically [18] as Eq. (2):

$$I = I_a + I_b + 2\sqrt{I_a I_b} \cos (\delta ),$$
$$I = 4 I_0 \cos^2\left(\frac{\delta}{2}\right),$$
where $I_a$ is the intensity of the reference beam and $I_b$ is the intensity of the test beam. When the intensities of the two beams are equal ($I_a = I_b = I_0$), the equation simplifies to Eq. (3), where $I_0$ is the background intensity and $\delta$ represents the phase difference between the two beams. Since one beam is the reference, $\delta$ can be regarded as the aberration of the measured phase difference. Measuring different objects generates corresponding interference fringes based on their aberrations. We therefore use different Zernike coefficients to adjust the interference fringes for training the neural network, as shown in Fig. 2.
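Under the equal-intensity assumption of Eq. (3), turning a phase map into a fringe image is then a one-line operation; a minimal sketch:

import numpy as np

def fringe_from_phase(delta, I0=1.0):
    """Eq. (3): I = 4 * I0 * cos^2(delta / 2) for equal beam intensities."""
    return 4.0 * I0 * np.cos(delta / 2.0) ** 2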

An interferometer converts the obtained interference fringes into phase form and then fits the phase to Zernike coefficients. Several conversion methods exist; one of the best known is the phase-shift method [19]. It applies known phase changes to obtain corresponding interference fringes and then uses these fringes to recover the phase information. Depending on the phase steps used, it can be divided into the three-step, four-step [20], and five-step phase-shift methods. The formulas of the four-step phase-shift method are as follows.

$$I_1 = I_a + I_b + 2\sqrt{I_a I_b} \cos (\delta ),$$
$$I_2 = I_a + I_b + 2\sqrt{I_a I_b} \cos \left(\delta + \frac{\pi}{2}\right),$$
$$I_3 = I_a + I_b + 2\sqrt{I_a I_b} \cos (\delta + \pi ),$$
$$I_4 = I_a + I_b + 2\sqrt{I_a I_b} \cos \left(\delta + \frac{3\pi}{2}\right),$$
$$\delta = \tan^{-1}\left(\frac{I_4 - I_2}{I_1 - I_3}\right),$$
In Eqs. (4)–(7), the original interference fringe is Eq. (4), and the other three fringes, Eqs. (5)–(7), are generated by adding multiples of π/2 to the phase. Finally, the phase can be recovered from the intensities of these four interference fringes through Eq. (8). This method requires multiple interference fringes to solve the equations; although controlling the phase change precisely is difficult, it makes the recovered phase more accurate.
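A minimal sketch of Eqs. (4)–(8), simulating the four shifted fringes and recovering the wrapped phase; np.arctan2 is used so that the quadrant of δ is resolved correctly.

import numpy as np

def four_step_phase(delta, Ia=1.0, Ib=1.0):
    """Simulate Eqs. (4)-(7) and recover the wrapped phase via Eq. (8)."""
    shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
    I1, I2, I3, I4 = (Ia + Ib + 2 * np.sqrt(Ia * Ib) * np.cos(delta + s)
                      for s in shifts)
    return np.arctan2(I4 - I2, I1 - I3)  # wrapped to (-pi, pi]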

Fig. 2. The interference fringe and the corresponding Zernike coefficients. (a) The interference-fringe image. (b) The Zernike coefficients of the interference fringe.

In this study, we use a Zernike polynomial with terms 1 to 32 and substitute randomly generated coefficients into the polynomial to generate the corresponding phase differences [13]. These phase differences are then combined with the interference-fringe equation to generate the corresponding fringes. Two datasets are thus generated for the two neural networks; each includes 100,000 training samples and 5,000 validation samples.

According to the formulas for the phase difference and the interference fringe, the phase difference changes with the sign of the Zernike coefficients, but because of the cosine function, the corresponding interference fringes are identical for positive and negative coefficients. This is why interferometers use multiple interference fringes to determine the phase accurately. For the same reason, the phase difference and the interference fringe are handled separately here, and a different model is trained for each.

In the case of the phase difference, coefficients of opposite sign do not produce duplicate images, so the images can be used directly to train the model. In contrast, interference fringes can coincide for positive and negative coefficients, so the input is modified with reference to the phase-shift method. The fringes generated by a phase change of π/4 differ between positive and negative coefficients, so this difference can be used to distinguish the signs. We add a π/4 change to the phase to generate a new interference fringe and divide the two fringes (the original and the new one) to form the network input, as sketched below. The images for positive and negative coefficients are then no longer duplicated, and the network can predict the coefficients accurately.
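The following sketch shows one way to realize this two-fringe input; it reflects our reading of the text (an element-wise division of the original fringe by the π/4-shifted fringe), and the small eps guard is our addition to avoid dividing by zero at dark fringes.

import numpy as np

def fringe_pair_input(delta, I0=1.0, eps=1e-6):
    """Divide the original fringe by a pi/4-shifted fringe so that
    positive and negative coefficients no longer give identical images."""
    original = 4 * I0 * np.cos(delta / 2) ** 2
    shifted = 4 * I0 * np.cos((delta + np.pi / 4) / 2) ** 2
    return original / (shifted + eps)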

2.2. Neural network architecture

A CNN is one of the most popular neural networks. It has several different architectures but consists of a few primary layer types: convolution layers, pooling layers, and fully connected layers. Different architectures use different layer configurations. The convolution layer extracts features from the input; the pooling layer reduces the data size while retaining the primary features; the fully connected layer increases the depth of the network and predicts the answer. Combining these layers completes the CNN.

In this paper, we use the GoogleNet architecture [21]. In GoogleNet, the Inception module is used to increase the capacity of the network and improve performance; it includes 1×1, 3×3, and two stacked groups of 3×3 filters for convolution. To increase the field of view (FOV) of the Inception layer, each 3×3 filter is factorized into 1×3 and 3×1 filters, enhancing the effect of the entire GoogleNet [22]. Finally, the results of these convolutions are concatenated and passed to the next layer. The architecture of the Inception module is shown in Fig. 3.

Fig. 3. The architecture diagram of the Inception layer.
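A minimal Keras sketch of such a factorized Inception block; the per-branch filter counts are illustrative, since the paper states only the total filters per layer.

from tensorflow.keras import layers

def inception_block(x, filters, activation="relu"):
    """Parallel 1x1, factorized 3x3 (1x3 + 3x1), and doubled factorized
    3x3 branches, concatenated along the channel axis."""
    def conv(t, kernel):
        return layers.Conv2D(filters, kernel, padding="same",
                             activation=activation)(t)
    b1 = conv(x, (1, 1))                              # 1x1 branch
    b2 = conv(conv(conv(x, (1, 1)), (1, 3)), (3, 1))  # one factorized 3x3
    b3 = conv(x, (1, 1))                              # two stacked factorized 3x3
    for _ in range(2):
        b3 = conv(conv(b3, (1, 3)), (3, 1))
    return layers.Concatenate()([b1, b2, b3])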

The GoogleNet network contains two Convolution layers and nine Inception layers, together with MaxPooling [23] and AveragePooling [24] to reduce the data. The architecture is shown in Fig. 4. The first Convolution layer is composed of 64 7×7 filters and outputs to a 3×3 MaxPooling layer. The second Convolution layer is composed of 192 3×3 filters and also outputs to a MaxPooling layer. These results then enter the Inception layers, which are the core of GoogleNet. The layers are I1–I9; the number of filters is 64 in I1, 120 in I2, 128 in I3–I5, 132 in I6, 208 in I7–I8, and 256 in I9. MaxPooling layers are inserted in between to reduce the amount of data. Finally, the results are flattened and sent to two fully connected layers with 1,000 and 32 neurons, respectively. The 32 neurons of the last fully connected layer correspond to terms 1 to 32 of the Zernike polynomial.

Fig. 4. The architecture of the neural network.

Different activation functions are used according to the distribution of the data [25], which can improve the accuracy of the network. The phase-difference data contains positive and negative values, so all Convolution layers in that network use "tanh" as the activation function and the last two Fully Connected layers use "tanh" and "linear". The interference-fringe data, on the other hand, contains only positive values, so "relu" and "linear" are used as the activation functions throughout that network.
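A sketch of the two output heads implied by this choice; prediction_head and its arguments are our naming, with the 1,000-unit hidden layer and linear 32-unit output described above.

from tensorflow.keras import layers

def prediction_head(features, signed_input):
    """tanh hidden layer for signed phase-difference data,
    relu hidden layer for non-negative fringe data; linear output."""
    x = layers.Flatten()(features)
    x = layers.Dense(1000, activation="tanh" if signed_input else "relu")(x)
    return layers.Dense(32, activation="linear")(x)  # 32 Zernike coefficients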

2.3. Processes of the method

In this study, we use two kinds of data to predict the Zernike coefficients: the phase difference and the interference fringe. Building a model involves two steps, model training and model testing. In model training, we randomly generate many 256×256-pixel images according to the coefficient range; the Zernike coefficients of both the interference fringes and the phase differences lie in the range −0.5 to 0.5. Training uses the mean squared error (MSE) as the loss function with 100,000 images over 50 epochs and a batch size of 32.
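The stated training configuration maps directly onto Keras. The optimizer is not specified in the paper, so Adam is assumed here; model, x_train, y_train, x_val, and y_val are placeholders for the GoogleNet model and the generated image/coefficient arrays.

model.compile(optimizer="adam", loss="mse")  # MSE loss as stated; Adam assumed
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),    # the 5,000 verification samples
          epochs=50, batch_size=32)          # 50 epochs, batch size 32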

In model testing, the phase-difference model and the interference-fringe model are tested separately. We generate 50,000 samples for prediction and calculate the root-mean-square error (RMSE) of the two models [13], then discuss whether the two models differ and why. After that, the coefficient range of the test data is expanded or reduced; testing over three different coefficient ranges shows whether the network's predictions hold for other ranges or in an actual system. The overall architecture is shown in Fig. 5.

Fig. 5. Process of model training, testing and experiment.
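A minimal sketch of the RMSE evaluation used in the testing step, with model, x_test, and y_test as placeholders for a trained model and the 50,000 generated test samples.

import numpy as np

pred = model.predict(x_test)                   # shape: (50000, 32)
rmse = np.sqrt(np.mean((pred - y_test) ** 2))  # overall RMSE vs. ground truth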

2.4. Experimental architecture

With the prediction models in hand, we use optical simulation software to generate phase differences and interference fringes for the experiments. VirtualLab fusion is optical software that can simulate wave optics in an actual optical system [26]. We use it to generate the phase differences and interference fringes corresponding to given Zernike coefficients. The structure is a Fizeau interferometer, as shown in Fig. 6. Using this optical software with a real interferometer architecture simulates the performance of connecting to a real interferometer.

Fig. 6. The architecture of the Fizeau interferometer.

During the experiment, the phase differences and interference fringes from VirtualLab fusion are used to predict the coefficients. We then compare the RMSE with the results above and discuss the differences.

3. Results

3.1. Testing result

To evaluate the performance of the models, test data are randomly generated, coefficients are predicted by the network models, and the results are compared with the ground truth. The performance of the two models is shown in Table 1 and Table 2.

Table 1. The RMSE of the estimated Zernike coefficients with phase difference.

Table 2. The RMSE of the estimated Zernike coefficients with interference fringe.

The Keras framework is used for the entire network. Using the Colab service provided by Google with a Tesla K80 GPU, the prediction time for each picture is about 0.010 seconds. According to the results, the model using the phase difference performs better than the one using the interference fringe. This may be due to the different variation in the two datasets: the interference-fringe data oscillates violently within a fixed range, while the phase-difference data changes more smoothly over an unfixed range. Our network may not easily detect the small changes, but the results show the feasibility of AI predicting Zernike coefficients from pictures. As a result, coefficients can be predicted directly from interference fringes, reducing the complex processing in the interferometer.

In addition, when the range of predicted coefficients becomes larger, the performance of the model worsens. This may be because not all coefficients are included in the training range: when testing a wide range of coefficients, only about half of them appear in the training set.

3.2. VirtualLab fusion – phase difference

Through VirtualLab fusion and the interferometer architecture, phase-difference images can be generated. A total of 10 images are tested, and the average RMSE is about 0.039 ± 0.007. This is similar to the testing result, with little difference. Figures 7 and 8 show two examples from this experiment. The predicted values are very close to the ground truth, so the phase model can be used with general phase-difference maps.

Fig. 7. The experimental result for the phase difference (sample 1).

Fig. 8. The experimental result for the phase difference (sample 2).

3.3. VirtualLab fusion – interference fringe

Through VirtualLab fusion and the interferometer architecture, interference-fringe images can be generated. Compared with the training and testing images, the fringes simulated by VirtualLab fusion are blurry, with lower contrast and unclear edges, but the contours are similar. We move a component in the interferometer to generate a fringe image with a phase change and then divide the two images to obtain the network input for predicting coefficients. A total of 10 images are tested, and the average RMSE is about 0.095 ± 0.018. The experimental error is not as small as the testing result but is at least below 0.1. Figures 9 and 10 show two examples. The predicted values are close to the ground truth, so coefficients can be predicted directly from interference fringes using this model.

Fig. 9. The experimental result for the interference fringe (sample 1).

Fig. 10. The experimental result for the interference fringe (sample 2).

4. Discussions and conclusion

The test and experiment results are shown in Table 3, which lists the errors of the two models.

Table 3. The RMSE of the experiment and the test.

According to the results, the model using the phase difference is superior to the model using the interference fringe. In traditional methods, the interference fringe is generally first converted into a phase difference, and the coefficients are then calculated from the phase difference, so it is reasonable that the error of the fringe model is larger than that of the phase model. It may also be caused by the different variation in the two datasets, because the oscillation of the interference-fringe data is more intense. Although the fringe model has a larger error, it needs only two images to resolve positive and negative coefficients, fewer than the multiple images required by the phase-shift method. For the phase-difference model, the error is small and the predicted results are close to the ground truth [13].

Interferometers have been widely used to measure Zernike coefficients. This paper presents a method for predicting aberration coefficients from interference fringes or wavefront differences using neural networks. After a series of experiments, the neural network can predict the coefficients, and the predicted values are close to the standard values. In the experiments, with an RMSE of 0.039 using the phase difference and 0.095 using the interference fringe, both prediction models can be used in the interferometer structure. These methods can improve the efficiency of the entire process. We hope that this method can be further used for real-time calculation and provide more accurate values and better efficiency when measuring the aberrations of camera lenses.

Funding

Ministry of Science and Technology, Taiwan (109-2221-E-011-030-).

Disclosures

The authors declare no conflicts of interest.

References

1. R. E. Fischer, B. Tadic-Galeb, P. R. Yoder, R. Galeb, B. C. Kress, S. C. McClain, T. Baur, R. Plympton, B. Wiederhold, and A. J. Grant, Optical system design (McGraw-Hill, 2000), Chap. 3.

2. H. Jing, B. Fan, S. Wu, F. Wu, and T. Fan, “Measurement of optical surfaces with knife edge method,” in 3rd International Symposium on Advanced Optical Manufacturing and Testing Technologies: Optical Test and Measurement Technology and Equipment (International Society for Optics and Photonics, 2008), p. 67235L.

3. J. Siv, R. Mayer, G. Beaugrand, G. Tison, R. Juvénal, and G. Dovillaire, “Testing and characterization of challenging optics and optical systems with Shack Hartmann wavefront sensors,” in EPJ Web of Conferences (EDP Sciences, 2019), p. 06003.

4. J. D. Briers, “Interferometric testing of optical systems and components: a review,” Opt. Laser Technol. 4(1), 28–41 (1972). [CrossRef]  

5. E. P. Goodwin and J. C. Wyant, “Field guide to interferometric optical testing,” (SPIE, 2006), pp. 1–6.

6. P. Feng, F. Tang, X. Wang, Y. Lu, J. Xu, F. Guo, and G. Zhang, “Dual-fiber point diffraction interferometer to measure the wavefront aberration of an imaging system,” Appl. Opt. 59(10), 3093–3096 (2020). [CrossRef]  

7. M. J. Kidger, “Importance of aberration theory in understanding lens design,” Proc. SPIE 3190, 26–33 (1997). [CrossRef]  

8. V. Lakshminarayanan and A. Fleck, “Zernike polynomials: a guide,” J. Mod. Opt. 58(7), 545–561 (2011). [CrossRef]  

9. I. Gurov and M. Volynsky, “Interference fringe analysis based on recurrence computational algorithms,” Opt. Laser Eng. 50(4), 514–521 (2012). [CrossRef]  

10. D. Malacara-Hernandez, M. Carpio-Valadez, and J. J. Sanchez-Mondragon, “Wavefront fitting with discrete orthogonal polynomials in a unit radius circle,” Opt. Eng. 29(6), 672–676 (1990). [CrossRef]  

11. L. Hu, S. Hu, W. Gong, and K. Si, “Learning-based Shack-Hartmann wavefront sensor for high-order aberration detection,” Opt. Express 27(23), 33504–33517 (2019). [CrossRef]  

12. H. Guo, N. Korablinova, Q. Ren, and J. Bille, “Wavefront reconstruction with artificial neural networks,” Opt. Express 14(14), 6456–6462 (2006). [CrossRef]  

13. Y. Nishizaki, M. Valdivia, R. Horisaki, K. Kitaguchi, M. Saito, J. Tanida, and E. Vera, “Deep learning wavefront sensing,” Opt. Express 27(1), 240–251 (2019). [CrossRef]  

14. Y. Zhang, H. Xie, and Q. Dai, “Robust sensorless wavefront sensing via neural network in a single-shot,” in Adaptive Optics and Wavefront Control for Biological Systems VI (International Society for Optics and Photonics, 2020), p. 112480E.

15. Q. Tian, C. Lu, B. Liu, L. Zhu, X. Pan, Q. Zhang, L. Yang, F. Tian, and X. Xin, “DNN-based aberration correction in a wavefront sensorless adaptive optics system,” Opt. Express 27(8), 10765–10776 (2019). [CrossRef]  

16. K. Yan, Y. Yu, and L. Jiaxing, “Neural networks for interferograms recognition,” in Sixth International Conference on Optical and Photonic Engineering (icOPEN 2018) (International Society for Optics and Photonics, 2018), p. 108273Q.

17. J. Schwiegerling, “Review of Zernike polynomials and their use in describing the impact of misalignment in optical systems,” in Optical System Alignment, Tolerancing, and Verification XI (International Society for Optics and Photonics, 2017), p. 103770D.

18. K. Yan, Y. Yu, C. Huang, L. Sui, K. Qian, and A. Asundi, “Fringe pattern denoising based on deep learning,” Opt. Commun. 437, 148–152 (2019). [CrossRef]  

19. R. Shrestha, J. Park, and W. Kim, “Application of thermal wave imaging and phase shifting method for defect detection in Stainless steel,” Infrared Phys. Technol. 76, 676–683 (2016). [CrossRef]  

20. S. Zhang, “Composite phase-shifting algorithm for absolute phase measurement,” Opt. Laser Eng. 50(11), 1538–1541 (2012). [CrossRef]  

21. P. Ballester and R. M. Araujo, “On the performance of GoogLeNet and AlexNet applied to sketches,” in Thirtieth AAAI Conference on Artificial Intelligence (2016).

22. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition (2016), pp. 2818–2826.

23. B. Graham, “Fractional max-pooling,” arXiv preprint arXiv:1412.6071 (2014).

24. D. Yu, H. Wang, P. Chen, and Z. Wei, “Mixed pooling for convolutional neural networks,” in International conference on rough sets and knowledge technology (Springer, 2014), pp. 364–375.

25. B. Karlik and A. V. Olgac, “Performance analysis of various activation functions in generalized MLP architectures of neural networks,” International Journal of Artificial Intelligence and Expert Systems 1, 111–122 (2011).

26. B. Kimbrough, J. Millerd, J. Wyant, and J. Hayes, “Low-coherence vibration insensitive Fizeau interferometer,” in Interferometry XIII: Techniques and Analysis (International Society for Optics and Photonics, 2006), p. 62920F.
