Optica Publishing Group

Deep learning-based optical authentication using the structural coloration of metals with femtosecond laser-induced periodic surface structures

Open Access

Abstract

Structurally colored materials present potential technological applications including anticounterfeiting tags for authentication due to the ability to controllably manipulate colors through nanostructuring. Yet, no applications of deep learning algorithms, known to discover meaningful structures in data with far-reaching optimization capabilities, to such optical authentication applications involving low-spatial-frequency laser-induced periodic surface structures (LSFLs) have been demonstrated to date. In this work, by fine-tuning one of the lightweight convolutional neural networks, MobileNetV1, we investigate the optical authentication capabilities of the structurally colorized images on metal surfaces fabricated by controlling the orientation of femtosecond LSFLs. We show that the structural color variations due to a broad range of the illumination incident angles combined with both the controlled orientations of LSFLs and differences in features captured in the image make this system suitable for deep learning-based optical authentication.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Because it directly exploits the wave nature of light, structural coloration is one of the most intriguing ways to create color compared with other coloration methods such as pigmentation and bioluminescence [1]. In particular, structural color produced by periodic surface structures involves diffraction and can be easily tailored through the period and orientation of the structures together with the positions of the source and observer, according to conical diffraction and Rayleigh anomalies [1–5]. Owing to these various ways of manipulating structural colors, periodic structures have been used extensively for anticounterfeiting purposes [1–4].

In 2008, the structural coloration of metal surfaces following femtosecond (fs) laser pulse irradiation was demonstrated for the first time by Vorobyev and Guo via the formation of low-spatial-frequency laser-induced periodic surface structures (LSFLs) [6]. LSFLs induced by fs laser pulses are extensively covered with randomly oriented nanostructures, and their period and groove height are distributed less regularly than those of commercially available diffraction gratings [7–9]. However, these structural irregularities actually make LSFLs extremely difficult to duplicate without inducing structural damage, and they open the way to a reliable randomness-based, high-capacity optical authentication scheme with distinct structural colors that depend on the relative orientation of white-light illumination and observation [9–11]. Also, by adjusting the polarization direction and incident angle of the irradiating laser pulses [10–13], the orientation and period of LSFLs, respectively, can be manipulated within the focused laser spot, and complex LSFL orientation patterns can be achieved by using various optical components, including spatial light modulators, q-plates, depolarizers, and optical vortex converters [14–17]. With these controls, the distinctive structural colors of LSFLs can be tailored even within a beam spot area of a few tens of µm², leading to structurally colored images that appear under specific observer and illumination positions [10,11,18]. Accordingly, LSFLs may serve as a suitable platform for fabricating anticounterfeiting labels for optical authentication [4,19,20].

Recently, deep-learning algorithms have proven useful in optimizing the performance of laser processing [21–26]. Previous studies showed that such algorithms can be used to monitor and control the quality of laser processing [21], predict absorptance in the laser-treated region [22], and simulate laser ablation [23]. Also, for LSFLs, deep learning has been used to forecast functional surface properties from laser processing parameters [26]. In this paper, we report the first application of a deep learning algorithm to the structural coloration of metal surfaces through LSFLs for optical authentication. Our systematic study involves the well-controlled fabrication of two types of structurally colored images on stainless steel plates, in which single and dual binary logo images are embedded at the surfaces, and fine-tuning of the deep learning algorithm on our fabricated samples. Our experimental results and analysis demonstrate the anticounterfeiting capabilities of the structural colors attainable with LSFLs.

2. Methods and materials

An amplified Ti:sapphire femtosecond (fs) laser system is employed to generate linearly polarized 33.6-fs laser pulses with a maximum energy of 1.2 mJ at a repetition rate of 5 kHz; the central wavelength and spectral bandwidth (FWHM) of the pulses are 800 nm and 36 nm, respectively. Our experiments use commercial stainless steel (STAVAX ESR, ASSAB) plates with a thickness of 2 mm, whose surfaces are mechanically polished with 80-nm-grade colloidal silica to minimize any changes in the structural colors due to the initial surface roughness. The average roughness of the polished samples, measured with a laser confocal scanning microscope, is about 14 nm.

To fabricate LSFLs, linearly polarized fs laser pulses are weakly focused by a 40-mm-focal-length objective lens with a numerical aperture of 0.14 at a defocused distance of about 60 µm, as shown in Fig. 1(a). Under these conditions, the 1/e² intensity spot radius at the sample surface is about 18 µm. The samples are raster scanned via a translation stage at a speed of 15 mm/s with a laser fluence of 0.59 J/cm², while the distance between scanlines is fixed at 40 µm so that the scanlines do not overlap. Under these processing conditions, the resulting 4 µm gap between adjacent scanlines minimizes the possibility of changes in the structural colors of LSFLs caused by additional pulse irradiation from overlapping scanlines. As shown in Fig. 1(a), during the raster scan the polarization direction of the irradiating fs laser pulses is adjusted by rotating the fast axis of a half-wave-plate, which controls the structural colors through the orientation of the LSFLs. The half-wave-plate is rotated to the next desired orientation once all sample regions with the current LSFL orientation have been fabricated. Regardless of LSFL orientation, the period of the LSFLs is about 670 ± 50 nm, as measured with a scanning electron microscope (SEM). In all figures, single-headed yellow arrows indicate the scanning direction of the fs laser pulses, and double-headed red arrows indicate the polarization direction of the fs laser pulses and the orientation (grating vector) of the LSFLs. It is the orientation of the LSFLs (rather than their depths and periods) that is well controlled in our experiments and serves as the key parameter for optical authentication in this work.
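As a sanity check, the scan geometry above follows from simple arithmetic. This is a back-of-the-envelope sketch using the numbers stated in the text; the pulses-per-spot figure is our own illustrative estimate, not a value quoted by the authors.

```python
# Scan-geometry check (values from the text; pulses-per-spot is our estimate).
spot_radius_um = 18.0       # 1/e^2 intensity spot radius at the surface
line_pitch_um = 40.0        # distance between adjacent scanlines
scan_speed_um_s = 15e3      # raster-scan speed, 15 mm/s
rep_rate_hz = 5e3           # laser repetition rate, 5 kHz

gap_um = line_pitch_um - 2 * spot_radius_um        # lateral gap between scanlines
pulse_pitch_um = scan_speed_um_s / rep_rate_hz     # pulse-to-pulse spacing along a line
pulses_per_spot = 2 * spot_radius_um / pulse_pitch_um  # effective pulses per point

print(gap_um, pulse_pitch_um, pulses_per_spot)  # 4.0 3.0 12.0
```

The 4 µm gap matches the value stated in the text, and each surface point receives on the order of a dozen pulses during a pass.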


Fig. 1. (a) Schematic of LSFL fabrication in our experiments. The polarization direction of fs laser pulses at the laser output is in the x-direction, and is controlled with a half-wave-plate (HWP) before the focusing lens. (b) Training and authenticating configuration of structurally colored images by LSFLs. The images are captured under white LED illumination. (c) Normalized spectral power distribution of the white LED used in (b).


Using two binary logo images with a size of 750 by 750 pixels, structurally colored images are engraved on stainless steel by changing the orientation of the LSFLs, where the polarization directions of the fs laser pulses at black and white pixels in the image are chosen to be perpendicular to each other. Each row of pixels in the images is raster scanned by the fs laser pulses, which eventually creates structurally colored images with a size of 30 mm by 30 mm on the stainless steel plates, since the scanline distance between rows is 40 µm, as mentioned earlier.
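The stated image-to-sample scaling follows directly from the 40 µm row pitch:

```python
# One pixel row maps to one 40 um scanline, so 750 rows span 30 mm.
rows = 750
row_pitch_um = 40
side_mm = rows * row_pitch_um / 1000
print(side_mm)  # 30.0 -> 30 mm x 30 mm engraved area
```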

Prior to optical authentication, the structurally colored regions of the samples are packaged with a transparent polycarbonate (PC) window with a thickness of 0.55 mm for protection. As described in Fig. 1(b), a white-light-emitting diode (white LED) illuminates the packaged samples over a broad range of incident angles, 30-60°, across the surface of the sample, and the spectral power distribution of the white LED shown in Fig. 1(b) is displayed in Fig. 1(c). The sample images are captured by a camera module with a CMOS sensor (Sony IMX219) from the surface normal direction. We then take advantage of the classification capabilities of a lightweight convolutional neural network (CNN) for mobile vision applications, MobileNetV1 (mobilenet-v1-1.0-224), for the optical authentication of our structurally colored samples [27]. From the original MobileNetV1, all layers up to the final 7 × 7 average pooling layer are retained, and a dense layer with four output nodes activated by the softmax function is appended as the last layer to authenticate the structurally colored images formed by LSFLs on the stainless steel plates.
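The network surgery just described can be sketched in Keras. This is an assumption on our part (the authors do not publish their training code), and `weights=None` is used only to keep the sketch self-contained; an actual fine-tuning run would start from pretrained weights.

```python
import tensorflow as tf

# include_top=False with pooling="avg" keeps MobileNetV1 up to the final
# 7x7 average pooling layer; a 4-node softmax dense layer is then appended
# as the classification head, mirroring the architecture described above.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), alpha=1.0,
    include_top=False, pooling="avg", weights=None)

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(4, activation="softmax"),  # four output nodes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 4)
```

During fine-tuning, only this head (and optionally the upper base layers) would be trained on the captured sample images.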

3. Results and discussion

Two types of structurally colored samples are fabricated with various LSFL orientation angles on stainless steel plates by using two binary logo images (the logos of KITECH and Optimus System Co., Ltd.). As shown in Fig. 2(a), the first type uses a single binary logo image (KITECH). By adjusting the polarization direction of the irradiating fs laser pulses, the LSFL orientation angle with respect to the scanning direction is rotated counterclockwise in incremental steps of 5°, while the difference in LSFL orientation between black and white pixels in the image is kept at about 90 ± 5° (see the schematic orientations of LSFLs in Fig. S1 in Supplement 1). In Fig. 2(a), the counterclockwise rotation angle of the LSFL orientation for white pixels with respect to the scanning direction is denoted δWK. The structurally colored KITECH logo produced by the two LSFL orientations can be clearly observed near the two orthogonal planes containing the direction of white-light illumination and the two grating vectors (orientations) of the LSFLs, as shown in Fig. 2(b).


Fig. 2. (a) Binary logo image of KITECH used to fabricate the 1st type of structurally colored images with LSFLs on stainless steel plates. SEM images show the LSFL orientations in the raster scanlines for the black and white pixels of the binary image with various δWK values. (b) Experimentally captured images of structurally colored KITECH logo on stainless steel plate (δWK = 30°) under white LED illumination from the sample surface normal. The samples are observed at an angle range of about 45-60° across the surface in various directions. Black and white pixels (KITECH) in (b) denote clear structurally colorized regions in the black and white pixels in the binary image shown in (a), respectively. See the optical micrograph and morphological profile of the raster scanned sample surface in Fig. S2 in Supplement 1.


The 2nd type of structurally colored sample employs the two binary logo images (KITECH and Optimus System Co., Ltd.) shown in Fig. 3(a). The two images are interlaced with each other: all even rows use the KITECH logo image, and all odd rows use the Optimus System Co., Ltd. logo image. To distinguish the two logo images, each with two brightness levels, four distinct LSFL orientations are used to create the 2nd type of structurally colored image on stainless steel. Here, δWO is defined as the counterclockwise rotation angle of the LSFL orientation for the white pixels in the odd rows with respect to the raster-scan direction (see the schematic orientations of LSFLs in Fig. S1 in Supplement 1). The difference in LSFL orientation between the black- and white-pixel regions is kept around 90 ± 5° for each binary image, as in the 1st type, and all four orientations are equally distributed about the surface normal direction, separated by about 45°, as shown in Fig. 3(a).


Fig. 3. (a) Binary logo images of Optimus System Co. Ltd. and KITECH used for the 2nd type of structurally colored images constructed with LSFLs on stainless steel plates. SEM images describe the orientations (grating vectors) of LSFLs in the black and white pixel regions for these two binary logo images when δWO = 20°. (b) Experimentally captured images of structurally colorized logo observed at an angle range of about 40-60° across the sample surface with multiple directions under white light illumination from the surface normal. Black and white pixels (Optimus and KITECH) shown in (b) represent the directions of sample observation with clear structural colors for the four orientations of LSFLs described in (a). See the optical micrograph and morphological profile of the raster scanned sample surface in Fig. S3 in Supplement 1.


Since the number of LSFL orientations for the 2nd type of sample is four, quaternary structurally colored logo images can be distinctly displayed around four different planes containing the direction of the incident white-light illumination and the four grating vectors (orientations) of the LSFLs, as described in Fig. 3(b). Both the 1st and 2nd types of structurally colored logo images shown in Figs. 2 and 3 are fabricated on our stainless-steel plates in the δWK and δWO range of 0-45° with an incremental step of 5°, and at δWK and δWO = 90° (see the schematic orientations of LSFLs in Fig. S1 in Supplement 1).

Structural coloration by periodic structures can be understood through diffraction, as briefly mentioned in Section 1. Under white-light illumination from the surface normal, as shown in Figs. 2(b) and 3(b), surface coloration from LSFLs occurs only near the planes containing the incident white light and the grating vectors of the LSFLs. Also, moving across the sample surface toward the observation direction, the surface color shifts from red to blue. This clearly shows that planar diffraction is involved in the structural coloration of LSFLs, since the color variation across the sample surface follows the grating equation, in which the 1st-order diffraction angle with respect to the surface normal is smaller for shorter wavelengths [28].
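The grating-equation behavior invoked above is easy to make concrete. At normal incidence the planar grating equation is d·sin(θₘ) = m·λ, so shorter (bluer) wavelengths diffract at smaller angles from the surface normal. The ~670 nm period is taken from Section 2; the two test wavelengths are our own illustrative choices.

```python
import math

PERIOD_NM = 670.0  # LSFL period from Section 2

def first_order_angle_deg(wavelength_nm, period_nm=PERIOD_NM):
    """1st-order (m = 1) diffraction angle from the surface normal, in degrees."""
    return math.degrees(math.asin(wavelength_nm / period_nm))

blue_deg = first_order_angle_deg(450.0)  # ~42 degrees
red_deg = first_order_angle_deg(650.0)   # ~76 degrees
print(round(blue_deg, 1), round(red_deg, 1))
```

The blue end of the spectrum thus appears closer to the surface normal than the red end, matching the red-to-blue shift observed across the sample.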

Figure 4(a) shows the structurally colored logo images of Figs. 2(a) and 3(a) at various δWK and δWO values. The images are captured by our CMOS camera in the configuration shown in Fig. 1(b). Compared to the image capture configuration used in Figs. 2(b) and 3(b), the directions of illumination and observation in Fig. 1(b) are reversed, which rules out any unwanted effects related to rotation of the logos in the captured images. As shown in Fig. 4(a), depending on the incident angle of the white LED illumination across the image and the orientations of the LSFLs, each captured structural color image shows color variations through -1st-order planar and conical diffraction [9,10,29] and selectively displays features of the logos in each color channel without any change in the orientation of the logos themselves. These variations in colors and features depend on δWK and δWO. The color variation of LSFLs with the illumination incident angle and the orientation and period of the LSFLs can be evaluated by using the diffraction grating formula [9,10,29].


Fig. 4. (a) Images of the 1st and 2nd types of structurally colored logos with δWK and δWO, respectively. The images are captured by the CMOS camera under white light illumination with the image capture configuration shown in Fig. 1(b). White LED illuminates from the bottom direction of the images at the incident angle range of 30-60° across the surface of the sample. (b) Captured images of the 2nd type of structurally colored logos for δWO = 0°, 45°, and 90° from the blue, green, and red color channels.


To evaluate the optical authentication capabilities of our structurally colored samples with the aforementioned color and displayed-feature variations, we start by splitting each captured structural color image into three 8-bit brightness images, one each from the blue (B), green (G), and red (R) color channels of the CMOS sensor, as shown in Fig. 4(b). As expected from Fig. 4(a), these single-channel brightness images differ from each other and change with δWO in each color channel.
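The channel split described above amounts to slicing the RGB array into three single-channel brightness images. A minimal sketch, with a synthetic array standing in for the CMOS capture:

```python
import numpy as np

# Synthetic stand-in for a captured RGB frame (8-bit per channel).
rgb = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# Each channel becomes its own 8-bit brightness image.
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
print(r.shape, g.dtype)  # (224, 224) uint8
```

The full-color image plus these three brightness images form the four separate inputs to the trained network.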

Our fine-tuned MobileNetV1 described earlier is trained twice, separately on the 1st and 2nd types of structurally colored logo images, specifically for δWK = 0 and δWO = 0, respectively. Two trained models are thus created, whose four output nodes in the last dense layer classify one captured full-color image and the three brightness images from the B, G, and R channels within the same type of sample. For each model, a total of 550 full-color images are collected from five different samples under identical experimental conditions, changing the sample after every single shot; 450 images are used for training over 30 epochs, and of the remaining 100 images, 50 each are used for testing and validation. The four output nodes, labeled F, B, G, and R, provide the posterior probabilities of the image classification through the softmax activation when the inputs are the captured full-color image and its three brightness images from the B, G, and R color channels, respectively. As shown in Fig. 5, authentication is approved only if, for the four separate inputs to our trained model (one captured full-color image and the three brightness images from the B, G, and R channels), all posterior probabilities from the F, B, G, and R nodes exceed the classification threshold.
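The all-or-nothing approval rule above can be sketched in a few lines; the function name and argument order are our own illustrative choices.

```python
# Approval rule from Fig. 5: the posterior probabilities from the F, B, G,
# and R nodes must all exceed the classification threshold (0.95 in this work).
def authenticate(p_f, p_b, p_g, p_r, threshold=0.95):
    """Approve only if all four node posteriors clear the threshold."""
    return all(p > threshold for p in (p_f, p_b, p_g, p_r))

print(authenticate(0.99, 0.98, 0.97, 0.99))  # True
print(authenticate(0.99, 0.98, 0.60, 0.99))  # False (G node fails)
```

Requiring all four conditions simultaneously is what makes the scheme sensitive to per-channel feature differences, not just the full-color appearance.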


Fig. 5. Optical authentication procedure for structurally colored samples with our fine-tuned MobileNetV1. To effectively capture the color and displayed-feature variations from the samples, one full-color sample image and three 8-bit brightness images from the B, G, and R color channels are input separately to our fine-tuned MobileNetV1, trained on the 1st and 2nd types of structurally colored logo images for δWK and δWO = 0, respectively. Authentication is approved only if all four conditions (Conditions #1-#4) are satisfied for the four separate inputs obtained from the captured sample image. TH indicates the classification threshold.


Figures 6 and 7 show the posterior probabilities obtained from the F, B, G, and R nodes as a function of δWK or δWO, obtained by inputting one captured full-color image and its three brightness images from the B, G, and R color channels for each δWK or δWO value. When our trained models are tested within the same type of sample, the posterior probabilities of the four nodes for verifying all conditions described in Fig. 5 are nearly one, with standard deviations of less than 0.002, for δWK, δWO ≤ 10°, as shown in Fig. 6; however, once δWK or δWO reaches 15°, the posterior probabilities of the G and R nodes start to decrease monotonically with δWK or δWO, respectively. This indicates that our trained models capture the structural color variations and selectively displayed logo features of the sample mostly from the G and R channels, owing to differences in LSFL orientation, even though the same logo and type of image are used for structural coloration. By requiring a posterior probability larger than 0.95 at all of the F, B, G, and R nodes as the classification threshold, our authentication procedure shown in Fig. 5 can clearly discriminate our training samples (δWK, δWO = 0°) from the samples with δWK, δWO ≥ 15°. Therefore, considering the LSFL orientation accuracy of about 10°, our trained models can precisely authenticate our samples through the color variations of the structurally colored logos.


Fig. 6. Posterior probability versus δWK and δWO obtained from the same type of training and testing sample images. Our fine-tuned MobileNetV1 is trained by the captured images of the (a) 1st and (b) 2nd type of structurally colored samples for δWK and δWO = 0, respectively, and is tested with 50 images captured from the same type of the samples with various δWK and δWO values. Black, blue, green, and red dots denote average posterior probabilities obtained from the F, B, G, and R nodes of our trained MobileNetV1 models when the inputs are the captured full-color image and its three brightness images each from the B, G, and R color channel, respectively, as described in Fig. 5. Dashed black lines show a posterior probability of 0.95, used for the threshold of classification in our study.



Fig. 7. Posterior probability versus δWO and δWK for the different types of training and testing sample images. Our fine-tuned MobileNetV1 is trained by the captured images of the (a) 1st and (b) 2nd type of structurally colored samples for δWK and δWO = 0, respectively, and is tested with 50 images captured from the (a) 2nd and (b) 1st type of the samples with various δWO and δWK values. Black, blue, green, and red dots denote average posterior probabilities obtained from the F, B, G, and R nodes of our trained MobileNetV1 models with the four inputs of the captured full-color image and its three brightness images each from the B, G, and R color channel, respectively, as shown in Fig. 5. Dashed black lines indicate the threshold of posterior probability, 0.95, used for the sample classification in our study.


Our trained models are also tested with images from the other type of sample, as described in Fig. 7. Although the 1st and 2nd types of samples are similar in that they share features from one of the logos (the KITECH logo), the posterior probabilities from the four nodes never all exceed the classification threshold of 0.95 for any δWO or δWK value, so authentication is never approved. In particular, the 1st type of sample with δWK = 45° and the 2nd type of sample with δWO = 0° have the same LSFL orientations for both black and white pixels of the KITECH logo. However, as shown in Fig. 4(a), the structurally colored KITECH logo in the 2nd type of sample for δWO = 0° is almost covered by that of Optimus System Co., Ltd. owing to planar and conical diffraction. This difference in captured features also pushes the posterior probabilities from the R and G nodes far below our classification threshold, as described by the region boxed with red dashed lines in Fig. 7(b), and clearly enhances the deep learning-based optical authentication capabilities.

It is worth noting that our authentication procedure does not appear to capture the structural irregularities inherent in LSFLs between samples fabricated under identical conditions; we believe the image differences arising from these irregularities contribute little to the authentication scheme compared with those originating from a 15° change in LSFL orientation in our experiments. The procedure is also carefully verified, with the same method as shown in Figs. 6 and 7, after our MobileNetV1 is trained on the 1st or 2nd type of images at any of the other δWK or δWO values used in our experiments. Although the changes in the posterior probabilities of the nodes with δWK or δWO are not the same as those shown in Figs. 6 and 7, and the posterior probabilities from the F and B nodes can fall below our classification threshold, our authentication approval procedure is not degraded with a posterior probability threshold of 0.95.

The total fabrication time for the 1st and 2nd types of structurally colored logo images used in our experiments is around 27 minutes and 33 minutes, respectively, with our fs laser system at a repetition rate of 5 kHz. Considering that commercially available high-power fs laser systems can operate at an average power of more than 100 W with a 1 MHz repetition rate, our samples could be prepared within half a minute, and our optical authentication approach using the structural color from LSFLs could become widely feasible in industrial applications.
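The throughput estimate above follows from a simple scaling argument: at a fixed pulse-to-pulse spacing, fabrication time is inversely proportional to the repetition rate (assuming the scan speed is raised proportionally, which is our assumption, not a statement from the text).

```python
# Fabrication time scales as 1/repetition-rate at fixed pulse spacing.
t_at_5khz_s = 27 * 60                  # 1st sample type: 27 minutes at 5 kHz
scaled_s = t_at_5khz_s * (5e3 / 1e6)   # same sample at a 1 MHz system
print(round(scaled_s, 1))              # ~8 s, i.e. well within half a minute
```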

4. Conclusion

In conclusion, the applicability of structurally colorized images fabricated by controlling the orientations of LSFLs to optical authentication on metals has been investigated by training a fine-tuned version of one of the most lightweight CNNs, MobileNetV1. By constructing two types of structurally colorized images with LSFLs, we have successfully demonstrated that the structural color variations due to the LSFL orientations, combined with a broad range of illumination incident angles and the differences in the captured image features across the sample surface, can be used for deep learning-based optical authentication.

Funding

Korea Institute of Industrial Technology (EO-22-0006); Optimus System Co., Ltd. (IR-22-0011); National Research Foundation of Korea (2020R1G1A1103073); National Research Foundation of Korea (2021R1F1A1063020).

Acknowledgments

This study has been conducted with the support of the Korea Institute of Industrial Technology as “Development of root technology for multi-product flexible production (EO-22-0006),” Optimus System Co., Ltd. as “Development of GPU-based optical recognition module for femtosecond laser-treated nanoscale patterns using a machine learning approach (IR-22-0011),” and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2020R1G1A1103073). This work was also supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1F1A1063020).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. J. Sun, B. Bhushan, and J. Tong, “Structural coloration in nature,” RSC Adv. 3(35), 14862–14889 (2013). [CrossRef]  

2. W. Hong, Z. Yuan, and X. Chen, “Structural Color Materials for Optical Anticounterfeiting,” Small 16(16), 1907626 (2020). [CrossRef]  

3. E. I. Ageev, V. P. Veiko, E. A. Vlasova, Y. Y. Karlagina, A. Krivonosov, M. K. Moskvin, G. V. Odintsova, V. E. Pshenichnov, V. V. Romanov, and R. M. Yatsuk, “Controlled nanostructures formation on stainless steel by short laser pulses for products protection against falsification,” Opt. Express 26(2), 2117 (2018). [CrossRef]  

4. J. Qian and Q. Z. Zhao, “Anti-Counterfeiting Microstructures Induced by Ultrashort Laser Pulses,” Phys. Status Solidi A 217(11), 1901052 (2020). [CrossRef]  

5. J. E. Harvey and R. N. Pfisterer, “Understanding diffraction grating behavior: including conical diffraction and Rayleigh anomalies from transmission gratings,” Opt. Eng. 58(08), 1 (2019). [CrossRef]  

6. A. Y. Vorobyev and C. Guo, “Colorizing metals with femtosecond laser pulses,” Appl. Phys. Lett. 92(4), 041914 (2008). [CrossRef]  

7. J. Bonse, “Quo Vadis LIPSS?—Recent and Future Trends on Laser-Induced Periodic Surface Structures,” Nanomaterials 10(10), 1950 (2020). [CrossRef]  

8. A. Y. Vorobyev, V. S. Makin, and C. Guo, “Periodic ordering of random surface nanostructures induced by femtosecond laser pulses on metals,” J. Appl. Phys. 101(3), 034903 (2007). [CrossRef]  

9. T. Y. Hwang, Y. D. Kim, J. Cho, H. J. Lee, H. S. Lee, and B. Lee, “Multi-angular colorimetric responses of uni-and omni-directional femtosecond laser-induced periodic surface structures on metals,” Nanomaterials 11(8), 2010 (2021). [CrossRef]  

10. B. Dusser, Z. Sagan, H. Soder, N. Faure, J. P. Colombier, M. Jourlin, and E. Audouard, “Controlled nanostructrures formation by ultrafast laser pulses for color marking,” Opt. Express 18(3), 2913–2924 (2010). [CrossRef]  

11. T. Jwad, P. Penchev, V. Nasrollahi, and S. Dimov, “Laser induced ripples’ gratings with angular periodicity for fabrication of diffraction holograms,” Appl. Surf. Sci. 453, 449–456 (2018). [CrossRef]  

12. T. Y. Hwang and C. Guo, “Angular effects of nanostructure-covered femtosecond laser induced periodic surface structures on metals,” J. Appl. Phys. 108(7), 073523 (2010). [CrossRef]  

13. C. A. Zuhlke, G. D. Tsibidis, T. Anderson, E. Stratakis, G. Gogos, and D. R. Alexander, “Investigation of femtosecond laser induced ripple formation on copper for varying incident angle,” AIP Adv. 8(1), 015212 (2018). [CrossRef]  

14. O. J. Allegre, W. Perrie, S. P. Edwardson, G. Dearden, and K. G. Watkins, “Laser microprocessing of steel with radially and azimuthally polarized femtosecond vortex pulses,” J. Opt. 14(8), 085601 (2012). [CrossRef]  

15. J. J. J. Nivas, E. Allahyari, F. Cardano, A. Rubano, R. Fittipaldi, A. Vecchione, D. Paparo, L. Marrucci, R. Bruzzese, and S. Amoruso, “Surface structures with unconventional patterns and shapes generated by femtosecond structured light fields,” Sci. Rep. 8(1), 13613 (2018). [CrossRef]  

16. T. Y. Hwang, H. Shin, H. J. Lee, H. S. Lee, C. Guo, and B. Lee, “Rotationally symmetric colorization of metal surfaces through omnidirectional femtosecond laser-induced periodic surface structures,” Opt. Lett. 45(13), 3414–3417 (2020). [CrossRef]  

17. M. Beresna, M. Gecevičius, P. G. Kazansky, and T. Gertus, “Radially polarized optical vortex converter created by femtosecond laser nanostructuring of glass,” Appl. Phys. Lett. 98(20), 201101 (2011). [CrossRef]  

18. Y. Tamamura and G. Miyaji, “Structural coloration of a stainless steel surface with homogeneous nanograting formed by femtosecond laser ablation,” Opt. Mater. Express 9(7), 2902–2909 (2019). [CrossRef]  

19. J. Bonse, S. V. Kirner, S. Höhm, N. Epperlein, D. Spaltmann, A. Rosenfeld, and J. Krüger, “Applications of laser-induced periodic surface structures (LIPSS),” in Laser-Based Micro- and Nanoprocessing XI (2017), Vol. 10092, p. 100920N. [CrossRef]  

20. M. Soldera, F. Fortuna, S. Teutoburg-Weiss, S. Milles, K. Taretto, and A. F. Lasagni, “Comparison of structural colors achieved by laser-induced periodic surface structures and direct laser interference patterning,” J. Laser Micro Nanoeng. 15, 97–103 (2020). [CrossRef]  

21. Y. Xie, D. J. Heath, J. A. Grant-Jacob, B. S. Mackay, M. D. T. McDonnell, M. Praeger, R. W. Eason, and B. Mills, “Deep learning for the monitoring and process control of femtosecond laser machining,” J. Phys. Photonics 1(3), 035002 (2019). [CrossRef]  

22. S. Oh, H. Kim, K. Nam, and H. Ki, “Deep-learning approach for predicting laser-beam absorptance in full-penetration laser keyhole welding,” Opt. Express 29(13), 20010 (2021). [CrossRef]  

23. S. Tani and Y. Kobayashi, “Ultrafast laser ablation simulator using deep neural networks,” Sci. Rep. 12(1), 5837 (2022). [CrossRef]  

24. Q. Zhang, Z. Wang, B. Wang, Y. Ohsawa, and T. Hayashi, “Feature Extraction of Laser Machining Data by Using Deep Multi-Task Learning,” Information 11(8), 378 (2020). [CrossRef]  

25. B. Wang, P. Wang, J. Song, Y. Cheong, H. Song, Y. Wang, and S. Liu, “A hybrid machine learning approach to determine the optimal processing window in femtosecond laser-induced periodic nanostructures,” J. Mater. Process. Technol. 308, 117716 (2022). [CrossRef]  

26. L. Baronti, A. Michalek, M. Castellani, P. Penchev, T. L. See, and S. Dimov, “Artificial neural network tools for predicting the functional response of ultrafast laser textured/structured surfaces,” Int. J. Adv. Manuf. Technol. 119(5-6), 3501–3516 (2022). [CrossRef]  

27. A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” arXiv, ArXiv arXiv:1704.04861 (2017). [CrossRef]  

28. E. Hecht, Optics, 4th ed. (Addison-Wesley, 2002).

29. J. E. Harvey and C. L. Vernold, “Description of Diffraction Grating Behavior in Direction Cosine Space,” Appl. Opt. 37(34), 8158 (1998). [CrossRef]  

Supplementary Material (1)

Supplement 1: The schematic orientations of LSFLs (Fig. S1), and the optical micrographs and morphological profiles of the raster-scanned sample surfaces shown in Figs. 2 and 3 (Figs. S2 and S3).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Figures (7)

Fig. 1. (a) Schematic of LSFL fabrication in our experiments. The polarization direction of fs laser pulses at the laser output is in the x-direction and is controlled with a half-wave plate (HWP) before the focusing lens. (b) Training and authenticating configuration for structurally colored images formed by LSFLs. The images are captured under white LED illumination. (c) Normalized spectral power distribution of the white LED used in (b).
Fig. 2. (a) Binary logo image of KITECH used to fabricate the 1st type of structurally colored images with LSFLs on stainless steel plates. SEM images show the LSFL orientations in the raster scanlines for the black and white pixels of the binary image at various δWK values. (b) Experimentally captured images of the structurally colored KITECH logo on a stainless steel plate (δWK = 30°) under white LED illumination from the sample surface normal. The samples are observed at an angle range of about 45–60° across the surface in various directions. Black and white pixels (KITECH) in (b) denote clearly structurally colorized regions corresponding to the black and white pixels of the binary image shown in (a), respectively. See the optical micrograph and morphological profile of the raster-scanned sample surface in Fig. S2 in Supplement 1.
Fig. 3. (a) Binary logo images of Optimus System Co. Ltd. and KITECH used for the 2nd type of structurally colored images constructed with LSFLs on stainless steel plates. SEM images describe the orientations (grating vectors) of LSFLs in the black and white pixel regions for these two binary logo images when δWO = 20°. (b) Experimentally captured images of the structurally colorized logo observed at an angle range of about 40–60° across the sample surface in multiple directions under white light illumination from the surface normal. Black and white pixels (Optimus and KITECH) shown in (b) represent the directions of sample observation with clear structural colors for the four orientations of LSFLs described in (a). See the optical micrograph and morphological profile of the raster-scanned sample surface in Fig. S3 in Supplement 1.
Fig. 4. (a) Images of the 1st and 2nd types of structurally colored logos with δWK and δWO, respectively. The images are captured by the CMOS camera under white light illumination with the image capture configuration shown in Fig. 1(b). The white LED illuminates from the bottom of the images at an incident angle range of 30–60° across the sample surface. (b) Captured images of the 2nd type of structurally colored logos for δWO = 0°, 45°, and 90° from the blue, green, and red color channels.
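The per-channel views in Fig. 4(b) correspond to slicing the captured color array into three 8-bit brightness images. A minimal sketch, assuming the camera frame is an (H, W, 3) array in BGR channel order; the function name is illustrative, not taken from the paper:

```python
import numpy as np

def channel_brightness_images(image_bgr):
    """Split a captured (H, W, 3) 8-bit color image into three
    single-channel brightness images (B, G, R).

    Because the structural colors shift with the LSFL orientation
    angle, each channel reveals different regions of the colorized
    logo, as in Fig. 4(b)."""
    blue = image_bgr[:, :, 0]
    green = image_bgr[:, :, 1]
    red = image_bgr[:, :, 2]
    return blue, green, red

# Example with a synthetic 2x2 BGR frame
frame = np.array([[[10, 20, 30], [40, 50, 60]],
                  [[70, 80, 90], [100, 110, 120]]], dtype=np.uint8)
b, g, r = channel_brightness_images(frame)
```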
Fig. 5. Optical authentication procedure of structurally colored samples with our fine-tuned MobileNetV1. To effectively capture the color and displayed feature variations from the samples, one full-color sample image and three 8-bit brightness images from the B, G, and R color channels are input separately to our fine-tuned MobileNetV1, trained on the 1st and 2nd types of structurally colored logo images for δWK and δWO = 0, respectively. The authentication is approved only if all four conditions (Condition #1-#4) are satisfied with the separate inputs of four images obtained from the captured sample image. TH indicates the threshold of classification.
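The four-condition decision rule described in the caption can be sketched as follows. The MobileNetV1 inference itself is abstracted away here: `authenticate` (an illustrative name, not from the paper) simply checks that the posterior probability obtained for each of the four inputs clears the threshold TH:

```python
def authenticate(posteriors, threshold=0.95):
    """Approve only if all four conditions (Condition #1-#4) hold:
    the posterior probabilities obtained from the full-color input (F)
    and the three channel brightness inputs (B, G, R) must each reach
    the classification threshold TH (0.95 in this study)."""
    if len(posteriors) != 4:
        raise ValueError("expected four posterior probabilities (F, B, G, R)")
    return all(p >= threshold for p in posteriors)

# A genuine sample clears the threshold on all four inputs...
print(authenticate([0.99, 0.97, 0.98, 0.96]))  # True
# ...while failing any single condition rejects the sample.
print(authenticate([0.99, 0.97, 0.60, 0.96]))  # False
```

The all-pass rule makes the check conservative: a counterfeit must reproduce the color response in every channel simultaneously, not just in the full-color view.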
Fig. 6. Posterior probability versus δWK and δWO obtained from the same type of training and testing sample images. Our fine-tuned MobileNetV1 is trained by the captured images of the (a) 1st and (b) 2nd type of structurally colored samples for δWK and δWO = 0, respectively, and is tested with 50 images captured from the same type of the samples with various δWK and δWO values. Black, blue, green, and red dots denote average posterior probabilities obtained from the F, B, G, and R nodes of our trained MobileNetV1 models when the inputs are the captured full-color image and its three brightness images each from the B, G, and R color channel, respectively, as described in Fig. 5. Dashed black lines show a posterior probability of 0.95, used as the threshold of classification in our study.
Fig. 7. Posterior probability versus δWO and δWK for the different types of training and testing sample images. Our fine-tuned MobileNetV1 is trained by the captured images of the (a) 1st and (b) 2nd type of structurally colored samples for δWK and δWO = 0, respectively, and is tested with 50 images captured from the (a) 2nd and (b) 1st type of the samples with various δWO and δWK values. Black, blue, green, and red dots denote average posterior probabilities obtained from the F, B, G, and R nodes of our trained MobileNetV1 models with the four inputs of the captured full-color image and its three brightness images each from the B, G, and R color channel, respectively, as shown in Fig. 5. Dashed black lines indicate the threshold of posterior probability, 0.95, used for the sample classification in our study.