Optica Publishing Group

Diverse ranking metamaterial inverse design based on contrastive and transfer learning

Open Access

Abstract

Metamaterials, thoughtfully designed, have demonstrated remarkable success in the manipulation of electromagnetic waves. More recently, deep learning has advanced the performance of metamaterial inverse design. However, existing deep-learning-based inverse design methods often overlook the trade-off between optimal design and outcome diversity. To address this issue, in this work we introduce contrastive learning to implement a simple but effective global ranking inverse design framework. Viewing inverse design as spectrum-guided ranking of candidate structures, our method establishes a resemblance relationship between optical responses and metamaterials, enabling the prediction of diverse metamaterial structures based on a global ranking. Furthermore, we combine transfer learning to enrich the framework so that it is not limited to predicting a single metamaterial representation. Our work offers both inverse design evaluation and diverse outcomes, and the proposed method may shrink the gap between the flexibility and accuracy of on-demand design.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Metamaterials are subwavelength-structured artificial media that can control electromagnetic wave propagation in unique ways, surpassing the limitations of conventional optics. These technologies offer opportunities for electromagnetic wave manipulation [1–4], imaging [5–8], and cloaking [9–11]. The special optical responses of metamaterials result from the delicate design of periodic or non-periodic meta-atoms [7,12–16]. Most reported meta-atoms have been designed with conventional algorithms [17–23]. Up to now, however, such algorithms have been insufficient, without the guidance of physics, for producing metamaterial structures that satisfy an expected spectrum.

Different from physics-inspired design, researchers have recently shown increasing interest in designing structures with deep learning [24–26]. Deep learning, built on multilayer neural networks, aims to learn the representational relationship between input and output data [27]. In the electromagnetic field in particular, several attempts have been made to model unit cells [28–30], such as designing on-demand chiral metamaterials [31–33], optimizing metasurfaces with prior-knowledge-driven neural networks [34–36], and using neural networks to control meta-atoms for holographic images [37,38]. However, these studies have mostly been restricted to one-to-one representational forms of inputs and outputs. As a result, such methods concentrate on a single result and lack inverse design flexibility. By employing deep generative models, recent works attempt to establish a one-to-many relation between optical responses and physical structures [39,40]. Instead of a one-to-one model, they adopt a model more in line with design requirements, which has greatly advanced the flexibility of inverse design. But the outputs of these methods are not connected, are randomly restricted, and are not subjected to global evaluation. In this work, unifying the one-to-one and one-to-many methods (Fig. 1), we further perform inverse design by ranking a shortlist of metamaterial structures.


Fig. 1. Conceptual comparison of three inverse design mechanisms. (a): Inverse design result only contains an optimal solution [24]. (b): Inverse design of one-to-many but no ranking mapping [39,40]. (c): Schematic of the proposed method. Our method provides a ranked one-to-many inverse design result.


From this perspective, we propose a global ranking inverse design method that learns the similarity between one electromagnetic wave response and all meta-atom architectures. Using the idea of contrastive learning [41], our method treats metamaterial structures as a signal on the same level as optical responses rather than as supervised labels. Contrastive learning is used to train the similarity relationship between these two signals. Based on the trained similarity relationship, the inverse design results, which are better suited for on-demand design, are presented in ranked order. Moreover, to extend the framework across representation forms, we incorporate transfer learning [42–44], which allows our model to easily cope with different forms of metamaterial structures. The combination of contrastive learning and transfer learning leads to more diverse prediction results, increasing flexibility in the metamaterial design process.

The details of the calculation process in our model are shown in Fig. 2. Figures 2(a) and 2(c) show the training and prediction processes of contrastive learning, while Figs. 2(b) and 2(d) show the same processes combined with transfer learning. The basic neural network framework (Fig. 2(a)) consists of two components, a light structure feature encoder and a complex spectrum feature encoder, whose outputs have identical dimensions. During training, the two encoder outputs are used to calculate their similarity, which the loss minimizes. At prediction time, to make on-demand designs, the extracted spectrum features are compared with the candidate structures to obtain a resemblance ranking, from which a list of well-organized meta-atom representations can be readily acquired. To validate our method, we selected two different forms of metamaterial data and conducted a series of investigations, and we use transfer learning to enable the framework to perform inverse design across the two representation forms.


Fig. 2. Overview of the neural network framework. (a), (c) Architecture of the contrastive learning neural network. (a) The training phase, which aims to maximize the similarity of corresponding elements. (c) The prediction phase: after all global structure matrices are generated, they are compared with the target spectrum. (b), (d) Combination of transfer learning into our framework. The additional structure encoder shares the spectral encoder with the previous structure encoder.


2. Methods

2.1 Neural network architecture

Inspired by CLIP (contrastive language-image pre-training) [41], whose contrastive learning is highly efficient during computation, we adopt an analogous pipeline in our neural network, with different encoders. The training framework for one structure type is schematically depicted in Fig. 2(a). Collected data pairs are sent to the metamaterial structure encoder and the spectral encoder, respectively. After separate encoding operations, data pairs of different sizes are output as vectors of the same size. Scaled matrix multiplication is then used to calculate the similarity between the vectors. The resulting batch size × batch size matrix, obtained by the scaled similarity calculation, establishes the resemblance relationship between the vectors. Given the input order, the corresponding elements lie exactly along the diagonal. Therefore, using gradient backpropagation, we minimize the cross-entropy loss of the diagonal elements to optimize the weight parameters of the network. The loss function of the whole framework is given by [45]

$$\mathcal{L} = -\log \frac{\exp(x_{n,y_n}/\tau)}{\sum_{i=1}^{\text{batch size}} \exp(x_{n,y_i}/\tau)}$$
where $x_n$ is the feature of the $n$th spectrum and $y_n$ is the feature of the $n$th structure, so they form a pair; $y_i$ is the feature of any structure; batch size is the number of samples in one batch; and $\tau$ is a learnable temperature parameter used to increase the model's tolerance for samples beyond the most closely matching one. Although the loss function only guarantees the accuracy of the most similar data pair, the other similarity relations are correspondingly resolved while training over a tremendous number of data pairs.
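
As a sketch, the loss above can be computed from a batch of encoder outputs as a cross-entropy over the scaled similarity matrix, with the matching pairs on the diagonal. The following minimal NumPy illustration assumes CLIP-style L2-normalized features; the batch size, feature dimension, and temperature value are placeholders, not the paper's settings:

```python
import numpy as np

def contrastive_loss(spec_feats, struct_feats, tau=0.07):
    """Cross-entropy over a scaled similarity matrix whose diagonal
    holds the matching (spectrum, structure) pairs."""
    # L2-normalize both feature sets so the dot product is a cosine similarity.
    spec = spec_feats / np.linalg.norm(spec_feats, axis=1, keepdims=True)
    struct = struct_feats / np.linalg.norm(struct_feats, axis=1, keepdims=True)
    logits = spec @ struct.T / tau               # (batch, batch) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    # Log-softmax over each row; the target for row n is column n (the diagonal).
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Correctly matched pairs drive the loss toward zero, while mismatched pairs keep it large, which is what pushes the two encoders into a shared feature space.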

After training, the two trained encoders and the similarity matrix can be used to compute the similarity relationship between an input structure and an input spectrum. For the metamaterial structure, we can easily traverse its entire solution space to generate all global matrix representations. Using the trained encoders, we calculate the proximity of every structure to the expected spectral curve. The neural network then extracts a vector representing the global similarity ranking; the structure with maximum similarity is the required result of the ranked inverse prediction. The complete prediction process is illustrated in Fig. 2(c). (The details of each encoder are displayed in Supplement 1, Figs. S1–S3.) Residual connections [46–48] offer an effective way to achieve good training results and are therefore applied to most of the encoder architectures.
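
The ranking step itself reduces to sorting candidate structure features by their similarity to one target-spectrum feature. A sketch, assuming both encoders output vectors in the same space (cosine similarity is used here as one reasonable stand-in for the scaled dot product above):

```python
import numpy as np

def rank_candidates(spectrum_feat, candidate_feats, top_k=3):
    """Return the indices of the top_k candidate structures most similar
    to the target spectrum feature, plus their similarity scores."""
    s = spectrum_feat / np.linalg.norm(spectrum_feat)
    c = candidate_feats / np.linalg.norm(candidate_feats, axis=1, keepdims=True)
    sims = c @ s                       # one similarity score per candidate
    order = np.argsort(-sims)          # descending similarity = global ranking
    return order[:top_k], sims[order[:top_k]]
```

Because the whole solution space is scored at once, the output is a genuine global ranking rather than a single point estimate.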

Contrastive learning satisfies most of the requirements of ranked inverse design within one kind of metamaterial; however, it does not by itself complete the inverse design of multiple kinds of metamaterials simultaneously. For the metamaterial absorbers discussed here, predictions of different structures always originate from the same absorption spectra, which consist of a number of absorption peaks whose position, width, and depth are determined by the structural parameters. Because the spectra are similar in form, the same encoder can be used to process them. Following the framework above, transfer learning is therefore introduced to predict different kinds of metamaterial structures simultaneously from the same spectrum. The transfer learning training framework is shown in Fig. 2(b) and the prediction process in Fig. 2(d).

To predict multiple structure manifestations, the spectral encoder of the trained neural network is kept frozen so that it can be transferred and reused. Multiple types of structure encoders individually interact with the frozen spectral encoder; consequently, the different structure encoders share one spectral encoder. After training, when the anticipated spectrum is input to the spectral encoder, the different structure encoders output ranking vectors for their respective types of metamaterial structures.
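
A minimal sketch of the freezing step, with toy linear maps standing in for the trained encoders (the weight shapes, learning rate, and update rule are illustrative assumptions, not the paper's training details):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy linear "encoders" mapping inputs into a shared 64-dim feature space.
spectral_encoder = {"W": rng.normal(size=(512, 64)), "frozen": True}   # reused as-is
lattice_encoder  = {"W": rng.normal(size=(16, 64)),  "frozen": False}  # newly trained

def sgd_step(encoder, grad, lr=1e-2):
    """Apply a gradient update only to encoders that are not frozen."""
    if not encoder["frozen"]:
        encoder["W"] -= lr * grad

# During transfer learning only the new structure encoder moves; the shared
# spectral encoder keeps the weights learned on the first dataset.
```

Freezing the shared encoder is what guarantees that the ring and lattice branches rank structures against the same spectral feature space.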

2.2 Metamaterial structure

Our major motivation in acquiring data is that most existing works [29,31] oversample, so much spectral information is lost. We now describe the process of collecting two different datasets of (metamaterial structure, optical response) pairs. Figure 3 shows the schematic structures of the metamaterials and their corresponding digital signal representations. The first candidate is the concentric-copper-ring metamaterial in Fig. 3(a). It is a three-layered structure comprising a metallic pattern, a dielectric layer, and a ground metal layer. The ground metal layer blocks the transmitted signal, since the inverse design is carried out according to the absorption spectra. The metamaterial is composed of an array of square lattices with concentric copper rings patterned on the dielectric layer. The period of the metamaterial is p = 9 mm. The thickness of the metallic pattern and the ground metal layer is 0.017 mm. The dielectric layer (FR4) has a thickness of 1.5 mm, with relative permittivity ɛ = 4.3 and permeability µ = 1. The width of each copper ring is 0.5 mm and the minimal distance between two adjacent rings is 0.5 mm, but the number of rings in each unit cell is randomly determined in the inverse design. The radius of the largest ring is c = 8 mm. In the digital signal representation, each position is marked as “1” or “0” depending on whether the area is covered by a copper ring. The concentric-copper-ring metamaterial can therefore be expressed as a 1 × 16 vector. The second candidate is the lattice metamaterial [49] in Fig. 3(b).
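
Under our reading of this encoding (each bit standing for one 0.5 mm radial band, an assumption consistent with the 16-bit vector, the 0.5 mm ring width, and the 8 mm maximum radius), a code can be mapped back to physical ring extents as follows:

```python
import numpy as np

def code_to_rings(code, band=0.5):
    """Map a 16-bit code to the radial extent (in mm) of each copper band.
    Bit i is assumed to cover the annulus [i*band, (i+1)*band)."""
    return [(i * band, (i + 1) * band) for i in np.flatnonzero(code)]
```

This makes the structure representation directly interchangeable with the geometry fed to the full-wave solver.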


Fig. 3. Schematic of the unit cells of the metasurfaces and their matrix representations. (a) A circular ring metamaterial. The diameter of the ring accounts for one bit in the 16-bit matrix every 0.5 mm. (b) A square-lattice metamaterial. Using the operation of symmetry twice, the surface pattern of the metamaterial can be represented as a 4 × 4 matrix.


Each unit cell is divided into 8 × 8 lattices. Each lattice is represented as “0” or “1” according to whether it is blank or holds a copper resonator. To reduce computation, the unit cell always has a twofold symmetrical pattern with horizontal and vertical mirror symmetries during the inverse design. Under this constraint, we simply consider the 4 × 4 pattern (i.e., a 4 × 4 matrix) that can be randomly generated by a computer program. The area of each lattice is 1 × 1 mm2. During the inverse design, the metallic pattern of the metamaterial is the key factor and is transformed into a vector or matrix as the input of the neural network.
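
The twofold mirror symmetry means only a 4 × 4 quarter needs to be generated; the full 8 × 8 cell follows by mirroring, as in this sketch:

```python
import numpy as np

def expand_quarter(q):
    """Mirror a 4x4 quarter pattern horizontally, then vertically,
    to obtain the full 8x8 unit cell with both mirror symmetries."""
    top = np.concatenate([q, np.fliplr(q)], axis=1)       # 4 x 8
    return np.concatenate([top, np.flipud(top)], axis=0)  # 8 x 8

rng = np.random.default_rng(0)
cell = expand_quarter(rng.integers(0, 2, size=(4, 4)))
```

The symmetry constraint shrinks the search space from 2^64 patterns to 2^16, the same size as the ring code.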

For the two structures described above, we analyze two different kinds of absorbing metamaterial data in the frequency range of 2 to 30 GHz. The matrix encodings corresponding to the different patterns are also included in Fig. 3. Using randomly generated matrices, 20000 ring metamaterials and 20000 lattice metamaterials are constructed. The metamaterial examples are then simulated in CST Microwave Studio to collect reflection spectra. The collected data (i.e., spectrum-pattern pairs) are down-sampled to a length of 512. To evaluate the network training results, nine-tenths of each dataset is treated as training data and the rest as test data.
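
A sketch of this preprocessing: uniform index down-sampling to 512 points and a 90/10 split. The exact resampling scheme and shuffling used in the paper are not specified, so both are assumptions here:

```python
import numpy as np

def prepare(spectra, structures, n_points=512, train_frac=0.9, seed=0):
    """Down-sample each spectrum to n_points and split the pairs 90/10."""
    idx = np.linspace(0, spectra.shape[1] - 1, n_points).round().astype(int)
    spectra = spectra[:, idx]                       # uniform down-sampling
    order = np.random.default_rng(seed).permutation(len(spectra))
    cut = int(train_frac * len(spectra))
    tr, te = order[:cut], order[cut:]
    return (spectra[tr], structures[tr]), (spectra[te], structures[te])
```

Splitting the spectrum-structure pairs jointly keeps each spectrum aligned with its own structure matrix.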

3. Results

First, we generate a large matrix for the ring structure samples (20000) and preprocess all data. To inversely design the metamaterial structures, the matrices and the target optical response are input to the structure and spectral encoders, respectively. The trained encoders give the corresponding feature vectors, which are used to calculate the proximity ranking. According to the ranking, the unit structure can be selected depending on the design requirement. All metamaterial structure matrices (65536) are produced by traversing the structures in the whole inverse design. Subsequently, all matrices and one expected spectrum are fed into the neural network. The structure with the maximum similarity is generally the result of the inverse design. (The hyper-parameters of the training process are provided in Supplement 1, where we also examine the impact of feature dimensions [50] on the results.)
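
The traversal of the ring solution space amounts to enumerating every 16-bit code (2^16 = 65536 candidates). One compact way to build the full candidate matrix is a bit-extraction trick; this is a sketch, not necessarily how the authors generated it:

```python
import numpy as np

# All 2**16 = 65536 ring codes: row i holds the 16 bits of the integer i
# (bit j of i in column j), covering the entire structural solution space.
codes = ((np.arange(2**16)[:, None] >> np.arange(16)) & 1).astype(np.int8)
```

Every row of `codes` can then be encoded by the structure encoder and scored against the target-spectrum feature to produce the global ranking.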

As mentioned, the top three ranked structures are the result of the inverse design, as shown in Fig. 4. The two ground-truth curves, which are not in the training dataset, are selected from newly generated random data. The ground-truth curves, drawn with red dots, are reduced to 512 points after the CST Microwave Studio simulation. The predicted spectra are drawn with black solid lines.


Fig. 4. Performance of inverse design of ring-shaped metamaterial. Expected spectra (not training data) are plotted by red dots. Predicted spectra are plotted by black lines. (a)-(c) The top three ranking results of ring metamaterials for the same expected spectrum. (d)-(f) The top three ranking results of ring metamaterial for another expected spectrum.


We randomly selected two spectra as the expected spectra, and different metamaterials are predicted for each. Figures 4(a)–4(c) show different predicted results for the first expected spectrum, and Figs. 4(d)–4(f) show different predicted results for the second. For each expectation, the results are displayed in ranking order from left to right. Note that the proposed inverse design supports one-to-many prediction as well as ranking.

Although our model achieves good performance on ring metamaterials, to explore the reusability of the framework we further verify its capability on the lattice metamaterial. Lattice metamaterial samples are input to the neural network using the same framework, with the network modified slightly for the special form of lattice metamaterials. (The details of the encoders are presented in Supplement 1, Fig. S3.) Figure 5 shows an overview of the inverse design results for the lattice metamaterials. As with the ring metamaterials, Figs. 5(a)–5(c) illustrate the results for one randomly chosen expected spectrum, and Figs. 5(d)–5(f) show the results for a different spectrum, with the ranking again listed in order. In Figs. 4 and 5, there is a significant positive correlation between predictions and expectations.


Fig. 5. Performance of inverse design of square-lattice metamaterial. Expected spectra (not training data) are plotted by red dots. Predicted spectra are plotted by black lines. (a)-(c) The top three ranking results of square-lattice metamaterial for the same expected spectrum. (d)-(f) The top three ranking results of square-lattice metamaterial for another expected spectrum.


Although our model satisfies the needs of a particular domain, practical applications often pose more complex requirements. To make our model more suitable for real application scenarios, the training process is combined with transfer learning to predict different kinds of structures. For the ring metamaterials, the whole training process of our framework is used as before. The spectral encoder of the trained neural network is then frozen (Fig. 2(c)) and transferred to constitute a new neural network together with an untrained structure encoder, which is trained on the lattice metamaterials. In the end, two different-looking neural networks are obtained that share the same spectral encoder parameters. When prediction data are fed into the shared spectral encoder, its output is passed to both the ring structure encoder and the lattice encoder, which yield separate results, one for the ring metamaterial and the other for the lattice metamaterial (Fig. 2(d)). The results of combined contrastive and transfer learning are illustrated in Fig. 6.


Fig. 6. Diversity results from the inverse design obtained by transferring the spectral encoder of the ring neural network to the square-lattice neural network. (a), (d) The top ranking results of ring and lattice metamaterials for one randomly selected spectrum. (b), (e) The top ranking results of ring and lattice metamaterials for a second randomly selected spectrum. (c), (f) The top ranking results of ring and lattice metamaterials for a third randomly selected spectrum.


Figure 6 presents an overview of the inverse design results for ring and lattice metamaterials. The results shown are the optimal predictions for the different kinds of metamaterials, i.e., selected according to the ranking order (the remaining ranking results are included in Supplement 1; each column is displayed separately in Figs. S4–S6). As can be seen from Fig. 6, after combining transfer learning, the neural network can predict different metamaterial structures for the same spectrum. Naturally, predicting multiple types of metamaterials from the same spectrum falls somewhat short of the ideal prediction. This may result from the fact that the structure itself cannot realize the expected spectrum [51,52], rather than from an error of the inverse design. If more data were used to train a larger pre-trained model, much better transfer learning results should be achievable.

4. Conclusion

In conclusion, we present a deep learning framework for ranking metamaterial inverse design based on the combination of contrastive and transfer learning. Transfer learning is leveraged to enable simultaneous prediction of multiple types of metamaterial structures. We collected data for two completely different structures and experimented with our model on both datasets, where it performs well. The results show that our method represents a significant advance in metamaterial inverse design and holds great potential for practical applications. Ranking inverse design solves the non-uniqueness problem in a simpler and clearer way. Going one step further, the proposed framework fills a gap in metamaterial inverse design with the simultaneous design of multiple metamaterial structures. Further research could usefully explore applying our method to other metamaterial scopes, such as chiral metamaterials and imaging. Alternatively, by combining other similarity calculation methods [53], the applications of our work can be extended.

Funding

National Natural Science Foundation of China (62175049, 62275061); Natural Science Foundation of Heilongjiang Province (ZD2020F002); Fundamental Research Funds for the Central Universities (3072022TS2509).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. L. Raju, K. T. Lee, Z. Liu, D. Zhu, M. Zhu, E. Poutrina, A. Urbas, and W. Cai, “Maximized Frequency Doubling through the Inverse Design of Nonlinear Metamaterials,” ACS Nano 16(3), 3926–3933 (2022). [CrossRef]  

2. L. Li, H. Zhao, C. Liu, L. Li, and T. J. Cui, “Intelligent metasurfaces: control, communication and computing,” eLight 2(1), 7 (2022). [CrossRef]  

3. C. Qian, X. Lin, X. Lin, J. Xu, Y. Sun, E. Li, B. Zhang, and H. Chen, “Performing optical logic operations by a diffractive neural network,” Light: Sci. Appl. 9(1), 59 (2020). [CrossRef]  

4. J. Weng, Y. Ding, C. Hu, X. F. Zhu, B. Liang, J. Yang, and J. Cheng, “Meta-neural-network for real-time and passive deep-learning-based object recognition,” Nat. Commun. 11(1), 6309 (2020). [CrossRef]  

5. W. J. Padilla and R. D. Averitt, “Imaging with metamaterials,” Nat. Rev. Phys. 4(2), 85–100 (2021). [CrossRef]  

6. C. Liu, Q. Ma, Z. J. Luo, Q. R. Hong, Q. Xiao, H. C. Zhang, L. Miao, W. M. Yu, Q. Cheng, L. Li, and T. J. Cui, “A programmable diffractive deep neural network based on a digital-coding metasurface array,” Nat. Electron. 5(2), 113–122 (2022). [CrossRef]  

7. J. M. Gonzalez Estevez, J. Antonio Sánchez-Gil, A. Ali, A. Mitra, and B. Aïssa, “Metamaterials and Metasurfaces: A Review from the Perspectives of Materials, Mechanisms and Advanced Metadevices,” Nanomaterials 12(6), 1027 (2022). [CrossRef]  

8. X. Zhao, G. Duan, K. Wu, S. W. Anderson, and X. Zhang, “Intelligent Metamaterials Based on Nonlinearity for Magnetic Resonance Imaging,” Adv. Mater. 31(49), 1905461 (2019). [CrossRef]  

9. Z. Zhen, Z. Zhen, Z. Zhen, et al., “Realizing transmitted metasurface cloak by a tandem neural network,” Photonics Res. 9(5), B229–B235 (2021). [CrossRef]  

10. C. Qian, B. Zheng, Y. Shen, L. Jing, E. Li, L. Shen, and H. Chen, “Deep-learning-enabled self-adaptive microwave cloak without human intervention,” Nat. Photonics 14(6), 383–390 (2020). [CrossRef]  

11. C. Qian and H. Chen, “A perspective on the next generation of invisibility cloaks—Intelligent cloaks,” Appl. Phys. Lett. 118(18), 180501 (2021). [CrossRef]  

12. W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13(3), 220–226 (2018). [CrossRef]  

13. J. W. You, Q. Ma, Z. Lan, Q. Xiao, N. C. Panoiu, and T. J. Cui, “Reprogrammable plasmonic topological insulators with ultrafast control,” Nat. Commun. 12(1), 5468 (2021). [CrossRef]  

14. Q. Ma, G. D. Bai, H. B. Jing, C. Yang, L. Li, and T. J. Cui, “Smart metasurface with self-adaptively reprogrammable functions,” Light: Sci. Appl. 8(1), 98 (2019). [CrossRef]  

15. O. Kulce, D. Mengu, Y. Rivenson, and A. Ozcan, “All-optical information-processing capacity of diffractive surfaces,” Light: Sci. Appl. 10(1), 25 (2021). [CrossRef]  

16. X. Luo, Y. Hu, X. Ou, X. Li, J. Lai, N. Liu, X. Cheng, A. Pan, and H. Duan, “Metasurface-enabled on-chip multiplexed diffractive neural networks in the visible,” Light: Sci. Appl. 11(1), 158 (2022). [CrossRef]  

17. M. D. Huntington, L. J. Lauhon, and T. W. Odom, “Subwavelength lattice optics by evolutionary design,” Nano Lett. 14(12), 7195–7200 (2014). [CrossRef]  

18. W. Zhang, K. Cheng, C. Wu, Y. Wang, H. Li, and X. Zhang, “Implementing Quantum Search Algorithm with Metamaterials,” Adv. Mater. 30(1), 1703986 (2018). [CrossRef]  

19. H. Cai, S. Srinivasan, D. A. Czaplewski, A. B. F. Martinson, D. J. Gosztola, L. Stan, T. Loeffler, S. K. R. S. Sankaranarayanan, and D. López, “Inverse design of metasurfaces with non-local interactions,” npj Comput. Mater. 6(1), 116 (2020). [CrossRef]  

20. M. M. R. Elsawy, S. Lanteri, R. Duvigneau, J. A. Fan, and P. Genevet, “Numerical Optimization Methods for Metasurfaces,” Laser Photonics. Rev. 14(10), 1900445 (2020). [CrossRef]  

21. M. Mansouree, A. McClung, S. Samudrala, and A. Arbabi, “Large-Scale Parametrized Metasurface Design Using Adjoint Optimization,” ACS Photonics 8(2), 455–463 (2021). [CrossRef]  

22. C. H. Lin, Y. S. Chen, J. T. Lin, H. C. Wu, H. T. Kuo, C. F. Lin, P. Chen, and P. C. Wu, “Automatic Inverse Design of High-Performance Beam-Steering Metasurfaces via Genetic-type Tree Optimization,” Nano Lett. 21(12), 4981–4989 (2021). [CrossRef]  

23. A. McClung, H. Kwon, A. Faraon, A. Arbabi, E. Arbabi, and M. Mansouree, “Multifunctional 2.5D metastructures enabled by adjoint optimization,” Optica 7(1), 77–84 (2020). [CrossRef]  

24. W. Ma, Z. Liu, Z. A. Kudyshev, A. Boltasseva, W. Cai, and Y. Liu, “Deep learning for the design of photonic structures,” Nat. Photonics 15(2), 77–90 (2021). [CrossRef]  

25. N. Wang, W. Yan, Y. Qu, S. Ma, S. Z. Li, and M. Qiu, “Intelligent designs in nanophotonics: from optimization towards inverse creation,” PhotoniX 2(1), 22 (2021). [CrossRef]  

26. Y. Jin, L. He, Z. Wen, B. Mortazavi, H. Guo, D. Torrent, B. Djafari-Rouhani, T. Rabczuk, X. Zhuang, and Y. Li, “Intelligent on-demand design of phononic metamaterials,” Nanophotonics 11(3), 439–460 (2022). [CrossRef]  

27. Y. Lecun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

28. P. R. Wiecha, A. Arbouet, A. Arbouet, C. Girard, C. Girard, O. L. Muskens, and O. L. Muskens, “Deep learning in nano-photonics: inverse design and beyond,” Photonics Res. 9(5), B182–B200 (2021). [CrossRef]  

29. J. Jiang, M. Chen, and J. A. Fan, “Deep neural networks for the evaluation and design of photonic devices,” Nat. Rev. Mater. 6(8), 679–700 (2020). [CrossRef]  

30. Z. Li, R. Pestourie, Z. Lin, S. G. Johnson, and F. Capasso, “Empowering Metasurfaces with Inverse Design: Principles and Applications,” ACS Photonics 9(7), 2178–2192 (2022). [CrossRef]  

31. W. Ma, F. Cheng, and Y. Liu, “Deep-Learning-Enabled On-Demand Design of Chiral Metamaterials,” ACS Nano 12(6), 6326–6334 (2018). [CrossRef]  

32. Y. Li, Y. Xu, M. Jiang, B. Li, T. Han, C. Chi, F. Lin, B. Shen, X. Zhu, L. Lai, and Z. Fang, “Self-Learning Perfect Optical Chirality via a Deep Neural Network,” Phys. Rev. Lett. 123(21), 213902 (2019). [CrossRef]  

33. E. Ashalley, K. Acheampong, L. V. Besteiro, L. V. Besteiro, P. Yu, A. Neogi, A. O. Govorov, A. O. Govorov, and Z. M. Wang, “Multitask deep-learning-based design of chiral plasmonic metamaterials,” Photonics Res. 8(7), 1213–1225 (2020). [CrossRef]  

34. J. Jiang and J. A. Fan, “Global Optimization of Dielectric Metasurfaces Using a Physics-Driven Neural Network,” Nano Lett. 19(8), 5366–5372 (2019). [CrossRef]  

35. I. Tanriover, W. Hadibrata, and K. Aydin, “Physics-Based Approach for a Neural Networks Enabled Design of All-Dielectric Metasurfaces,” ACS Photonics 7(8), 1957–1964 (2020). [CrossRef]  

36. P. Liu, L. Chen, and Z. N. Chen, “Prior-Knowledge-Guided Deep-Learning-Enabled Synthesis for Broadband and Large Phase Shift Range Metacells in Metalens Antenna,” IEEE Trans. Antennas Propagat. 70(7), 5024–5034 (2022). [CrossRef]  

37. I. Sajedian, H. Lee, and J. Rho, “Double-deep Q-learning to increase the efficiency of metasurface holograms,” Sci. Rep. 9(1), 10899 (2019). [CrossRef]  

38. C. Liu, C. Liu, W. M. Yu, W. M. Yu, Q. Ma, Q. Ma, L. Li, T. J. Cui, and T. J. Cui, “Intelligent coding metasurface holograms by physics-assisted unsupervised generative adversarial network,” Photonics Res. 9(4), B159–B167 (2021). [CrossRef]  

39. W. Ma, F. Cheng, Y. Xu, Q. Wen, and Y. Liu, “Probabilistic Representation and Inverse Design of Metamaterials Based on a Deep Generative Model with Semi-Supervised Learning Strategy,” Adv. Mater. 31(35), 1901111 (2019). [CrossRef]  

40. C. Yeung, R. Tsai, B. Pham, B. King, Y. Kawagoe, D. Ho, J. Liang, M. W. Knight, and A. P. Raman, “Global Inverse Design across Multiple Photonic Structure Classes Using Generative Deep Learning,” Adv. Opt. Mater. 9(20), 2100548 (2021). [CrossRef]  

41. A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever, “Learning Transferable Visual Models From Natural Language Supervision,” in International Conference on Machine Learning (PMLR, 2021), pp. 8748–8763.

42. R. Zhu, T. Qiu, J. Wang, S. Sui, C. Hao, T. Liu, Y. Li, M. Feng, A. Zhang, C. W. Qiu, and S. Qu, “Phase-to-pattern inverse design paradigm for fast realization of functional metasurfaces via transfer learning,” Nat. Commun. 12(1), 2974 (2021). [CrossRef]  

43. Z. Fan, C. Qian, Y. Jia, M. Chen, J. Zhang, X. Cui, E. P. Li, B. Zheng, T. Cai, and H. Chen, “Transfer-Learning-Assisted Inverse Metasurface Design for 30% Data Savings,” Phys. Rev. Appl. 18(2), 024022 (2022). [CrossRef]  

44. C. Qiu, X. Wu, Z. Luo, H. Yang, G. He, and B. Huang, “Nanophotonic inverse design with deep neural networks based on knowledge transfer using imbalanced datasets,” Opt. Express 29(18), 28406–28415 (2021). [CrossRef]  

45. Y. Zhang, H. Jiang, Y. Miura, C. D. Manning, and C. P. Langlotz, “Contrastive learning of medical visual representations from paired images and text,” in Machine Learning for Healthcare Conference (PMLR, 2022), pp. 2–25.

46. C. Yeung, J. M. Tsai, B. King, B. Pham, D. Ho, J. Liang, M. W. Knight, and A. P. Raman, “Multiplexed supercell metasurface design and optimization with tandem residual networks,” Nanophotonics 10(3), 1133–1143 (2021). [CrossRef]  

47. J. Jiang and J. A. Fan, “Multiobjective and categorical global optimization of photonic structures based on ResNet generative neural networks,” Nanophotonics 10(1), 361–369 (2020). [CrossRef]  

48. Y. Cai, Y. Huang, N. Feng, and Z. Huang, “Improved Transformer-Based Target Matching of Terahertz Broadband Reflective Metamaterials With Monolayer Graphene,” IEEE Trans. Microwave Theory Techn. 71(8), 3284–3293 (2023). [CrossRef]  

49. T. Qiu, X. Shi, J. Wang, Y. Li, S. Sui, S. Qu, Q. Cheng, and T. J. Cui, “Deep Learning: A Rapid and Efficient Route to Automatic Metasurface Design,” Adv. Sci. 6(12), 1900128 (2019). [CrossRef]

50. M. Zandehshahvar, Y. Kiarashi, M. Chen, R. Barton, and A. Adibi, “Inverse design of photonic nanostructures using dimensionality reduction: reducing the computational complexity,” Opt. Lett. 46(11), 2634–2637 (2021). [CrossRef]  

51. Y. Kiarashinejad, M. Zandehshahvar, S. Abdollahramezani, O. Hemmatyar, R. Pourabolghasem, and A. Adibi, “Knowledge discovery in nanophotonics using geometric deep learning,” Advanced Intelligent Systems 2(2), 1900132 (2020). [CrossRef]  

52. M. Zandehshahvar, Y. Kiarashinejad, M. Zhu, H. Maleki, T. Brown, and A. Adibi, “Manifold learning for knowledge discovery and intelligent inverse design of photonic nanostructures: breaking the geometric complexity,” ACS Photonics 9(2), 714–721 (2022). [CrossRef]  

53. M. Zandehshahvar, Y. Kiarashi, M. Zhu, D. Bao, M. H. Javani, R. Pourabolghasem, and A. Adibi, “Metric Learning: Harnessing the Power of Machine Learning in Nanophotonics,” ACS Photonics 10(4), 900–909 (2023). [CrossRef]  

Supplementary Material (1)

Supplement 1: Details of the neural network and some results

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (6)

Fig. 1.
Fig. 1. Conceptual comparison of three inverse design mechanisms. (a) The inverse design result contains only a single optimal solution [24]. (b) One-to-many inverse design without a ranked mapping [39,40]. (c) Schematic of the proposed method, which provides a ranked one-to-many inverse design result.
Fig. 2.
Fig. 2. Overview of the neural network framework. (a), (c) Architecture of the contrastive learning neural network. (a) The training phase, which aims to maximize the similarity of corresponding spectrum–structure pairs. (c) The prediction phase: after all global structure matrices are generated, they are compared with the target spectrum. (b), (d) Integration of transfer learning into the framework: the additional structure encoder shares the spectral encoder with the previous structure encoder.
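The prediction phase summarized in the Fig. 2 caption, comparing every candidate structure embedding with the target spectrum embedding and ranking by similarity, can be sketched as follows. This is an illustrative sketch, not the paper's code: the function name `rank_structures`, the cosine-similarity metric, and the embedding shapes are assumptions.

```python
import numpy as np

def rank_structures(target_emb, structure_embs, top_k=3):
    """Rank candidate structure embeddings by cosine similarity to the
    target spectrum embedding (illustrative of the prediction phase).

    target_emb: (dim,) embedding of the expected spectrum.
    structure_embs: (n_candidates, dim) embeddings of all global structures.
    Returns the indices of the top_k candidates and their similarities.
    """
    t = target_emb / np.linalg.norm(target_emb)
    s = structure_embs / np.linalg.norm(structure_embs, axis=1, keepdims=True)
    sims = s @ t                       # cosine similarity per candidate
    order = np.argsort(-sims)[:top_k]  # descending similarity
    return order, sims[order]
```

Returning the top-k candidates rather than a single argmax is what yields the ranked, diverse one-to-many output illustrated in Fig. 1(c).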
Fig. 3.
Fig. 3. Schematic of the unit cells of the metasurfaces and their matrix representations. (a) A circular ring metamaterial; each 0.5 mm of the ring diameter corresponds to one bit in the 16-bit matrix. (b) A square-lattice metamaterial; by applying mirror symmetry twice, the surface pattern of the metamaterial can be represented as a 4 × 4 matrix.
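The two matrix representations described in the Fig. 3 caption can be illustrated with a short sketch. Both helpers are hypothetical: the caption does not state whether the 16-bit ring encoding is thermometer-style or one-hot (thermometer is assumed here), and `expand_quarter` only demonstrates how two mirror operations recover a full symmetric unit cell from a 4 × 4 quarter.

```python
import numpy as np

def encode_ring(diameter_mm, step_mm=0.5, n_bits=16):
    # Thermometer-style encoding (assumption): one bit set per 0.5 mm
    # of ring diameter, up to 16 bits.
    n_on = int(round(diameter_mm / step_mm))
    bits = np.zeros(n_bits, dtype=int)
    bits[:min(n_on, n_bits)] = 1
    return bits

def expand_quarter(quarter):
    # Mirror the 4x4 quarter left-right, then mirror the result up-down,
    # recovering the full 8x8 unit cell with two symmetry operations.
    half = np.hstack([quarter, np.fliplr(quarter)])
    return np.vstack([half, np.flipud(half)])
```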
Fig. 4.
Fig. 4. Performance of inverse design of the ring-shaped metamaterial. Expected spectra (not training data) are plotted as red dots; predicted spectra are plotted as black lines. (a)-(c) The top three ranked results of ring metamaterials for one expected spectrum. (d)-(f) The top three ranked results for another expected spectrum.
Fig. 5.
Fig. 5. Performance of inverse design of the square-lattice metamaterial. Expected spectra (not training data) are plotted as red dots; predicted spectra are plotted as black lines. (a)-(c) The top three ranked results of square-lattice metamaterials for one expected spectrum. (d)-(f) The top three ranked results for another expected spectrum.
Fig. 6.
Fig. 6. Diverse results from inverse design after transferring the spectral encoder of the ring neural network to the square-lattice neural network. (a), (d) Top-ranking results of ring and lattice metamaterials for one randomly selected spectrum. (b), (e) Top-ranking results for a second randomly selected spectrum. (c), (f) Top-ranking results for a third randomly selected spectrum.

Equations (1)


$$L = -\log \frac{\exp\left(\langle x_n, y_n \rangle / \tau\right)}{\sum_{i=1}^{\text{batch size}} \exp\left(\langle x_n, y_i \rangle / \tau\right)}$$
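A minimal numerical sketch of this contrastive (InfoNCE-style) loss, assuming ⟨x, y⟩ is the inner product of L2-normalized spectrum and structure embeddings; the function name and batch handling are illustrative, not the paper's implementation.

```python
import numpy as np

def contrastive_loss(x, y, tau=0.07):
    """Batch-averaged contrastive loss matching the equation above.

    x: (batch, dim) spectrum embeddings; y: (batch, dim) structure embeddings.
    Rows are assumed L2-normalized, so x @ y.T gives the inner products
    <x_n, y_i>. Matching pairs sit on the diagonal.
    """
    logits = x @ y.T / tau                       # <x_n, y_i> / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # -log softmax over the batch
```

When each spectrum embedding aligns only with its own structure embedding, the diagonal dominates and the loss approaches zero; misaligned pairs drive it up, which is exactly the resemblance relationship the training phase in Fig. 2(a) optimizes.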