
Adiabatic evolution on a spatial-photonic Ising machine

Open Access

Abstract

Combinatorial optimization problems are crucial for widespread applications but remain difficult to solve on a large scale with conventional hardware. Novel optical platforms, known as coherent or photonic Ising machines, are attracting considerable attention as accelerators for optimization tasks that can be formulated as Ising models. Annealing is a well-known technique based on adiabatic evolution for finding optimal solutions in classical and quantum systems made of atoms, electrons, or photons. Although various Ising machines employ annealing in some form, adiabatic computing on optical settings has been only partially investigated. Here, we realize the adiabatic evolution of frustrated Ising models with 100 spins programmed by spatial light modulation. We use holographic and optical control to change the spin couplings adiabatically, and exploit experimental noise to explore the energy landscape. Annealing enhances the convergence to the Ising ground state and allows finding the problem solution with probability close to unity. Our results demonstrate a photonic scheme for combinatorial optimization in analogy with adiabatic quantum algorithms and classical annealing methods but enforced by optical vector-matrix multiplications and scalable photonic technology.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Ising machines are physical devices aimed at accelerating the minimization of Ising Hamiltonians. Scalable implementations of Ising machines are of paramount importance because many of the most challenging combinatorial optimization problems in science, engineering, and social life can be cast in terms of an Ising model [1,2]. Finding the ground state of the Ising spin system gives the solution to the optimization, but requires resources growing exponentially with the problem size. For this reason, intense research focuses on unconventional architectures that use computational units such as light pulses [3], superconducting [4] and magnetic junctions [5], electromechanical modes [6], lasers and nonlinear waves [7–12], or polariton and photon condensates [13,14].

Adiabatic computing is a valuable technique to solve combinatorial optimizations by slowly evolving an easy-to-prepare initial configuration towards the ground state of a target Hamiltonian, which encodes the combinatorial problem [15,16]. Examples are adiabatic quantum computing using nuclear magnetic resonance [17] and superconducting gates [18], as well as quantum annealing with superconducting circuits [19], and simulated annealing (SA) on CMOS networks [20]. Annealing is a form of adiabatic computing at non-zero temperatures in which classical, quantum, or nonlinear perturbations enable the exploration of the complex energy landscape [21–24]. As a heuristic (probabilistic) algorithm, it is widely used for quickly obtaining an approximate ground-state solution of a general spin problem. The method harnesses system fluctuations to promote in-depth exploration of the problem phase space, while preventing trapping in local minima. However, its effectiveness comes at an ever-increasing cost for large problems, a fact that strongly motivates implementations of annealing schemes on non-conventional hardware. Quantum annealers designed to solve classical Ising problems succeed in optimization tasks ranging from protein folding [25] to prime factorization [26]. Whether there are advantages from entanglement and quantum tunneling is still debated [27–29]. Therefore, broad interest has recently also centered on the realization of non-electronic annealing devices that can exploit classical nonlinear and photonic properties.

Optical Ising machines use multiple frequency or spatial channels to process data at high speed and in parallel. Coherent Ising machines (CIMs) employ optical parametric oscillators [30–32], fiber lasers [33], or opto-electronic oscillators [34] to solve optimization problems with remarkable performance for hundreds of spins [35]. These machines exploit a gain-dissipative principle [36–41]: the search for the minimum energy configuration is conducted upward by gradually raising the gain. Other platforms based on integrated nanophotonic circuits [42–47] operate as optical recurrent neural networks that converge to Ising ground states. Recurrent feedback also has a key role in the recently demonstrated large-scale photonic Ising machine by spatial light modulation [48–50]. In all these optical settings, the relaxation into a local, instead of global, energy minimum is a relevant problem. An appealing solution leverages environmental noise as a resource [43,49], and annealing by varying external noise on a photonic Ising machine has been recently proposed [43]. On the other hand, the possibility of performing an evolution of the machine’s parameters to improve the computation remains largely unexplored.


Fig. 1. Annealing Ising models on a spatial-photonic device. (a) Illustration of the adiabatic computing principle. The energetic landscape with several minima varies as the Hamiltonian evolves from the initial ${H_0}$ to ${H_P}$, corresponding to the target combinatorial problem. When the dynamics is slow enough (adiabatic condition), the system remains in the low-energy state towards the final state, which gives the optimal solution. (b) Realization of the adiabatic evolution on a spatial-photonic Ising machine. In the experimental setup, the spins ${\sigma _i}$ are encoded by SLM2 into binary optical phases (the inset shows a representative configuration), whereas SLM1 modulates the beam amplitudes ${\xi _i}$ to control spin interaction (top panels). Adiabatic evolution is implemented by the controlled variation of the couplings via the amplitude-modulated light, as illustrated by the random intensity matrices in the top panels (M1-M2, mirrors; L1, lens; CCD, camera).


In this paper, we demonstrate adiabatic evolution on a spatial-photonic Ising machine (SPIM). We exploit the SPIM features that encode the spins and their couplings via spatial light modulators (SLMs). Starting from the energy minimum of a simple Hamiltonian, and changing the system on a time scale much larger than the machine relaxation time, we find the low-energy state of a target model. For the adiabatic transformation of the Ising Hamiltonian, we vary the spin couplings at a fixed experimental noise level [Fig. 1(a)]. The annealing protocol occurs optically by amplitude modulation. We also test a different holographic annealing scheme, which uses the image formed during light propagation to control the instantaneous Hamiltonian. We consider Mattis spin glasses with 100 spins initially prepared in a uniform ferromagnetic state. For a sufficiently slow evolution of the Hamiltonian, the success probability approaches unity. Good performance is also found when the evolution is performed at a fixed Hamiltonian by decreasing the noise level. The effectiveness of the optical adiabatic computation is maintained as the number of spins is increased. Our findings demonstrate a novel and scalable approach for applying adiabatic computing principles to photonic devices.

2. RESULTS

A. Spin Dynamics on a Spatial-Photonic Ising Machine

In a SPIM, a coherent wavefront encodes binary spin variables by spatial light modulation [48]. The device takes advantage of optical vector-matrix multiplications and of the large pixel density of SLMs, properties that enable implementing large-scale photonic computing and machine learning [51–57]. Figure 1(b) shows the SPIM with an optical path comprising two SLMs: SLM1 fixes the couplings of the Hamiltonian by amplitude modulation, and SLM2 controls the spin variables (see Methods). Ising spins ${\sigma _i} = \pm 1$ are imprinted on a continuous beam by $0 {\text -} \pi$ phase-delay values (SLM2). The spin interaction occurs by interference on the detection plane. Spatial modulation of the input intensity (SLM1) fixes the interaction strength. Minimizing the difference between the image detected on the camera and a chosen target image ${I_T}$ is equivalent to minimizing an Ising Hamiltonian with couplings determined by SLM1 and ${I_T}$ [48].
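To make this equivalence concrete, the following one-dimensional sketch (our illustration in Python/NumPy, not part of the original work; the single row of spins, the array sizes, and the use of a discrete Fourier transform in place of lens L1 are simplifying assumptions) verifies numerically that the overlap between the detected far-field intensity and a target image equals, up to sign, an Ising energy with couplings ${J_{ij}} = {\xi _i}{\xi _j}{\tilde I_T}(i - j)$.

```python
# 1D toy model of the SPIM mapping: detected intensity = |FFT of xi*sigma|^2,
# and its overlap with a target image equals an Ising energy (up to sign).
import numpy as np

rng = np.random.default_rng(0)
N = 64
sigma = rng.choice([-1, 1], size=N)          # Ising spins (0/pi phases on SLM2)
xi = rng.uniform(0.5, 1.0, size=N)           # amplitude modulation from SLM1
I_T = rng.uniform(0.0, 1.0, size=N)          # target intensity image

# Detected intensity on the camera plane (far field of the modulated beam)
I = np.abs(np.fft.fft(xi * sigma)) ** 2

# Overlap between detected and target images ...
overlap = np.sum(I_T * I)

# ... equals -H, with J_ij = xi_i * xi_j * I_T_tilde(i - j),
# where I_T_tilde is the discrete Fourier transform of the target image.
I_T_tilde = np.fft.fft(I_T)
idx = np.arange(N)
J = np.outer(xi, xi) * I_T_tilde[(idx[:, None] - idx[None, :]) % N]
H = -np.real(np.einsum("ij,i,j->", J, sigma, sigma))

print(np.allclose(H, -overlap))              # True: same quantity up to sign
```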

Classical annealing is a well-known strategy to minimize problems having many local minima. A time-dependent Hamiltonian $H(t)$ is made to evolve adiabatically from a simple $H(0) = {H_0}$ to the target problem $H(T) = {H_P}$ [Fig. 1(a)], with $T$ the annealing time. If the evolution is slow enough, the system remains in the instantaneous ground state of $H(t)$, and a final measurement provides the spin configuration minimizing ${H_P}$. In our implementation, we first perform a state preparation phase in which the system reaches the minimum of a Hamiltonian ${H_0}$ with homogeneous couplings; then we have the adiabatic evolution, during which the Hamiltonian reaches the target model ${H_P}$.

State preparation. The optical machine works in a measurement and feedback loop. Once initialized to a random spin state, recurrent feedback from the detected intensity allows the phase distribution on SLM2 to converge towards the ground state of an Ising Hamiltonian ${H_0} = - \sum\nolimits_{{ij}} {J_{{ij}}}{\sigma _i}{\sigma _j}$, with couplings ${J_{{ij}}} = {\xi _i}{\xi _j}{\tilde I_T}(i,j)$ [48]. The quenched variable ${\xi _i}$ is the optical amplitude impinging on the $ i $th spin; ${\tilde I_T}$ is the Fourier transform of a pre-determined target image. The difference between ${I_T}$ and the image detected on the CCD is the cost function. For each machine iteration, we flip a spin, measure the resulting intensity pattern, and keep the update if the difference from the target image decreases. Due to experimental noise, a spin changes its state with a certain probability even when this variation increases the cost function (see Methods).
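A minimal sketch of the resulting feedback loop, under the same toy model (the negative image overlap is used as the cost, which captures the same Ising term as the image difference, and readout noise is emulated by accepting an unfavorable flip with probability rho; all names are illustrative):

```python
# Toy measurement-and-feedback loop: single spin flips kept when the cost
# decreases, or occasionally kept anyway due to (emulated) readout noise.
import numpy as np

def cost(sigma, xi, I_T):
    """Toy SPIM cost: negative overlap between detected and target images."""
    I = np.abs(np.fft.fft(xi * sigma)) ** 2
    return -np.sum(I_T * I)

def feedback_step(sigma, xi, I_T, rho, rng):
    """Flip one random spin; keep it if the cost decreases or, with
    probability rho, even when it does not (readout noise)."""
    i = rng.integers(len(sigma))
    trial = sigma.copy()
    trial[i] *= -1
    if cost(trial, xi, I_T) < cost(sigma, xi, I_T) or rng.random() < rho:
        return trial
    return sigma

rng = np.random.default_rng(1)
N = 64
xi = np.ones(N)                      # homogeneous amplitudes: H_0
I_T = np.zeros(N); I_T[0] = 1.0      # single-spot target image
sigma = rng.choice([-1, 1], size=N)  # random initial spin configuration
for _ in range(6 * N):               # n = 6N machine iterations
    sigma = feedback_step(sigma, xi, I_T, rho=0.1, rng=rng)
print(abs(sigma.mean()))             # |m| approaches 1 (ferromagnetic order, up to noise)
```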


Fig. 2. Optical computing of Mattis spin glasses by holographic annealing. (a) Evolution of the target image, corresponding to an Ising Hamiltonian that starts from ${H_0}$ (state preparation) and evolves towards the problem ${H_P}$ in $M$ discrete steps. (b) Magnetization dynamics for 32 independent realizations of the annealing process. The thick purple line gives the average evolution ($N = 64$, $M = 20$). Insets show the graphs of ${H_0}$ and ${H_P}$; for clarity, only positive/negative (orange/blue) edges that start from the black nodes are shown. (c) Spin configuration as a function of time for a representative Mattis instance. (d) Example of a ground state measured at equilibrium and exact combinatorial solution of the realized problem.


Adiabatic evolution. To realize annealing, we vary the Hamiltonian with time, and the evolution is discretized in $M$ steps [17]. We implement the time-dependent Ising Hamiltonian

$$H(t) = - \sum\limits_{{ij}} {J_{{ij}}}(t){\sigma _i}{\sigma _j},$$
with $t \in [0, \ldots,nM]$ and $H(0) = {H_0}$. The parameter $M$ determines the speed of the adiabatic trajectory. For each step, the Ising machine performs a fixed number of iterations $n$, until the final Hamiltonian $H(nM) = {H_P}$ is reached (see also Methods). The spin couplings ${J_{{ij}}}$ can be changed by the target image ${I_T}$ (“holographic annealing”), and also by varying the modulated amplitudes ${\xi _i}$ (“optical annealing”). This feature enables different and versatile adiabatic evolution protocols. We remark that, due to the experimental noise, the spin system is coupled to an effective thermal bath [49].
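Schematically, the discretized evolution can be sketched as follows (a hedged illustration in which the machine update is abstracted into a noisy single-flip rule acting directly on the interpolated coupling matrix; a linear interpolation between the initial and target couplings is assumed here, whereas the actual protocols are described in Methods):

```python
# Schematic driver for the discretized adiabatic evolution of Eq. (1):
# M steps, n machine iterations per step, couplings ramped from J0 to JP.
import numpy as np

def ising_energy(J, sigma):
    """H = -sum_ij J_ij sigma_i sigma_j."""
    return -sigma @ J @ sigma

def machine_iteration(J, sigma, rho, rng):
    """One iteration: trial spin flip, kept if the energy decreases or,
    with probability rho, despite an increase (effective thermal noise)."""
    i = rng.integers(len(sigma))
    trial = sigma.copy()
    trial[i] *= -1
    if ising_energy(J, trial) < ising_energy(J, sigma) or rng.random() < rho:
        return trial
    return sigma

def adiabatic_evolution(J0, JP, sigma, M, n, rho=0.1, seed=0):
    """Evolve H(t) from H_0 to H_P in M steps with n iterations per step."""
    rng = np.random.default_rng(seed)
    for step in range(1, M + 1):
        J = (1 - step / M) * J0 + (step / M) * JP   # instantaneous couplings
        for _ in range(n):
            sigma = machine_iteration(J, sigma, rho, rng)
    return sigma
```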

B. Holographic Annealing

The system, initialized in a random spin state, is first prepared in a low-temperature homogeneous ferromagnetic configuration (${J_{{ij}}} = 1$) and then evolves toward a Mattis spin glass [58,59]. This target ${H_P}$ belongs to a class of frustrated Ising models whose zero-temperature ground states are characterized by ferromagnetic and anti-ferromagnetic domain blocks. In the holographic annealing scheme, we vary the target image ${I_T}$ to perform the temporal evolution in Eq. (1), as shown in Fig. 2(a). The graphs for an initial and final problem are inset in Fig. 2(b), where we report the time evolution of the magnetization $m = \langle {\sigma _i}\rangle$ for $M = 20$. The results correspond to 32 realizations with $N = 64$ spins evolving under a specific time-dependent Mattis instance. We set the number of local machine iterations to $n = 6N$, a condition ensuring that the SPIM equilibrates in each annealing step for all the investigated sizes. We observe that the final ground state always has zero mean magnetization, as expected for a spin glass with vanishing mean interaction. Fluctuations around the average trajectory [thick purple line in Fig. 2(b)] unveil the influence of experimental noise, which helps the exploration of the various energy configurations during annealing.

The considered Mattis models enable direct testing of the minimization trajectory, as Ising Hamiltonians with such symmetry admit exact zero-temperature ground states [59]. Therefore, we can verify the obtained probabilistic solutions by inspecting the spin configurations. Figure 2(c) shows an example of the dynamics of the spins. The computation is successful when the final state is strongly correlated with the expected solution. We quantify the success probability ${p_s}$, i.e., the probability of obtaining an approximate solution, as the fraction of independent runs converging to the correct ground state (see Methods). For example, Fig. 2(d) reports one of the best ground state configurations and its comparison with the corresponding exact solution. Remarkably, the two spin states have the same energy and differ by only two spin flips. This discrepancy is due to the effective thermal fluctuations. The result indicates that the performed evolution allows finding the global minimum of the problem instance. As we show hereafter, the probability ${p_s}$ of finding an approximate solution depends crucially on the number of evolution steps $M$.


Fig. 3. Adiabatic evolution by optical control of the spin couplings. (a) Variation of the spatially modulated intensity during the trajectory. The initial model ${H_0}$ has uniform couplings; after $M$ steps, $H(t)$ is equal to the target ${H_P}$ with random interactions. (b) Dynamics of the magnetization for replicated experiments with $N = 64$ and $M = 10$. The thick purple line gives the average behavior. Insets show the graphs for ${H_0}$ and ${H_P}$; for clarity, only positive (orange) edges that start from the black nodes are shown. (c) Spin dynamics for a representative realization of couplings. The initial uniform state forms clusters when evolving towards the ground state of ${H_P}$. (d) Best experimental ground state compared with the exact solution of the corresponding Mattis model. The orange box indicates the set of interacting spins, while the other part corresponds to spins with zero couplings, which fluctuate thermally.


C. Optical Annealing

We implement Eq. (1) by varying the Ising machine parameters optically. Adiabatic evolution is performed by evolving the spatially modulated intensity inside the photonic machine while keeping the target image ${I_T}$ constant. In each of the $M$ steps, the machine operates with a different amplitude mask ${\xi _i}$ (see also Methods). The instantaneous Hamiltonian $H(t)$ is specified by ${J_{{ij}}}(t) = {\xi _i}(t){\xi _j}(t)$, up to a multiplicative constant. The initial Hamiltonian ${H_0}$ corresponds to a homogeneous amplitude distribution [Fig. 3(a)]. We choose a random set of amplitudes $\xi$, and gradually decrease their values towards zero. The corresponding system is an instance of a target Mattis model ${H_P}$ with randomly coupled clusters of spins, as represented by the graph inset in Fig. 3(b). Figure 3(b) shows the time-evolving magnetization when approaching the Mattis ground state during the optical annealing. Each trajectory is affected by noise-driven fluctuations, and, at the end of the annealing, the expected mean magnetization $m = \pm 0.5$ is reached. This value arises from the fact that half of the spins have ${J_{{ij}}} = 0$ in ${H_P}$. Figure 3(c) shows the evolution of a configuration with $N = 64$ spins for a representative case. The final ground state averaged over thermal fluctuations coincides with the minimum energy state of the programmed Hamiltonian ${H_P}$. Figure 3(d) shows an example of the remarkable agreement between the obtained ground state and the corresponding theoretical zero-temperature solution. Specifically, in both configurations, the interacting spins are in the same state, while the non-interacting ones point randomly and fluctuate under the effect of noise. This demonstrates that the spins maintain their lowest-energy state during the optical change of the Hamiltonian. As reported below, adiabatic evolution of the Ising machine improves the search for the global solution to the encoded optimization problem.

To test the holographic and optical annealing performances, we vary the number of discretization steps $M$, i.e., the annealing time. The number of machine iterations in each step is kept constant at $n = 6N$. Figure 4(a) shows the success probability ${p_s}$ versus $M$ for the holographic annealing; results refer to the problems in Fig. 2 ($N = 64$). At small $M$, when the evolution occurs rapidly, the excitation of high-energy states reduces the effectiveness of the minimization. The probability of converging to the optimal solution increases with the annealing time. The adiabatic condition is identified by ${p_s}$ reaching a plateau with values exceeding 90% when averaging over thermal fluctuations [magenta bars in Fig. 4(a)]. This occurs for $M \approx 30{n_r}$, where ${n_r}$ is the measured relaxation time of the Ising machine (see Methods). Figure 4(b) shows the results for the optical annealing scheme. The adiabatic condition is reached on a much shorter annealing time ($M \approx 17{n_r}$) with respect to the holographic case in Fig. 4(a), i.e., the ground state is also found for rapid processes. This fact can be explained by considering that, in the holographic case, the system passes through instantaneous Hamiltonians whose phase space is very different from that of ${H_P}$.


Fig. 4. Optical computing performance and scaling properties. Success probability as a function of the annealing time for (a) holographic and (b) optical annealing of Ising models with Mattis-type interaction. Blue and magenta bars indicate mean values obtained by measures at a fixed time and averaging over configurations at different times (see Methods). (c), (d) Success probability varying the spin number at different $M$. Empty and filled dots are for data from single-shot and noise-averaged measurements.


The quality of each solution is given by its Hamming distance, i.e., the number of spins that should be changed to obtain the known zero-temperature ground state [19]. The mean Hamming distance we measure in the adiabatic condition is $h = 8 \pm 1$ ($h = 7.5 \pm 0.8$) for the optical (holographic) annealing. In both methods, we find that adiabatic evolution provides a substantial enhancement of the success probability with respect to a “no-annealing” strategy, in which the Hamiltonian $H(t) = {H_P}$ is kept constant from the initial instant [dashed lines in Figs. 4(a) and 4(b)]. Experiments at a fixed Hamiltonian are performed at the same noise level and number of machine iterations. However, in the no-annealing case, the spins reach a stationary state in a time $M \approx 4{n_r}$. A longer computation does not improve the ground-state search, as occurs for a system stuck in a local minimum. Moreover, Figs. 4(a) and 4(b) show that—in all the considered cases—the problem ground state can be found more efficiently if the spin fluctuations at equilibrium are observed and averaged out. This circumstance indicates that a large part of the SPIM error can be ascribed to effective thermal noise.

We also investigate the scaling properties of the photonic setting by varying the number of spins. In Figs. 4(c) and 4(d), we report the scaling of the success probability for holographic and optical annealing, respectively. The annealing time is kept proportional to the system size; the total machine iterations are $n \times N \times M$. For a fast evolution protocol [$M = 10$, Fig. 4(c)], a degrading effect when increasing the size is observed in the holographic case. However, as the adiabatic condition is reached, we find a remarkable property: the ability to find an approximate solution is maintained as the problem size grows. For $N = 100$, values of ${p_s}$ close to unity correspond to a measured mean Hamming fraction ($h/N$) of 0.11. The results suggest that, on these specific problems, optical adiabatic computing can be extended to larger scales with comparable performance.

D. Noise-Driven Annealing

Annealing algorithms in classical systems generally operate by varying the noise or temperature instead of the Hamiltonian [21]. The transfer of this concept into the optical domain has been recently modeled for a photonic recurrent Ising machine (PRIS) architecture [43]. However, in many experimental cases, noise is a parameter that is difficult to access and control. To compare the various annealing schemes, we also realize noise-driven evolutions in our SPIM. We can control the noise level by using a noisy-feedback method [49] (see Methods). In fact, the noise level depends on the fluctuations of the detected intensity, which can be enhanced by decreasing the camera exposure time. We map the exposure time onto a normalized noise level $\rho \in [ 0,1]$ that indicates the probability of erroneously flipping a spin.

We evolve the Mattis spin-glass problems in Fig. 2 by decreasing the noise in $M$ steps from the maximum value ${\rho _m}$ to zero. The Hamiltonian ${H_P}$ is kept constant during the noise-driven evolution comprising $n \times M$ iterations. Figure 5 shows ${p_s}$ as a function of the annealing time for $N = 64$. The probability of finding the ground state increases with the annealing time, and the adiabatic condition is reached for $M \approx 10$, a behavior similar to optical annealing. The results can be benchmarked with numerical simulations of SA [60] on the same graphs. A ground state that coincides with the Mattis optimal solution is found by SA independently of the annealing trajectory. As shown in Fig. 5, the comparison indicates that our annealing platform can operate with an accuracy comparable to a robust optimization algorithm. The general operating principle of the device suggests that this effectiveness can be extended also to more complex problems.


Fig. 5. Noise-driven annealing. Success probability varying the annealing time for noise-driven evolutions on the SPIM. Blue and magenta bars indicate results for trajectories starting from ${\rho _m} = 0.4$ and 0.2, respectively. The same evolutions are numerically implemented with a simulated annealing (SA) algorithm (dashed line).


E. Time Scales Analysis

The basic time step in the SPIM architecture is the iteration time $\tau$, which includes the time necessary to send $N$ spins to SLM2, measure and process the intensity on the camera, and generate the next spin configuration. In our optical annealing schemes, this operation is performed $n \propto N$ times in each evolution step, and the total annealing time is $T = nM\tau$. The actual iteration time we measure in our setup is $\tau = 0.085\;{\rm s}$ for $N = 64$, i.e., each annealing step takes $\sim30\;{\rm s}$. This value is not a fundamental scale but a quantity that depends solely on the setup components. As shown in Fig. 6, where we report measurements of $\tau$ versus $N$, we observe only a weak increase in the iteration time with the problem size. The scaling ratio $\tau /N$ indicates that the single computing operation becomes increasingly convenient for large-scale systems. On the contrary, an iteration time that grows quadratically with the problem size is expected for basic annealing algorithms implemented on conventional electronic devices. This optical advantage at large scales is a general property that does not depend on the machine speed.
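As a quick consistency check of these numbers (our arithmetic using the quoted values for $N = 64$):

```python
# Back-of-the-envelope check of the quoted time scales.
tau = 0.085            # measured iteration time for N = 64 [s]
N, M = 64, 20
n = 6 * N              # machine iterations per annealing step
step_time = n * tau    # ~32.6 s, consistent with the quoted ~30 s per step
T = n * M * tau        # total annealing time T = n*M*tau, ~11 minutes
print(f"{step_time:.1f} s per step, {T / 60:.1f} min total")
```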


Fig. 6. Scaling of the iteration time. Measurements of the SPIM unit time (dots) versus the problem size. $\tau$ values are normalized to the iteration time for 64 spins (0.085 s). The inset scheme illustrates the single iteration. The scaling ratio $\tau /N$ (line) shows that the iteration time is nearly constant as the size grows.


The main speed limitation of our optical computing approach is related to the frame rate of the SLM. Therefore, to compare the computation times with other optical platforms that do not rely on annealing, we consider various cutting-edge SLM technologies currently available. For example, a microelectromechanical grating light valve that can modulate 1088 modes at 350 kHz has been recently reported [61]. Optical annealing of ${10^3}$ spins with a similar device would take a total time on the order of 10 ms, which is comparable with the few milliseconds taken by a CIM to run problems of the same size [35]. However, novel modulators that exploit electro-optic microcavity arrays to achieve gigahertz speeds are under development [62]. This suggests that optical annealing approaches on a dedicated high-speed setup are also promising in terms of time performance. On the other hand, a commercial high-speed digital micromirror device (DMD) can control up to $1024 \times 768$ pixels at more than a 20 kHz switching rate in binary mode. A device with these features, in principle, allows annealing a large-scale system with ${10^6}$ spins in a few hundred seconds, a time scale currently achievable only with high-performance computing [40]. In fact, similar problem sizes would be several orders of magnitude larger than those accessible on other optical Ising machines.

3. DISCUSSION

Adiabatic evolution or annealing is one of the most general and consolidated heuristic approaches to solve combinatorial optimization problems and complex physical models. Its experimental implementation on unconventional physical systems operating at room temperature can impact future computing architectures. We have shown how to realize the method using a versatile optical setup. In particular, we demonstrated that an optical artificial spin system can avoid high-energy excitations during its evolution, i.e., that the adiabatic theorem has implications for classical experiments. A central aspect that remains open is how the approach can be generalized to any Ising Hamiltonian. In fact, for adiabatic computing to be effective, it is necessary to find an initial state whose energy space morphs into that of the target problem. One strategy is to start from an initial condition that retains some statistical properties of the problem graph. However, in many cases, this preparation can be as hard as solving the problem itself. For example, this can occur in the case of Ising spin-glass problems with fully random interactions, which can be encoded in our SPIM by using multiple light scattering [63,64]. For similar problems, we expect noise-driven annealing to be the more practical method to find the ground state.

In conclusion, we have realized adiabatic computing schemes on a SPIM, showing that ground states are found with enhanced success probability. Computing devices based on spatial light modulation are scalable to larger sizes and can potentially host systems consisting of millions of spins. Other developments may include the use of nonlinear elements [50], scattering media [63,64], or metasurfaces, to implement a larger class of combinatorial optimization problems, beyond Mattis instances of the Ising model, and to realize compact devices without free-space propagation. Moreover, by encoding different replicas of the spin system on separate wavelengths (wavelength multiplexing), our scheme can allow us to run several problems fully in parallel, opening the way to the optical realization of multiple annealing algorithms such as parallel tempering. By exploiting optical vector-matrix multiplications, which can be performed easily and efficiently for large sizes, our SPIM represents a route to tackle hard optimization problems at an unprecedented scale and paves the way to the experimental demonstration of various minimization strategies.

4. METHODS

A. Experimental Setup and Feedback Method

Light from a continuous-wave laser source with wavelength $\lambda = 532\;{\rm nm} $ (max. power 1.5 W) is expanded, polarization controlled, and spatially filtered. The beam is then spatially modulated in amplitude by a first SLM (SLM1) and independently phase modulated by the second modulator (SLM2). The optical path shown in Fig. 1(b) is realized by a single nematic liquid crystal reflective modulator (Holoeye LC-R 720, $1280 \times 768$ pixels, pixel pitch $20 \times 20\;\unicode{x00B5}{\rm m}$, 180 Hz maximum frame rate). A section of the modulator is employed in amplitude mode to generate controlled intensity distributions ${\xi _i}$, which are imaged by a $4 {\text -}\!{f}$ system [not shown in Fig. 1(b)] and mirrors M1-M2 onto the second section, which performs binary phase modulation. By a combination of incident and analyzed polarizations, phase modulation occurs with less than 10% residual intensity variations. To keep the setup optically stable, an active area of approximately $200 \times 200$ SLM pixels is divided into $N$ optical spins by grouping several pixels. Modulated light is separated using a holographic grating and focused by a lens L1 (${f} = 500\;{\rm mm} $) onto a CCD camera. The intensity is detected on a region of interest composed of $n \times m = 18 \times 18$ spatial modes, where the signal in each mode is obtained by averaging over $10 \times 10$ camera pixels.

The measured intensity pattern determines the feedback signal. At each iteration, a spin is randomly flipped, the recorded pattern is compared with a reference image ${I_T}$ on the same number of modes [see Fig. 2(a)], and the spin configuration on SLM2 is updated to minimize the difference between the two images. Due to intensity fluctuations and the grouping procedure at the readout, there is a finite probability of updating the spin configuration in any case. These are the classical fluctuations through which various energy configurations are explored. The rate at which readout errors occur determines the noise level, which depends on the magnitude of the intensity fluctuations on the detection plane. To control the noise level, we tune the camera exposure time, where a shorter exposure corresponds to larger error amplitudes. With this technique, we avoid applying an additional post-processing step to the recorded intensity. By fixing the CCD settings and analyzing the probability that a spin erroneously changes its state during the dynamics, we map the exposure time onto a normalized noise level $\rho$, which indicates the probability of measuring a false decrease in the cost function at each iteration [49]. The minimum noise level $\rho = 0$ corresponds to spontaneous errors in the feedback loop. Optical annealing is performed at $\rho = 0.1$, a level close to the optimal noise level [49]. This value is kept fixed in all the presented results, with the exception of the noise-driven experiments.
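As an illustration of how such a calibration could work, the following stand-in simulation (not the experimental procedure) models the readout noise as Gaussian fluctuations on the two intensity-based cost measurements and estimates the probability of a false decrease for a flip that actually raises the cost by a fixed amount:

```python
# Stand-in calibration: probability that a cost increase is measured as a
# decrease, as a function of the readout-noise amplitude.
import numpy as np

def false_decrease_probability(true_dE, readout_sigma, trials=100_000, seed=3):
    """Probability of measuring a cost decrease when the flip actually raises
    the cost by true_dE, with Gaussian readout noise on both measurements."""
    rng = np.random.default_rng(seed)
    noise = readout_sigma * np.sqrt(2) * rng.normal(size=trials)
    return np.mean(true_dE + noise < 0)

for s in (0.1, 0.3, 0.6):   # larger fluctuations ~ shorter exposure time
    print(s, false_decrease_probability(true_dE=0.5, readout_sigma=s))
```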

Another practical quantity characterizing the setting is the relaxation time ${n_r}$ of the Ising machine. This time scale indicates the number of iterations needed, given the feedback scheme, for the system to reach a steady state. Since it may depend on the encoded problem class, we measure it directly by preparing the homogeneous system (${J_{{ij}}} = 1$), setting the Hamiltonian to a Mattis instance via ${I_T}$, and observing how the spin state equilibrates. From the decay of the magnetization, we estimate an equilibration time of ${n_r} = 220$ iterations for $N = 64$ spins.
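A minimal sketch of how ${n_r}$ can be extracted from such a measurement; the magnetization trace below is synthetic, generated with a 220-iteration decay constant purely to illustrate the fit:

```python
# Fit an exponential decay to a (synthetic) magnetization trace to estimate n_r.
import numpy as np
from scipy.optimize import curve_fit

iters = np.arange(1500)                                   # machine iterations
rng = np.random.default_rng(4)
m = np.exp(-iters / 220.0) + 0.02 * rng.normal(size=iters.size)  # synthetic m(t)

def decay(t, m0, n_r):
    return m0 * np.exp(-t / n_r)

(m0_fit, n_r_fit), _ = curve_fit(decay, iters, m, p0=(1.0, 100.0))
print(f"estimated relaxation time n_r ~ {n_r_fit:.0f} iterations")
```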

B. Adiabatic Evolution Schemes

Holographic annealing. The machine starts from a random configuration of $N$ spins and is first prepared in a uniform ferromagnetic state. Specifically, the initial Hamiltonian is ${H_0} = - \sum\nolimits_{{ij}} \bar J{\sigma _i}{\sigma _j}$, which is implemented using a plane wave of constant amplitude ${\xi _i} = {E_0}$ and a target image that is composed of a single spot [Fig. 2(a)], so that ${\tilde I_T} = c$, $c$ being an arbitrary constant, and $\bar J = cE_0^2$. By varying the target image ${I_T}$ in time, with ${J_{{ij}}}(t) = {\xi _i}{\xi _j}{\tilde I_T}(t)$, we implement a linear evolution protocol of the form $H(t) = (1 - t/T){H_0} + (t/T){H_P}$, with ${H_P}$ an Ising problem where each spin–spin interaction can have only a positive or a negative value (frustrated Mattis model). Therefore, the time-dependent Hamiltonian is

$$H(t) = \left({1 - \frac{t}{T}} \right){H_0} - cE_0^2\frac{t}{T}\sum\limits_{{ij}} {G_{{ij}}}{\sigma _i}{\sigma _j},$$
where ${G_{{ij}}}$ is a block matrix with element values $\pm1$. For example, a four-block matrix $G = [{G_{11}},{G_{12}};{G_{21}},{G_{22}}]$, where ${G_{11}} = {G_{22}}$ is an all-ones matrix and ${G_{12}} = {G_{21}} = - {G_{11}}$, corresponds to a target image ${I_T}$ with two horizontal intensity spots [Fig. 2(a)]. In this case, pairs of negatively coupled spins correspond to points of the optical field producing destructive interference on the central mode of the detection plane. To discretize the evolution, we divide the total time interval $[0,T]$ into $M$ equal intervals. For a fixed system size, we keep constant the number of iterations $n$ performed by the machine in each step. We use $n = 6N$, a condition ensuring that the time spent in each annealing step is much larger than the relaxation time of the machine for all the investigated sizes ($n \gg {n_r}$). The annealing time thus increases with $M$. In Figs. 2 and 3, we normalize the evolution time (machine iterations) by $n$.
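A sketch of this construction (the block structure and constants follow the four-block example above; the list of interpolated coupling matrices is our illustrative discretization of Eq. (2)):

```python
# Four-block Mattis matrix G and the linearly interpolated coupling schedule.
import numpy as np

N = 64
ones = np.ones((N // 2, N // 2))
G = np.block([[ones, -ones], [-ones, ones]])   # +-1 block matrix of Eq. (2)

c, E0 = 1.0, 1.0                               # arbitrary constants
J0 = c * E0**2 * np.ones((N, N))               # homogeneous couplings of H_0
JP = c * E0**2 * G                             # target Mattis couplings of H_P

M = 20                                         # number of annealing steps
# Couplings applied during step s (each held for n = 6N machine iterations):
J_schedule = [(1 - s / M) * J0 + (s / M) * JP for s in range(M + 1)]
```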

Optical annealing. Starting from a random spin state, a uniform ferromagnetic configuration is prepared by implementing ${H_0}$ as in the holographic protocol. However, in the optical evolution method, the target image is kept fixed at the initial single-spot profile, and the whole dynamics is performed by varying the amplitude distribution ${\xi _i}$ via intensity modulation on SLM1. This corresponds to individually changing the spin–spin coupling values. The evolution protocol we implement is thus

$$H(t) = - c\sum\limits_{{ij}} {\xi _i}(t){\xi _j}(t){\sigma _i}{\sigma _j},$$
where $c$ is an arbitrary constant. Since the interaction matrix is given in any case as a product of two variables, ${J_{{ij}}} \propto {\xi _i}{\xi _j}$, the problem Hamiltonian ${H_P}$ still maintains the form of a Mattis model. In this case, each instance corresponds to a random set of positively interacting spins [Fig. 3(b)]. The time dependence of the optical amplitudes ${\xi _i}(t)$ can also be selected arbitrarily. We choose to implement couplings that change linearly in time, that is, to vary the optical intensity at a constant rate. This linear schedule corresponds to ${\xi _i}(t) = \sqrt {{E_0} - (t/T){E_0}}$, and it is shown in Fig. 3(a) for a random set of evolving ${\xi _i}$. The total run time $T$ is divided into $M$ equal intervals. Experimentally, the dynamics is implemented by applying an amplitude mask that is varied at each step. Digital modulation over 256 gray values (8-bit) corresponds to a maximum intensity modulation depth on the order of ${10^3}$.
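A sketch of the corresponding amplitude schedule in normalized units with ${E_0} = 1$ (the randomly chosen half of the spins whose intensity is ramped to zero mirrors the instances of Fig. 3; names are illustrative):

```python
# Optical-annealing schedule: ramp the intensity of a random half of the spins
# linearly to zero, giving a Mattis instance with J_ij = c * xi_i * xi_j.
import numpy as np

rng = np.random.default_rng(2)
N, M = 64, 10
off = rng.permutation(N) < N // 2              # spins whose couplings vanish

def xi_at_step(step):
    """Amplitude mask applied on SLM1 at annealing step `step` (0..M)."""
    xi = np.ones(N)
    xi[off] = np.sqrt(1.0 - step / M)          # linear intensity ramp to zero
    return xi

J_final = np.outer(xi_at_step(M), xi_at_step(M))          # couplings of H_P
print(J_final[~off][:, ~off].all(), J_final[off].any())   # True, False
```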

Noise-driven annealing. A time-independent Hamiltonian $H(t) = {H_P}$ encoding a frustrated Mattis instance is implemented by means of the target image ${I_T}$. The dynamics starts from a random spin configuration at maximum noise level ${\rho _m}$, which corresponds to a high-temperature spin state. To perform an evolution varying the noise level, we divide the noise level interval $[{\rho _m},0]$ into $M$ equal intervals. In analogy with the optical adiabatic evolutions, $n = 6N$ iterations at noise level $\rho (t)$ occur in each step. The final state has $\rho (nM) = 0$, that is, the spins are close to zero temperature.

Simulated annealing. We have implemented a custom optimized SA algorithm in MATLAB following Ref. [60]. The code exploits various techniques including sequential updating, forward energy computation, and fast pre-computed random numbers. It has been benchmarked on various standard NP-hard problems.
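For reference, a plain Metropolis-style SA sketch for the same type of coupling matrix (a generic textbook scheme written here in Python, not the optimized MATLAB code of Ref. [60] used for the benchmark; the linear temperature ramp is one possible schedule):

```python
# Generic single-flip Metropolis simulated annealing for H = -sum_ij J_ij s_i s_j.
import numpy as np

def simulated_annealing(J, M=20, n=None, T0=2.0, seed=0):
    """Minimize the Ising energy with single-flip Metropolis updates while
    the temperature is ramped linearly from T0 towards zero."""
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    n = n or 6 * N                              # iterations per temperature step
    sigma = rng.choice([-1, 1], size=N)
    for step in range(M):
        T = T0 * (1 - step / M) + 1e-9          # temperature at this step
        for _ in range(n):
            i = rng.integers(N)
            h = J[i] @ sigma - J[i, i] * sigma[i]   # local field (no self-term)
            dE = 4 * sigma[i] * h                   # energy change of flipping i
            if dE < 0 or rng.random() < np.exp(-dE / T):
                sigma[i] *= -1
    return sigma
```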

C. Ground States Analysis

To quantify the ground-state probability at finite temperature, we compute the correlation coefficient between the measured spin configuration and the known optimal solution of the programmed instance, which for a Mattis graph is identical to the interaction configuration ${\xi _i}$ or to its sign reversal [59]. The correlation reads $C = \frac{1}{N}\sum\nolimits_i {\sigma _i}{\xi _i}/|{\xi _i}|$, with $C = \pm 1$ for the ideal spin system in the lowest energy state. An evolution is counted as successful when its final equilibrium state gives $|C| \gt 0.75$; the success probability ${p_s}$ is the fraction of runs converging to the correct ground state over a set of experiments with different random initial conditions. Instantaneous values of ${p_s}$ are obtained from a single measurement at a fixed time (machine iteration) during the equilibrium stage; values averaged over thermal fluctuations [magenta bars in Figs. 4(a) and 4(b)] result from averaging the spin configuration measured at equilibrium over a time interval and identifying each spin with the sign of its local magnetization. The Hamming distance $h$ is the number of spins that need to be flipped to reach the minimum energy configuration. The Hamming fraction is the Hamming distance normalized to the total number of spins, $h/N$.
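A hedged sketch of these quantities (the exclusion of zero-amplitude spins and the handling of the global sign degeneracy are our illustrative choices, consistent with the Mattis symmetry):

```python
# Ground-state metrics: overlap C, success probability p_s, Hamming distance.
import numpy as np

def overlap(sigma, xi):
    """Normalized correlation C between a measured configuration and the
    Mattis ground state sign(xi); spins with xi = 0 are excluded."""
    mask = xi != 0
    return np.mean(sigma[mask] * np.sign(xi[mask]))

def hamming_distance(sigma, xi):
    """Spin flips needed to reach the closer of the two degenerate solutions."""
    mask = xi != 0
    d = np.sum(sigma[mask] != np.sign(xi[mask]))
    return int(min(d, mask.sum() - d))

def success_probability(final_states, xi, threshold=0.75):
    """Fraction of runs whose final state satisfies |C| > threshold."""
    return float(np.mean([abs(overlap(s, xi)) > threshold for s in final_states]))
```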

Funding

Sapienza Università di Roma (SAPIExcellence 2019, SPIM project).

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. F. Barahona, “On the computational complexity of Ising spin glass models,” J. Phys. A 15, 3241–3253 (1982). [CrossRef]  

2. A. Lucas, “Ising formulations of many NP problems,” Front. Phys. 2, 1–15 (2014). [CrossRef]  

3. A. Marandi, Z. Wang, K. Takata, R. L. Byer, and Y. Yamamoto, “Network of time-multiplexed optical parametric oscillators as coherent Ising machine,” Nat. Photonics 8, 937–942 (2014). [CrossRef]  

4. M. W. Johnson, M. H. Amin, S. Gildert, T. Lanting, F. Hamze, N. Dickson, R. Harris, A. J. Berkley, J. Johansson, P. Bunyk, and E. M. Chapple, “Quantum annealing with manufactured spins,” Nature 473, 194–198 (2011). [CrossRef]  

5. W. A. Borders, A. Z. Pervaiz, S. Fukami, K. Y. Camsari, H. Ohno, and S. Datta, “Integer factorization using stochastic magnetic tunnel junctions,” Nature 573, 390–393 (2019). [CrossRef]  

6. I. Mahboob, H. Okamoto, and H. Yamaguchi, “An electromechanical Ising Hamiltonian,” Sci. Adv. 2, e1600236 (2016). [CrossRef]  

7. D. Brunner, M. C. Soriano, C. Mirasso, and I. Fischer, “Parallel photonic information processing at gigabyte per second data rates using transient states,” Nat. Commun. 4, 1364 (2013). [CrossRef]  

8. N. Mohammadi Estakhri, B. Edwards, and N. Engheta, “Inverse-designed metastructures that solve equations,” Science 363, 1333–1338 (2019). [CrossRef]  

9. N. Ghofraniha, I. Viola, F. Di Maria, G. Barbarella, G. Gigli, L. Leuzzi, and C. Conti, “Experimental evidence of replica symmetry breaking in random lasers,” Nat. Commun. 6, 6058 (2015). [CrossRef]  

10. D. Pierangeli, A. Tavani, F. Di Mei, A. J. Agranat, C. Conti, and E. DelRe, “Observation of replica symmetry breaking in disordered nonlinear wave propagation,” Nat. Commun. 8, 1501 (2017). [CrossRef]  

11. M. Parto, W. Hayenga, A. Marandi, D. N. Christodoulides, and M. Khajavikhan, “Realizing spin Hamiltonians in nanoscale active photonic lattices,” Nat. Mater. 19, 725–731 (2020). [CrossRef]  

12. V. Pal, S. Mahler, C. Tradonsky, A. A. Friesem, and N. Davidson, “Rapid fair sampling of XY spin Hamiltonian with a laser simulator,” arXiv:1912.10689v1 (2019).

13. N. G. Berloff, M. Silva, K. Kalinin, A. Askitopoulos, J. D. Topfer, P. Cilibrizzi, W. Langbein, and P. G. Lagoudakis, “Realizing the classical XY Hamiltonian in polariton simulators,” Nat. Mater. 16, 1120–1126 (2017). [CrossRef]  

14. D. Dung, C. Kurtscheid, T. Damm, J. Schmitt, F. Vewinger, M. Weitz, and J. Klaers, “Variable potentials for thermalized light and coupled condensates,” Nat. Photonics 11, 565–569 (2017). [CrossRef]  

15. E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda, “A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem,” Science 292, 472–475 (2001). [CrossRef]  

16. G. E. Santoro and E. Tosatti, “Optimization using quantum mechanics: quantum annealing through adiabatic evolution,” J. Phys. A 39, R393–R431 (2006). [CrossRef]  

17. M. Steffen, W. van Dam, T. Hogg, G. Breyta, and I. Chuang, “Experimental implementation of an adiabatic quantum optimization algorithm,” Phys. Rev. Lett. 90, 067903 (2003). [CrossRef]  

18. R. Barends, A. Shabani, L. Lamata, J. Kelly, A. Mezzacapo, U. Las Heras, R. Babbush, A. G. Fowler, B. Campbell, Y. Chen, and Z. Chen, “Digitized adiabatic quantum computing with a superconducting circuit,” Nature 534, 222–226 (2016). [CrossRef]  

19. S. Boixo, T. F. Rønnow, S. V. Isakov, Z. Wang, D. Wecker, D. A. Lidar, J. M. Martinis, and M. Troyer, “Evidence for quantum annealing with more than one hundred qubits,” Nat. Phys. 10, 218–224 (2014). [CrossRef]  

20. M. Yamaoka, C. Yoshimura, M. Hayashi, T. Okuyama, H. Aoki, and H. Mizuno, “A 20k-spin Ising chip to solve combinatorial optimization problems with CMOS annealing,” IEEE J. Solid-State Circuits 51, 303–309 (2015). [CrossRef]  

21. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science 220, 671–680 (1983). [CrossRef]  

22. A. Das and B. K. Chakrabarti, “Colloquium: quantum annealing and analog quantum computation,” Rev. Mod. Phys. 80, 1061–1081 (2008). [CrossRef]  

23. S. Puri, C. K. Andersen, A. L. Grimsmo, and A. Blais, “Quantum annealing with all-to-all connected nonlinear oscillators,” Nat. Commun. 8, 15785 (2017). [CrossRef]  

24. H. Goto, “Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network,” Sci. Rep. 6, 21686 (2016). [CrossRef]  

25. A. Perdomo-Ortiz, N. Dickson, M. Drew-Brook, G. Rose, and A. Aspuru-Guzik, “Finding low-energy conformations of lattice protein models by quantum annealing,” Sci. Rep. 2, 571 (2012). [CrossRef]  

26. S. Jiang, K. A. Britt, A. J. McCaskey, T. S. Humble, and S. Kais, “Quantum annealing for prime factorization,” Sci. Rep. 8, 17667 (2018). [CrossRef]  

27. T. Lanting, A. J. Przybysz, A. Y. Smirnov, F. M. Spedalieri, M. H. Amin, A. J. Berkley, R. Harris, F. Altomare, S. Boixo, P. Bunyk, and N. Dickson, “Entanglement in a quantum annealing processor,” Phys. Rev. X 4, 021041 (2014). [CrossRef]  

28. V. S. Denchev, S. Boixo, S. V. Isakov, N. Ding, R. Babbush, V. Smelyanskiy, J. Martinis, and H. Neven, “What is the computational value of finite-range tunneling?” Phys. Rev. X 6, 031015 (2016). [CrossRef]  

29. S. Mandrà, Z. Zhu, and H. G. Katzgraber, “Exponentially biased ground-state sampling of quantum annealing machines with transverse-field driving Hamiltonians,” Phys. Rev. Lett. 118, 070502 (2017). [CrossRef]  

30. P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, and R. L. Byer, “A fully-programmable 100-spin coherent Ising machine with all-to-all connections,” Science 354, 614–617 (2016). [CrossRef]  

31. T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, T. Honjo, A. Marandi, P. L. McMahon, T. Umeki, K. Enbutsu, and O. Tadanaga, “A coherent Ising machine for 2000-node optimization problems,” Science 354, 603–606 (2016). [CrossRef]  

32. T. Inagaki, K. Inaba, R. Hamerly, K. Inoue, Y. Yamamoto, and H. Takesue, “Large-scale Ising spin network based on degenerate optical parametric oscillators,” Nat. Photonics 10, 415–419 (2016). [CrossRef]  

33. M. Babaeian, D. T. Nguyen, V. Demir, M. Akbulut, P. A. Blanche, Y. Kaneda, S. Guha, M. A. Neifeld, and N. Peyghambarian, “A single shot coherent Ising machine based on a network of injection-locked multicore fiber lasers,” Nat. Commun. 10, 3516 (2019). [CrossRef]  

34. F. Böhm, G. Verschaffelt, and G. Van der Sande, “A poor man’s coherent Ising machine based on opto-electronic feedback systems for solving optimization problems,” Nat. Commun. 10, 3538 (2019). [CrossRef]  

35. R. Hamerly, T. Inagaki, P. L. McMahon, D. Venturelli, A. Marandi, T. Onodera, E. Ng, C. Langrock, K. Inaba, T. Honjo, and K. Enbutsu, “Experimental investigation of performance differences between coherent Ising machines and a quantum annealer,” Sci. Adv. 5, eaau0823 (2019). [CrossRef]  

36. K. P. Kalinin and N. G. Berloff, “Global optimization of spin Hamiltonians with gain-dissipative systems,” Sci. Rep. 8, 17791 (2018). [CrossRef]  

37. L. Bello, M. Calvanese Strinati, E. G. Dalla Torre, and A. Pe’er, “Persistent coherent beating in coupled parametric oscillators,” Phys. Rev. Lett. 123, 083901 (2019). [CrossRef]  

38. C. Tradonsky, I. Gershenzon, V. Pal, R. Chriki, A. A. Friesem, O. Raz, and N. Davidson, “Rapid laser solver for the phase retrieval problem,” Sci. Adv. 5, eaax4530 (2019). [CrossRef]  

39. F. Bohm, T. Inagaki, K. Inaba, T. Honjo, K. Enbutsu, T. Umeki, R. Kasahara, and H. Takesue, “Understanding dynamics of coherent Ising machines through simulation of large-scale 2D Ising models,” Nat. Commun. 9, 5020 (2018). [CrossRef]  

40. H. Goto, K. Tatsumura, and A. R. Dixon, “Combinatorial optimization by simulating adiabatic bifurcations in nonlinear Hamiltonian systems,” Sci. Adv. 5, eaav2372 (2019). [CrossRef]  

41. E. S. Tiunov, A. E. Ulanov, and A. I. Lvovsky, “Annealing by simulating the coherent Ising machine,” Opt. Express 27, 10288–10295 (2019). [CrossRef]  

42. C. Roques-Carmes, Y. Shen, C. Zanoci, M. Prabhu, F. Atieh, L. Jing, T. Dubcek, V. Ceperic, J. D. Joannopoulos, D. Englund, and M. Soljacic, “Heuristic recurrent algorithms for photonic Ising machines,” Nat. Commun. 11, 249 (2020). [CrossRef]  

43. M. Prabhu, C. Roques-Carmes, Y. Shen, N. Harris, L. Jing, J. Carolan, R. Hamerly, T. Baehr-Jones, M. Hochberg, V. Čeperić, and J. D. Joannopoulos, “Accelerating recurrent Ising machines in photonic integrated circuits,” Optica 7, 551–558 (2020). [CrossRef]  

44. Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljačić, “Deep learning with coherent nanophotonic circuits,” Nat. Photonics 11, 441–446 (2017). [CrossRef]  

45. K. Wu, J. García de Abajo, C. Soci, P. P. Shum, and N. I. Zheludev, “An optical fiber network oracle for NP-complete problems,” Light Sci. Appl. 3, e147 (2014). [CrossRef]  

46. M. R. Vázquez, V. Bharadwaj, B. Sotillo, S. Z. A. Lo, R. Ramponi, N. I. Zheludev, G. Lanzani, S. M. Eaton, and C. Soci, “Optical NP problem solver on laser-written waveguide platform,” Opt. Express 26, 702–710 (2018). [CrossRef]  

47. Y. Okawachi, Y. Okawachi, M. Yu, J. K. Jang, X. Ji, Y. Zhao, B. Y. Kim, M. Lipson, and A. L. Gaeta, “Demonstration of chip-based coupled degenerate optical parametric oscillators for realizing a nanophotonic spin-glass,” Nat. Commun. 11, 4119 (2020). [CrossRef]  

48. D. Pierangeli, G. Marcucci, and C. Conti, “Large-scale photonic Ising machine by spatial light modulation,” Phys. Rev. Lett. 122, 213902 (2019). [CrossRef]  

49. D. Pierangeli, G. Marcucci, D. Brunner, and C. Conti, “Noise-enhanced spatial-photonic Ising machine,” Nanophotonics 9, 4109–4116 (2020). [CrossRef]  

50. S. Kumar, H. Zhang, and Y.-P. Huang, “Large-scale Ising emulation with four body interaction and all-to-all connections,” Commun. Phys. 3, 108 (2020). [CrossRef]  

51. J. Bueno, S. Maktoobi, L. Froehly, I. Fischer, M. Jacquot, L. Larger, and D. Brunner, “Reinforcement learning in a large-scale photonic recurrent neural network,” Optica 5, 756–760 (2018). [CrossRef]  

52. A. Saade, F. Caltagirone, I. Carron, L. Daudet, A. Drémeau, S. Gigan, and F. Krzakala, “Random projections through multiple optical scattering: approximating kernels at the speed of light,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2016), pp. 6215–6219.

53. R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X 9, 021032 (2019). [CrossRef]  

54. M. W. Matthès, P. Del Hougne, J. De Rosny, G. Lerosey, and S. M. Popoff, “Optical complex media as universal reconfigurable linear operators,” Optica 6, 465–472 (2019). [CrossRef]  

55. G. Marcucci, D. Pierangeli, P. W. H. Pinkse, M. Malik, and C. Conti, “Programming multi-level quantum gates in disordered computing reservoirs via machine learning,” Opt. Express 28, 14018–14027 (2020). [CrossRef]  

56. Y. Zuo, B. Li, Y. Zhao, Y. Jiang, Y. C. Chen, P. Chen, G. B. Jo, J. Liu, and S. Du, “All-optical neural network with nonlinear activation functions,” Optica 6, 1132–1137 (2019). [CrossRef]  

57. M. Rafayelyan, J. Dong, Y. Tan, F. Krzakala, and S. Gigan, “Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction,” arXiv:2001.09131v1 (2020).

58. D. C. Mattis, “Solvable spin systems with random interactions,” Phys. Lett. A 56, 421 (1976). [CrossRef]  

59. H. Nishimori, Statistical Physics of Spin Glasses and Information Processing (Clarendon, 2001), Vol. 111.

60. S. V. Isakov, I. N. Zintchenko, T. F. Ronnow, and M. Troyer, “Optimised simulated annealing for Ising spin glasses,” Comput. Phys. Commun. 192, 265–271 (2015). [CrossRef]  

61. O. Tzang, E. Niv, S. Singh, S. Labouesse, G. Myatt, and R. Piestun, “Wavefront shaping in complex media with a 350 KHz modulator via a 1D-to-2D transform,” Nat. Photonics 13, 788–793 (2019). [CrossRef]  

62. C. Peng, R. Hamerly, M. Soltani, and D. R. Englund, “Design of high-speed phase-only spatial light modulators with two-dimensional tunable microcavity arrays,” Opt. Express 27, 30669–30680 (2019). [CrossRef]  

63. M. Leonetti, E. Hormann, L. Leuzzi, G. Parisi, and G. Ruocco, “Optical computation of a spin glass dynamics with tunable complexity,” arXiv:2006.08378v2 (2020).

64. D. Pierangeli, M. Rafayelyan, C. Conti, and S. Gigan, “Scalable spin-glass optical simulator,” arXiv:2006.00828 (2020).
