Dynamic mutation enhanced particle swarm optimization for optical wavefront shaping


Abstract

Particle swarm optimization (PSO) is a well-known iterative algorithm commonly adopted in wavefront shaping for focusing light through or inside scattering media. Its performance is, however, limited by premature convergence in an unstable environment. Therefore, we aim to solve this problem and enhance the focusing performance by adding a dynamic mutation operation to the plain PSO. With dynamic mutation, the “particles,” or the optimized masks, are mutated according to a quantifiable discrepancy between the current and theoretically optimal solutions, i.e., the “error rate.” Guided by this measure, the diversity of the “particles” is effectively expanded, and the adaptability of the algorithm to noise and instability is significantly improved, yielding optimization approaching the theoretical optimum. Simulation and experimental results show that PSO with dynamic mutation performs considerably better than PSO without mutation or with a constant mutation, especially in a noisy environment.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Multiple scattering occurs when light passes through a material with wavelength-scale inhomogeneity of refractive index, e.g., biological tissue. This has long been an obstacle for deep-tissue optical focusing and/or imaging, and other biomedical applications, such as optical coherence tomography (OCT) [1,2] and two-photon microscopy [3]. To address this dilemma, wavefront shaping was invented [4], with which the input wavefront can be modulated or “shaped” by a spatial light modulator (SLM) before it reaches the scattering media. The scattering can therefore be reversed or compensated through optical phase conjugation [5], transmission matrix measurement [6], or iterative optimization [7–17]. Iterative optimization approaches are commonly adopted as they are straightforward and easy to implement. By confining light to a specific output optical field, e.g., a focus, inside or through a scattering medium, an optimal SLM mask can be iteratively optimized so that the phase or intensity distortions can be overcome. In recent years, various iterative optimization algorithms, such as the genetic algorithm (GA) [7,8], particle swarm optimization (PSO) [10–13], simulated annealing (SA) [14,15], the continuous sequential algorithm (CSA) [16], the stepwise sequential algorithm (SSA) [16], the partitioning algorithm (PA) [16], and artificial intelligence assisted algorithms [18,19], have been demonstrated for successful optical focusing.

Particle swarm optimization (PSO) is a well-established swarm intelligence algorithm developed by Kennedy and Eberhart in 1995 [20]. It is inspired by the social behavior of animals such as bird flocking and fish schooling. Every possible solution is a “bird” in the search space and is called a “particle.” The fitness of the particles is evaluated, and their positions and velocities are updated according to the individual and global best solutions in each iteration. The particles “fly” through the search space until the optimal solution is found. The PSO algorithm was originally developed to operate in a continuous problem space; Kennedy and Eberhart later remodeled it into a discrete binary version in 1997 [21], and Li et al. first applied it to amplitude optimization for wavefront shaping in 2017 [12]. Owing to its simplicity in implementation (e.g., fewer parameters to be set) and robustness in global optimization, PSO is gaining popularity in the field of wavefront shaping [10–13,17].

In principle, PSO and GA are similar: both optimizations start with a large population of randomly generated SLM masks. Nevertheless, each sample or SLM mask in PSO, called a “particle,” evolves individually based on its fitness, without the crossover between samples used in GA. PSO has an intrinsic guidance strategy that leads the particles to move according to the best individual so as to reach better positions. It also has a memory mechanism that stores the previous best solution achieved by each particle. If the particles move along an undesirable path and reach unsatisfactory positions, their memories guide them back to previously good solutions. These properties allow the entire population to converge quickly toward an optimal position and to better resist degradation. In general, a larger population size leads to a higher probability of approaching the global optimum, but it cannot be very large (e.g., no more than 50) [22]: evaluating every single “particle” in a large population greatly reduces the optimization efficiency. A small population, however, limits the sample diversity and may trap the optimization in a local optimum.

To balance the performance and efficiency of PSO, a mutation procedure has been suggested: a small fraction (the mutation rate) of the modulating modes of each “particle” is mutated in every iteration to expand the diversity [10,23]. Nevertheless, choosing a suitable mutation rate in wavefront shaping is challenging. Mutation rates reported so far are pre-set before the optimization, either as a constant or as a value decaying across the optimization. Such a design is little more than a guess, and its adaptability to medium perturbations is doubtful: a perturbed medium continuously varies the optimal SLM mask for optical focusing, while a pre-determined mutation essentially assumes the medium is unchanged or sufficiently stable.

Recently, an adaptive mutation concept was proposed to enhance the adaptability of iterative algorithms, such as GA [24], in which the mutation rate is based on an instant error rate (r) of the modulating mask. In a strong scattering regime, the instant error rate can be calculated according to a square rule, i.e., Eq. (1), where $\eta ^{\prime}$ represents the relative peak-to-background ratio (PBR), ${\eta _r}$ is the experimental PBR, and ${\eta _0}$ is the theoretical average PBR (${\eta _0} = N/2\pi $ for binary-amplitude modulation, with N the number of modes used for modulation). The error rate indicates the fraction of modes of the SLM mask that incorrectly modulate the input wavefront. In other words, the status of the optimization, i.e., how far the current mask deviates from the theoretically optimal one, can be directly assessed. For simplicity, the mutation rate ($\mu $) is set to be proportional to the error rate, i.e., Eq. (2), so that an adaptive mutation rate is obtained.

$$\eta ^{\prime} = {\eta _r}/{\eta _0} = {({1 - 2r} )^2}$$
$$\mu = r/\textrm{mutation constant}$$
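For concreteness, the following Python snippet shows how the error rate and the dynamic mutation rate could be computed from a measured PBR according to Eqs. (1) and (2); the function names and the example numbers are illustrative assumptions, not the authors' code.

```python
import numpy as np

def error_rate(pbr_measured, n_modes):
    """Estimate the instant error rate r from Eq. (1).

    pbr_measured : experimentally measured peak-to-background ratio (eta_r)
    n_modes      : number of binary-amplitude modulation modes N
    """
    eta_0 = n_modes / (2 * np.pi)          # theoretical average PBR for binary-amplitude modulation
    eta_rel = np.clip(pbr_measured / eta_0, 0.0, 1.0)
    return (1.0 - np.sqrt(eta_rel)) / 2.0  # invert (1 - 2r)^2 = eta'

def mutation_rate(r, mutation_constant):
    """Dynamic mutation rate from Eq. (2): proportional to the error rate."""
    return r / mutation_constant

# Example with illustrative numbers: N = 64*64 modes, measured PBR of 100, mutation constant of 4
r = error_rate(100.0, 64 * 64)
mu = mutation_rate(r, 4.0)
```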

In this study, the adaptive mutation is introduced into PSO for binary-amplitude modulation to achieve optical focusing in a strong scattering regime. The performances of PSO without mutation, with constant (single-point) mutation, and with dynamic mutation are compared through both simulation and experiment, and the effect of the dynamic mutation on improving the adaptability and overall performance of PSO is validated.

The workflow of the PSO is depicted in Fig. 1(a). In the PSO for wavefront shaping, each “particle” is characterized by a “position”, i.e., the SLM mask, and a “velocity”, i.e., the variation factor for the SLM mask. In the first step, before optimization, M “particles” (the population size) are randomly generated, each with a “position” ${x_i} = ({{x_{i,1}}, \ldots ,{x_{i,d}}, \ldots ,{x_{i,D}}} )$ and a “velocity” ${v_i} = ({{v_{i,1}}, \ldots ,{v_{i,d}}, \ldots ,{v_{i,D}}} )$, where D equals the number of input modes N of the modulating mask and $i = 1,2, \ldots ,\; M$. Since binary-amplitude modulation is of interest in this study, ${x_{i,d}}$ is equal to 0 or 1 and ${v_{i,d}}$ is in the range of [0, 1] (Step 1 in Fig. 1(a)). Then the optimization starts. In every iteration, the fitness, i.e., the PBR, of all the “particles” is measured, and “pbest” and “gbest” are recorded (Step 2), where “pbest” represents the “position” with the highest fitness achieved by an individual “particle” so far, and “gbest” represents the global best “position” among the entire population of investigated “particles”. With “pbest” and “gbest” identified, the “velocity” and “position” of each “particle” are updated according to Eq. (3) and Eq. (4), respectively, for the next iteration (Step 3), where $v_{i,d}^k$ and $x_{i,d}^k$ denote the velocity and position of the ith particle at the kth iteration, c1 and c2 are learning factors affecting the convergence speed, and r1 and r2 are random numbers between 0 and 1. As the optimization works in a discrete space, the update of the position is a probability problem. The new velocity ($v_{i,d}^{k + 1}$) is transformed by a logistic sigmoid function $S({v_{i,d}^{k + 1}} )$ and thus constrained to the interval [0, 1]; thereby, the new position ($x_{i,d}^{k + 1}$) has a probability of $S({v_{i,d}^{k + 1}} )$ of being equal to one. A mutation operation is applied after finding the new velocities and positions of the particles in each iteration (Step 4): some of the pixels in the mask are flipped, i.e., 0 (OFF) becomes 1 (ON) or 1 (ON) becomes 0 (OFF). For constant mutation, a fixed number of pixels is flipped (i.e., 1 pixel for single-point mutation). For dynamic mutation, the mutation rate of each particle is adjusted according to its error rate calculated from Eq. (1), and a total of $\mu \times N$ pixels are mutated. The concepts of the different mutation schemes are illustrated in Fig. 1(b). The process continues until a termination condition (e.g., convergence of the PBR) is met or the optimal solution is reached.

$$v_{i,d}^{k + 1} = v_{i,d}^k + {c_1}r_1^k({pbest_{i,d}^k - x_{i,d}^k} )+ {c_2}r_2^k({gbest_d^k - x_{i,d}^k} )$$
$$\left\{ {\begin{array}{c} {x_{i,d}^{k + 1} = 1,\ \textrm{if}\ rand()< S({v_{i,d}^{k + 1}} )}\\ {x_{i,d}^{k + 1} = 0,\ \textrm{otherwise}} \end{array}} \right.,\ \textrm{where}\ S({v_{i,d}^{k + 1}} )= \frac{1}{{1 + \exp ({ - v_{i,d}^{k + 1}} )}}$$
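As a minimal sketch of Steps 3 and 4, the Python code below implements the binary velocity/position update of Eqs. (3)-(4) followed by the dynamic mutation of Eq. (2). The array shapes, default parameter values, and function names are our assumptions for illustration, not the authors' actual implementation (which, per the Data availability statement, is available on request).

```python
import numpy as np

rng = np.random.default_rng()

def pso_step(x, v, pbest, gbest, error_rates, c1=2.0, c2=2.0, mutation_constant=4.0):
    """One iteration of the discrete binary PSO update (Eqs. (3)-(4)) plus dynamic mutation.

    x, v        : (M, N) binary positions (SLM masks) and real-valued velocities
    pbest       : (M, N) best position found by each particle so far
    gbest       : (N,) best position found by the whole swarm
    error_rates : (M,) instant error rate of each particle from Eq. (1)
    """
    M, N = x.shape
    r1 = rng.random((M, N))
    r2 = rng.random((M, N))
    v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (3)
    s = 1.0 / (1.0 + np.exp(-v))                            # sigmoid transform, Eq. (4)
    x = (rng.random((M, N)) < s).astype(int)                # new position is 1 with probability S(v)

    # Dynamic mutation: flip mu*N randomly chosen pixels of each particle, mu = r / mutation constant
    for i in range(M):
        n_flip = int(round(error_rates[i] / mutation_constant * N))
        if n_flip > 0:
            idx = rng.choice(N, size=n_flip, replace=False)
            x[i, idx] = 1 - x[i, idx]
    return x, v
```

For constant (single-point) mutation, the inner loop would simply flip one randomly chosen pixel per particle regardless of the error rate.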

Fig. 1. (a) Flow chart of the PSO algorithm with dynamic mutation. (b) Concepts of different mutation schemes.

The optimization performances of PSO without mutation, with constant mutation, and with dynamic mutation are first compared by simulations under different levels of noise. Each simulation is repeated 50 times with a new transmission matrix generated following the circular Gaussian distribution, and the results are then averaged. N = 64 × 64 input modes are used, and the output mode at the center of the detection plane is selected as the target for focusing optimization. The PBR of the focus, defined as the ratio of the average intensity of the focal region to the average background intensity, is employed as the fitness value. Every time the PBR is measured, it is counted as one measurement, and an iteration is completed when the PBR of all particles has been evaluated. To simulate the noise emanating from unstable optical systems, additive Gaussian noise with a standard deviation of 30%, 60%, or 80% of the initial average intensity <I0> is added to the intensity in every PBR measurement to mimic the low-noise, medium-noise, and high-noise situations, respectively. The parameters used in the simulations, which are optimized for each noise level, are summarized in Table 1. From Eq. (2), the mutation rate is bounded between 0.5/mutation constant and 0 if we assume the initial and final error rates to be 0.5 and 0, respectively. Therefore, a smaller mutation constant is employed in higher noise situations to provide a wider range of mutation rates for greater solution diversity.

Table 1. Simulation Parameters
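The noisy fitness evaluation described above could be modeled, for example, as in the following sketch. This is a simplified stand-in for the authors' simulation: the single transmission-matrix row, the background model, and the noise scaling are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng()
N = 64 * 64                      # number of binary-amplitude input modes

# One row of the transmission matrix (the single target output mode), drawn from a
# circular Gaussian distribution: real and imaginary parts i.i.d. normal.
t = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

# Reference intensity <I0>: average target intensity for a random binary mask with
# roughly half the modes ON. Used here only to scale the additive measurement noise.
i0 = 0.5 * N * np.mean(np.abs(t) ** 2)

def measure_pbr(mask, noise_std_frac=0.6):
    """Simulated noisy PBR 'measurement' for a binary mask (0/1 vector of length N).

    noise_std_frac sets the additive Gaussian noise level as a fraction of <I0>
    (0.3, 0.6, and 0.8 mimic the low-, medium-, and high-noise cases in the text).
    """
    focus = np.abs(t @ mask) ** 2                               # target-mode intensity
    background = np.mean(np.abs(t) ** 2) * max(mask.sum(), 1)   # average diffuse background (simplified model)
    noisy_focus = focus + rng.normal(scale=noise_std_frac * i0)
    return max(noisy_focus, 0.0) / background
```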

The simulation results (Fig. 2) show that the PSO with dynamic mutation offers a significant advantage in a noisy environment. Although optimizations without mutation, with constant mutation, and with dynamic mutation show similar focusing performance in the noise-free situation, the difference among the three mutation schemes becomes more obvious as the noise level increases. In the low-noise situation, the PSOs with mutation operations (both dynamic and constant mutation) achieve a slightly better final PBR than the plain PSO. In the medium-noise situation, the PSO with dynamic mutation shows better noise resistance than the other two schemes and starts to outperform the PSO with constant mutation after 13,000 measurements. In the high-noise situation, the PSO without mutation and with constant mutation suffer from premature convergence after around 10,000 measurements. Meanwhile, with dynamic mutation, the PBR of the focal point keeps increasing throughout the optimization and the final PBR is ∼50% higher than that of the other two schemes.

Fig. 2. Simulation results of PSO without mutation, with constant mutation, and with dynamic mutation under different levels of noise. (a) Noise-free. (b) Low-noise. (c) Medium-noise. (d) High-noise.

The premature convergence can be attributed to the small population size (i.e., 10 particles), as it limits the sample diversity. The black dashed line in Fig. 2(d) shows that with a larger population size (i.e., 50 particles), the PSO without mutation is less likely to be trapped in a local optimum. Nevertheless, the trade-off is a lower efficiency at the start: the optimization converges slowly and, notably, the final PBR is still ∼16% lower than that of the PSO with dynamic mutation and a smaller population size. This result demonstrates that a small-population PSO with dynamic mutation can outperform a large-population PSO. Dynamic mutation effectively expands the population diversity to prevent the optimization from falling into local optima and, at the same time, achieves satisfactory focusing performance with high efficiency.

To validate the simulation results, experiments are conducted with the setup shown in Fig. 3(a). A continuous-wave 532 nm laser (EXLSR-532-300-CDRH, Spectra Physics, USA) is used as the light source. The light beam is expanded by a pair of convex lenses (L1 and L2) and illuminates the digital micromirror device (DMD) (DLP4100, Texas Instruments Inc., USA). 64 × 64 input modes are used, with 16 × 25 DMD pixels grouped as one superpixel (172.8 µm × 270 µm). The modulated light beam is then contracted by L3 and L4 and focused onto a scattering sample, a 4 mm thick ground glass diffuser, by a 10× objective lens (NA = 0.25). A CMOS camera (Blackfly S BFS-U3-04S2M-CS, FLIR, Canada) is used to capture the image behind the ground glass. The optical system is built on an optical table supported by pneumatic isolators (M-OTS-ST-48-12-I, Newport, USA), in which compressed air provides vibration damping and isolation. To investigate the noise adaptability of the different PSO schemes, the compressed air in the pneumatic isolators is released to reduce the stability of the system. The stability is evaluated by measuring the Pearson correlation coefficient (c.c.) between the speckle patterns recorded before and during optimization with all the DMD pixels in the “ON” state; a larger c.c. indicates higher stability. The parameters used in the experiments with and without vibration isolation follow the simulation settings for noise = 0.6<I0> and noise = 0.8<I0>, respectively.
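For reference, the stability metric is a standard Pearson correlation coefficient between two speckle images; it could be computed, for instance, with the hypothetical helper below (not the authors' code).

```python
import numpy as np

def speckle_correlation(img_ref, img_now):
    """Pearson correlation coefficient (c.c.) between a reference speckle image and a
    later one, used as the stability metric: values near 1 indicate a stable system."""
    a = img_ref.astype(float).ravel()
    b = img_now.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```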

Fig. 3. (a) Experimental setup. L1: f = 60 mm; L2 & L3: f = 250 mm; L4: f = 50 mm; DMD: 1920 × 1080 digital micromirror device; OBJ: 10× objective lens (NA = 0.25); S: 4 mm ground glass. (b) Speckle patterns before optimization (top) and the foci obtained by the three different mutation schemes without vibration isolation after 10,000 measurements (bottom). The 200 μm scale bar applies to all images in (b). (c)-(d) PBR change over 10,000 measurements of PSO without mutation, with constant mutation, and with dynamic mutation (c) with vibration isolation and (d) without vibration isolation. The insets in (c) and (d) show the stability change during the experiments.

Figures 3(b)-(d) show the experimental results of PSO without mutation, with constant mutation, and with dynamic mutation. With pneumatic isolation of the optical table, the optical system is highly stable and the c.c. only decreases to 0.97 after 10,000 measurements (∼60 minutes). The result with such high system stability in Fig. 3(c) is comparable to the simulation result at the medium noise level (Fig. 2(c)), in that the PSO with dynamic mutation achieves a higher final PBR than the other two mutation schemes. Without vibration isolation, the optical system becomes less stable, and the c.c. gradually decreases to 0.9 after 10,000 measurements. As shown in Fig. 3(d), similar to the simulation result in the high-noise situation (Fig. 2(d)), the PSO without mutation encounters premature convergence after around 4,000 measurements. The growth of the PBR in the PSO with constant mutation also slows after 5,000 measurements. With dynamic mutation, the final PBR is ∼51% and ∼28% higher than those obtained by PSO without mutation and with constant mutation, respectively. From Fig. 3(b), the peak intensity of the focus obtained by PSO with dynamic mutation is 2.6 times and 1.2 times stronger than that obtained without mutation and with constant mutation, respectively.

As each experiment lasts around 60 minutes, the speckle patterns gradually decorrelate due to system instability. Without a suitable mutation operation, the diversity of the PSO population is not large enough to adapt to this decorrelation, and the optimization is prone to fall into local optima. Although increasing the population size can alleviate this problem, the optimization efficiency is severely affected (grey dashed line in Fig. 3(d)). By employing a dynamic mutation, we can strike a balance between performance and efficiency: the mutation rate is adaptively adjusted according to the real-time DMD error rate. An increase of the error rate may imply speckle decorrelation, and thus a larger mutation rate is applied to increase the particles' diversity and help the swarm escape from local optima.

In this study, a dynamic mutation is applied to PSO for binary-amplitude-modulation-based wavefront shaping. The simulation and experimental results show that, by including a dynamic mutation operation in the PSO, premature convergence of the focusing optimization can be effectively prevented, and the diversity of solutions for a small population size can be effectively expanded, so that good optimization efficiency and performance are ensured even in high-noise situations. This study also demonstrates a further application of the dynamic mutation concept and the square rule described in Ref. [24]. With the enhanced focusing performance brought by dynamic mutation, PSO is better suited to unstable scattering media and holds great potential for biomedical applications, such as deep-tissue focusing and imaging. Likewise, the dynamic mutation operation can be incorporated into other optimization algorithms for iterative wavefront shaping, such as the genetic algorithm (GA) and simulated annealing (SA), to enhance their focusing performance and adaptability. It is also possible to extend the dynamic mutation operation to phase modulation by deriving a phase version of the error rate and square rule, for broader applications and more promising optimization results.

Funding

National Natural Science Foundation of China (81627805, 81671726, 81930048); Guangdong Science and Technology Department (2019A1515011374, 2019BT02X105); Innovation and Technology Commission (GHP/043/19SZ, GHP/044/19GD, ITS/022/18); Research Grants Council, University Grants Committee (25204416, R5029-19); Science, Technology and Innovation Commission of Shenzhen Municipality (JCYJ20170818104421564).

Disclosures

The authors declare no conflicts of interest.

Data availability

Requests for the code should be addressed to the corresponding author.

References

1. H. Yu, J. Jang, J. Lim, J.-H. Park, W. Jang, J.-Y. Kim, and Y. Park, “Depth-enhanced 2-D optical coherence tomography using complex wavefront shaping,” Opt. Express 22(7), 7514–7523 (2014).

2. J. Jang, J. Lim, H. Yu, H. Choi, J. Ha, J.-H. Park, W.-Y. Oh, W. Jang, S. Lee, and Y. Park, “Complex wavefront shaping for optimal depth-selective focusing in optical coherence tomography,” Opt. Express 21(3), 2890–2902 (2013).

3. J. Tang, R. N. Germain, and M. Cui, “Superpenetration optical microscopy by iterative multiphoton adaptive compensation technique,” Proc. Natl. Acad. Sci. 109(22), 8434–8439 (2012).

4. I. M. Vellekoop and A. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007).

5. D. Wang, E. H. Zhou, J. Brake, H. Ruan, M. Jang, and C. Yang, “Focusing through dynamic tissue with millisecond digital optical phase conjugation,” Optica 2(8), 728–735 (2015).

6. S. Popoff, G. Lerosey, R. Carminati, M. Fink, A. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104(10), 100601 (2010).

7. D. B. Conkey, A. N. Brown, A. M. Caravaca-Aguirre, and R. Piestun, “Genetic algorithm optimization for focusing through turbid media in noisy environments,” Opt. Express 20(5), 4840–4849 (2012).

8. P. Lai, L. Wang, J. W. Tay, and L. V. Wang, “Photoacoustically guided wavefront shaping for enhanced optical focusing in scattering media,” Nat. Photonics 9(2), 126–132 (2015).

9. D. Wu, J. Luo, Z. Li, and Y. Shen, “A thorough study on genetic algorithms in feedback-based wavefront shaping,” J. Innovative Opt. Health Sci. 12(04), 1942004 (2019).

10. B.-Q. Li, B. Zhang, Q. Feng, X.-M. Cheng, Y.-C. Ding, and Q. Liu, “Shaping the Wavefront of Incident Light with a Strong Robustness Particle Swarm Optimization Algorithm,” Chin. Phys. Lett. 35(12), 124201 (2018).

11. H. Hui-Ling, C. Zi-Yang, S. Cun-Zhi, L. Ji-Lin, and P. Ji-Xiong, “Light focusing through scattering media by particle swarm optimization,” Chin. Phys. Lett. 32(10), 104202 (2015).

12. Q. Feng, B. Zhang, Z. Liu, C. Lin, and Y. Ding, “Research on intelligent algorithms for amplitude optimization of wavefront shaping,” Appl. Opt. 56(12), 3240–3244 (2017).

13. L. Fang, H. Zuo, Z. Yang, X. Zhang, J. Du, and L. Pang, “Binary wavefront optimization using particle swarm algorithm,” Laser Phys. 28(7), 076204 (2018).

14. L. Fang, H. Zuo, Z. Yang, X. Zhang, J. Du, and L. Pang, “Binary wavefront optimization using a simulated annealing algorithm,” Appl. Opt. 57(8), 1744–1751 (2018).

15. Z. Fayyaz, N. Mohammadian, F. Salimi, A. Fatima, M. R. R. Tabar, and M. R. Avanaki, “Simulated annealing optimization in wavefront shaping controlled transmission,” Appl. Opt. 57(21), 6233–6242 (2018).

16. I. M. Vellekoop and A. Mosk, “Phase control algorithms for focusing light through turbid media,” Opt. Commun. 281(11), 3071–3080 (2008).

17. Z. Fayyaz, N. Mohammadian, M. Reza Rahimi Tabar, R. Manwar, and K. Avanaki, “A comparative study of optimization algorithms for wavefront shaping,” J. Innovative Opt. Health Sci. 12 (2019).

18. S. Cheng, H. Li, Y. Luo, Y. Zheng, and P. Lai, “Artificial intelligence-assisted light control and computational imaging through scattering media,” J. Innovative Opt. Health Sci. (2019).

19. Y. Luo, S. Yan, H. Li, P. Lai, and Y. Zheng, “Focusing light through scattering media by reinforced hybrid algorithms,” APL Photonics 5(1), 016109 (2020).

20. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of ICNN'95 - International Conference on Neural Networks (IEEE, 1995), pp. 1942–1948.

21. J. Kennedy and R. C. Eberhart, “A discrete binary version of the particle swarm algorithm,” in 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation (IEEE, 1997), pp. 4104–4108.

22. R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in MHS'95. Proceedings of the Sixth International Symposium on Micro Machine and Human Science (IEEE, 1995), pp. 39–43.

23. A. Stacey, M. Jancic, and I. Grundy, “Particle swarm optimization with mutation,” in The 2003 Congress on Evolutionary Computation, 2003. CEC'03 (IEEE, 2003), pp. 1425–1430.

24. H. Li, C. M. Woo, T. Zhong, Z. Yu, Y. Luo, Y. Zheng, X. Yang, H. Hui, and P. Lai, “Adaptive optical focusing through perturbed scattering media with a dynamic mutation algorithm,” Photonics Res. 9(2), 202–212 (2021).
