Change is an inescapable aspect of natural and artificial systems, and adaptation is central to their resilience [1]. Optimization problems are no exception to this maxim. Indeed, the viability and operational success of businesses depend heavily on how effectively they respond to changes in the myriad optimization problems they entail. For an optimization problem, this boils down to the efficiency of an algorithm in finding and maintaining a quality solution to an ever-changing problem. The ubiquity of dynamic optimization problems demands extensive research into the design and development of algorithms capable of dealing with various types of change [2].
Inspired by biological evolution and natural self-organized systems, evolutionary algorithms and swarm intelligence methods have been widely used for dynamic optimization problems thanks to their natural capability to deal with environmental changes [3]. Indeed, both classes have been successfully applied to dynamic optimization problems with various environmental and dynamic characteristics [2, 4]. However, they cannot be applied directly to dynamic optimization problems, as these methods were originally designed for optimization in static environments and cannot cope with the challenges of a dynamic optimization problem on their own. Hence, they are usually combined with additional components to form evolutionary dynamic optimization methods.
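To make the role of such extra components concrete, the following minimal Python sketch wraps a plain evolutionary loop with two illustrative components: change detection via a re-evaluated sentinel solution, and partial re-randomization of the population after a detected change. The objective function, parameter values, and component choices here are illustrative assumptions only, not any specific published method.

import random

def objective(x, t):
    """Hypothetical time-dependent objective: a single peak whose optimum drifts with t."""
    return -sum((xi - 0.1 * t) ** 2 for xi in x)

def evolve_dynamic(dim=2, pop_size=20, generations=200, change_every=50):
    t = 0
    pop = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    sentinel = [0.0] * dim                      # fixed solution re-evaluated to detect changes
    sentinel_fitness = objective(sentinel, t)

    for gen in range(1, generations + 1):
        if gen % change_every == 0:             # the environment changes periodically
            t += 1

        # Change detection: a shift in the sentinel's fitness signals a new environment.
        if objective(sentinel, t) != sentinel_fitness:
            half = pop_size // 2                # response: re-randomize half the population
            pop[half:] = [[random.uniform(-5.0, 5.0) for _ in range(dim)]
                          for _ in range(half)]
            sentinel_fitness = objective(sentinel, t)

        # Plain (mu + lambda)-style evolutionary step with Gaussian mutation.
        offspring = [[xi + random.gauss(0.0, 0.3) for xi in random.choice(pop)]
                     for _ in range(pop_size)]
        pop = sorted(pop + offspring, key=lambda x: objective(x, t), reverse=True)[:pop_size]

    return pop[0]  # best solution found for the final environment

if __name__ == "__main__":
    print(evolve_dynamic())

Without the two dynamic components, the same loop would keep converging toward the optimum of an old environment; the sketch only illustrates why static optimizers are augmented, not how state-of-the-art methods do it.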
This tutorial is dedicated to exploring recent advances in the field of evolutionary continuous dynamic optimization. It first presents the definition of dynamic optimization problems and explains the different classes of these problems [5, 6]. Then, the components of evolutionary continuous dynamic optimization methods [7] are described. This is followed by a description of the state-of-the-art and most widely used benchmarks in the field. The tutorial also introduces the performance indicators commonly used for analyzing and comparing algorithms. After introducing several real-world applications, the tutorial concludes by discussing some current challenges, the gap between academic research and real-world problems, and potential future research directions.
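For orientation, the definition and performance-measurement parts of the tutorial can be summarized by a commonly used formulation and indicator; the notation below is one common convention from the literature and is given only as a sketch:
\[
  \max_{\mathbf{x} \in \mathbb{R}^{d}} f(\mathbf{x}, t), \qquad t \in \{1, 2, \dots, T\},
\]
where the objective function $f$ (and possibly the constraints) changes from one environment $t$ to the next. A widely used performance indicator is the offline error,
\[
  E_O = \frac{1}{N} \sum_{n=1}^{N} \Bigl( f\bigl(\mathbf{x}^{*}(t_n), t_n\bigr) - f\bigl(\mathbf{x}^{\mathrm{best}}(n), t_n\bigr) \Bigr),
\]
where $N$ is the total number of fitness evaluations, $t_n$ is the environment active at evaluation $n$, $\mathbf{x}^{*}(t_n)$ is its global optimum, and $\mathbf{x}^{\mathrm{best}}(n)$ is the best solution found since the last environmental change.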
Targeted audience, learning outcomes, level of the tutorial, and expected length
This tutorial is suitable for anyone with an interest in evolutionary computation who wishes to learn more about the state of the art in continuous dynamic optimization problems. It is specifically targeted at Ph.D. students and early-career researchers who want to gain an overview of the field and identify its most important open questions and challenges to bootstrap their research in continuous dynamic optimization. The tutorial can also be of interest to more experienced researchers and practitioners who wish to get a glimpse of the latest developments in the field. In addition to our primary goal, which is to inform and educate, we also wish to use this tutorial as a forum for exchanging ideas between researchers. Overall, this tutorial provides a unique opportunity to showcase the latest developments in this active research area to the EC research community. The level of this tutorial is introductory, and its expected duration is two hours.
Presenters:
Danial Yazdani (M’20) received his Ph.D. degree in computer science from Liverpool John Moores University, Liverpool, UK, in 2018. He is currently a Research Assistant Professor with the Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China. Besides dynamically changing environments, his main research interests include evolutionary algorithms, large-scale optimization, simulation optimization, and their applications. He was a recipient of the Best Thesis Award from the Faculty of Engineering and Technology, Liverpool John Moores University, and the SUSTech Presidential Outstanding Postdoctoral Award from the Southern University of Science and Technology. He is a member of the IEEE Task Force on Evolutionary Computation in Dynamic and Uncertain Environments, and the IEEE Task Force on Large-Scale Global Optimization.
Dr. Yazdani has been working on evolutionary dynamic optimization for more than 10 years. Both his MSc and PhD [8] theses focused on this topic, which has resulted in more than 15 peer-reviewed papers in this field (all as the first author) [7, 9–22].
Xin Yao (M’91, SM’96, F’03) obtained his Ph.D. in 1990 from the University of Science and Technology of China (USTC), MSc in 1985 from North China Institute of Computing Technologies, and BSc in 1982 from USTC. He is a Chair Professor of Computer Science at the Southern University of Science and Technology, Shenzhen, China, and a part-time Professor of Computer Science at the University of Birmingham, UK. He is an IEEE Fellow and was a Distinguished Lecturer of the IEEE Computational Intelligence Society (CIS). His major research interests include evolutionary computation, ensemble learning, and their applications to software engineering.
Prof. Yao’s paper on evolving artificial neural networks won the 2001 IEEE Donald G. Fink Prize Paper Award. He also won the 2010, 2016, and 2017 IEEE Transactions on Evolutionary Computation Outstanding Paper Awards, the 2011 IEEE Transactions on Neural Networks Outstanding Paper Award, and many other best paper awards. He received a prestigious Royal Society Wolfson Research Merit Award in 2012, the IEEE CIS Evolutionary Computation Pioneer Award in 2013, and the 2020 IEEE Frank Rosenblatt Award. He was the President (2014-15) of the IEEE CIS and the Editor-in-Chief (2003-08) of the IEEE Transactions on Evolutionary Computation.
References
[1] J. H. Holland et al., Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT Press, 1992.
[2] T. T. Nguyen, S. Yang, and J. Branke, Evolutionary dynamic optimization: A survey of the state of the art, Swarm Evol. Comput., vol. 6, pp. 1–24, 2012.
[3] C. Cruz, J. R. González, and D. A. Pelta, Optimization in dynamic environments: a survey on problems, methods and measures, Soft Comput., vol. 15, no. 7, pp. 1427–1448, 2011.
[4] M. Mavrovouniotis, C. Li, and S. Yang, A survey of swarm intelligence for dynamic optimization: Algorithms and applications, Swarm Evol. Comput., vol. 33, pp. 1–17, 2017.
[5] J. Branke, Evolutionary optimization in dynamic environments. Springer Science & Business Media, 2012, vol. 3.
[6] T. T. Nguyen and X. Yao, Continuous dynamic constrained optimization – the challenges, IEEE Trans. Evol. Comput., vol. 16, no. 6, pp. 769–786, 2012.
[7] D. Yazdani, R. Cheng, D. Yazdani, J. Branke, Y. Jin, and X. Yao, A survey of evolutionary continuous dynamic optimization over two decades – Part A, IEEE Trans. Evol. Comput., 2021.
[8] D. Yazdani, Particle swarm optimization for dynamically changing environments with particular focus on scalability and switching cost, Ph.D. dissertation, Liverpool John Moores University, Liverpool, UK, 2018.
[9] D. Yazdani, M. R. Akbarzadeh-Totonchi, B. Nasiri, and M. R. Meybodi, A new artificial fish swarm algorithm for dynamic optimization problems, in IEEE Congr. Evol. Comput. IEEE, 2012, pp. 1–8.
[10] D. Yazdani, B. Nasiri, R. Azizi, A. Sepas-Moghaddam, and M. R. Meybodi, Optimization in dynamic environments utilizing a novel method based on particle swarm optimization, International Journal of Artificial Intelligence, vol. 11, no. A13, pp. 170–192, 2013.
[11] D. Yazdani, B. Nasiri, A. Sepas-Moghaddam, and M. R. Meybodi, A novel multi-swarm algorithm for optimization in dynamic environments based on particle swarm optimization, Appl. Soft Comput., vol. 13, no. 4, pp. 2144–2158, 2013.
[12] D. Yazdani, B. Nasiri, A. Sepas-Moghaddam, M. Meybodi, and M. Akbarzadeh-Totonchi, mNAFSA: a novel approach for optimization in dynamic environments with global changes, Swarm Evol. Comput., vol. 18, pp. 38–53, 2014.
[13] D. Yazdani, A. Sepas-Moghaddam, A. Dehban, and N. Horta, A novel approach for optimization in dynamic environments based on modified artificial fish swarm algorithm, Int. J. Comput. Intell. Appl., vol. 15, no. 02, pp. 1650010–1650034, 2016.
[14] D. Yazdani, T. T. Nguyen, J. Branke, and J. Wang, A new multi-swarm particle swarm optimization for robust optimization over time, in European Conference on the Applications of Evolutionary Computation. Springer, 2017, pp. 99–109.
[15] D. Yazdani, J. Branke, M. N. Omidvar, T. T. Nguyen, and X. Yao, Changing or keeping solutions in dynamic optimization problems with switching costs, in Genet. Evol. Comput. Conf. ACM, 2018, pp. 1095–1102.
[16] D. Yazdani, T. T. Nguyen, J. Branke, and J. Wang, A multi-objective time-linkage approach for dynamic optimization problems with previous-solution displacement restriction, in Applications of Evolutionary Computation, K. Sim and P. Kaufmann, Eds. Springer International Publishing, 2018, pp. 864–878.
[17] D. Yazdani, T. T. Nguyen, J. Branke, and J. Wang, A new multi-swarm particle swarm optimization for robust optimization over time, in Applications of Evolutionary Computation, G. Squillero and K. Sim, Eds. Springer International Publishing, 2017, pp. 99–109.
[18] D. Yazdani, T. T. Nguyen, and J. Branke, Robust optimization over time by learning problem space characteristics, IEEE Trans. Evol. Comput., vol. 23, no. 1, pp. 143–155, 2019.
[19] D. Yazdani, M. N. Omidvar, J. Branke, T. T. Nguyen, and X. Yao, Scaling up dynamic optimization problems: A divide-and-conquer approach, IEEE Trans. Evol. Comput., vol. 24, no. 1, pp. 1–15, 2019.
[20] D. Yazdani, M. N. Omidvar, R. Cheng, J. Branke, T. T. Nguyen, and X. Yao, Benchmarking continuous dynamic optimization: Survey and generalized test suite, IEEE Trans. Cybern., pp. 1–14, 2020.
[21] D. Yazdani, R. Cheng, C. He, and J. Branke, Adaptive control of subpopulations in evolutionary dynamic optimization, IEEE Trans. Cybern., 2020.
[22] D. Yazdani, R. Cheng, D. Yazdani, J. Branke, Y. Jin, and X. Yao, A survey of evolutionary continuous dynamic optimization over two decades – Part B, IEEE Trans. Evol. Comput., 2021.