Aim

Nature-inspired algorithms have been successfully applied to neural network training [1], [2], [3], [4], neural network architecture optimization [5], [6], and neural network architecture construction [7], [8]. Applications of nature-inspired algorithms to neural networks are diverse, and are often hybridized with more traditional gradient-descent-based methods [9], [10], [11]. Compared to gradient-based methods, nature-inspired algorithms are less sensitive to weight initialization, less likely to become trapped in local optima, and do not depend on the gradient of the activation function [12]. Despite the relative success of nature-inspired algorithms in the neural network context, a solid theoretical foundation for such applications is often lacking. Successful applications of nature-inspired methods to newer neural network paradigms such as deep learning are yet to be demonstrated [13], and some nature-inspired algorithms have been shown to suffer from stagnation when applied to neural network training [14]. Optimizing large real-world neural networks is a challenging task due to the inherently high dimensionality of the weight space, the strong correlations between individual weights, and our limited knowledge of error landscape properties in high dimensions. To be usable in a real-world context, nature-inspired methods must therefore scale well to high dimensions.
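
To make the setting concrete, the following minimal Python sketch trains a tiny feedforward network on the XOR problem with a plain global-best particle swarm optimizer, treating the flattened weight vector as the search space and the training error as the fitness function. The network size, swarm parameters, and dataset are illustrative assumptions, not taken from any of the cited works; the point is that the swarm uses only error values, never gradients.

    import numpy as np

    # Toy task: XOR, a small non-linearly-separable problem (illustrative choice).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Network shape (assumed for illustration): 2 inputs -> 3 hidden -> 1 output.
    N_IN, N_HID, N_OUT = 2, 3, 1
    DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # flattened weight count

    def forward(w, x):
        """Unpack a flat weight vector and run the network forward."""
        i = 0
        W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
        b1 = w[i:i + N_HID]; i += N_HID
        W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
        b2 = w[i:i + N_OUT]
        h = np.tanh(x @ W1 + b1)
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

    def fitness(w):
        """Mean squared error on the training set: the only signal PSO needs."""
        return float(np.mean((forward(w, X) - y) ** 2))

    # Plain global-best PSO; no gradient information is used anywhere, so the
    # activation functions need not be differentiable.
    rng = np.random.default_rng(0)
    SWARM, ITERS = 30, 500
    INERTIA, C1, C2 = 0.729, 1.494, 1.494  # common constriction-style coefficients

    pos = rng.uniform(-1.0, 1.0, (SWARM, DIM))  # each particle is a weight vector
    vel = np.zeros((SWARM, DIM))
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_fit)].copy()

    for _ in range(ITERS):
        r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
        vel = INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([fitness(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmin(pbest_fit)].copy()

    print("final MSE:", fitness(gbest))
    print("outputs:", forward(gbest, X).ravel())

Because each particle is a complete weight vector, the search-space dimensionality grows with the network size, which illustrates the scalability concern raised above.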

The aim of this special session is to survey existing nature-inspired approaches to neural network optimization, to encourage discussion of open challenges, to identify problems, and to propose solutions. The session will provide an excellent forum for researchers in this exciting cross-disciplinary field.

REFERENCES


[1] A. P. Engelbrecht and A. Ismail, “Training product unit neural networks,” Stability and Control: Theory and Applications, vol. 2, no. 1–2, pp. 59–74, 1999.
[2] J. N. Gupta and R. S. Sexton, “Comparing backpropagation with a genetic algorithm for neural network training,” Omega, vol. 27, no. 6, pp. 679–684, 1999.
[3] E. Valian, S. Mohanna, and S. Tavakoli, “Improved cuckoo search algorithm for feedforward neural network training,” International Journal of Artificial Intelligence & Applications, vol. 2, no. 3, pp. 36–43, 2011.
[4] G. Das, P. K. Pattnaik, and S. K. Padhy, “Artificial neural network trained by particle swarm optimization for non-linear channel equalization,” Expert Systems with Applications, vol. 41, no. 7, pp. 3491–3496, 2014.
[5] S. Ding, H. Li, C. Su, J. Yu, and F. Jin, “Evolutionary artificial neural networks: a review,” Artificial Intelligence Review, vol. 39, no. 3, pp. 251–260, 2013.
[6] S. A. Harp and T. Samad, “Optimizing neural networks with genetic algorithms,” in Proceedings of the 54th American Power Conference, Chicago, vol. 2, 1992.
[7] X. Yao, “Evolutionary artificial neural networks,” International Journal of Neural Systems, vol. 4, no. 3, pp. 203–222, 1993.
[8] C. Zhang, H. Shao, and Y. Li, “Particle swarm optimisation for evolving artificial neural network,” in IEEE International Conference on Systems, Man, and Cybernetics, vol. 4. IEEE, 2000, pp. 2487–2490.
[9] X. Cai, N. Zhang, G. K. Venayagamoorthy, and D. C. Wunsch, “Time series prediction with recurrent neural networks trained by a hybrid PSO–EA algorithm,” Neurocomputing, vol. 70, no. 13, pp. 2342–2353, 2007.
[10] J.-R. Zhang, J. Zhang, T.-M. Lok, and M. R. Lyu, “A hybrid particle swarm optimization–back-propagation algorithm for feedforward neural network training,” Applied Mathematics and Computation, vol. 185, no. 2, pp. 1026–1037, 2007.
[11] S. Mirjalili, S. Z. M. Hashim, and H. M. Sardroudi, “Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm,” Applied Mathematics and Computation, vol. 218, no. 22, pp. 11125–11137, 2012.
[12] E. A. Grimaldi, F. Grimaccia, M. Mussetta, and R. E. Zich, “PSO as an effective learning algorithm for neural network applications,” in Proceedings of the 3rd International Conference on Computational Electromagnetics and Its Applications (ICCEA). IEEE, 2004, pp. 557–560.
[13] A. Rakitianskaia and A. P. Engelbrecht, “Training high-dimensional neural networks with cooperative particle swarm optimiser,” in International Joint Conference on Neural Networks (IJCNN). IEEE, 2014, pp. 4011–4018.
[14] A. P. Piotrowski, “Differential evolution algorithms applied to neural network training suffer from stagnation,” Applied Soft Computing, vol. 21, pp. 382–406, 2014.