Neuroevolution for Artificial Neural Network Parameter Optimization
DOI: https://doi.org/10.24246/aiti.v20i2.125-134
Keywords: Neuroevolution, Particle swarm optimization, Neural network, Tuning
Abstract
The Artificial Neural Network is a supervised learning method for various classification problems. An Artificial Neural Network uses training data to identify patterns in the data; the training phase is therefore crucial. During this stage, the network weights are adjusted so that the network can recognize patterns in the data. In this research, a neuroevolution approach is proposed to optimize the artificial neural network parameters (weights). Neuroevolution combines evolutionary algorithms, including various metaheuristic algorithms, to optimize neural network parameters and configuration. In particular, this research implemented particle swarm optimization as the artificial neural network optimizer. The performance of the proposed model was compared to backpropagation, which uses gradient information to adjust the neural network parameters. Five datasets were used as benchmark problems: iris, wine, breast cancer, ecoli, and wheat seeds. The experiment results show that the proposed method achieves better accuracy than backpropagation on three of the five problems and the same accuracy on the other two. The proposed method is also faster than backpropagation on all problems. These results indicate that neuroevolution is a promising approach to improving the performance of artificial neural networks. Further studies are needed to explore more benefits of this approach.
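The core idea the abstract describes, using a particle swarm to search a network's weight vector instead of following gradients, can be sketched in a few lines. The code below is an illustrative assumption, not the authors' implementation: the architecture (one tanh hidden layer), the fitness function (training error rate), the function names, and all PSO hyperparameters (swarm size, inertia w, coefficients c1 and c2) are hypothetical choices for this example.

```python
# Minimal neuroevolution sketch: PSO optimizes the flattened weight vector
# of a one-hidden-layer network. All hyperparameters are illustrative, not
# the values used in the paper.
import numpy as np

def forward(weights, X, n_in, n_hidden, n_out):
    """Decode a flat weight vector and run the network forward."""
    i = 0
    W1 = weights[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = weights[i:i + n_hidden]; i += n_hidden
    W2 = weights[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = weights[i:i + n_out]
    h = np.tanh(X @ W1 + b1)          # hidden layer
    return h @ W2 + b2                # raw class scores; argmax = prediction

def fitness(weights, X, y, dims):
    """Training error rate: the quantity the swarm minimizes."""
    scores = forward(weights, X, *dims)
    return np.mean(np.argmax(scores, axis=1) != y)

def pso_train(X, y, n_hidden=5, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], int(y.max()) + 1
    dims = (n_in, n_hidden, n_out)
    d = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out  # weight count
    pos = rng.uniform(-1.0, 1.0, (swarm, d))   # each particle is one network
    vel = np.zeros((swarm, d))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, y, dims) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((swarm, d)), rng.random((swarm, d))
        # Inertia-weight velocity update: momentum plus pulls toward the
        # personal and global best positions found so far
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p, X, y, dims) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, dims
```

On, say, scaled iris features, `gbest, dims = pso_train(X_train, y_train)` yields a weight vector whose test predictions are `np.argmax(forward(gbest, X_test, *dims), axis=1)`. No gradient computation or backward pass is involved, which is the basis of the speed comparison the abstract draws against backpropagation.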
License
Copyright (c) 2023 AITI
This work is licensed under a Creative Commons Attribution 4.0 International License.
All articles published in AITI: Jurnal Teknologi Informasi are licensed under a Creative Commons Attribution 4.0 International License.