Computing the Performance of FFNN for Classifying Purposes

  • Hadaate Ullah, Southern University Bangladesh & University of Dhaka
  • Adnan Kiber, Electrical and Electronic Engineering, Southern University Bangladesh, Bangladesh
  • Asadul Huq, Electrical and Electronic Engineering, Southern University Bangladesh, Bangladesh
  • Mohammad Arif Sobhan Bhuiyan, Electrical and Electronics Engineering, Xiamen University Malaysia, Sepang, Selangor, Malaysia

Abstract

 Classification is one of the most frequently encountered problems in the real world, and neural networks have emerged as one of the tools that can handle it. Feed-forward neural networks (FFNNs) have been widely applied as classifiers in many different fields. Designing an efficient FFNN structure, with the optimum number of hidden layers and the minimum number of neurons per layer for a given application or dataset, is an open research problem whose difficulty depends on the input data. A random choice of hidden layers and neurons may cause either underfitting or overfitting; overfitting arises when the network matches the training data so closely that it loses its ability to generalize to the test data. In this research, the classification performance of an FFNN trained with the back-propagation algorithm is computed in terms of mean square error (MSE) for different numbers of hidden layers and hidden neurons, and analyzed to find the optimum number of hidden layers and the minimum number of neurons per layer, using MATLAB R2013a. First, random data are generated with a suitable MATLAB function to prepare the training data as input vectors and the target vectors as the testing data for the classification task. The generated input data are passed to the output layer through the hidden layers, which process them. The MSE comparison graphs and regression plots show that the best performance is obtained by using more hidden layers and more neurons per hidden layer when designing the classifier, but too many layers and neurons make the network complex and slow to execute. As a result, three hidden layers with 26 neurons in each hidden layer are suggested for designing the classifier of this network for this type of input data.
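
To make the workflow concrete, the sketch below outlines the experiment using MATLAB's Neural Network Toolbox (the API available in R2013a). It is a minimal sketch, not the authors' script: the random dataset, class rule, split ratios, and seed are hypothetical illustrations. It builds a three-hidden-layer FFNN with 26 neurons per layer, trains it with back-propagation, and reports the MSE and regression plot used to compare configurations.

```matlab
% A minimal sketch, not the authors' original script: the dataset, split
% ratios, seed, and class rule below are hypothetical illustrations.
rng(1);                                 % reproducible random data

% Random training data: 4 input features x 200 samples, with 2 classes
% encoded as one-hot target vectors (inputs and targets, as in the abstract).
x = rand(4, 200);
t = double([sum(x) > 2; sum(x) <= 2]);

% FFNN with three hidden layers of 26 neurons each, trained by
% back-propagation (the toolbox default trainFcn is 'trainlm').
net = feedforwardnet([26 26 26]);
net.divideParam.trainRatio = 0.70;      % train / validation / test split
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

[net, tr] = train(net, x, t);           % back-propagation training
y    = net(x);                          % network outputs
mseV = perform(net, t, y);              % mean square error ('mse' is default)
fprintf('MSE = %g\n', mseV);
plotregression(t, y);                   % regression plot, as in the study
```

Repeating this for different hidden-layer counts and layer sizes, and comparing the resulting MSE values and regression plots, is how the suggested configuration of three hidden layers with 26 neurons each would be selected.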

Author Biography

Hadaate Ullah, Southern University Bangladesh & University of Dhaka

Assistant Professor, Dept. of EEE
