Title Negative Correlation Hidden Layer for the Extreme Learning Machine
Authors PERALES GONZÁLEZ, CARLOS; FERNÁNDEZ NAVARRO, FRANCISCO DE ASÍS; PÉREZ RODRÍGUEZ, JAVIER; CARBONERO RUZ, MARIANO
External publication No
Means Appl. Soft Comput.
Scope Article
Nature Scientific
JCR Quartile 1
SJR Quartile 1
Area International
Web https://www.scopus.com/inward/record.uri?eid=2-s2.0-85105924861&doi=10.1016%2fj.asoc.2021.107482&partnerID=40&md5=f02adebc423d15b826de3e859495fc52
Publication date 01/01/2021
Scopus Id 2-s2.0-85105924861
DOI 10.1016/j.asoc.2021.107482
Abstract Extreme Learning Machine (ELM) algorithms have achieved unprecedented performance in supervised machine learning tasks. However, the preconfiguration of the nodes in the hidden layer in ELM models through randomness does not always lead to a suitable transformation of the original features. Consequently, the performance of these models relies on a broad exploration of these feature mappings, generally using a large number of nodes in the hidden layer. In this paper, a novel ELM architecture is presented, called Negative Correlation Hidden Layer ELM (NCHL-ELM), based on the Negative Correlation Learning (NCL) framework. This model incorporates a parameter into each node of the original ELM hidden layer, and these parameters are optimized by reducing the training error while promoting diversity among them, in order to improve generalization. Mathematically, the ELM minimization problem is perturbed by a penalty term that represents a measure of diversity among the parameters. A variety of regression and classification benchmark datasets were selected to compare NCHL-ELM with other state-of-the-art ELM models. Statistical tests indicate the superiority of our method in both regression and classification problems. © 2021 Elsevier B.V. (An illustrative sketch of the penalized optimization appears after this record.)
Keywords Diversity; Extreme Learning Machine; Feature mapping; Hidden layer; Negative Correlation Learning
Universidad Loyola members
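
Illustrative sketch. The abstract describes a regularized ELM whose hidden nodes each carry a tunable parameter, optimized to trade training error against a diversity (negative-correlation) penalty. The minimal sketch below only illustrates that general idea under several assumptions: the per-node parameters (gamma), the penalty form (squared off-diagonal covariance of the node outputs), the numerical gradient step, and the hyperparameters lam, mu, lr and D are all choices made here for illustration and are not taken from the paper.

# Minimal, illustrative sketch only (not the authors' implementation): a
# standard ridge-regularized ELM fit combined with a toy gradient step on
# per-node parameters, penalizing correlation between hidden-node outputs.
# All names and hyperparameters (gamma, lam, mu, lr, D) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

D = 25        # hidden nodes
lam = 1e-2    # ridge penalty on the output weights (standard regularized ELM)
mu = 1e-3     # strength of the diversity penalty on the node parameters
lr = 1e-2     # step size for the per-node parameters

# Classic ELM: hidden weights and biases are drawn at random and never trained
W = rng.normal(size=(X.shape[1], D))
b = rng.normal(size=D)
gamma = np.ones(D)  # one tunable parameter per hidden node

def hidden(X, gamma):
    # Hidden-layer output with each node scaled by its own parameter
    return gamma * np.tanh(X @ W + b)

def penalized_loss(gamma, beta):
    # Training error plus a decorrelation penalty on the node outputs
    H = hidden(X, gamma)
    resid = H @ beta - y
    C = np.cov(H, rowvar=False)
    off_diag = C - np.diag(np.diag(C))
    return resid @ resid + mu * np.sum(off_diag ** 2)

for _ in range(50):
    H = hidden(X, gamma)
    # Output weights by regularized least squares (the usual ELM closed form)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(D), H.T @ y)

    # Crude numerical gradient step on gamma, holding beta fixed
    eps = 1e-4
    base = penalized_loss(gamma, beta)
    grad = np.zeros(D)
    for j in range(D):
        g = gamma.copy()
        g[j] += eps
        grad[j] = (penalized_loss(g, beta) - base) / eps
    gamma -= lr * grad

print("training MSE:", np.mean((hidden(X, gamma) @ beta - y) ** 2))

Alternating a closed-form solve for the output weights with a small update of the per-node parameters keeps the sketch close to the standard ELM workflow; the exact penalty term and optimization used by NCHL-ELM should be taken from the article itself.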