Authors
- Subramani, Chinnamuthu
- Jagannath, Ravi Prasad K.
- Kuppili, Venkatanareshbabu
Abstract
Extreme Learning Machines (ELMs) are a class of single hidden-layer feedforward neural networks known for their rapid training process, structural simplicity, and strong generalization capabilities. ELM training requires solving a system of linear equations, where solution accuracy directly impacts model performance. However, conventional ELMs rely on the Moore–Penrose inverse, which is computationally expensive, memory-intensive, and numerically unstable in ill-conditioned problems. Additionally, stabilizing matrix inversion requires a hyperparameter, whose optimal selection further increases computational complexity. Iterative numerical techniques offer a promising alternative; however, the stochastic nature of the feature matrix challenges deterministic methods, while stochastic gradient approaches are hyperparameter-sensitive and prone to local minima. To address these limitations, this study introduces randomized iterative algorithms that solve the original linear system without requiring matrix inversion or full-system computation, instead leveraging random subsets of data in a hyperparameter-free framework. Although these methods incorporate randomness, they are not arbitrary but remain system-dependent, dynamically adapting to the structure of the feature matrix. Theoretical analysis establishes upper bounds on the expected number of iterations, expressed in terms of statistical properties of the feature matrix, providing insights into near-singularity, condition number, and network size. Empirical evaluations on classification datasets demonstrate that the proposed methods consistently outperform conventional ELM, deterministic solvers, and gradient descent-based methods in accuracy, efficiency, and robustness. 
Statistical validation using Friedman’s rank test and Wilcoxon post-hoc analysis confirms the superior performance and reliability of these randomized algorithms, establishing them as a computationally efficient and numerically stable alternative to existing approaches.
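The randomized solvers described above act on the ELM linear system H·β = T without forming a pseudoinverse. As a minimal illustrative sketch (not the authors' exact algorithm), the following code implements a standard randomized Gauss–Seidel / coordinate-descent iteration for least squares, where each step samples one column of the feature matrix with probability proportional to its squared norm, so the sampling adapts to the structure of H. The toy hidden-layer construction (sigmoid activation, Gaussian weights, the names `W`, `c`, `H`, `T`) is an assumption for demonstration only.

```python
import numpy as np

def randomized_gauss_seidel(A, b, n_iter=5000, seed=0):
    """Randomized Gauss-Seidel (coordinate descent) for min ||A x - b||^2.

    Each iteration samples a column j with probability ||A_j||^2 / ||A||_F^2
    and updates only coordinate x_j, so no matrix inverse or full-system
    factorization is ever formed. Hyperparameter-free apart from the
    iteration budget.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    col_norms = (A ** 2).sum(axis=0)          # ||A_j||^2 for each column
    probs = col_norms / col_norms.sum()       # system-dependent sampling law
    x = np.zeros(n)
    r = b.astype(float).copy()                # running residual r = b - A x
    for _ in range(n_iter):
        j = rng.choice(n, p=probs)
        delta = A[:, j] @ r / col_norms[j]    # exact 1-D minimization along e_j
        x[j] += delta
        r -= delta * A[:, j]                  # keep the residual consistent
    return x

# Toy ELM setup (illustrative): random hidden layer H = sigmoid(X W + c),
# then solve H beta = T for the output weights.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
W = rng.standard_normal((5, 50))
c = rng.standard_normal(50)
H = 1.0 / (1.0 + np.exp(-(X @ W + c)))
T = rng.standard_normal(200)
beta = randomized_gauss_seidel(H, T, n_iter=20000)
```

The residual update keeps each iteration at O(m) cost, and the column-norm sampling is what makes the method "random but system-dependent" in the sense the abstract describes.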
Suggested Citation
Subramani, Chinnamuthu & Jagannath, Ravi Prasad K. & Kuppili, Venkatanareshbabu, 2025.
"Randomized Gauss–Seidel iterative algorithms for Extreme Learning Machines,"
Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 666(C).
Handle: RePEc:eee:phsmap:v:666:y:2025:i:c:s0378437125001670
DOI: 10.1016/j.physa.2025.130515