A random-weighted plane-Gaussian artificial neural network
Abstract
The multilayer perceptron (MLP) and the radial basis function network (RBFN) have received considerable attention in data classification and regression. As a bridge between the MLP and the RBFN, the plane-Gaussian (PG) network exhibits globality and locality simultaneously through its so-called PG activation function. However, because these networks tune their weights by back-propagation or by a clustering method during training, they all suffer from slow convergence, long training times, and a tendency to fall into local minima. To speed up network training, random-projection techniques, for instance the extreme learning machine (ELM), have gained prominence in recent decades. In this paper, we propose a random-weighted PG network, termed RwPG. Instead of the plane clustering used in the PG network, RwPG assigns random values to the network weights and then computes the network output analytically by matrix inversion. Compared with PG and ELM, the advantages of the proposed RwPG are fourfold: (1) it is proved that RwPG is also a universal approximator; (2) it inherits the geometrical interpretation of the PG network and is well suited to capturing linearity in data, especially when the data are distributed near planes; (3) its training speed is comparable to that of ELM and significantly faster than that of the PG network; (4) owing to the random-weighting technique, RwPG is likely able to avoid the local-extremum problem. Finally, experiments on artificial and benchmark datasets demonstrate its superiority.
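The training scheme the abstract describes (random hidden weights, output weights solved analytically by matrix inversion, as in ELM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the specific plane-Gaussian activation form `exp(-(w·x + b)² / σ²)`, the function names, and all parameter choices here are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rwpg_fit(X, T, n_hidden=60, sigma=1.0):
    """ELM-style training sketch with an assumed plane-Gaussian hidden layer.

    Hidden-layer weights (W, b) are drawn at random and left untuned;
    only the output weights beta are computed analytically, via the
    Moore-Penrose pseudoinverse (the "matrix inversion" step).
    """
    d = X.shape[1]
    W = rng.standard_normal((d, n_hidden))   # random hyperplane normals
    b = rng.standard_normal(n_hidden)        # random hyperplane offsets
    # Assumed plane-Gaussian activation: a Gaussian of the signed
    # distance from each sample to each random hyperplane w.x + b = 0,
    # giving a response localized around a plane rather than a point.
    H = np.exp(-((X @ W + b) ** 2) / sigma ** 2)
    beta = np.linalg.pinv(H) @ T             # analytic output weights
    return W, b, beta

def rwpg_predict(X, W, b, sigma, beta):
    H = np.exp(-((X @ W + b) ** 2) / sigma ** 2)
    return H @ beta

# Toy regression: fit y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X).ravel()
W, b, beta = rwpg_fit(X, T, n_hidden=60, sigma=1.0)
Y = rwpg_predict(X, W, b, 1.0, beta)
mse = np.mean((Y - T) ** 2)
print(f"training MSE: {mse:.2e}")
```

Because the hidden layer is fixed, the fit reduces to a single linear least-squares solve, which is why this family of methods trains far faster than back-propagation or clustering-based schemes.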
Springer