The most commonly used radial basis functions are:
- Multiquadrics: $\varphi(r) = (r^2 + c^2)^{1/2}$
- Inverse multiquadrics: $\varphi(r) = 1/(r^2 + c^2)^{1/2}$
- Gaussians: $\varphi(r) = \exp(-r^2 / 2\sigma^2)$
In all cases $c > 0$, $\sigma > 0$ and $r \in \mathbb{R}$; in an RBF network, $r = \|x - x_i\|$ is the Euclidean distance between the input vector $x$ and the center $x_i$ of the unit. A small numerical sketch of these functions follows.
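As a minimal illustration, the sketch below evaluates the three basis functions at a given distance. The parameter values c = 1.0 and sigma = 1.0 are arbitrary choices for demonstration, not prescribed values:

```python
import numpy as np

def multiquadric(r, c=1.0):
    """Multiquadric: phi(r) = (r^2 + c^2)^(1/2)."""
    return np.sqrt(r**2 + c**2)

def inverse_multiquadric(r, c=1.0):
    """Inverse multiquadric: phi(r) = 1 / (r^2 + c^2)^(1/2)."""
    return 1.0 / np.sqrt(r**2 + c**2)

def gaussian(r, sigma=1.0):
    """Gaussian: phi(r) = exp(-r^2 / (2 sigma^2))."""
    return np.exp(-r**2 / (2.0 * sigma**2))

# r is the Euclidean distance from an input x to a unit's center x_i
x = np.array([1.0, 2.0])
center = np.array([0.0, 0.0])
r = np.linalg.norm(x - center)

print(multiquadric(r), inverse_multiquadric(r), gaussian(r))
```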
- An RBF network usually has only one hidden layer, whereas an MLP usually has more than one
- The hidden and output neurons of an MLP usually share the same neuronal model; this is not true of the hidden and output neurons of an RBF network
- The hidden layer of an RBF network is nonlinear and its output layer is linear, while in an MLP both layers are nonlinear
- The activation function of a hidden RBF neuron takes as its argument the Euclidean distance between the input vector and the center of the unit, whereas the activation function of an MLP neuron takes the dot (inner) product of the input vector and the synaptic weight vector (see the sketch after this list)
- MLPs construct global approximations to a nonlinear input-output mapping, while RBF networks produce local approximations (when using an exponentially decaying function such as the Gaussian)
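The following minimal sketch contrasts the two kinds of hidden-layer computation: distance to a center for the RBF network versus a dot product with a weight vector for the MLP. It assumes Gaussian hidden units; the centers, widths, and weights are random placeholders, and names such as h_rbf and h_mlp are ours, chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)             # a single input vector

# --- RBF network: one nonlinear hidden layer, linear output ---
centers = rng.normal(size=(3, 4))  # 3 hidden units, each with a center
sigma = 1.0                        # illustrative Gaussian width
# hidden activation depends on the distance ||x - center||
dists = np.linalg.norm(centers - x, axis=1)
h_rbf = np.exp(-dists**2 / (2 * sigma**2))  # Gaussian units: local response
w_out = rng.normal(size=3)
y_rbf = w_out @ h_rbf              # output layer is linear

# --- MLP: hidden activation depends on a dot product ---
W = rng.normal(size=(3, 4))        # one synaptic weight vector per hidden unit
b = rng.normal(size=3)
h_mlp = np.tanh(W @ x + b)         # nonlinear hidden layer
v = rng.normal(size=(1, 3))
y_mlp = np.tanh(v @ h_mlp)         # output layer is typically also nonlinear

print(y_rbf, y_mlp)
```

Because each Gaussian unit responds appreciably only when the input lies near its center, the RBF output is shaped by the few units closest to the input, which is the sense in which the approximation is local.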