Solved – Delimitation: feed-forward and radial basis function networks

neural-networks rbf-network

I am trying to familiarize myself with neural networks for a GPGPU project at university. Wikipedia distinguishes between "feed-forward neural networks" and "radial basis function networks".

As far as I understand, the defining attribute of a feed-forward network is that it contains no loops, which would make the radial basis function network a subtype of the former. Is that correct? If not, why not?

I haven't found an explicit answer to this question, and the fact that Wikipedia treats these two types as parallel categories makes me doubt my understanding.

Best Answer

Yes, feed-forward neural networks (FFNNs) are networks without loops. The source of confusion seems to be that Wikipedia (as well as other literature) uses the term more or less as a synonym for perceptrons and multi-layer perceptrons (MLPs). But technically, RBF networks (RBFNs) are FFNNs too, by definition, since information flows in only one direction. The differences between MLPs and RBFNs are:

  • MLP: uses dot products (between inputs and weights) and sigmoidal activation functions (or other monotonic functions), and training is usually done through backpropagation for all layers (of which there can be as many as you want);
  • RBFN: uses Euclidean distances (between inputs and weights, which act as centres) and Gaussian activation functions, which makes neurons more locally sensitive. RBFNs may use backpropagation for learning, or hybrid approaches with unsupervised learning in the hidden layer (they have just one hidden layer).
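To make the contrast concrete, here is a minimal sketch of a single neuron of each kind. The function names, inputs, and parameter values are illustrative assumptions, not from the original answer:

```python
import numpy as np

def mlp_neuron(x, w, b):
    """MLP-style unit: dot product of inputs and weights, then a sigmoid."""
    z = np.dot(x, w) + b
    return 1.0 / (1.0 + np.exp(-z))          # sigmoidal activation

def rbf_neuron(x, c, sigma):
    """RBF-style unit: Euclidean distance to a centre, then a Gaussian."""
    d2 = np.sum((x - c) ** 2)                 # squared Euclidean distance
    return np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian activation

x = np.array([0.5, -1.0])
w = np.array([0.8, 0.2])
c = np.array([0.5, -1.0])                     # centre placed exactly at x

print(mlp_neuron(x, w, 0.0))                  # monotonic in the dot product
print(rbf_neuron(x, c, 1.0))                  # 1.0: response peaks at the centre
```

The Gaussian unit's output decays as the input moves away from its centre `c` in any direction, which is what is meant by neurons being "locally sensitive"; the sigmoid unit instead responds monotonically along the direction of its weight vector.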