I am trying to familiarize myself with neural networks for a GPGPU project at university. Now, Wikipedia distinguishes between "feedforward neural networks" and "radial basis function networks".
As far as I understand, the defining attribute of a feedforward network is that it contains no loops, which would make the radial basis function network a subtype of the former. Is that correct? If not, why not?
I haven't found an explicit answer to this question, and the fact that Wikipedia lists these two types side by side, as if they were siblings, makes me doubt my understanding.
Best Answer
Yes, feedforward neural networks (FFNNs) are networks without loops. The source of the confusion seems to be that Wikipedia (as well as other literature) uses the term more or less as a synonym for perceptrons and multi-layer perceptrons (MLPs). But technically, RBFNs are FFNNs too, by definition, since information flows in one direction only. The differences between MLPs and RBFNs are:

- A hidden unit in an MLP computes a weighted sum (dot product) of its inputs and passes it through a sigmoid-like activation function; a hidden unit in an RBFN computes the *distance* between the input and a prototype vector (its center) and passes it through a radial basis function, typically a Gaussian.
- An RBFN conventionally has exactly one hidden layer, while an MLP can have arbitrarily many.
- They are usually trained differently: RBFN centers are often fixed first (e.g. by clustering the training inputs), leaving only the linear output weights to be fit by least squares, whereas an MLP is trained end-to-end with backpropagation.
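To make the contrast concrete, here is a minimal NumPy sketch (all names, sizes, and parameter choices are illustrative, not from any particular library) of a forward pass through each kind of network. Note that both flow strictly input → hidden → output, with no loops, which is exactly why both count as feedforward:

```python
import numpy as np

def rbfn_forward(x, centers, widths, w, b):
    """RBFN forward pass: each hidden unit responds to the distance
    between the input and its prototype vector (center), squashed
    through a Gaussian; the output is a linear combination."""
    # Squared Euclidean distance from x to each center.
    d2 = np.sum((centers - x) ** 2, axis=1)
    # Gaussian radial basis activations.
    h = np.exp(-d2 / (2.0 * widths ** 2))
    return h @ w + b

def mlp_forward(x, W, b1, v, b2):
    """Single-hidden-layer MLP forward pass for comparison: each
    hidden unit responds to a weighted sum (dot product) of the
    inputs, squashed through a sigmoid."""
    h = 1.0 / (1.0 + np.exp(-(W @ x + b1)))
    return h @ v + b2

# Tiny example: 2-D input, 3 hidden units, scalar output.
rng = np.random.default_rng(0)
x = np.array([0.5, -0.2])
centers = rng.normal(size=(3, 2))
widths = np.ones(3)
w = rng.normal(size=3)
print(rbfn_forward(x, centers, widths, w, 0.0))
```

The only structural difference sits in the hidden layer: a dot product followed by a sigmoid versus a distance followed by a Gaussian. In both cases the computation is a straight pipeline, so "feedforward" applies to both.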