Feed-forward ANNs allow signals to travel one way only: from input to output. There are no feedback loops; that is, the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs, and they are extensively used in pattern recognition. This type of organisation is also referred to as bottom-up or top-down.
Feedback (or recurrent or interactive) networks can have signals travelling in both directions by introducing loops into the network. Feedback networks are powerful and can become extremely complicated. Computations derived from earlier input are fed back into the network, which gives them a kind of memory. Feedback networks are dynamic: their 'state' changes continuously until they reach an equilibrium point, where they remain until the input changes and a new equilibrium needs to be found.
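The "settling to equilibrium" behaviour can be illustrated with a minimal sketch: a single recurrent unit whose output is fed back as input and iterated until the state stops changing. The weights and inputs here are purely illustrative, not taken from any particular network.

```python
import math

def settle(weight, feedback, external_input, tol=1e-6, max_steps=1000):
    """Iterate x <- tanh(weight*input + feedback*x) until the state
    stops changing, i.e. the network reaches an equilibrium point."""
    x = 0.0
    for _ in range(max_steps):
        new_x = math.tanh(weight * external_input + feedback * x)
        if abs(new_x - x) < tol:
            return new_x
        x = new_x
    return x

# The state settles to one fixed point for this input...
x1 = settle(1.0, 0.5, 0.8)
# ...and moves to a new equilibrium when the input changes.
x2 = settle(1.0, 0.5, -0.8)
```

The loop is the "feedback" part: each new state depends on the previous state, which is exactly what gives such networks their memory.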
Feedforward neural networks are ideally suited to modeling relationships between a set of predictor (input) variables and one or more response (output) variables. In other words, they are appropriate for any functional mapping problem where we want to know how a number of input variables affect the output variable. Multilayer feedforward networks, also called multilayer perceptrons (MLPs), are the most widely studied and used neural network model in practice.
A classic example of a feedback network is Hopfield's network, whose main use is as an associative memory. An associative memory is a device that accepts an input pattern and generates as output the stored pattern most closely associated with that input. Its function is to recall the corresponding stored pattern and then produce a clean version of it at the output. Hopfield networks are typically used for problems with binary pattern vectors, where the input pattern may be a noisy version of one of the stored patterns. In the Hopfield network, the stored patterns are encoded in the weights of the network.
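A small sketch of this idea, using ±1 binary patterns: the pattern is stored in the weights with the standard Hebbian outer-product rule, and a noisy input is iterated until it settles on the stored pattern. The six-element pattern is an arbitrary example.

```python
def train(patterns):
    """Encode the stored patterns in the weights (Hebbian rule,
    zero diagonal)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=10):
    """Update units one at a time until the state settles on a
    stored pattern (an equilibrium of the feedback dynamics)."""
    n = len(state)
    state = list(state)
    for _ in range(steps):
        for i in range(n):
            s = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if s >= 0 else -1
    return state

stored = [1, 1, -1, -1, 1, -1]
w = train([stored])
noisy = [1, -1, -1, -1, 1, -1]   # one bit flipped
recalled = recall(w, noisy)      # settles back on the stored pattern
```

This is the associative-memory behaviour described above: the noisy input recalls the clean stored version.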
Kohonen’s self-organizing maps (SOMs) represent another neural network type that is markedly different from feedforward multilayer networks. Unlike training in the feedforward MLP, SOM training is often called unsupervised because there is no known target output associated with each input pattern. During training, the SOM processes the input patterns and learns to cluster or segment the data by adjusting its weights, which makes it an important neural network model for dimension reduction and data clustering. A two-dimensional map is typically created in such a way that the orders of the interrelationships among inputs are preserved. The number and composition of clusters can be determined visually from the output distribution generated by the training process. With only input variables in the training sample, the SOM aims to learn or discover the underlying structure of the data.
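The unsupervised clustering behaviour can be sketched with a toy one-dimensional map over two-dimensional points (a real SOM would typically use a two-dimensional grid). The data, map size, and learning schedule below are illustrative assumptions, not from the text.

```python
import random

random.seed(0)

def som_train(data, n_units=4, epochs=50, lr=0.5, radius=1):
    """Train a tiny 1-D SOM: no target outputs, only the inputs."""
    # Each unit has a weight vector of the same dimension as the inputs.
    weights = [[random.random(), random.random()] for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # Find the best-matching unit (closest weight vector).
            bmu = min(range(n_units),
                      key=lambda u: sum((w - xi) ** 2
                                        for w, xi in zip(weights[u], x)))
            # Pull the winner and its map neighbours toward the input,
            # so nearby units come to represent similar inputs.
            for u in range(n_units):
                if abs(u - bmu) <= radius:
                    weights[u] = [w + lr * (xi - w)
                                  for w, xi in zip(weights[u], x)]
        lr *= 0.95  # shrink the learning rate over time
    return weights

# Two well-separated clusters; after training, different units end up
# representing different clusters, with no labels ever provided.
data = [(0.1, 0.1), (0.15, 0.05), (0.9, 0.9), (0.85, 0.95)]
weights = som_train(data)
```

After training, inspecting which unit wins for each input reveals the cluster structure, which is the sense in which the SOM "discovers" the data's organisation.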
(The diagrams are from Dana Vrajitoru's C463 / B551 Artificial Intelligence web site.)
Best Answer
I assume you're thinking of what used to be, and perhaps still are, referred to as 'multilayer perceptrons' in your question about neural networks. If so, then I'd explain the whole thing in terms of flexibility in the form of the decision boundary as a function of explanatory variables. In particular, for this audience, I wouldn't mention link functions, log odds, etc. Just stick with the idea that the probability of an event is being predicted on the basis of some observations.
Here's a possible sequence:
The advantage of this approach is that you don't really have to get into any mathematical detail to give the correct idea. In fact, they don't have to understand either logistic regression or neural networks already to understand the similarities and differences.
The disadvantage of the approach is that you have to make a lot of pictures and strongly resist the temptation to drop down into the algebra to explain things.