Hi everyone.
I recently started my PhD and have therefore begun working with artificial neural networks (ANNs).
I'd like to try an architecture where each parameter (input/output) can have its own delay, but first I decided to explore the NAR concept.
From what I understand, for NAR the function "narnet" allows the definition of the output delays, and the function "preparets" applies those delays and structures the parameter vectors accordingly for training the ANN.
Is this equivalent to using the "feedforwardnet" function, where I prepare the input vector(s) as shifted version(s) of the output and remove the initial output values (so that all vectors have the same number of elements)?
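To make the comparison concrete, here is a sketch of the two setups I have in mind (assuming the Deep Learning Toolbox; the toy series, delay of 2, and hidden-layer size of 10 are arbitrary choices of mine):

```matlab
% Example time series as a cell row, the format narnet/preparets expect
T = num2cell(sin(0.1*(1:100)));

% --- NAR approach: narnet with feedback delays 1:2 ---
net1 = narnet(1:2, 10);                       % delays 1:2, 10 hidden neurons
[Xs, Xi, Ai, Ts] = preparets(net1, {}, {}, T); % applies delays, trims targets
net1 = train(net1, Xs, Ts, Xi, Ai);

% --- "Manual" feedforward approach: inputs are shifted copies of the target ---
d = 2;                          % maximum delay
y = cell2mat(T);
X  = [y(1:end-2); y(2:end-1)];  % rows: y(t-2) and y(t-1)
Tm = y(3:end);                  % target y(t); first d samples removed
net2 = feedforwardnet(10);
net2 = train(net2, X, Tm);
```

My expectation is that, in open loop, both networks see the same input/target pairs, so the question is whether they are indeed equivalent.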
Thanks in advance, Rodrigo
Best Answer