Solved – What are the meanings of the terms “Tapped Delay Line” and “Delay Unit” in the Context of TDNNs

neural networks

After reading some of the literature on Time-Delay Neural Networks (TDNNs) I'm fairly confident that I can build one; sources like the user manual for the Stuttgart Neural Network Simulator and the original paper by Waibel et al.1 provide pretty good directions. The concept doesn't go far beyond ordinary feed-forward neural nets: the main difference is that copies of each input neuron and its connections are made before new input is introduced or the weights and activations are adjusted, so that a memory of past inputs is maintained by essentially altering the topology in a dynamic way. The sticking point I've run across may strictly be one of terminology: throughout the literature on TDNNs, as well as Dynamic Neural Nets (DNNs) and Recurrent Neural Nets (RNNs), I've seen the terms "Delay Unit" and "Tapped Delay Line" introduced now and then, sometimes inconsistently. For example,

• p. 1155 of El-Shafie et al.2 refers to "a time delay unit (or shift register) that constitutes the tapped delay line" in the context of an Input Delay Neural Network (IDNN), which seems quite similar, or even identical, in design to a TDNN. The same page refers to a diagram on the following page that depicts a Tapped Delay Line in a way closely resembling the diagrams of TDNNs in other sources.

• On pp. 492-493, Sinha et al.3 say the chief characteristic of a TDNN is a "Tapped Delay Line," but they don't provide any diagrams or further detail.

• A "tap delay line" is mentioned in conjunction with TDNNs on p. 2, Prashar's review of neural net architectures.4 The diagram shows a topology practically identical to those shown in the SNNS manual and Waibel.

My first guess from these and other sources would be that a Tapped Delay Line refers to the whole sequence of neurons added to the input layer to account for past inputs, and that a Delay Unit refers to an individual neuron within it. Then again, I believe I've seen the term "Delay Unit" used in RNNs to indicate neurons designated to receive recurrent feedback from later layers rather than from the input layer (cases in point might be the members of the context layers of Jordan and Elman networks). Perhaps I'm being too cautious, but I want to make sure I'm not confusing my terms before implementing TDNNs, since a wrong guess could lead to spurious results and wasted time. Is this what is meant by Delay Units and Tapped Delay Lines in the context of TDNNs, or do they refer to some other object(s)? Note that this question is relevant to this CrossValidated thread on RNNs and this one on the difference between Tapped Delay Lines and sliding windows, but does not duplicate or overlap them.
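To make my best guess concrete, here is a minimal sketch of how I would implement it (Python, with made-up sizes and names; I'm not claiming this matches Waibel et al.'s exact architecture): the whole shifting buffer would be the Tapped Delay Line, and each delayed copy of the input a Delay Unit.

    import numpy as np

    # Hypothetical sizes: 2 input features, 3 delay steps, 4 hidden units.
    N_FEATURES, N_DELAYS, N_HIDDEN = 2, 3, 4

    rng = np.random.default_rng(0)
    W = rng.normal(size=(N_HIDDEN, (N_DELAYS + 1) * N_FEATURES))
    b = np.zeros(N_HIDDEN)

    def tdnn_step(delay_line, x_t):
        # Shift the tapped delay line and run one feed-forward step.
        # Under my reading, each row of delay_line is one "delay unit"
        # and the whole buffer is the "tapped delay line".
        delay_line = np.vstack([x_t, delay_line[:-1]])  # newest in, oldest out
        h_t = np.tanh(W @ delay_line.reshape(-1) + b)   # ordinary dense layer
        return delay_line, h_t

    delay_line = np.zeros((N_DELAYS + 1, N_FEATURES))   # zero-padded history
    for x_t in rng.normal(size=(10, N_FEATURES)):       # a toy input sequence
        delay_line, h_t = tdnn_step(delay_line, x_t)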

References:
1 Waibel, Alexander; Hanazawa, Toshiyuki; Hinton, Geoffrey; Shikano, Kiyohiro and Lang, Kevin J., 1989, "Phoneme Recognition Using Time-Delay Neural Networks," pp. 328-339 in IEEE Transactions on Acoustics, Speech and Signal Processing, March 1989, Vol. 37, No. 3.

2 El-Shafie, A.; Noureldin, A.; Taha, M.; Hussain, A. and Mukhlisin, M., 2012, "Dynamic Versus Static Neural Network Model for Rainfall Forecasting at Klang River Basin, Malaysia," pp. 1151-1169 in Hydrology and Earth System Sciences, 2012, Vol. 16.

3 Sinha, N.K.; Gupta, M.M. and Rao, D.H., 2000, "Dynamic Neural Networks: An Overview," pp. 491-496 in Proceedings of the IEEE International Conference on Industrial Technology, February 2000, Vol. 1.

4 Prashar, Parul, 2014, "Neural Networks in Machine Learning," pp. 1-3 in International Journal of Computer Applications, November 2014, Vol. 105, No. 14.

Best Answer

The Tapped Delay Line (TDL) is, as you say, an input vector that consists of the current time-step data instance (input node) and past time-step data instances (input nodes). You may find my answer on this question about TDLs useful.
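As an illustrative sketch (my own toy example, not taken from any of the papers cited), the TDL at time t is just the slice of the series that feeds the input nodes:

    import numpy as np

    def tapped_delay_line(x, t, n_delays):
        # Return the vector [x[t], x[t-1], ..., x[t-n_delays]].
        # Each element feeds one input node; steps before the start
        # of the series are zero-padded here by assumption.
        return np.array([x[t - d] if t - d >= 0 else 0.0
                         for d in range(n_delays + 1)])

    x = np.array([0.5, -1.2, 0.3, 0.9, -0.4])     # a scalar time series
    print(tapped_delay_line(x, t=3, n_delays=2))  # [ 0.9  0.3 -1.2]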

It seems to me that you have the right idea about the "delay units": they are neurons whose output depends on both their previous state and their current input.
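As a minimal sketch of such a unit (a toy recurrent neuron with weights I made up, not any specific published model):

    import numpy as np

    def delay_unit_step(u_prev, x_t, w_x=0.8, w_u=0.5):
        # One step of a recurrent "delay unit" in the above sense: the
        # output depends on the current input x[t] and on the unit's
        # own previous output u[t-1] (its internal state).
        return np.tanh(w_x * x_t + w_u * u_prev)

    u = 0.0                            # initial internal state
    for x_t in [1.0, 0.0, 0.0, 0.0]:   # an impulse followed by silence
        u = delay_unit_step(u, x_t)
        print(round(u, 3))             # the impulse decays through the state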

More specifically, on your examples:

  • Example 1: I could not find a reference to "time delay unit" in the link you provided.
  • Example 2: Sinha et al. refer to a hidden-layer neuron as a "delay unit" whose output is affected not only by the input but also by the internal state. In other words, it could be a recurrent neuron (one that implements a delay), and it is characterised as a delay unit. See this post for further explanation.
  • Example 3: The diagram of the Tapped Delay Line (TDL) is a bit vague. I would suggest this paper on NARX NNs, which describes TDLs, or this other paper, also on NARX.

I have never seen the term "delay unit" used explicitly for a data instance that acts as an input node to a NN. Although there may be a paper out there that uses this terminology, it seems to me that it would then also use the term "unit" to describe the rest of the input nodes (units).

Hope this helps!
