Solved – Incremental training of Neural Networks

continuous data, machine learning, neural networks

Is it valid to train a neural network over and over again on newly arriving data (including pruning after each new training)?

I plan to collect data for a period of time, train/cv/test the network, then collect new data and continue training the existing, already-trained network on the new data.
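To illustrate what I mean, here is a minimal sketch of that loop using scikit-learn's `MLPClassifier.partial_fit`, which continues gradient updates on the existing weights instead of refitting from scratch. The `collect_batch` function and all its parameters are made-up stand-ins for my actual stream (volume-based count features in, a class label out):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(0)

# Hypothetical stand-in for the data stream described above:
# volume-based count features in, a class label out.
def collect_batch(n=200):
    X = rng.rand(n, 5)
    y = (X.sum(axis=1) > 2.5).astype(int)
    return X, y

clf = MLPClassifier(hidden_layer_sizes=(16,), learning_rate_init=0.01,
                    random_state=0)

# Period 1: train on the first collected dataset.
X1, y1 = collect_batch()
clf.partial_fit(X1, y1, classes=np.array([0, 1]))  # classes required on first call
loss_after_first_period = clf.loss_

# Period 2: new data arrives; keep the SAME network and its weights,
# and simply continue the gradient updates on the new batch.
X2, y2 = collect_batch()
for _ in range(200):
    clf.partial_fit(X2, y2)

X_test, y_test = collect_batch()
print(round(clf.score(X_test, y_test), 2))
```

Each `partial_fit` call performs one update pass while preserving the learned weights, so knowledge from earlier periods is carried forward rather than discarded.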

The time-series prediction setup (e.g. as described here) doesn't fit, I think, because the input and the output aren't of the same data type: the input features are volume-based counts of a data stream and the output is a class label, so I cannot simply "shift the data to the left".

My Google searches mainly turned up results on "incremental vs. batch" training of a model's weights, or on incrementally growing the hidden layer. This paper seems to be what I'm looking for, but I'm still not entirely confident about using recurrent neural networks.

I could also create a new network for each time period, but then I'd lose the knowledge gathered in the previous time periods.

So what do you suggest?

Best Answer

I would suggest using transfer learning techniques. Basically, transfer learning carries the knowledge from your large, old dataset over to your fresh, small dataset.

Try reading the paper A Survey on Transfer Learning and looking at the TrAdaBoost algorithm.
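For a feel of how TrAdaBoost treats the old period as a "source" domain and the new period as the "target", here is a heavily simplified sketch (Dai et al., 2007). It assumes binary 0/1 labels and uses scikit-learn decision stumps as the weak learner; the function name `tradaboost` and the synthetic data are my own inventions, not from the paper:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost(X_src, y_src, X_tgt, y_tgt, n_rounds=10):
    """Simplified TrAdaBoost sketch for binary labels in {0, 1}.

    Source = old-period data (possibly drifted), target = new-period data.
    Misclassified SOURCE points are down-weighted each round, misclassified
    TARGET points are up-weighted, so old knowledge that still fits is kept.
    """
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    n_src = len(y_src)
    w = np.ones(len(y))
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / n_rounds))
    learners, betas = [], []
    for _ in range(n_rounds):
        p = w / w.sum()
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=p)
        err = (h.predict(X) != y).astype(float)
        # Weighted error on the TARGET portion only, clipped away from 0 and 1/2.
        eps = np.clip((p[n_src:] * err[n_src:]).sum() / p[n_src:].sum(),
                      1e-10, 0.499)
        beta_t = eps / (1.0 - eps)
        # Down-weight wrong source points, up-weight wrong target points.
        w[:n_src] *= beta_src ** err[:n_src]
        w[n_src:] *= beta_t ** (-err[n_src:])
        learners.append(h)
        betas.append(beta_t)
    half = n_rounds // 2  # vote over the later half of the rounds, as in the paper

    def predict(Xq):
        votes = np.zeros(len(Xq))
        for h, b in zip(learners[half:], betas[half:]):
            votes += np.log(1.0 / b) * h.predict(Xq)
        thresh = 0.5 * sum(np.log(1.0 / b) for b in betas[half:])
        return (votes >= thresh).astype(int)

    return predict

# Toy usage: old and new periods happen to share the same decision rule.
rng = np.random.RandomState(1)
X_src = rng.rand(300, 2)
y_src = (X_src[:, 0] > 0.5).astype(int)
X_tgt = rng.rand(60, 2)
y_tgt = (X_tgt[:, 0] > 0.5).astype(int)
predict = tradaboost(X_src, y_src, X_tgt, y_tgt)
X_new = rng.rand(100, 2)
acc = (predict(X_new) == (X_new[:, 0] > 0.5)).mean()
print(round(acc, 2))
```

The key design point for your setting: when the old data still matches the new distribution, its points keep their weight and contribute useful knowledge; when it has drifted, its misclassified points fade out automatically.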
