Solved – How to speed up training of a Neural Network

backpropagation, efficiency, neural networks

I'm writing a thesis for which I developed a script that generates neural networks and precalculates weights and biases to reduce the number of epochs required to train a network. In my work, I used examples to demonstrate the efficiency of precalculated weights and biases, but I'm wondering: are there other ways to reduce the number of epochs needed to train a network? I just want to enrich my thesis; if you can share a reference I would really appreciate it.

I am using feedforward and recurrent NNs, trained with backpropagation and stochastic gradient descent optimization.

Best Answer

This is one of the well-known problems in the deep learning community. There are two solutions that I have come across so far.

  1. Deep Networks with Stochastic Depth (https://arxiv.org/abs/1603.09382)

This paper describes a training method in which, for each mini-batch, only a randomly chosen subset of layers is trained and the rest are replaced by the identity function. This also acts as a regularizer and helps avoid overfitting (a minimal sketch is given after this list).

  2. FreezeOut: Accelerate Training by Progressively Freezing Layers (https://arxiv.org/abs/1706.04983)

This paper proposes training each hidden layer for only a set portion of the training run, freezing the layers one-by-one and excluding them from the backward pass (see the second sketch below).
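
For the first idea, here is a minimal sketch of a residual block with stochastic depth in PyTorch. It is not the paper's reference implementation; the block layout and the `survival_prob` hyperparameter (the paper uses a survival probability that decays linearly with depth) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    """Residual block that is randomly skipped during training."""

    def __init__(self, channels, survival_prob=0.8):
        super().__init__()
        self.survival_prob = survival_prob  # illustrative value, not from the paper
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        if self.training:
            # With probability (1 - survival_prob), drop the residual branch
            # entirely, so only the identity mapping is used for this mini-batch.
            if torch.rand(1).item() > self.survival_prob:
                return x
            return x + self.branch(x)
        # At test time every branch is kept, scaled by its survival probability.
        return x + self.survival_prob * self.branch(x)
```

Because dropped blocks contribute no gradients, the effective network is shallower on average during training, which is what reduces the per-epoch cost.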
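
For the second idea, here is a rough sketch of progressive layer freezing in the spirit of FreezeOut. The network, the layer indices, and the freeze schedule are hypothetical, and the per-layer learning-rate annealing from the paper is omitted; `train_loader` is assumed to exist.

```python
import torch
import torch.nn as nn

# Hypothetical three-layer feedforward network for illustration.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

total_epochs = 30
# Illustrative schedule: earlier layers stop training sooner.
freeze_after = {0: int(0.5 * total_epochs),
                2: int(0.7 * total_epochs),
                4: int(0.9 * total_epochs)}

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(total_epochs):
    # Freeze any layer whose scheduled training time has elapsed so it no
    # longer receives parameter gradients; once the earliest layers are all
    # frozen, backpropagation effectively stops before reaching them.
    for idx, limit in freeze_after.items():
        if epoch >= limit:
            for p in model[idx].parameters():
                p.requires_grad = False

    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
```

The saving comes from the shortened backward pass in the later part of training, at the cost of the frozen layers no longer adapting.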
