I have created a neural network and DQN agent using the MATLAB reinforcement learning toolbox, using the following code
createEnvironment                        % Produces env
createDQNetwork                          % Produces critic, criticOptions & GPU
createDQNOptions                         % Produces agentOptions
createDQNTrainingOptions                 % Produces trainOptions & parallel processing
agent = rlDQNAgent(critic,agentOptions); % Create the agent
validateEnvironment(env)
After this, I begin training the agent using the following code.
trainingResults = train(agent,env,trainOptions);
curDir = pwd;
saveDir = 'savedAgents';
cd(saveDir)
save(['trainedAgent' datestr(now,'mm_dd_yyyy_HHMM')],'agent','-v7.3');
% save(['trainedAgent' datestr(now,'mm_dd_yyyy_HHMM')],'agent','trainingResults','-v7.3');
cd(curDir)
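For reference, a directory-safe variant of the save step above, plus reloading the agent in a later session. This is only a sketch: it uses fullfile to avoid the cd round-trip, and the 'savedAgents' folder and file-name stamp mirror the code above.

```matlab
% Save without changing the working directory
saveDir = 'savedAgents';
if ~isfolder(saveDir)
    mkdir(saveDir);                          % create the folder on first use
end
stamp = datestr(now,'mm_dd_yyyy_HHMM');      % e.g. 03_14_2021_0935
save(fullfile(saveDir,['trainedAgent' stamp]),'agent','-v7.3');

% Later (e.g. after a restart), reload the trained agent before resuming
loaded = load(fullfile(saveDir,['trainedAgent' stamp]),'agent');
agent  = loaded.agent;                       % same handle class, keeps learned parameters
```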
The agent begins training successfully and I can observe it is learning how to control the system. Due to system memory constraints, I need to run the training process multiple times. When the first training process is finished, I simply run the following command again:
trainingResults = train(agent,env,trainOptions);
as I don't need to create a brand new agent, network, environment etc. from scratch. However, when training begins the second time, the agent's behaviour has reverted to what it was when the agent was first created. How can I begin retraining the agent while keeping the progress from the previous training session?
Edit: My system has 64GB of RAM, getting more isn't really an option….
Best Answer
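One likely cause, assuming a pre-R2022a release of the Reinforcement Learning Toolbox: by default, a DQN agent's experience buffer is cleared at the start of each call to train, so the second session starts replay learning from scratch even though the network weights are preserved. In those releases, rlDQNAgentOptions exposes the properties ResetExperienceBufferBeforeTraining and SaveExperienceBufferWithAgent to change this. A sketch of how the options would be set (verify the property names against your toolbox version, as they were removed in later releases):

```matlab
% Configure the agent so the replay buffer survives across train() calls
% and is saved alongside the agent (pre-R2022a option names)
agentOptions = rlDQNAgentOptions( ...
    'ResetExperienceBufferBeforeTraining', false, ... % keep buffer between sessions
    'SaveExperienceBufferWithAgent',       true);     % persist buffer on save()

agent = rlDQNAgent(critic, agentOptions);

% First session
trainingResults = train(agent, env, trainOptions);

% Second session: the same agent handle keeps its learned parameters,
% and with the options above it also keeps its accumulated experience
trainingResults = train(agent, env, trainOptions);
```

If exploration also appears to restart, the epsilon-greedy schedule may be decaying from its initial value again in the second session; one common workaround is to lower Epsilon (or EpsilonMin) in agentOptions.EpsilonGreedyExploration before resuming, so the agent does not re-explore as aggressively.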