I want to train two DDPG agents with this architecture.
I train the model for 500 episodes, with 1000 steps per episode.
However, when I run the m-file, 'RL Agent_1' is trained for 500 episodes first, and only then is 'RL Agent_2' trained for 500 episodes.
Because of this, the parameter 'x1' in the environment cannot be fed back to 'Subsystem_1' during training.
How can I fix this problem so that both agents are trained at the same time?
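For reference, my m-file is set up roughly like this (the model name, block paths, spec variables, and agent variables are simplified placeholders, not my exact code):

```matlab
mdl = "myModel";
open_system(mdl);

% Separate single-agent environments, one per RL Agent block.
env1 = rlSimulinkEnv(mdl, mdl + "/RL Agent_1", obsInfo1, actInfo1);
env2 = rlSimulinkEnv(mdl, mdl + "/RL Agent_2", obsInfo2, actInfo2);

trainOpts = rlTrainingOptions( ...
    "MaxEpisodes", 500, ...
    "MaxStepsPerEpisode", 1000);

% These two calls run back-to-back, which is the behavior I see:
% agent1 finishes all 500 episodes before agent2 starts.
trainStats1 = train(agent1, env1, trainOpts);
trainStats2 = train(agent2, env2, trainOpts);
```

So the two `train` calls run sequentially rather than training both agents together in one environment.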