MATLAB: Invalid input argument type or size such as observation, reward, isdone or loggedSignals. (Reinforcement learning toolbox)

MATLAB, Reinforcement Learning Toolbox

% Create observation specifications.
numObservations = 6;
obsInfo = rlNumericSpec([numObservations 1]);
obsInfo.Name = 'observations';
obsInfo.Description = 'Information on reference voltage, measured capacitor voltage and load current';
% Create action specifications.
load('Actions.mat')
actInfo = rlFiniteSetSpec(num2cell(actions,2));
actInfo.Name = 'states';
% (Assumed: the post uses mdl below without defining it; the model name is
% inferred from the agent block path.)
mdl = 'Reinforcement_learning_controller_discrete';
agentblk = 'Reinforcement_learning_controller_discrete/RL_controller/RL Agent';
env = rlSimulinkEnv(mdl,agentblk,obsInfo,actInfo);
rng(0)
dnn = [
featureInputLayer(numObservations,'Normalization','none','Name','state')
fullyConnectedLayer(24, 'Name','actorFC1') % why 24,48
reluLayer('Name','CriticRelu1')
fullyConnectedLayer(24, 'Name','CriticStateFC2')
reluLayer('Name','CriticCommonRelu')
fullyConnectedLayer(length(actInfo.Elements),'Name','output')];
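% (Assumed step: the post omits how the critic passed to rlDQNAgent below is
% created. One possible way, for a multi-output Q-network with one output
% element per discrete action, is shown here.)
criticOptions = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1);
critic = rlQValueRepresentation(dnn,obsInfo,actInfo,...
    'Observation',{'state'},criticOptions);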
agentOptions = rlDQNAgentOptions(...
'SampleTime',20e-6,...
'TargetSmoothFactor',1e-3,...
'ExperienceBufferLength',3000,...
'UseDoubleDQN',false,...
'DiscountFactor',0.9,...
'MiniBatchSize',64);
agent = rlDQNAgent(critic,agentOptions);
trainingOptions = rlTrainingOptions(...
'MaxEpisodes',1000,...
'MaxStepsPerEpisode',500,...
'ScoreAveragingWindowLength',5,...
'Verbose',false,...
'Plots','training-progress',...
'StopTrainingCriteria','AverageReward',...
'StopTrainingValue',200,...
'SaveAgentCriteria','EpisodeReward',...
'SaveAgentValue',200);
doTraining = true;
if doTraining
% Train the agent.
trainingStats = train(agent,env,trainingOptions);
else
% Load the pretrained agent for the example.
load('SimulinkVSCDQN.mat','agent');
end
simOptions = rlSimulationOptions('MaxSteps',500);
experience = sim(env,agent,simOptions);
Running the code above produces the following errors:
Invalid input argument type or size such as observation, reward, isdone or loggedSignals.
Unable to compute gradient from representation.
Unable to evaluate the loss function. Check the loss function and ensure it runs successfully.
Number of elements must not change. Use [] as one of the size inputs to automatically calculate the appropriate size for that dimension.
The elements of the action spec form a 128×1 cell array. There are 7 actions, each with 2 possible values, which gives 2^7 = 128 combinations and hence the 128×1 cell. When I set two possible elements in actInfo manually, the model works well. However, the error above occurs when I use the 128×1 cell as the elements.
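For reference, here is a minimal sketch of how such a 128×1 cell can arise (an assumption on my part, since the post does not show how Actions.mat was generated): seven action channels with two values each give 2^7 = 128 combinations, and num2cell along the second dimension turns the 128×7 matrix into a 128×1 cell of 1×7 row vectors.
actions = dec2bin(0:127) - '0';      % 128x7 matrix, every combination of 7 channels (assuming values 0 and 1)
elements = num2cell(actions,2);      % 128x1 cell array, each element a 1x7 row vector
size(elements)                       % returns [128 1]
size(elements{1})                    % returns [1 7]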

Best Answer

Hello,
It's difficult to reproduce this without access to a model that reproduces the issue (including the environment definition).
I would recommend comparing your code with this example, which is similar in nature (it also uses multiple discrete actions), in particular lines 237-248 of RocketLander.m. Make sure each element in your cell array has appropriate, consistent dimensions for the action signal, whether that is a row vector or a column vector (for your seven action channels, 1x7 vs. 7x1).
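A minimal sketch of that check (variable names assumed, since Actions.mat is not available): num2cell(actions,2) applied to a 128x7 matrix yields 1x7 row vectors, so if the RL Agent block's action port expects a column, reshape every element before building the spec.
load('Actions.mat')                          % assumed to contain a 128x7 matrix 'actions'
elements = num2cell(actions,2);              % 128x1 cell, each element is 1x7
disp(size(elements{1}))                      % verify the per-element size
% Reshape to column vectors if that is what the agent/block expects:
elements = cellfun(@(a) a(:), elements, 'UniformOutput', false);
actInfo = rlFiniteSetSpec(elements);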
If this does not work, also check the dimensions of the IsDone and reward signals and make sure they are scalars.
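As an extra check (a suggestion beyond the original answer), validateEnvironment runs a short simulation of the environment and reports observation, action, reward, or IsDone dimension mismatches before you start training:
% Quick consistency check of the Simulink environment created above.
validateEnvironment(env)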