MATLAB: Reinforcement Learning Toolbox- Multiple Discrete Actions for actor critic agent (imageInputLayer issues)

imageInputLayer, reinforcement learning

I am working on setting up an rlACAgent using the Reinforcement Learning Toolbox. I have successfully created this agent for a system with only one set of finite actions, but now I am looking to expand to an arbitrary number of finite action sets (in this case 4), and I think I am messing up something in the layer creation.
My code is below. Everything runs successfully until I try to create the agent, at which point I get the following error: "The dimensions of observations are not compatible with those of Observation Info."
I feel like I'm missing something fundamental about the layer construction here, but I've been scratching my head for a while. Any help would be appreciated!
obsInfo = rlNumericSpec([2 1]);
obsInfo.Name = 'Car Position';
obsInfo.Description = {'x, y'};
% Actions
actInfo = rlFiniteSetSpec({[-1 -.8 -.6 -.4 -.2 0 .2 .4 .6 .8 1],...
[-1 -.8 -.6 -.4 -.2 0 .2 .4 .6 .8 1],...
[-1 -.8 -.6 -.4 -.2 0 .2 .4 .6 .8 1],...
[-1 -.8 -.6 -.4 -.2 0 .2 .4 .6 .8 1]});
actInfo.Name='Wheel Speeds';
actInfo.Description = {'Front Right Speed','Front Left Speed','Rear Right Speed',...
'Rear Left Speed'};
%% Build Custom Environment
env=rlFunctionEnv(obsInfo,actInfo,'DriveStepFunction','DriveResetFunction')
%% Extract Data from Environment
obsInfo = getObservationInfo(env)
numObservation = obsInfo.Dimension(1);
actInfo = getActionInfo(env)
numActions = actInfo.Dimension(2);
%% Develop Critic
criticNetwork = [
imageInputLayer([numObservation numActions 1],'Normalization','none','Name','state')
fullyConnectedLayer(numObservation,'Name','CriticFC')];
criticOpts = rlRepresentationOptions('LearnRate',.01,'GradientThreshold',1);
critic = rlRepresentation(criticNetwork,obsInfo,'Observation',{'state'},criticOpts);
%% Develop Actor
actorNetwork = [
imageInputLayer([numObservation numActions 1],'Normalization','none','Name','state')
fullyConnectedLayer(numActions,'Name','action')];
actorOpts = rlRepresentationOptions('LearnRate',.01,'GradientThreshold',1);
actor = rlRepresentation(actorNetwork,obsInfo,actInfo,...
'Observation',{'state'},'Action',{'action'},actorOpts);
%% Develop Agent
agentOpts = rlACAgentOptions(...
'NumStepsToLookAhead',5,...
'DiscountFactor',1,...
'EntropyLossWeight',.4);
agent = rlACAgent(actor,critic,agentOpts);

Best Answer

Hi Anthony,
I believe this link should help. It looks like the action space is not set up correctly: rlFiniteSetSpec defines a single finite set of actions, so for multiple discrete actions you need to enumerate all possible combinations of the individual discrete action values and pass that combined set to rlFiniteSetSpec.
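As a minimal sketch of that enumeration (assuming, as in your code, the same 11-value speed set for each of the four wheels; variable names here are illustrative, not from your script):

```matlab
% Discrete speed values shared by all four wheels
speeds = -1:0.2:1;                       % 11 values: -1, -0.8, ..., 1

% Enumerate every joint action as one row: 11^4 = 14641 combinations
[fr, fl, rr, rl] = ndgrid(speeds, speeds, speeds, speeds);
combos = [fr(:) fl(:) rr(:) rl(:)];      % 14641-by-4 matrix of joint actions

% rlFiniteSetSpec takes a cell array with one element per possible action,
% so convert each row into its own 1-by-4 action vector
actCell = num2cell(combos, 2);
actInfo = rlFiniteSetSpec(actCell);
actInfo.Name = 'Wheel Speeds';
```

The agent then selects one element of this set per step, i.e. one 1-by-4 vector of wheel speeds, rather than four independent scalar actions. Note that the joint set grows exponentially (11^4 here), so coarser per-wheel grids keep the action space manageable.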