setActor

Set actor representation of reinforcement learning agent

Description


newAgent = setActor(oldAgent,actor) returns a new reinforcement learning agent, newAgent, that uses the specified actor representation. Apart from the actor representation, the new agent has the same configuration as the specified original agent, oldAgent.
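
A common pattern is to extract the actor from an agent, modify it, and write it back with setActor. The following minimal sketch assumes an existing agent variable, here called myAgent; the variable names are illustrative only:

myActor = getActor(myAgent);         % extract the current actor representation
% ... modify myActor, for example with setLearnableParameters ...
myAgent = setActor(myAgent,myActor); % write the modified actor back into the agent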

Examples


Modify Actor Parameter Values

Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Train DDPG Agent to Control Double Integrator System.

load('DoubleIntegDDPG.mat','agent') 

Obtain the actor representation from the agent.

actor = getActor(agent);

Obtain the learnable parameters from the actor.

params = getLearnableParameters(actor);

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);

Set the parameter values of the actor to the new modified values.

actor = setLearnableParameters(actor,modifiedParams);

Set the actor in the agent to the new modified actor.

agent = setActor(agent,actor);
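
To verify that the agent now uses the modified parameters, you can read them back from the agent's actor. This optional check is a sketch, not part of the original example:

% Optional check: confirm the agent holds the modified parameter values.
newParams = getLearnableParameters(getActor(agent));
isequal(newParams,modifiedParams) % returns true if the update took effect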

Modify Agent Actor Structure

Assume that you have an existing reinforcement learning agent, agent. For this example, load the trained agent from Train DDPG Agent to Control Double Integrator System:

load('DoubleIntegDDPG.mat','agent')

Further, assume that this agent has an actor representation that contains the following shallow neural network structure:

oldActorNetwork = [
        imageInputLayer([2 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(1,'Name','action')];

Create the new network with the additional fully connected hidden layer:

newActorNetwork = [
        imageInputLayer([2 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(3,'Name','hidden');
        fullyConnectedLayer(1,'Name','action')];

Create the corresponding actor representation:

actor = rlDeterministicActorRepresentation(newActorNetwork,...
    getObservationInfo(agent),getActionInfo(agent),...
    'Observation',{'state'},...
    'Action',{'action'})
actor = 
  rlDeterministicActorRepresentation with properties:

         ActionInfo: [1x1 rl.util.rlNumericSpec]
    ObservationInfo: [1x1 rl.util.rlNumericSpec]
            Options: [1x1 rl.option.rlRepresentationOptions]

Set the actor representation of the agent to the new augmented actor:

agent = setActor(agent,actor);

To check your agent, use getAction to return the action from a random observation.

getAction(agent,{rand(2,1)})
ans = single
    1.4134

You can now test and train the agent against the environment.
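
For instance, a minimal training sketch might look like the following; the predefined environment name and the option value are illustrative assumptions, not part of this example:

% Sketch: recreate a double integrator environment and train the updated agent.
env = rlPredefinedEnv('DoubleIntegrator-Continuous'); % assumed predefined environment
trainOpts = rlTrainingOptions('MaxEpisodes',1000);    % illustrative option value
trainingStats = train(agent,env,trainOpts);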

Input Arguments


oldAgent — Original reinforcement learning agent

Original reinforcement learning agent, specified as an agent object whose policy contains an actor representation, such as an rlDDPGAgent, rlPGAgent, or rlACAgent object.

actor — Actor representation

Actor representation object, specified as an rlDeterministicActorRepresentation or rlStochasticActorRepresentation object.

The input and output layers of the specified representation must match the observation and action specifications of the original agent.
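
For example, you can inspect the specifications that the new actor must match before calling setActor; this snippet is a sketch for illustration:

% Sketch: inspect the specifications the new actor's layers must match.
obsInfo = getObservationInfo(agent); % observation spec, e.g., Dimension [2 1]
actInfo = getActionInfo(agent);      % action spec, e.g., Dimension [1 1]
obsInfo.Dimension
actInfo.Dimension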

To create an actor representation, use one of the following methods:

  • Create a representation using the corresponding actor representation object, as in the sketch after this list.

  • Obtain the existing actor representation from an agent using getActor.
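
As a sketch of the first method, the following creates a deterministic actor with nondefault representation options; the network variable, the 'state' and 'action' layer names, and the option values are assumptions carried over from the example above:

% Sketch: create an actor representation with explicit representation options.
opts = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1);
actor = rlDeterministicActorRepresentation(newActorNetwork,...
    getObservationInfo(agent),getActionInfo(agent),...
    'Observation',{'state'},'Action',{'action'},opts);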

Output Arguments


newAgent — Updated reinforcement learning agent

Updated reinforcement learning agent, returned as an agent object that uses the specified actor representation. Apart from the actor representation, the new agent has the same configuration as the original agent, oldAgent.

Introduced in R2019a