rlDDPGAgent

Deep deterministic policy gradient reinforcement learning agent

Description

The deep deterministic policy gradient (DDPG) algorithm is an actor-critic, model-free, online, off-policy reinforcement learning method which computes an optimal policy that maximizes the long-term reward. The action space can only be continuous.

For more information, see Deep Deterministic Policy Gradient Agents. For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

Creation

Description

Create Agent from Observation and Action Specifications

example

agent = rlDDPGAgent(observationInfo,actionInfo) creates a deep deterministic policy gradient agent for an environment with the given observation and action specifications, using default initialization options. The actor and critic representations in the agent use default deep neural networks built from the observation specification observationInfo and the action specification actionInfo.

example

agent = rlDDPGAgent(observationInfo,actionInfo,initOpts) creates a deep deterministic policy gradient agent for an environment with the given observation and action specifications. The agent uses default networks configured using options specified in the initOpts object. For more information on the initialization options, see rlAgentInitializationOptions.

Create Agent from Actor and Critic Representations

example

agent = rlDDPGAgent(actor,critic) creates a DDPG agent with the specified actor and critic representations, using default DDPG agent options.

Specify Agent Options

agent = rlDDPGAgent(___,agentOptions) creates a DDPG agent and sets the AgentOptions property to the agentOptions input argument. Use this syntax after any of the input arguments in the previous syntaxes.
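For example, the following minimal sketch (assuming observation and action specification objects obsInfo and actInfo have already been obtained from an environment) creates a default-network DDPG agent with nondefault options; the option values shown are illustrative.

% assume obsInfo and actInfo were returned by getObservationInfo and getActionInfo
opt = rlDDPGAgentOptions('SampleTime',0.05,'DiscountFactor',0.99);

% create a default-network DDPG agent that uses these options
agent = rlDDPGAgent(obsInfo,actInfo,opt);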

Input Arguments

Observation specifications, specified as a reinforcement learning specification object or an array of specification objects defining properties such as dimensions, data type, and names of the observation signals.

You can extract observationInfo from an existing environment or agent using getObservationInfo. You can also construct the specifications manually using rlFiniteSetSpec or rlNumericSpec.
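For example, a manually constructed observation specification for a continuous four-element observation vector might look as follows (the dimensions and signal name are illustrative):

% illustrative continuous observation specification with four elements
obsInfo = rlNumericSpec([4 1]);
obsInfo.Name = 'observations';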

Action specifications, specified as a reinforcement learning specification object defining properties such as dimensions, data type, and names of the action signals.

Since a DDPG agent operates in a continuous action space, you must specify actionInfo as an rlNumericSpec object.

You can extract actionInfo from an existing environment or agent using getActionInfo. You can also construct the specification manually using rlNumericSpec.
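For example, a manually constructed action specification for a scalar action bounded between -2 and 2 might look as follows (the limits and signal name are illustrative):

% illustrative continuous scalar action bounded between -2 and 2
actInfo = rlNumericSpec([1 1],'LowerLimit',-2,'UpperLimit',2);
actInfo.Name = 'force';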

Agent initialization options, specified as an rlAgentInitializationOptions object.

Actor network representation, specified as an rlDeterministicActorRepresentation object. For more information on creating actor representations, see Create Policy and Value Function Representations.

Critic network representation, specified as an rlQValueRepresentation object. For more information on creating critic representations, see Create Policy and Value Function Representations.

Properties

Agent options, specified as an rlDDPGAgentOptions object.

Experience buffer, specified as an ExperienceBuffer object. During training the agent stores each of its experiences (S,A,R,S') in a buffer. Here:

  • S is the current observation of the environment.

  • A is the action taken by the agent.

  • R is the reward for taking action A.

  • S' is the next observation after taking action A.

For more information on how the agent samples experience from the buffer during training, see Deep Deterministic Policy Gradient Agents.
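The buffer capacity is typically configured through the ExperienceBufferLength agent option, as in this minimal sketch (assuming obsInfo and actInfo are available):

% sketch: create an agent whose experience buffer holds up to one million samples
opt = rlDDPGAgentOptions('ExperienceBufferLength',1e6);
agent = rlDDPGAgent(obsInfo,actInfo,opt);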

Object Functions

train                     Train reinforcement learning agents within a specified environment
sim                       Simulate trained reinforcement learning agents within specified environment
getAction                 Obtain action from agent or actor representation given environment observations
getActor                  Get actor representation from reinforcement learning agent
setActor                  Set actor representation of reinforcement learning agent
getCritic                 Get critic representation from reinforcement learning agent
setCritic                 Set critic representation of reinforcement learning agent
generatePolicyFunction    Create function that evaluates trained policy of reinforcement learning agent
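For example, a brief sketch using two of these functions (assuming a trained agent variable named agent already exists):

% extract the actor representation from the agent
actor = getActor(agent);

% generate a standalone policy evaluation function file
% (evaluatePolicy.m by default, together with the agent data it needs)
generatePolicyFunction(agent);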

Examples

Create an environment with a continuous action space, and obtain its observation and action specifications. For this example, load the environment used in the example Train DDPG Agent to Control Double Integrator System. The observation from the environment is a vector containing the position and velocity of a mass. The action is a scalar representing a force, applied to the mass, ranging continuously from -2 to 2 newtons.

% load predefined environment
env = rlPredefinedEnv("DoubleIntegrator-Continuous");

% obtain observation and action specifications
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

The agent creation function initializes the actor and critic networks randomly. You can ensure reproducibility by fixing the seed of the random generator. To do so, uncomment the following line.

% rng(0)

Create a DDPG agent from the environment observation and action specifications.

agent = rlDDPGAgent(obsInfo,actInfo);

To check your agent, use getAction to return the action from a random observation.

getAction(agent,{rand(obsInfo(1).Dimension)})
ans = 1x1 cell array
    {[0.0182]}

You can now test and train the agent within the environment.
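For example, a minimal training and simulation sketch might look as follows (the stopping criterion and step limits are illustrative values, not recommendations):

% illustrative training options
trainOpts = rlTrainingOptions(...
    'MaxEpisodes',1000,...
    'MaxStepsPerEpisode',500,...
    'StopTrainingCriteria','AverageReward',...
    'StopTrainingValue',-66);

% train the agent in the environment
trainingStats = train(agent,env,trainOpts);

% simulate the trained agent
simOpts = rlSimulationOptions('MaxSteps',500);
experience = sim(env,agent,simOpts);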

Create an environment with a continuous action space and obtain its observation and action specifications. For this example, load the environment used in the example Train DDPG Agent to Swing Up and Balance Pendulum with Image Observation. This environment has two observations: a 50-by-50 grayscale image and a scalar (the angular velocity of the pendulum). The action is a scalar representing a torque ranging continuously from -2 to 2 Nm.

% load predefined environment
env = rlPredefinedEnv("SimplePendulumWithImage-Continuous");

% obtain observation and action specifications
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

Create an agent initialization option object, specifying that each hidden fully connected layer in the network must have 128 neurons (instead of the default number, 256).

initOpts = rlAgentInitializationOptions('NumHiddenUnit',128);

The agent creation function initializes the actor and critic networks randomly. You can ensure reproducibility by fixing the seed of the random generator. To do so, uncomment the following line.

% rng(0)

Create a DDPG agent from the environment observation and action specifications.

agent = rlDDPGAgent(obsInfo,actInfo,initOpts);

Reduce the critic learning rate to 1e-3.

critic = getCritic(agent);
critic.Options.LearnRate = 1e-3;
agent  = setCritic(agent,critic);

Extract the deep neural networks from both the agent actor and critic.

actorNet = getModel(getActor(agent));
criticNet = getModel(getCritic(agent));

Display the layers of the critic network, and verify that each hidden fully connected layer has 128 neurons.

criticNet.Layers
ans = 
  14x1 Layer array with layers:

     1   'concat'               Concatenation       Concatenation of 3 inputs along dimension 3
     2   'relu_body'            ReLU                ReLU
     3   'fc_body'              Fully Connected     128 fully connected layer
     4   'body_output'          ReLU                ReLU
     5   'input_1'              Image Input         50x50x1 images
     6   'conv_1'               Convolution         64 3x3x1 convolutions with stride [1  1] and padding [0  0  0  0]
     7   'relu_input_1'         ReLU                ReLU
     8   'fc_1'                 Fully Connected     128 fully connected layer
     9   'input_2'              Image Input         1x1x1 images
    10   'fc_2'                 Fully Connected     128 fully connected layer
    11   'input_3'              Image Input         1x1x1 images
    12   'fc_3'                 Fully Connected     128 fully connected layer
    13   'output'               Fully Connected     1 fully connected layer
    14   'RepresentationLoss'   Regression Output   mean-squared-error

Plot the actor and critic networks.

plot(actorNet)

plot(criticNet)

To check your agent, use getAction to return the action from a random observation.

getAction(agent,{rand(obsInfo(1).Dimension),rand(obsInfo(2).Dimension)})
ans = 1x1 cell array
    {[-0.0364]}

You can now test and train the agent within the environment.

Create an environment with a continuous action space and obtain its observation and action specifications. For this example, load the environment used in the example Train DDPG Agent to Control Double Integrator System. The observation from the environment is a vector containing the position and velocity of a mass. The action is a scalar representing a force ranging continuously from -2 to 2 newtons.

% load predefined environment
env = rlPredefinedEnv("DoubleIntegrator-Continuous");

% obtain observation and action specifications
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

Create a critic representation.

% create a network to be used as underlying critic approximator
statePath = imageInputLayer([obsInfo.Dimension(1) 1 1], 'Normalization', 'none', 'Name', 'state');
actionPath = imageInputLayer([numel(actInfo) 1 1], 'Normalization', 'none', 'Name', 'action');
commonPath = [concatenationLayer(1,2,'Name','concat')
             quadraticLayer('Name','quadratic')
             fullyConnectedLayer(1,'Name','StateValue','BiasLearnRateFactor', 0, 'Bias', 0)];
criticNetwork = layerGraph(statePath);
criticNetwork = addLayers(criticNetwork, actionPath);
criticNetwork = addLayers(criticNetwork, commonPath);
criticNetwork = connectLayers(criticNetwork,'state','concat/in1');
criticNetwork = connectLayers(criticNetwork,'action','concat/in2');

% set some options for the critic
criticOpts = rlRepresentationOptions('LearnRate',5e-3,'GradientThreshold',1);

% create the critic based on the network approximator
critic = rlQValueRepresentation(criticNetwork,obsInfo,actInfo,...
    'Observation',{'state'},'Action',{'action'},criticOpts);

Create an actor representation.

% create a network to be used as underlying actor approximator
actorNetwork = [
    imageInputLayer([obsInfo.Dimension(1) 1 1], 'Normalization', 'none', 'Name', 'state')
    fullyConnectedLayer(numel(actInfo), 'Name', 'action', 'BiasLearnRateFactor', 0, 'Bias', 0)];

% set some options for the actor
actorOpts = rlRepresentationOptions('LearnRate',1e-04,'GradientThreshold',1);

% create the actor based on the network approximator
actor = rlDeterministicActorRepresentation(actorNetwork,obsInfo,actInfo,...
    'Observation',{'state'},'Action',{'action'},actorOpts);

Specify agent options, and create a DDPG agent using the environment, actor, and critic.

agentOpts = rlDDPGAgentOptions(...
    'SampleTime',env.Ts,...
    'TargetSmoothFactor',1e-3,...
    'ExperienceBufferLength',1e6,...
    'DiscountFactor',0.99,...
    'MiniBatchSize',32);
agent = rlDDPGAgent(actor,critic,agentOpts);

To check your agent, use getAction to return the action from a random observation.

getAction(agent,{rand(2,1)})
ans = 1x1 cell array
    {[-0.4719]}

You can now test and train the agent within the environment.

Introduced in R2019a