This example shows how to train a deep deterministic policy gradient (DDPG) agent to control a second-order dynamic system modeled in MATLAB®.
For more information on DDPG agents, see Deep Deterministic Policy Gradient Agents. For an example showing how to train a DDPG agent in Simulink®, see Train DDPG Agent to Swing Up and Balance Pendulum.
The reinforcement learning environment for this example is a second-order double-integrator system with a gain. The training goal is to control the position of a mass in the second-order system by applying a force input.
For this environment:
The mass starts at an initial position between –4 and 4 units.
The force action signal from the agent to the environment is from –2 to 2 N.
The observations from the environment are the position and velocity of the mass.
The episode terminates if the mass moves more than 5 m from its original position or if the mass position x satisfies |x| < 0.01 (the goal threshold).
The reward r(t), provided at every time step, is a discretization of the continuous-time cost:
r(t) = -(x(t)' Q x(t) + u(t)' R u(t))
Here:
x is the state vector of the mass (position and velocity).
u is the force applied to the mass.
Q is the matrix of weights on the control performance (the 2-by-2 Q property of the environment).
R is the weight on the control effort; R = 0.01.
For more information on this model, see Load Predefined Control System Environments.
Create a predefined environment interface for the double integrator system.
env = rlPredefinedEnv("DoubleIntegrator-Continuous")
env = 
  DoubleIntegratorContinuousAction with properties:

             Gain: 1
               Ts: 0.1000
      MaxDistance: 5
    GoalThreshold: 0.0100
                Q: [2x2 double]
                R: 0.0100
         MaxForce: Inf
            State: [2x1 double]
env.MaxForce = Inf;
The interface has a continuous action space where the agent can apply force values from -Inf to Inf to the mass.
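As an optional check, you can evaluate the quadratic cost from the reward definition above directly, using the Q and R properties of the environment. This is only an illustrative sketch; the state and force values are arbitrary, and it assumes the reward has the quadratic form shown earlier.
% Sample position/velocity state and force, chosen only for illustration.
x = [1; 0.5];
u = 0.8;
% Negative quadratic cost, using the Q and R weights stored in the environment.
r = -(x'*env.Q*x + u'*env.R*u)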
Obtain the observation and action information from the environment interface.
obsInfo = getObservationInfo(env);
numObservations = obsInfo.Dimension(1);
actInfo = getActionInfo(env);
numActions = numel(actInfo);
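If you want to see the dimensions and limits of these signals, you can display the specification objects themselves. For numeric specifications, properties such as Name, Dimension, LowerLimit, and UpperLimit describe each channel.
% Display the observation and action specifications.
obsInfo
actInfo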
Fix the random generator seed for reproducibility.
rng(0)
A DDPG agent approximates the long-term reward, given observations and actions, using a critic value function representation. To create the critic, first create a deep neural network with two inputs (the state and action) and one output. For more information on creating a neural network value function representation, see Create Policy and Value Function Representations.
statePath = imageInputLayer([numObservations 1 1],'Normalization','none','Name','state');
actionPath = imageInputLayer([numActions 1 1],'Normalization','none','Name','action');
commonPath = [concatenationLayer(1,2,'Name','concat')
    quadraticLayer('Name','quadratic')
    fullyConnectedLayer(1,'Name','StateValue','BiasLearnRateFactor',0,'Bias',0)];
criticNetwork = layerGraph(statePath);
criticNetwork = addLayers(criticNetwork,actionPath);
criticNetwork = addLayers(criticNetwork,commonPath);
criticNetwork = connectLayers(criticNetwork,'state','concat/in1');
criticNetwork = connectLayers(criticNetwork,'action','concat/in2');
View the critic network configuration.
figure
plot(criticNetwork)
Specify options for the critic representation using rlRepresentationOptions.
criticOpts = rlRepresentationOptions('LearnRate',5e-3,'GradientThreshold',1);
Create the critic representation using the specified neural network and options. You must also specify the action and observation info for the critic, which you obtain from the environment interface. For more information, see rlQValueRepresentation.
critic = rlQValueRepresentation(criticNetwork,obsInfo,actInfo,...
    'Observation',{'state'},'Action',{'action'},criticOpts);
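As an optional sanity check, you can query the untrained critic with getValue for a sample observation-action pair. The state and force values below are arbitrary, and the returned Q-value is not meaningful before training; the call only confirms that the network inputs are wired correctly.
% Evaluate the untrained critic for an arbitrary observation and action.
sampleObs = {[0.5; -0.2]};
sampleAct = {0.1};
qValue = getValue(critic,sampleObs,sampleAct)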
A DDPG agent decides which action to take, given observations, using an actor representation. To create the actor, first create a deep neural network with one input (the observation) and one output (the action).
Construct the actor in a similar manner to the critic.
actorNetwork = [
    imageInputLayer([numObservations 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(numActions,'Name','action','BiasLearnRateFactor',0,'Bias',0)];

actorOpts = rlRepresentationOptions('LearnRate',1e-04,'GradientThreshold',1);

actor = rlDeterministicActorRepresentation(actorNetwork,obsInfo,actInfo,...
    'Observation',{'state'},'Action',{'action'},actorOpts);
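You can perform a similar check on the actor by calling getAction with a sample observation (the values are arbitrary). The output of the untrained actor is not meaningful, but the call confirms that the actor maps a two-element observation to a scalar force.
% Query the untrained actor with an arbitrary observation.
sampleObs = {[0.5; -0.2]};
sampleAction = getAction(actor,sampleObs)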
To create the DDPG agent, first specify the DDPG agent options using rlDDPGAgentOptions.
agentOpts = rlDDPGAgentOptions(...
    'SampleTime',env.Ts,...
    'TargetSmoothFactor',1e-3,...
    'ExperienceBufferLength',1e6,...
    'DiscountFactor',0.99,...
    'MiniBatchSize',32);
agentOpts.NoiseOptions.Variance = 0.3;
agentOpts.NoiseOptions.VarianceDecayRate = 1e-6;
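The exploration noise variance decays as training progresses. Assuming the variance is scaled by (1 - VarianceDecayRate) at every agent step, the following sketch estimates how much exploration remains after a given number of steps; the step count is an arbitrary example.
% Estimate the remaining exploration noise variance after a number of steps,
% assuming a geometric decay of (1 - VarianceDecayRate) per step.
numSteps = 1e5;
remainingVariance = agentOpts.NoiseOptions.Variance * ...
    (1 - agentOpts.NoiseOptions.VarianceDecayRate)^numSteps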
Create the DDPG agent using the specified actor representation, critic representation, and agent options. For more information, see rlDDPGAgent.
agent = rlDDPGAgent(actor,critic,agentOpts);
To train the agent, first specify the training options. For this example, use the following options.
Run at most 5000 episodes in the training session, with each episode lasting at most 200 time steps.
Display the training progress in the Episode Manager dialog box (set the Plots option) and disable the command-line display (set the Verbose option).
Stop training when the agent receives a moving average cumulative reward greater than –66. At this point, the agent can control the position of the mass using minimal control effort.
For more information, see rlTrainingOptions.
trainOpts = rlTrainingOptions(...
    'MaxEpisodes',5000, ...
    'MaxStepsPerEpisode',200, ...
    'Verbose',false, ...
    'Plots','training-progress',...
    'StopTrainingCriteria','AverageReward',...
    'StopTrainingValue',-66);
You can visualize the double integrator environment by using the plot function during training or simulation.
plot(env)
Train the agent using the train function. Training this agent is a computationally intensive process that takes several hours to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.
doTraining = false;
if doTraining
    % Train the agent.
    trainingStats = train(agent,env,trainOpts);
else
    % Load the pretrained agent for the example.
    load('DoubleIntegDDPG.mat','agent');
end
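If you train the agent yourself, you can save the result so you do not have to retrain it later. The file name below is only an example.
% Save the trained agent to a MAT-file (example file name).
% save('myDoubleIntegDDPG.mat','agent')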
To validate the performance of the trained agent, simulate it within the double integrator environment. For more information on agent simulation, see rlSimulationOptions and sim.
simOptions = rlSimulationOptions('MaxSteps',500);
experience = sim(env,agent,simOptions);
totalReward = sum(experience.Reward)
totalReward = single
-65.9933
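To see how the reward evolves during the simulation, you can plot the per-step reward signal. This sketch assumes experience.Reward is returned as a timeseries, which you can pass to plot directly; the axis labels are illustrative.
% Plot the per-step reward over the simulation.
figure
plot(experience.Reward)
xlabel('Time (s)')
ylabel('Reward')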