This example shows how to train a deep deterministic policy gradient (DDPG) agent to generate trajectories for a flying robot modeled in Simulink®. For more information on DDPG agents, see Deep Deterministic Policy Gradient Agents.
The reinforcement learning environment for this example is a flying robot whose initial position is randomized around a ring of radius 15 m and whose initial orientation is also randomized. The robot has two thrusters mounted on the side of the body that are used to propel and steer the robot. The training goal is to drive the robot from its initial condition to the origin facing east.
Open the model and set up the initial model variables.
mdl = 'rlFlyingRobotEnv';
open_system(mdl)

% initial model state variables
theta0 = 0;
x0 = -15;
y0 = 0;

% sample time
Ts = 0.4;

% simulation length
Tf = 30;
For this model:
The goal orientation is 0 radians (robot facing east).
The thrust from each actuator is bounded from -1 N to 1 N.
The observations from the environment are the position, orientation (sine and cosine of the orientation), velocity, and angular velocity of the robot.
The reward r_t provided at every time step is:

r_1 = 10*((x_t^2 + y_t^2 + θ_t^2) < 0.5)
r_2 = -100*(|x_t| >= 20 || |y_t| >= 20)
r_3 = -(0.2*(R_{t-1} + L_{t-1})^2 + 0.3*(R_{t-1} - L_{t-1})^2 + 0.03*x_t^2 + 0.03*y_t^2 + 0.02*θ_t^2)
r_t = r_1 + r_2 + r_3

where:
x_t is the position of the robot along the x-axis.
y_t is the position of the robot along the y-axis.
θ_t is the orientation of the robot.
L_{t-1} is the control effort from the left thruster.
R_{t-1} is the control effort from the right thruster.
r_1 is the reward when the robot is close to the goal.
r_2 is the penalty when the robot drives beyond 20 m in either the x or y direction. The simulation is terminated when r_2 < 0.
r_3 is a QR penalty that penalizes distance from the goal and control effort.
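For reference, the reward terms above can be expressed in a few lines of MATLAB. The following is a minimal sketch only; the reward is actually implemented inside the Simulink model, and computeFlyingRobotRewardSketch is a hypothetical helper, not part of the shipped example.

function r = computeFlyingRobotRewardSketch(x,y,theta,L,R)
    % Hypothetical sketch of the reward terms described above.
    r1 = 10*((x^2 + y^2 + theta^2) < 0.5);        % bonus when close to the goal
    r2 = -100*(abs(x) >= 20 || abs(y) >= 20);     % out-of-bounds penalty
    r3 = -(0.2*(R + L)^2 + 0.3*(R - L)^2 + ...
           0.03*x^2 + 0.03*y^2 + 0.02*theta^2);   % QR penalty on state and effort
    r = r1 + r2 + r3;
end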
To train an agent for the rlFlyingRobotEnv model, use the createIntegratedEnv function to automatically generate an integrated model with an RL Agent block that is ready for training.
integratedMdl = 'IntegratedFlyingRobot';
[~,agentBlk,observationInfo,actionInfo] = createIntegratedEnv(mdl,integratedMdl);
Before creating the environment object, specify names for the observation and action specifications, and bound the thrust actions between -1 and 1.
The observation signals for this environment are the position, the sine and cosine of the orientation, the velocity, and the angular velocity of the robot.
numObs = prod(observationInfo.Dimension);
observationInfo.Name = 'observations';
The action signals for this environment are the thrust values of the two thrusters.
numAct = prod(actionInfo.Dimension);
actionInfo.LowerLimit = -ones(numAct,1);
actionInfo.UpperLimit = ones(numAct,1);
actionInfo.Name = 'thrusts';
Create an environment interface for the flying robot by calling rlSimulinkEnv with the generated model.
env = rlSimulinkEnv(integratedMdl,agentBlk,observationInfo,actionInfo);
Create a custom reset function that randomizes the initial position of the robot along a ring of radius 15 m and also randomizes the initial orientation. See flyingRobotResetFcn for details of the reset function.
env.ResetFcn = @(in) flyingRobotResetFcn(in);
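For reference, a reset function along these lines can be written using setVariable on the Simulink.SimulationInput object. The sketch below is hypothetical and only illustrates the idea; the actual flyingRobotResetFcn shipped with the example may differ, for example in how the angle is sampled or which workspace it targets.

function in = flyingRobotResetFcnSketch(in)
    % Hypothetical sketch of a reset function; the shipped version may differ.
    t = 2*pi*rand;                                  % random angle on the ring
    in = setVariable(in,'x0',15*cos(t));            % initial x on the radius-15 ring
    in = setVariable(in,'y0',15*sin(t));            % initial y on the radius-15 ring
    in = setVariable(in,'theta0',2*pi*rand - pi);   % random initial orientation
end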
Fix the random generator seed for reproducibility.
rng(0)
A DDPG agent approximates the long-term reward given observations and actions using a critic value function representation. To create the critic, first create a deep neural network with two inputs (the observation and action) and one output. For more information on creating a neural network value function representation, see Create Policy and Value Function Representations.
% specify the number of outputs for the hidden layers.
hiddenLayerSize = 100;

observationPath = [
    imageInputLayer([numObs 1 1],'Normalization','none','Name','observation')
    fullyConnectedLayer(hiddenLayerSize,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(hiddenLayerSize,'Name','fc2')
    additionLayer(2,'Name','add')
    reluLayer('Name','relu2')
    fullyConnectedLayer(hiddenLayerSize,'Name','fc3')
    reluLayer('Name','relu3')
    fullyConnectedLayer(1,'Name','fc4')];
actionPath = [
    imageInputLayer([numAct 1 1],'Normalization','none','Name','action')
    fullyConnectedLayer(hiddenLayerSize,'Name','fc5')];

% create the layerGraph
criticNetwork = layerGraph(observationPath);
criticNetwork = addLayers(criticNetwork,actionPath);

% connect actionPath to observationPath
criticNetwork = connectLayers(criticNetwork,'fc5','add/in2');
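If you want to confirm the critic architecture visually, you can plot the layer graph. This step is optional and not part of the original example.

% Optional: visualize the critic network to confirm that the observation and
% action paths merge at the addition layer.
figure
plot(criticNetwork)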
Specify options for the critic using rlRepresentationOptions.
criticOptions = rlRepresentationOptions('LearnRate',1e-03,'GradientThreshold',1);
Create the critic representation using the specified neural network and options. You must also specify the action and observation specifications for the critic. For more information, see rlQValueRepresentation.
critic = rlQValueRepresentation(criticNetwork,observationInfo,actionInfo,...
    'Observation',{'observation'},'Action',{'action'},criticOptions);
A DDPG agent decides which action to take given observations using an actor representation. To create the actor, first create a deep neural network with one input (the observation) and one output (the action).
Construct the actor in a similar manner to the critic. For more information, see rlDeterministicActorRepresentation.
actorNetwork = [
    imageInputLayer([numObs 1 1],'Normalization','none','Name','observation')
    fullyConnectedLayer(hiddenLayerSize,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(hiddenLayerSize,'Name','fc2')
    reluLayer('Name','relu2')
    fullyConnectedLayer(hiddenLayerSize,'Name','fc3')
    reluLayer('Name','relu3')
    fullyConnectedLayer(numAct,'Name','fc4')
    tanhLayer('Name','tanh1')];

actorOptions = rlRepresentationOptions('LearnRate',1e-04,'GradientThreshold',1);

actor = rlDeterministicActorRepresentation(actorNetwork,observationInfo,actionInfo,...
    'Observation',{'observation'},'Action',{'tanh1'},actorOptions);
To create the DDPG agent, first specify the DDPG agent options using rlDDPGAgentOptions.
agentOptions = rlDDPGAgentOptions(...
    'SampleTime',Ts,...
    'TargetSmoothFactor',1e-3,...
    'ExperienceBufferLength',1e6,...
    'DiscountFactor',0.99,...
    'MiniBatchSize',256);
agentOptions.NoiseOptions.Variance = 1e-1;
agentOptions.NoiseOptions.VarianceDecayRate = 1e-6;
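To get a rough feel for how long exploration lasts, assuming the noise variance decays geometrically by a factor of (1 - VarianceDecayRate) at each agent step, you can estimate the number of steps until the variance halves. This calculation is a sketch under that assumption, not part of the original example.

% Rough estimate only; assumes geometric decay of the noise variance by
% (1 - VarianceDecayRate) per agent step.
halfLifeSteps = log(0.5)/log(1 - agentOptions.NoiseOptions.VarianceDecayRate)
% With VarianceDecayRate = 1e-6, this is roughly 6.9e5 steps.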
Then, create the agent using the specified actor representation, critic representation, and agent options. For more information, see rlDDPGAgent.
agent = rlDDPGAgent(actor,critic,agentOptions);
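Before training, you can optionally query the untrained agent with a random observation to confirm that the action output has the expected size and respects the thrust limits. This check is a suggestion, not part of the original example.

% Optional sanity check: the action from the untrained agent should contain
% two thrust values, kept in [-1,1] by the tanh output layer and action limits.
sampleAction = getAction(agent,{rand(numObs,1)})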
To train the agent, first specify the training options. For this example, use the following options:
Run each training for at most 20000 episodes, with each episode lasting at most ceil(Tf/Ts) time steps (75 steps with the values of Tf and Ts defined earlier).
Display the training progress in the Episode Manager dialog box (set the Plots option) and disable the command-line display (set the Verbose option to false).
Stop training when the agent receives an average cumulative reward greater than 415 over ten consecutive episodes. At this point, the agent can drive the flying robot to the goal position.
Save a copy of the agent for each episode where the cumulative reward is greater than 415.
For more information, see rlTrainingOptions.
maxepisodes = 20000;
maxsteps = ceil(Tf/Ts);
trainingOptions = rlTrainingOptions(...
    'MaxEpisodes',maxepisodes,...
    'MaxStepsPerEpisode',maxsteps,...
    'StopOnError',"on",...
    'Verbose',false,...
    'Plots',"training-progress",...
    'StopTrainingCriteria',"AverageReward",...
    'StopTrainingValue',415,...
    'ScoreAveragingWindowLength',10,...
    'SaveAgentCriteria',"EpisodeReward",...
    'SaveAgentValue',415);
Train the agent using the train function. Training is a computationally intensive process that takes several hours to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.
doTraining = false;
if doTraining
    % Train the agent.
    trainingStats = train(agent,env,trainingOptions);
else
    % Load pretrained agent for the example.
    load('FlyingRobotDDPG.mat','agent')
end
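If you trained the agent yourself, you can also review the learning curve from the statistics returned by train. This is an optional sketch; it assumes the standard training-statistics fields EpisodeIndex, EpisodeReward, and AverageReward.

% Optional: plot episode and average rewards from the training statistics.
if doTraining
    figure
    plot(trainingStats.EpisodeIndex,trainingStats.EpisodeReward)
    hold on
    plot(trainingStats.EpisodeIndex,trainingStats.AverageReward)
    hold off
    xlabel('Episode')
    ylabel('Reward')
    legend('Episode reward','Average reward')
end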
To validate the performance of the trained agent, simulate the agent within the flying robot environment. For more information on agent simulation, see rlSimulationOptions and sim.
simOptions = rlSimulationOptions('MaxSteps',maxsteps);
experience = sim(env,agent,simOptions);
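As an optional follow-up, you can inspect the logged observations from the simulation output. The sketch below assumes the observation timeseries is stored under the name set earlier ('observations') and that the first two channels are the x and y positions; adjust the indices if the ordering in your model differs.

% Optional: plot the simulated robot trajectory (channel indices are assumptions).
obsTS = experience.Observation.observations;   % timeseries of logged observations
xPos = squeeze(obsTS.Data(1,1,:));             % assumed: channel 1 is the x position
yPos = squeeze(obsTS.Data(2,1,:));             % assumed: channel 2 is the y position
figure
plot(xPos,yPos)
xlabel('x (m)')
ylabel('y (m)')
title('Simulated robot trajectory')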