Proximal policy optimization (PPO) is a model-free, online, on-policy, policy gradient reinforcement learning method. The algorithm alternates between sampling data through environmental interaction and optimizing a clipped surrogate objective function using stochastic gradient descent. The clipped surrogate objective improves training stability by limiting the size of the policy change at each step [1].
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
PPO agents can be trained in environments with the following observation and action spaces.
| Observation Space | Action Space |
| --- | --- |
| Discrete or continuous | Discrete or continuous |
During training, a PPO agent:
- Estimates probabilities of taking each action in the action space and randomly selects actions based on the probability distribution.
- Interacts with the environment for multiple steps using the current policy before using mini-batches to update the actor and critic properties over multiple epochs.
To estimate the policy and value function, a PPO agent maintains two function approximators:
- Actor μ(S) — The actor takes observation S and returns the probabilities of taking each action in the action space when in state S.
- Critic V(S) — The critic takes observation S and returns the corresponding expectation of the discounted long-term reward.
When training is complete, the trained optimal policy is stored in actor μ(S).
For more information on creating actors and critics for function approximation, see Create Policy and Value Function Representations.
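For reference, the quantity the critic estimates can be written as an expected discounted return. The notation below is a common convention (γ is the discount factor and R_{t+k+1} are future rewards), not a formula quoted from this page.

```latex
% Expected discounted long-term reward estimated by the critic
% (common convention; gamma is the discount factor, not quoted from this page).
V(S) \approx \mathbb{E}\left[\, \sum_{k=0}^{\infty} \gamma^{k}\, R_{t+k+1} \;\middle|\; S_t = S \right]
```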
You can create a PPO agent with default actor and critic representations based on the observation and action specifications from the environment. To do so, perform the following steps. A minimal sketch of this workflow follows the list.

1. Create observation specifications for your environment. If you already have an environment interface object, you can obtain these specifications using getObservationInfo.
2. Create action specifications for your environment. If you already have an environment interface object, you can obtain these specifications using getActionInfo.
3. If needed, specify the number of neurons in each learnable layer or whether to use an LSTM layer. To do so, create an agent initialization option object using rlAgentInitializationOptions.
4. Specify agent options using an rlPPOAgentOptions object.
5. Create the agent using an rlPPOAgent object.
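For example, the following is a minimal sketch of this workflow. It assumes the predefined cart-pole environment that ships with the toolbox; the option values shown are illustrative choices, not requirements.

```matlab
% Minimal sketch of default PPO agent creation.
% Assumes the predefined cart-pole environment; option values are illustrative.
env = rlPredefinedEnv('CartPole-Discrete');

% Steps 1-2: observation and action specifications from the environment interface.
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Step 3 (optional): initialization options for the default actor and critic networks.
initOpts = rlAgentInitializationOptions('NumHiddenUnit',128);

% Step 4 (optional): agent options.
opt = rlPPOAgentOptions('ExperienceHorizon',512,'ClipFactor',0.2);

% Step 5: create the agent with default actor and critic representations.
agent = rlPPOAgent(obsInfo,actInfo,initOpts,opt);
```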
Alternatively, you can create actor and critic representations and use these representations to create your agent. In this case, ensure that the input and output dimensions of the actor and critic representations match the corresponding action and observation specifications of the environment. A sketch of this workflow also follows the list.

1. Create an actor using an rlStochasticActorRepresentation object.
2. Create a critic using an rlValueRepresentation object.
3. If needed, specify agent options using an rlPPOAgentOptions object.
4. Create the agent using the rlPPOAgent function.
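The following is a minimal sketch of this second workflow. It assumes an existing environment interface env with a discrete action space and a single observation channel; the layer sizes and the input layer name 'state' are illustrative choices.

```matlab
% Minimal sketch of creating a PPO agent from custom actor and critic representations.
% Assumes an existing environment interface `env` with a discrete action space;
% layer sizes and the input name 'state' are illustrative.
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
numObs  = obsInfo.Dimension(1);
numAct  = numel(actInfo.Elements);

% Critic network: observation -> scalar state value V(S).
criticNet = [
    featureInputLayer(numObs,'Normalization','none','Name','state')
    fullyConnectedLayer(64,'Name','fc')
    reluLayer('Name','relu')
    fullyConnectedLayer(1,'Name','value')];
critic = rlValueRepresentation(criticNet,obsInfo,'Observation',{'state'});

% Actor network: observation -> one output per discrete action.
actorNet = [
    featureInputLayer(numObs,'Normalization','none','Name','state')
    fullyConnectedLayer(64,'Name','fc')
    reluLayer('Name','relu')
    fullyConnectedLayer(numAct,'Name','action')];
actor = rlStochasticActorRepresentation(actorNet,obsInfo,actInfo, ...
    'Observation',{'state'});

% Create the agent from the actor and critic, optionally with agent options.
opt   = rlPPOAgentOptions('ClipFactor',0.2,'MiniBatchSize',64);
agent = rlPPOAgent(actor,critic,opt);
```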
PPO agents support actors and critics that use recurrent deep neural networks as function approximators.
For more information on creating actors and critics for function approximation, see Create Policy and Value Function Representations.
PPO agents use the following training algorithm. To configure the training algorithm, specify options using an rlPPOAgentOptions object.
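For example, the options that appear in the algorithm below can all be set in a single rlPPOAgentOptions call; the values shown here are illustrative.

```matlab
% Training-algorithm options referenced below (values shown are illustrative).
opt = rlPPOAgentOptions( ...
    'ExperienceHorizon',512, ...         % N: steps collected before each learning phase
    'MiniBatchSize',128, ...             % M: mini-batch size
    'NumEpoch',3, ...                    % K: learning epochs per experience set
    'AdvantageEstimateMethod','gae', ... % 'gae' or 'finite-horizon'
    'GAEFactor',0.95, ...                % lambda (used only with 'gae')
    'DiscountFactor',0.99, ...           % gamma
    'ClipFactor',0.2, ...                % epsilon in the clipped surrogate objective
    'EntropyLossWeight',0.01);           % weight of the entropy loss term
```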
1. Initialize the actor μ(S) with random parameter values θ_μ.
2. Initialize the critic V(S) with random parameter values θ_V.
3. Generate N experiences by following the current policy. The experience sequence is

S_{t_s}, A_{t_s}, R_{t_s+1}, S_{t_s+1}, …, S_{t_s+N−1}, A_{t_s+N−1}, R_{t_s+N}, S_{t_s+N}

Here, S_t is a state observation, A_t is an action taken from that state, S_{t+1} is the next state, and R_{t+1} is the reward received for moving from S_t to S_{t+1}.

When in state S_t, the agent computes the probability of taking each action in the action space using μ(S_t) and randomly selects action A_t based on the probability distribution.

t_s is the starting time step of the current set of N experiences. At the beginning of the training episode, t_s = 1. For each subsequent set of N experiences in the same training episode, t_s ← t_s + N.

For each experience sequence that does not contain a terminal state, N is equal to the ExperienceHorizon option value. Otherwise, N is less than ExperienceHorizon and S_{t_s+N} is the terminal state.
4. For each episode step t = t_s + 1, t_s + 2, …, t_s + N, compute the return and advantage function using the method specified by the AdvantageEstimateMethod option.
- Finite Horizon (AdvantageEstimateMethod = "finite-horizon") — Compute the return G_t, which is the sum of the reward for that step and the discounted future reward [2]. The discounted future reward includes a critic bootstrap term weighted by b, where b is 0 if S_{t_s+N} is a terminal state and 1 otherwise. That is, if S_{t_s+N} is not a terminal state, the discounted future reward includes the discounted state value function, computed using the critic network V. Then compute the advantage function D_t. (One standard form of these computations is sketched after this list.)
- Generalized Advantage Estimator (AdvantageEstimateMethod = "gae") — Compute the advantage function D_t, which is the discounted sum of temporal difference errors [3]. The temporal difference error at the end of the sequence includes a critic bootstrap term weighted by b, where b is 0 if S_{t_s+N} is a terminal state and 1 otherwise. λ is a smoothing factor specified using the GAEFactor option. Then compute the return G_t. (These computations are also sketched after this list.)
To specify the discount factor γ for either method, use the DiscountFactor option.
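One standard form of these computations, written with the page's conventions (R_k is the reward received on reaching S_k, and V_k denotes V(S_k;θ_V)), is sketched below. The exact indexing is an assumption consistent with [2] and [3], not a quotation of this page.

```latex
% Sketch only: indexing is assumed, consistent with [2] and [3].
% V_k \equiv V(S_k;\theta_V); b = 0 if S_{t_s+N} is terminal, b = 1 otherwise.
% Finite horizon ("finite-horizon"):
G_t = \sum_{k=t}^{t_s+N} \gamma^{\,k-t} R_k \;+\; b\,\gamma^{\,t_s+N-t+1}\, V_{t_s+N},
\qquad
D_t = G_t - V_{t-1}
% Generalized advantage estimator ("gae"):
\delta_k = R_k + \gamma V_k - V_{k-1}
\quad\text{(with the final bootstrap term } \gamma V_{t_s+N} \text{ multiplied by } b\text{)},
\qquad
D_t = \sum_{k=t}^{t_s+N} (\gamma\lambda)^{\,k-t}\, \delta_k,
\qquad
G_t = D_t + V_{t-1}
```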
5. Learn from mini-batches of experiences over K epochs. To specify K, use the NumEpoch option. For each learning epoch:

- Sample a random mini-batch data set of size M from the current set of experiences. To specify M, use the MiniBatchSize option. Each element of the mini-batch data set contains a current experience and the corresponding return and advantage function values.
- Update the critic parameters by minimizing the loss L_critic across all sampled mini-batch data.
- Update the actor parameters by minimizing the loss L_actor across all sampled mini-batch data. If the EntropyLossWeight option is greater than zero, then an additional entropy loss is added to L_actor, which encourages policy exploration. (One standard form of both losses is sketched after the definitions below.)
Here:
- D_i and G_i are the advantage function and return value for the ith element of the mini-batch, respectively.
- μ_i(S_i|θ_μ) is the probability of taking action A_i when in state S_i, given the updated policy parameters θ_μ.
- μ_i(S_i|θ_{μ,old}) is the probability of taking action A_i when in state S_i, given the previous policy parameters θ_{μ,old} from before the current learning epoch.
- ε is the clip factor specified using the ClipFactor option.
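One standard form of the two losses, consistent with the definitions above and with the clipped surrogate objective of [1], is sketched below. The normalization constants and the entropy term are assumptions; H_i denotes the entropy of the action distribution μ(S_i;θ_μ) and w is the EntropyLossWeight.

```latex
% Sketch only: normalization constants and the entropy term are assumptions.
% w is the EntropyLossWeight; H_i is the entropy of the action distribution \mu(S_i;\theta_\mu).
L_{\mathrm{critic}}(\theta_V) = \frac{1}{2M} \sum_{i=1}^{M} \bigl( G_i - V(S_i;\theta_V) \bigr)^2

r_i(\theta_\mu) = \frac{\mu_i(S_i \mid \theta_\mu)}{\mu_i(S_i \mid \theta_{\mu,\mathrm{old}})}

L_{\mathrm{actor}}(\theta_\mu) = -\frac{1}{M} \sum_{i=1}^{M}
    \min\Bigl( r_i(\theta_\mu)\, D_i,\;
    \mathrm{clip}\bigl(r_i(\theta_\mu),\, 1-\varepsilon,\, 1+\varepsilon\bigr)\, D_i \Bigr)
    \;-\; \frac{w}{M} \sum_{i=1}^{M} H_i(\theta_\mu, S_i)
```

Clipping the probability ratio r_i(θ_μ) to the interval [1−ε, 1+ε] is what limits the size of the policy change at each update, as described at the start of this page.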
6. Repeat steps 3 through 5 until the training episode reaches a terminal state.
[1] Schulman, John, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. “Proximal Policy Optimization Algorithms.” ArXiv:1707.06347 [Cs], July 19, 2017. https://arxiv.org/abs/1707.06347.
[2] Mnih, Volodymyr, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. “Asynchronous Methods for Deep Reinforcement Learning.” ArXiv:1602.01783 [Cs], February 4, 2016. https://arxiv.org/abs/1602.01783.
[3] Schulman, John, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. “High-Dimensional Continuous Control Using Generalized Advantage Estimation.” ArXiv:1506.02438 [Cs], October 20, 2018. https://arxiv.org/abs/1506.02438.
rlPPOAgent | rlPPOAgentOptions