rlPPOAgentOptions

Options for PPO agent

Description

Use an rlPPOAgentOptions object to specify options for proximal policy optimization (PPO) agents. To create a PPO agent, use rlPPOAgent.

For more information on PPO agents, see Proximal Policy Optimization Agents.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

Creation

Description

opt = rlPPOAgentOptions creates an rlPPOAgentOptions object for use as an argument when creating a PPO agent using all default settings. You can modify the object properties using dot notation.

opt = rlPPOAgentOptions(Name,Value) sets option properties using name-value pairs. For example, rlPPOAgentOptions('DiscountFactor',0.95) creates an option set with a discount factor of 0.95. You can specify multiple name-value pairs. Enclose each property name in quotes.
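
For example, the following call (with illustrative values) sets two options at once:

opt = rlPPOAgentOptions('DiscountFactor',0.95,'EntropyLossWeight',0.02);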

Properties

ExperienceHorizon

Number of steps the agent interacts with the environment before learning from its experience, specified as a positive integer.

The ExperienceHorizon value must be greater than or equal to the MiniBatchSize value.
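
For example, the following settings (illustrative values only) satisfy this constraint:

% Illustrative values: ExperienceHorizon must be >= MiniBatchSize.
opt = rlPPOAgentOptions;
opt.ExperienceHorizon = 512;
opt.MiniBatchSize = 128;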

ClipFactor

Clip factor for limiting the change in each policy update step, specified as a positive scalar less than 1.

EntropyLossWeight

Entropy loss weight, specified as a scalar value between 0 and 1. A higher loss weight value promotes agent exploration by applying a penalty for being too certain about which action to take. Doing so can help the agent move out of local optima.

For episode step t, the entropy loss function, which is added to the loss function for actor updates, is:

H_t = E \sum_{k=1}^{M} \mu_k(S_t|\theta_\mu) \ln \mu_k(S_t|\theta_\mu)

Here:

  • E is the entropy loss weight.

  • M is the number of possible actions.

  • μ_k(S_t|θ_μ) is the probability of taking action A_k when in state S_t, following the current policy.
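
As a numerical illustration (not toolbox code, and using made-up probabilities), the following sketch evaluates this term for a single time step:

% Illustrative only: evaluate the entropy loss term for one time step,
% assuming a discrete policy over M = 4 actions with these probabilities.
mu = [0.7 0.1 0.1 0.1];          % mu_k(S_t|theta_mu), sums to 1
E  = 0.01;                       % entropy loss weight (EntropyLossWeight)
Ht = E * sum(mu .* log(mu));     % term added to the actor loss

% A more uniform (less certain) mu makes Ht more negative, lowering the
% total loss and thereby rewarding exploration.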

MiniBatchSize

Mini-batch size used for each learning epoch, specified as a positive integer.

The MiniBatchSize value must be less than or equal to the ExperienceHorizon value.

NumEpoch

Number of epochs for which the actor and critic networks learn from the current experience set, specified as a positive integer.

AdvantageEstimateMethod

Method for estimating advantage values, specified as one of the following:

  • "gae" — Generalized advantage estimator

  • "finite-horizon" — Finite horizon estimation

For more information on these methods, see the training algorithm information in Proximal Policy Optimization Agents.

GAEFactor

Smoothing factor for the generalized advantage estimator, specified as a scalar value between 0 and 1, inclusive. This option applies only when the AdvantageEstimateMethod option is "gae".
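
For example, with illustrative values, you can select the estimation method and tune the smoothing factor using dot notation:

% Illustrative values: use generalized advantage estimation with a
% custom smoothing factor.
opt = rlPPOAgentOptions;
opt.AdvantageEstimateMethod = "gae";
opt.GAEFactor = 0.97;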

SampleTime

Sample time of the agent, specified as a positive scalar.

Within a Simulink environment, the agent executes every SampleTime seconds of simulation time.

Within a MATLAB environment, the agent executes every time the environment advances. In this case, SampleTime is the time interval between consecutive elements in the output experience returned by sim or train.

DiscountFactor

Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.
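
As a numerical illustration (not toolbox code), the discount factor weights a reward that arrives k steps in the future by DiscountFactor^k:

% Illustrative only: discounted return for a short, made-up reward sequence.
discount = 0.99;                                     % DiscountFactor
rewards  = [1 1 1 1 1];                              % hypothetical rewards
G = sum(discount.^(0:numel(rewards)-1) .* rewards);  % r1 + discount*r2 + ...

% Values of DiscountFactor close to 1 make future rewards nearly as
% valuable as immediate ones; smaller values favor short-term rewards.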

Object Functions

rlPPOAgent    Proximal policy optimization reinforcement learning agent

Examples

Create a PPO agent options object, specifying the experience horizon.

opt = rlPPOAgentOptions('ExperienceHorizon',256)
opt = 
  rlPPOAgentOptions with properties:

          ExperienceHorizon: 256
              MiniBatchSize: 128
                 ClipFactor: 0.2000
          EntropyLossWeight: 0.0100
                   NumEpoch: 3
    AdvantageEstimateMethod: "gae"
                  GAEFactor: 0.9500
                 SampleTime: 1
             DiscountFactor: 0.9900

You can modify options using dot notation. For example, set the agent sample time to 0.5.

opt.SampleTime = 0.5;
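
You can then pass the options object to rlPPOAgent when constructing an agent. The following sketch assumes that suitable actor and critic representations for your environment already exist in the placeholder variables actor and critic:

% Assumes actor and critic have already been created for your environment.
agent = rlPPOAgent(actor,critic,opt);
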
Introduced in R2019b