Policy Gradient Agents

The policy gradient (PG) algorithm is a model-free, online, on-policy reinforcement learning method. A PG agent is a policy-based reinforcement learning agent that directly computes an optimal policy that maximizes the long-term reward.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

PG agents can be trained in environments with the following observation and action spaces.

Observation Space         Action Space
Discrete or continuous    Discrete or continuous

During training, a PG agent:

  • Estimates probabilities of taking each action in the action space and randomly selects actions based on the probability distribution.

  • Completes a full training episode using the current policy before learning from the experience and updating the policy parameters.
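
For illustration, the following minimal sketch (plain MATLAB, outside the toolbox) shows one way to sample an action index from such a probability distribution; the probability values are assumptions for the example.

    % Sample an action index according to assumed actor output probabilities.
    actionProbabilities = [0.2 0.5 0.3];
    actionIndex = find(rand <= cumsum(actionProbabilities), 1);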

Actor and Critic Functions

PG agents represent the policy using an actor function approximator μ(S). The actor takes observation S and outputs the probabilities of taking each action in the action space when in state S.

To reduce the variance during gradient estimation, PG agents can use a baseline value function, which is estimated using a critic function approximator, V(S). The critic computes the value function for a given observation state.

For more information on creating actors and critics for function approximation, see Create Policy and Value Function Representations.

Agent Creation

To create a PG agent:

  1. Create an actor representation using an rlStochasticActorRepresentation object.

  2. If you are using a baseline function, create a critic using an rlValueRepresentation object.

  3. Specify agent options using the rlPGAgentOptions object.

  4. Create the agent using an rlPGAgent object.
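
For example, the following sketch walks through these four steps for a hypothetical environment with a four-element observation vector and two discrete actions. The specification objects, network layers, and option values are illustrative assumptions, not requirements of the agent.

    % Hypothetical environment interface (assumed for this example).
    obsInfo = rlNumericSpec([4 1]);          % continuous 4-element observation
    actInfo = rlFiniteSetSpec([-1 1]);       % two discrete actions

    % 1. Actor: the network outputs one probability per discrete action.
    actorNetwork = [
        imageInputLayer([4 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(16,'Name','fc1')
        reluLayer('Name','relu1')
        fullyConnectedLayer(numel(actInfo.Elements),'Name','fcAction')
        softmaxLayer('Name','actionProb')];
    actorOpts = rlRepresentationOptions('LearnRate',1e-2,'GradientThreshold',1);
    actor = rlStochasticActorRepresentation(actorNetwork,obsInfo,actInfo, ...
        'Observation',{'state'},actorOpts);

    % 2. Baseline critic: the network outputs a scalar state-value estimate.
    baselineNetwork = [
        imageInputLayer([4 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(16,'Name','fc1')
        reluLayer('Name','relu1')
        fullyConnectedLayer(1,'Name','value')];
    baselineOpts = rlRepresentationOptions('LearnRate',5e-3,'GradientThreshold',1);
    baseline = rlValueRepresentation(baselineNetwork,obsInfo, ...
        'Observation',{'state'},baselineOpts);

    % 3. Agent options.
    agentOpts = rlPGAgentOptions('UseBaseline',true,'DiscountFactor',0.99);

    % 4. Create the agent.
    agent = rlPGAgent(actor,baseline,agentOpts);

To create a PG agent without a baseline, omit the critic and pass only the actor (and, optionally, the agent options) to rlPGAgent.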

Training Algorithm

PG agents use the REINFORCE (Monte Carlo policy gradient) algorithm [1], either with or without a baseline. To configure the training algorithm, specify options using rlPGAgentOptions.
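
For instance, the following sketch (with assumed values) sets the options that most directly shape the training algorithm: UseBaseline selects between the two variants described below, and EntropyLossWeight enables the additional entropy-loss gradients mentioned in the training steps.

    % Illustrative option values (assumptions for the example).
    agentOpts = rlPGAgentOptions( ...
        'UseBaseline',true, ...          % train with a baseline critic
        'EntropyLossWeight',0.01, ...    % add entropy-loss gradients
        'DiscountFactor',0.99);          % discount factor gamma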

REINFORCE Algorithm

  1. Initialize the actor μ(S) with random parameter values θμ.

  2. For each training episode, generate the episode experience by following actor policy μ(S). To select an action, the actor generates probabilities for each action in the action space, then the agent randomly selects an action based on the probability distribution. The agent takes actions until it reaches the terminal state, ST. The episode experience consists of the sequence:

    S_0, A_0, R_1, S_1, \ldots, S_{T-1}, A_{T-1}, R_T, S_T

    Here, St is a state observation, At is the action taken from that state, St+1 is the next state, and Rt+1 is the reward received for moving from St to St+1.

  3. For each state in the episode sequence, that is, for t = 1, 2, …, T-1, calculate the return Gt, which is the discounted future reward.

    G_t = \sum_{k=t}^{T} \gamma^{k-t} R_k

  4. Accumulate the gradients for the actor network by following the policy gradient to maximize the expected discounted reward. If the EntropyLossWeight option is greater than zero, then additional gradients are accumulated to minimize the entropy loss function.

    d\theta^\mu = \sum_{t=1}^{T-1} G_t \nabla_{\theta^\mu} \ln \mu(S_t|\theta^\mu)

  5. Update the actor parameters by applying the gradients.

    \theta^\mu = \theta^\mu + \alpha \, d\theta^\mu

    Here, α is the learning rate of the actor. Specify the learning rate when you create the actor representation by setting the LearnRate option in the rlRepresentationOptions object. For simplicity, this step shows a gradient update using basic stochastic gradient descent. The actual gradient update method depends on the optimizer specified using rlRepresentationOptions.

  6. Repeat steps 2 through 5 for each training episode until training is complete.
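
As a concrete, self-contained illustration of steps 2 through 5, the following sketch (plain MATLAB, not toolbox internals) applies REINFORCE to a softmax policy that is linear in state-action features, for which the gradient of the log-probability has a simple closed form. The toy episode, feature map, and learning rate are assumptions for the example.

    % Standalone REINFORCE sketch for a linear softmax policy. Arrays are
    % 1-based: S(:,t) is the state in which action A(t) was taken, and R(t)
    % is the reward received after that action.
    gamma = 0.99;                         % discount factor
    alpha = 0.05;                         % actor learning rate
    phi = @(s,a) [s*(a==1); s*(a==2)];    % one feature block per action
    theta = zeros(4,1);                   % actor parameters

    S = [ 0.1  0.4 -0.3;                  % toy episode: three steps
          0.5 -0.2  0.7];
    A = [1 2 1];                          % actions taken in each state
    R = [0 1 2];                          % rewards received after each action
    T = numel(A);

    % Step 3: discounted returns G_t = sum_{k=t}^{T} gamma^(k-t) R_k,
    % computed backward in time.
    G = zeros(1,T);
    running = 0;
    for t = T:-1:1
        running = R(t) + gamma*running;
        G(t) = running;
    end

    % Step 4: accumulate the policy gradient. For a linear softmax policy,
    % grad_theta ln mu(A_t|S_t) = phi(S_t,A_t) - sum_a mu(a|S_t) phi(S_t,a).
    dtheta = zeros(size(theta));
    for t = 1:T
        s = S(:,t);
        prefs = [theta.'*phi(s,1); theta.'*phi(s,2)];   % action preferences
        probs = exp(prefs - max(prefs));
        probs = probs/sum(probs);                       % softmax probabilities
        expectedPhi = probs(1)*phi(s,1) + probs(2)*phi(s,2);
        dtheta = dtheta + G(t)*(phi(s,A(t)) - expectedPhi);
    end

    % Step 5: gradient ascent on the expected discounted reward.
    theta = theta + alpha*dtheta;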

REINFORCE with Baseline Algorithm

  1. Initialize the actor μ(S) with random parameter values θμ.

  2. Initialize the critic V(S) with random parameter values θV.

  3. For each training episode, generate the episode experience by following actor policy μ(S). The episode experience consists of the sequence:

    S_0, A_0, R_1, S_1, \ldots, S_{T-1}, A_{T-1}, R_T, S_T

  4. For t = 1, 2, …, T:

    • Calculate the return Gt, which is the discounted future reward.

      G_t = \sum_{k=t}^{T} \gamma^{k-t} R_k

    • Compute the advantage function δt using the baseline value function estimate from the critic.

      \delta_t = G_t - V(S_t|\theta^V)

  5. Accumulate the gradients for the critic network.

    d\theta^V = \sum_{t=1}^{T-1} \delta_t \nabla_{\theta^V} V(S_t|\theta^V)

  6. Accumulate the gradients for the actor network. If the EntropyLossWeight option is greater than zero, then additional gradients are accumulated to minimize the entropy loss function.

    d\theta^\mu = \sum_{t=1}^{T-1} \delta_t \nabla_{\theta^\mu} \ln \mu(S_t|\theta^\mu)

  7. Update the critic parameters θV.

    \theta^V = \theta^V + \beta \, d\theta^V

    Here, β is the learning rate of the critic. Specify the learning rate when you create the critic representation by setting the LearnRate option in the rlRepresentationOptions object.

  8. Update the actor parameters θμ.

    \theta^\mu = \theta^\mu + \alpha \, d\theta^\mu

  9. Repeat steps 3 through 8 for each training episode until training is complete.

For simplicity, the actor and critic updates in this algorithm show a gradient update using basic stochastic gradient descent. The actual gradient update method depends on the optimizer specified using rlRepresentationOptions.
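
To make the baseline-specific steps concrete, the following standalone sketch (plain MATLAB, outside the toolbox) computes the advantages and the critic update for a hypothetical linear value function V(S) = wᵀS; the episode data and learning rate are assumptions, and the actor gradient and update follow the earlier REINFORCE sketch with δt used in place of Gt.

    % Baseline sketch for a linear value function V(s) = w.'*s, for which
    % grad_w V(s) = s. Episode data and step size are assumed for the example.
    gamma = 0.99;                         % discount factor
    beta  = 0.1;                          % critic learning rate
    S = [ 0.1  0.4 -0.3;                  % states in which actions were taken
          0.5 -0.2  0.7];
    R = [0 1 2];                          % rewards received after each action
    T = numel(R);
    w = zeros(2,1);                       % critic parameters

    % Step 4: discounted returns and advantages delta_t = G_t - V(S_t|w).
    G = zeros(1,T);
    running = 0;
    for t = T:-1:1
        running = R(t) + gamma*running;
        G(t) = running;
    end
    delta = G - w.'*S;                    % V(S_t|w) = w.'*S(:,t) for every step

    % Step 5: accumulate the critic gradient.
    dw = zeros(size(w));
    for t = 1:T
        dw = dw + delta(t)*S(:,t);        % grad_w V(S_t|w) = S(:,t)
    end

    % Step 7: critic update (step 6 reuses the actor gradient from the earlier
    % sketch with delta(t) in place of G(t); step 8 updates theta as before).
    w = w + beta*dw;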

References

[1] R. J. Williams, "Simple statistical gradient-following algorithms for connectionist reinforcement learning," Machine Learning, vol. 8, no. 3–4, pp. 229–256, 1992.
