Actor-Critic Agents

The actor-critic (AC) agent implements a model-free, online, on-policy reinforcement learning method that you can use for actor-critic algorithms such as A2C and A3C. The goal of this agent is to directly optimize the policy (actor) while training a critic to estimate the return, or expected future reward. [1]

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

AC agents can be trained in environments with the following observation and action spaces.

Observation Space         Action Space
Discrete or continuous    Discrete or continuous
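
For example, the following sketch queries the specification objects that define these spaces before agent creation. It assumes the predefined cart-pole environment as a stand-in for your own environment.

    % Query the observation and action specifications of an environment.
    env = rlPredefinedEnv('CartPole-Discrete');

    obsInfo = getObservationInfo(env);   % rlNumericSpec: continuous observation space
    actInfo = getActionInfo(env);        % rlFiniteSetSpec: discrete action space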

During training, an AC agent:

  • Estimates probabilities of taking each action in the action space and randomly selects actions based on the probability distribution (see the sampling sketch after this list).

  • Interacts with the environment for multiple steps using the current policy before updating the actor and critic properties.
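
A minimal sketch of that action selection in plain MATLAB. The probability vector and action set here are made-up values; during training, the probabilities come from the actor.

    actionProbs = [0.5 0.25 0.25];        % example output of the actor for one observation
    actions     = [-1 0 1];               % example discrete action set

    % Randomly select an action according to the probability distribution.
    idx = find(rand <= cumsum(actionProbs), 1);
    selectedAction = actions(idx);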

Actor and Critic Function

To estimate the policy and value function, an AC agent maintains two function approximators:

  • Actor μ(S) — The actor takes observation S and outputs the probabilities of taking each action in the action space when in state S.

  • Critic V(S) — The critic takes observation S and outputs the corresponding expectation of the discounted long-term reward.

When training is complete, the trained optimal policy is stored in actor μ(S).

For more information on creating actors and critics for function approximation, see Create Policy and Value Function Representations.
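
The following sketch shows one possible network structure for the two approximators, assuming a four-element observation vector and two discrete actions. The layer sizes and names are illustrative choices, not toolbox requirements.

    % Critic V(S): observation in, scalar state-value estimate out.
    criticNet = [
        imageInputLayer([4 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(32,'Name','CriticFC1')
        reluLayer('Name','CriticRelu1')
        fullyConnectedLayer(1,'Name','CriticValue')];

    % Actor mu(S): observation in, one probability per discrete action out.
    actorNet = [
        imageInputLayer([4 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(2,'Name','ActorFC')
        softmaxLayer('Name','ActorProb')];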

Agent Creation

To create an AC agent:

  1. Create an actor using an rlStochasticActorRepresentation object.

  2. Create a critic using an rlValueRepresentation object.

  3. Specify agent options using an rlACAgentOptions object.

  4. Create the agent using an rlACAgent object.
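
The following sketch walks through these four steps for the predefined cart-pole environment (four-element observation, two discrete actions). The network sizes, layer names, and option values are illustrative choices.

    env = rlPredefinedEnv('CartPole-Discrete');
    obsInfo = getObservationInfo(env);
    actInfo = getActionInfo(env);

    repOpts = rlRepresentationOptions('LearnRate',8e-3,'GradientThreshold',1);

    % 1. Actor: outputs one probability per discrete action.
    actorNet = [
        imageInputLayer([4 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(2,'Name','fc')
        softmaxLayer('Name','actionProb')];
    actor = rlStochasticActorRepresentation(actorNet,obsInfo,actInfo, ...
        'Observation',{'state'},repOpts);

    % 2. Critic: outputs a scalar state-value estimate.
    criticNet = [
        imageInputLayer([4 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(32,'Name','fc')
        reluLayer('Name','relu')
        fullyConnectedLayer(1,'Name','value')];
    critic = rlValueRepresentation(criticNet,obsInfo,'Observation',{'state'},repOpts);

    % 3. Agent options referenced by the training algorithm below.
    agentOpts = rlACAgentOptions('NumStepsToLookAhead',32, ...
        'DiscountFactor',0.99,'EntropyLossWeight',0.01);

    % 4. Create the AC agent.
    agent = rlACAgent(actor,critic,agentOpts);

You can then train the agent against the environment by calling train(agent,env,trainOpts), where trainOpts is an rlTrainingOptions object.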

Training Algorithm

AC agents use the following training algorithm. To configure the algorithm, specify options using an rlACAgentOptions object.

  1. Initialize the actor μ(S) with random parameter values θμ.

  2. Initialize the critic V(S) with random parameter values θV.

  3. Generate N experiences by following the current policy. The episode experience sequence is:

    S_{t_s}, A_{t_s}, R_{t_s+1}, S_{t_s+1}, \ldots, S_{t_s+N-1}, A_{t_s+N-1}, R_{t_s+N}, S_{t_s+N}

    Here, St is a state observation, At is an action taken from that state, St+1 is the next state, and Rt+1 is the reward received for moving from St to St+1.

    When in state St, the agent computes the probability of taking each action in the action space using μ(St) and randomly selects action At based on the probability distribution.

    ts is the starting time step of the current set of N experiences. At the beginning of the training episode, ts = 1. For each subsequent set of N experiences in the same training episode, ts = ts + N.

    If the current set of experiences does not end at a terminal state, N is equal to the NumStepsToLookAhead option value. Otherwise, N is less than NumStepsToLookAhead and S_{ts+N} is the terminal state.

  4. For each episode step t = ts+1, ts+2, …, ts+N, compute the return Gt, which is the sum of the reward for that step and the discounted future reward. If S_{ts+N} is not a terminal state, the discounted future reward includes the discounted state value function, computed using the critic network V. (A numeric sketch of this step and step 5 follows the algorithm.)

    G_t = \sum_{k=t}^{t_s+N} \left( \gamma^{k-t} R_k \right) + b \, \gamma^{N-t+1} V(S_{t_s+N} \mid \theta_V)

    Here, b is 0 if S_{ts+N} is a terminal state and 1 otherwise.

    To specify the discount factor γ, use the DiscountFactor option.

  5. Compute the advantage function Dt.

    D_t = G_t - V(S_t \mid \theta_V)

  6. Accumulate the gradients for the actor network by following the policy gradient to maximize the expected discounted reward.

    d\theta_\mu = \sum_{t=1}^{N} \nabla_{\theta_\mu} \ln \mu(S_t \mid \theta_\mu) \, D_t

  7. Accumulate the gradients for the critic network by minimizing the mean square error loss between the estimated value function V(St) and the computed target return Gt across all N experiences. If the EntropyLossWeight option is greater than zero, then additional gradients are accumulated to minimize the entropy loss function.

    d\theta_V = \sum_{t=1}^{N} \nabla_{\theta_V} \left( G_t - V(S_t \mid \theta_V) \right)^2

  8. Update the actor parameters by applying the gradients.

    \theta_\mu = \theta_\mu + \alpha \, d\theta_\mu

    Here, α is the learning rate of the actor. Specify the learning rate when you create the actor representation by setting the LearnRate option in the rlRepresentationOptions object.

  9. Update the critic parameters by applying the gradients.

    \theta_V = \theta_V + \beta \, d\theta_V

    Here, β is the learning rate of the critic. Specify the learning rate when you create the critic representation by setting the LearnRate option in the rlRepresentationOptions object.

  10. Repeat steps 3 through 9 for each training episode until training is complete.
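
The following plain MATLAB sketch (not toolbox code) illustrates steps 4 and 5 for a single set of N experiences. The rewards, critic estimates, and discount factor are made-up sample values, and the time indices are local to the window (t = 1 corresponds to ts + 1).

    gamma = 0.99;                      % discount factor (DiscountFactor)
    R     = [1; 1; 1; 0.5];            % rewards received at steps 1 ... N of the window
    V     = [3.2; 2.9; 2.1; 1.4];      % critic estimates V(S_t) for those steps
    Vend  = 1.0;                       % critic estimate V(S_{ts+N}) for the final state
    isTerminal = false;                % true if S_{ts+N} is a terminal state

    N = numel(R);
    b = double(~isTerminal);           % b = 0 for a terminal state, 1 otherwise
    G = zeros(N,1);                    % N-step returns
    D = zeros(N,1);                    % advantages
    for t = 1:N
        discounts = gamma.^(0:N-t).';  % gamma^(k-t) for k = t ... N
        G(t) = sum(discounts .* R(t:N)) + b*gamma^(N-t+1)*Vend;
        D(t) = G(t) - V(t);            % D_t = G_t - V(S_t)
    end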

For simplicity, the actor and critic updates in this algorithm show a gradient update using basic stochastic gradient descent. The actual gradient update method depends on the optimizer specified using rlRepresentationOptions.
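
For example, the following sketch sets the learning rate and optimizer in an rlRepresentationOptions object (with illustrative values), which you pass when creating the actor or critic representation.

    repOpts = rlRepresentationOptions( ...
        'LearnRate',8e-3, ...          % alpha or beta in the update equations above
        'Optimizer','adam', ...        % gradient update method actually applied
        'GradientThreshold',1);        % gradient clipping threshold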

References

[1] Mnih, Volodymyr, et al. "Asynchronous Methods for Deep Reinforcement Learning." Proceedings of the 33rd International Conference on Machine Learning, 2016.

See Also

rlACAgent | rlACAgentOptions

Related Topics

Reinforcement Learning Agents
Create Policy and Value Function Representations