Deep Q-Network Agents

The deep Q-network (DQN) algorithm is a model-free, online, off-policy reinforcement learning method. A DQN agent is a value-based reinforcement learning agent that trains a critic to estimate the return or future rewards. DQN is a variant of Q-learning. For more information on Q-learning, see Q-Learning Agents.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

DQN agents can be trained in environments with the following observation and action spaces.

Observation Space        Action Space
Continuous or discrete   Discrete

During training, the agent:

  • Updates the critic properties at each time step during learning.

  • Explores the action space using epsilon-greedy exploration. During each control interval, the agent selects a random action with probability ϵ; otherwise, with probability 1−ϵ, it selects the greedy action, that is, the action for which the value function estimate is greatest.

  • Stores past experience using a circular experience buffer. The agent updates the critic based on a mini-batch of experiences randomly sampled from the buffer. (A configuration sketch for these options follows this list.)
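
For illustration, the following sketch sets the exploration and experience-buffer options on an rlDQNAgentOptions object. The property names are as documented; the numeric values are arbitrary assumptions for the example, not recommendations.

    % Configure epsilon-greedy exploration and the circular experience buffer.
    % Values here are illustrative assumptions, not tuned recommendations.
    opt = rlDQNAgentOptions;
    opt.EpsilonGreedyExploration.Epsilon = 1.0;       % initial exploration probability
    opt.EpsilonGreedyExploration.EpsilonDecay = 1e-3; % per-step decay of epsilon
    opt.EpsilonGreedyExploration.EpsilonMin = 0.01;   % lower bound on epsilon
    opt.ExperienceBufferLength = 1e6;                 % buffer capacity (experiences)
    opt.MiniBatchSize = 64;                           % experiences sampled per update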

Critic Function

To estimate the value function, a DQN agent maintains two function approximators:

  • Critic Q(S,A) — The critic takes observation S and action A as inputs and outputs the corresponding expectation of the long-term reward.

  • Target critic Q'(S,A) — To improve the stability of the optimization, the agent periodically updates the target critic based on the latest critic parameter values.

Both Q(S,A) and Q'(S,A) have the same structure and parameterization.

For more information on creating critics for value function approximation, see Create Policy and Value Function Representations.

When training is complete, the trained value function approximator is stored in critic Q(S,A).

Agent Creation

To create a DQN agent:

  1. Create a critic using an rlQValueRepresentation object.

  2. Specify agent options using an rlDQNAgentOptions object.

  3. Create the agent using an rlDQNAgent object.

DQN agents support critics that use recurrent deep neural networks as function approximators.
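
As a minimal sketch of these three steps, the following example assumes a hypothetical environment with a four-dimensional continuous observation and two discrete actions, and a release in which rlQValueRepresentation and featureInputLayer are both available. The network architecture and option values are illustrative assumptions.

    % Hypothetical observation and action specifications.
    obsInfo = rlNumericSpec([4 1]);
    actInfo = rlFiniteSetSpec([-1 1]);

    % 1. Critic network: takes the observation and outputs one Q-value per
    %    discrete action (multi-output Q-value representation).
    net = [
        featureInputLayer(4,'Normalization','none','Name','state')
        fullyConnectedLayer(24,'Name','fc1')
        reluLayer('Name','relu1')
        fullyConnectedLayer(numel(actInfo.Elements),'Name','out')];
    critic = rlQValueRepresentation(net,obsInfo,actInfo,'Observation',{'state'});

    % 2. Agent options.
    opt = rlDQNAgentOptions('UseDoubleDQN',true,'MiniBatchSize',64);

    % 3. Agent.
    agent = rlDQNAgent(critic,opt);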

Training Algorithm

DQN agents use the following training algorithm, in which they update their critic model at each time step. To configure the training algorithm, specify options using rlDQNAgentOptions.

  • Initialize the critic Q(S,A) with random parameter values θQ, and initialize the target critic with the same values: θQ' = θQ.

  • For each training time step:

    1. For the current observation S, select a random action A with probability ϵ. Otherwise, select the action for which the critic value function is greatest.

      A = \arg\max_{A} Q(S,A\,|\,\theta_Q)

      To specify ϵ and its decay rate, use the EpsilonGreedyExploration option.

    2. Execute action A. Observe the reward R and next observation S'.

    3. Store the experience (S,A,R,S') in the experience buffer.

    4. Sample a random mini-batch of M experiences (Si,Ai,Ri,S'i) from the experience buffer. To specify M, use the MiniBatchSize option.

    5. If S'i is a terminal state, set the value function target yi to Ri. Otherwise, set it to:

      A_{\max} = \arg\max_{A'} Q(S_i',A'\,|\,\theta_Q)

      y_i = R_i + \gamma\, Q'(S_i',A_{\max}\,|\,\theta_{Q'}) \qquad \text{(double DQN)}

      y_i = R_i + \gamma \max_{A'} Q'(S_i',A'\,|\,\theta_{Q'}) \qquad \text{(DQN)}

      To set the discount factor γ, use the DiscountFactor option. To use double DQN, set the UseDoubleDQN option to true. (A numeric sketch of this target computation follows these steps.)

    6. Update the critic parameters by one-step minimization of the loss L across all sampled experiences.

      L = \frac{1}{M} \sum_{i=1}^{M} \left( y_i - Q(S_i,A_i\,|\,\theta_Q) \right)^2

    7. Update the target critic parameters depending on the target update method. For more information, see Target Update Methods.

    8. Update the probability threshold ϵ for selecting a random action based on the decay rate specified in the EpsilonGreedyExploration option.
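
The following numeric sketch illustrates steps 1, 5, and 6 with a small Q-table standing in for the network critics. It is illustrative only; the agent performs these computations internally on its neural-network critic.

    % Stand-in critics: 2 states x 2 actions (the agent uses neural networks).
    Q  = [0.5 1.2; 0.3 0.8];      % critic Q(S,A)
    Qt = [0.4 1.0; 0.2 0.9];      % target critic Q'(S,A)
    gamma = 0.99;                 % DiscountFactor
    epsilon = 0.1;                % current exploration probability

    % Step 1: epsilon-greedy action selection in state s.
    s = 1;
    if rand < epsilon
        a = randi(2);             % random action
    else
        [~,a] = max(Q(s,:));      % greedy action
    end

    % Steps 5-6: value function target for one sampled experience.
    R = 1; sNext = 2; isTerminal = false;
    if isTerminal
        y = R;                                % terminal: reward only
    else
        yDQN = R + gamma*max(Qt(sNext,:));    % DQN target
        [~,aMax] = max(Q(sNext,:));           % argmax from the critic ...
        yDouble = R + gamma*Qt(sNext,aMax);   % ... value from the target critic
    end

    % Step 6: this experience's squared-error term in the loss L.
    lossTerm = (yDQN - Q(s,a))^2;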

Target Update Methods

DQN agents update their target critic parameters using one of the following target update methods.

  • Smoothing — Update the target parameters at every time step using smoothing factor τ. To specify the smoothing factor, use the TargetSmoothFactor option. (A numeric sketch of this update follows the list.)

    \theta_{Q'} = \tau\,\theta_Q + (1 - \tau)\,\theta_{Q'}

  • Periodic — Update the target parameters periodically without smoothing (TargetSmoothFactor = 1). To specify the update period, use the TargetUpdateFrequency parameter.

  • Periodic Smoothing — Update the target parameters periodically with smoothing.
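
As a numeric illustration of the smoothing update, with stand-in parameter vectors (the agent applies this update to the network parameters internally):

    % Smoothing update; tau plays the role of TargetSmoothFactor.
    tau = 1e-3;
    thetaQ      = [0.5; -1.2; 0.7];   % critic parameters
    thetaTarget = [0.4; -1.0; 0.9];   % target-critic parameters
    thetaTarget = tau*thetaQ + (1 - tau)*thetaTarget;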

To configure the target update method, create an rlDQNAgentOptions object, and set the TargetUpdateFrequency and TargetSmoothFactor options as shown in the following table.

Update Method         TargetUpdateFrequency   TargetSmoothFactor
Smoothing (default)   1                       Less than 1
Periodic              Greater than 1          1
Periodic smoothing    Greater than 1          Less than 1
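
For example, the following sketch constructs an options object for each update method. The update period of 4 is an arbitrary illustrative choice.

    % Smoothing (default): update every step with a small smoothing factor.
    optSmoothing = rlDQNAgentOptions('TargetSmoothFactor',1e-3, ...
        'TargetUpdateFrequency',1);

    % Periodic: copy the critic parameters every 4 steps, without smoothing.
    optPeriodic = rlDQNAgentOptions('TargetSmoothFactor',1, ...
        'TargetUpdateFrequency',4);

    % Periodic smoothing: smoothed update every 4 steps.
    optPeriodicSmoothing = rlDQNAgentOptions('TargetSmoothFactor',1e-3, ...
        'TargetUpdateFrequency',4);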

