Notes on Trust Region Policy Optimization

A summary of the Schulman et al. 2015 paper

June 25, 2017 - 5 minute read -
machine-learning notes

Introduction

Reinforcement learning has caught a lot of attention in both academia and industry recently. Trust Region Policy Optimization (TRPO), developed by Schulman et al., is one of the notable RL algorithms, and it comes with a nice theoretical monotonic-improvement guarantee. The goal of this post is to give a brief and intuitive summary of the TRPO algorithm. This post assumes that readers have a basic understanding of reinforcement learning and policy gradient methods.

Ideas

The two main ideas behind TRPO, in my view, are MM algorithms and trust region methods.

MM algorithms

The main idea of MM (minorization-maximization) algorithms is that, intuitively, for a maximization problem, we first find an approximate lower bound of the original objective to serve as a surrogate objective, and then maximize this lower bound so as to improve the original objective (and vice versa for minimization problems). The widely known Expectation-Maximization (EM) algorithm is a subclass of MM algorithms.

In TRPO, Schulman et al. developed a surrogate objective based on Kakade 2001 and Kakade & Langford 2002. The surrogate objective in TRPO, minus a KL penalty term that we will see below, is a lower bound on the original objective, the expected discounted cumulative return of the policy.
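
To make the MM idea concrete, here is a minimal numerical sketch, not from the paper: if a function's second derivative is bounded by a constant $L$, then a quadratic minorizer touches the objective at the current iterate, and maximizing that minorizer in closed form gives a monotone ascent step. The objective `f`, the constant `L`, and the starting point below are all made up for illustration.

```python
import numpy as np

# Toy MM (minorize-maximize) illustration; the objective and the constant L
# are made up and are not from the TRPO paper.
# Because |f''(x)| <= L everywhere, the quadratic
#   g(x | xk) = f(xk) + f'(xk)(x - xk) - (L/2)(x - xk)^2
# is a global lower bound of f that touches f at xk (a "minorizer").
def f(x):
    return -(x - 2.0) ** 2 + 2.0 * np.sin(3.0 * x)

def f_prime(x):
    return -2.0 * (x - 2.0) + 6.0 * np.cos(3.0 * x)

L = 20.0    # smoothness constant: f''(x) = -2 - 18 sin(3x), so |f''| <= 20
x = -1.0    # initial iterate
for k in range(100):
    # Maximizing the minorizer in closed form gives x + f'(x)/L;
    # each such step can only increase, never decrease, f.
    x = x + f_prime(x) / L

print(x, f(x))
```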

Trust Region Methods

As described in Nocedal & Wright's Numerical Optimization, "Trust-region methods define a region around the current iterate within which they trust the model to be an adequate representation of the objective function, and then choose the step to be the approximate minimizer of the model in this region". Intuitively, during optimization, once we have decided on a step direction, we want to constrain the step length to lie within a "trust region" so that the local estimate of the gradient/curvature remains trustworthy.

In TRPO, Schulman et al. use the KL divergence between the old policy and the updated policy to define the size of the trust region.
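
For intuition about generic trust-region machinery (plain numerical optimization, not TRPO itself), here is a minimal sketch that minimizes the Rosenbrock function with a local quadratic model, a Cauchy-point step restricted to the region, and the usual ratio test for shrinking or expanding the region. All constants are illustrative.

```python
import numpy as np

# Generic trust-region sketch: minimize the Rosenbrock function with a
# local quadratic model m(p) = f(x) + g.p + 0.5 p'Bp that is trusted
# only inside ||p|| <= delta.
def f(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                     200 * (x[1] - x[0] ** 2)])

def hess(x):
    return np.array([[2 - 400 * (x[1] - x[0] ** 2) + 800 * x[0] ** 2, -400 * x[0]],
                     [-400 * x[0], 200]])

x, delta = np.array([-1.2, 1.0]), 1.0
for k in range(500):
    g, B = grad(x), hess(x)
    if np.linalg.norm(g) < 1e-8:
        break
    # Cauchy point: minimize the model along -g, clipped to the trust region.
    gBg = g @ B @ g
    alpha = np.linalg.norm(g) ** 2 / gBg if gBg > 0 else np.inf
    alpha = min(alpha, delta / np.linalg.norm(g))
    p = -alpha * g
    predicted = -(g @ p + 0.5 * p @ B @ p)   # reduction promised by the model
    actual = f(x) - f(x + p)                 # reduction actually obtained
    rho = actual / predicted
    if rho < 0.25:                           # model not trusted: shrink the region
        delta *= 0.25
    elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
        delta = min(2 * delta, 10.0)         # model trusted: expand the region
    if rho > 0.1:                            # accept the step only if it helped
        x = x + p

print(x, f(x))  # steepest-descent style steps converge slowly; this only shows the machinery
```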

TRPO

After introducing the two main ideas, let's dive into the algorithm itself. More math is coming, so keep your seat belt on!

Notations

An MDP is a tuple $(\mathcal{S}, \mathcal{A}, P, r, \rho_0, \gamma)$, where:

  • $\mathcal{S}$ is a finite set of states.
  • $\mathcal{A}$ is a set of actions.
  • $P(s' \mid s, a)$ is the state transition probability of landing at state $s'$ upon taking action $a$ at state $s$.
  • $\gamma \in (0, 1)$ is the discount factor.
  • $r(s)$ is the reward function.
  • $\rho_0$ is the state distribution of the initial state $s_0$.
  • $\rho_\pi(s) = P(s_0 = s) + \gamma P(s_1 = s) + \gamma^2 P(s_2 = s) + \dots$ is the discounted visitation frequency.
  • $\eta(\pi) = \mathbb{E}_{s_0, a_0, \dots}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t)\right]$ is the expected discounted cumulative reward of policy $\pi$, where $s_0 \sim \rho_0$, $a_t \sim \pi(a_t \mid s_t)$, and $s_{t+1} \sim P(s_{t+1} \mid s_t, a_t)$.
  • $Q_\pi(s_t, a_t) = \mathbb{E}_{s_{t+1}, a_{t+1}, \dots}\left[\sum_{l=0}^{\infty} \gamma^l r(s_{t+l})\right]$ is the action-value function.
  • $V_\pi(s_t) = \mathbb{E}_{a_t, s_{t+1}, \dots}\left[\sum_{l=0}^{\infty} \gamma^l r(s_{t+l})\right]$ is the value function.
  • $A_\pi(s, a) = Q_\pi(s, a) - V_\pi(s)$ is the advantage function.
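
To make these quantities concrete, here is a small sketch estimating the discounted return and advantages from a single sampled trajectory; the reward and value numbers are made up, and a single trajectory only gives a one-sample estimate.

```python
import numpy as np

# Monte-Carlo flavored sketch of the quantities above on one sampled
# trajectory; the reward and value numbers are made up.
gamma   = 0.99
rewards = np.array([1.0, 0.0, 0.5, 2.0])   # r(s_t) along the trajectory
values  = np.array([3.2, 2.1, 2.4, 1.9])   # V_pi(s_t) from some critic

# Discounted return-to-go G_t = sum_{l >= 0} gamma^l r(s_{t+l}),
# a one-sample estimate of Q_pi(s_t, a_t).
returns = np.zeros_like(rewards)
running = 0.0
for t in reversed(range(len(rewards))):
    running = rewards[t] + gamma * running
    returns[t] = running

eta_sample = returns[0]        # one-sample estimate of eta(pi)
advantages = returns - values  # A_pi = Q_pi - V_pi, with the sampled Q estimate
print(eta_sample, advantages)
```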

Derivations

Whew, the list of notations seems long, but if you look into them, you will recognize that they are quite basic in RL. Here comes the main part of the algorithm, time to pay attention!

Here is the important identity proved by Kakade & Langford 2002:

$$\eta(\tilde{\pi}) = \eta(\pi) + \sum_s \rho_{\tilde{\pi}}(s) \sum_a \tilde{\pi}(a \mid s) A_\pi(s, a)$$

where $\pi$ is the old policy and $\tilde{\pi}$ is the new policy. Note that we have the current policy $\pi$ but we do not have $\tilde{\pi}$ yet; therefore, $\rho_{\tilde{\pi}}$ is hard to obtain. Instead, Schulman et al. used $\rho_\pi$ as an approximation to $\rho_{\tilde{\pi}}$:

$$\eta(\tilde{\pi}) \approx \eta(\pi) + \sum_s \rho_\pi(s) \sum_a \tilde{\pi}(a \mid s) A_\pi(s, a)$$

We then define the following as the surrogate objective function:

$$L_\pi(\tilde{\pi}) = \eta(\pi) + \sum_s \rho_\pi(s) \sum_a \tilde{\pi}(a \mid s) A_\pi(s, a)$$
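
In practice this objective is estimated from samples collected under the old policy; up to the constant $\eta(\pi)$, the inner sum over actions can be rewritten with an importance ratio between the new and old policies. Below is a minimal NumPy sketch of such an estimator with made-up log-probabilities and advantage estimates.

```python
import numpy as np

# Sample-based sketch of the surrogate: states drawn from rho_{pi_old},
# actions from pi_old, and the new policy enters through an importance
# ratio. All arrays below are made up.
advantages = np.array([ 0.7, -0.2,  1.1,  0.3])   # A_{pi_old}(s, a) estimates
logp_old   = np.array([-1.2, -0.8, -2.0, -1.5])   # log pi_old(a | s)
logp_new   = np.array([-1.0, -0.9, -1.7, -1.6])   # log pi_tilde(a | s)

ratio = np.exp(logp_new - logp_old)     # pi_tilde(a | s) / pi_old(a | s)
# Estimates L_{pi_old}(pi_tilde) up to the constant eta(pi_old) and the
# normalization of rho_{pi_old}.
surrogate = np.mean(ratio * advantages)
print(surrogate)
```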

Now is the time when the MM algorithm and trust region come in. Let $\pi' = \arg\max_{\pi'} L_\pi(\pi')$. If we define the new policy as the following mixture:

$$\pi_{\mathrm{new}}(a \mid s) = (1 - \alpha)\, \pi_{\mathrm{old}}(a \mid s) + \alpha\, \pi'(a \mid s)$$

Kakade & Langford 2002 proved that

$$\eta(\pi_{\mathrm{new}}) \geq L_{\pi_{\mathrm{old}}}(\pi_{\mathrm{new}}) - \frac{2 \epsilon \gamma}{(1 - \gamma)^2} \alpha^2$$

where

$$\epsilon = \max_s \left| \mathbb{E}_{a \sim \pi'(a \mid s)} \left[ A_\pi(s, a) \right] \right|$$
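
To make the symbols concrete, here is a toy tabular computation, with made-up numbers, of the mixture policy, of $\epsilon$, and of the penalty term on the right-hand side of the bound.

```python
import numpy as np

# Toy tabular illustration of the bound's ingredients: 3 states x 2 actions,
# all numbers made up.
A_pi     = np.array([[ 0.5, -0.5],
                     [ 1.0,  0.2],
                     [-0.3,  0.1]])               # A_pi(s, a)
pi_old   = np.array([[0.5, 0.5],
                     [0.7, 0.3],
                     [0.4, 0.6]])                 # pi_old(a | s)
pi_prime = np.array([[0.9, 0.1],
                     [0.6, 0.4],
                     [0.2, 0.8]])                 # pi'(a | s)

gamma, alpha = 0.99, 0.1
pi_new = (1 - alpha) * pi_old + alpha * pi_prime  # the mixture policy above

# epsilon = max_s | E_{a ~ pi'(.|s)} [ A_pi(s, a) ] |
eps = np.max(np.abs(np.sum(pi_prime * A_pi, axis=1)))
penalty = 2 * eps * gamma * alpha ** 2 / (1 - gamma) ** 2
print(pi_new, eps, penalty)   # eta(pi_new) >= L_{pi_old}(pi_new) - penalty
```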

With this bound (the r.h.s. of the inequality), we can constrain the update to stay within some trust region.

Based on this bound, Schulman et al. proved the following more general bound involving the KL-divergence between the new policy and the old policy:

$$\eta(\tilde{\pi}) \geq L_\pi(\tilde{\pi}) - C \, D_{\mathrm{KL}}^{\max}(\pi, \tilde{\pi})$$

where

$$C = \frac{4 \epsilon \gamma}{(1 - \gamma)^2}, \qquad \epsilon = \max_{s, a} \left| A_\pi(s, a) \right|, \qquad D_{\mathrm{KL}}^{\max}(\pi, \tilde{\pi}) = \max_s D_{\mathrm{KL}}\!\left(\pi(\cdot \mid s) \,\|\, \tilde{\pi}(\cdot \mid s)\right)$$

Unfortunately, computing the maximum-KL-divergence term over the whole state space is intractable. Schulman et al. proposed to use the mean KL divergence over the state space as an approximation, so that we can estimate it from samples:

$$\bar{D}_{\mathrm{KL}}^{\rho}(\pi, \tilde{\pi}) = \mathbb{E}_{s \sim \rho} \left[ D_{\mathrm{KL}}\!\left(\pi(\cdot \mid s) \,\|\, \tilde{\pi}(\cdot \mid s)\right) \right]$$
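
Here is a small sketch of this mean-KL estimate for categorical policies over a handful of sampled states; the probability tables are made up.

```python
import numpy as np

# Mean-KL estimate over a handful of sampled states; the categorical
# policy tables are made up.
pi_old = np.array([[0.5, 0.3, 0.2],
                   [0.1, 0.8, 0.1],
                   [0.4, 0.4, 0.2]])   # pi(. | s) for 3 sampled states
pi_new = np.array([[0.4, 0.4, 0.2],
                   [0.2, 0.7, 0.1],
                   [0.5, 0.3, 0.2]])   # pi_tilde(. | s)

# D_KL(pi(.|s) || pi_tilde(.|s)) for each sampled state
kl_per_state = np.sum(pi_old * np.log(pi_old / pi_new), axis=1)
mean_kl = kl_per_state.mean()   # replaces the intractable max over all states
print(kl_per_state, mean_kl)
```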

Then, we’ve arrived at the TRPO optimization problem

In Practice

Finally, in practice, Schulman et al. suggest choosing one of the following variants of the algorithm:

  • Directly use first-order optimization methods to optimize the objective (known as Proximal Policy Optimization, PPO).
  • At each iteration, approximate the objective with a first-order approximation to $L$ and a second-order approximation to the KL-divergence term, and then use second-order methods like conjugate gradient to approximate the update direction $F^{-1} g$, where $g$ is the gradient of $L$ and $F$, the second-order derivative of the KL-divergence, is also known as the Fisher Information Matrix (FIM).
  • Place a hard constraint on the KL-divergence (the trust region). We can still use conjugate gradient to solve the following formulation:

$$\max_{\tilde{\pi}} \; L_\pi(\tilde{\pi}) \quad \text{subject to} \quad \bar{D}_{\mathrm{KL}}^{\rho_\pi}(\pi, \tilde{\pi}) \leq \delta$$

In terms of conjugate gradient, here are two introductory articles about second-order methods. For implementations of conjugate gradient, L-BFGS, etc., please see my GitHub repo.
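
For reference, here is a minimal, generic conjugate gradient routine for a symmetric positive-definite system $Ax = b$. In TRPO-style implementations, $A$ would be the FIM, usually accessed only through matrix-vector products; the explicit matrix below is just a made-up stand-in to show the routine.

```python
import numpy as np

# Plain conjugate gradient for A x = b with A symmetric positive definite.
def conjugate_gradient(Avp, b, iters=10, tol=1e-10):
    x = np.zeros_like(b)
    r = b - Avp(x)      # residual
    p = r.copy()        # search direction
    rr = r @ r
    for _ in range(iters):
        Ap = Avp(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if rr_new < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
A = M @ M.T + 5 * np.eye(5)   # made-up SPD matrix standing in for the FIM
g = rng.normal(size=5)        # stands in for the policy gradient
step_dir = conjugate_gradient(lambda v: A @ v, g)
print(np.allclose(A @ step_dir, g, atol=1e-6))
```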

References