Traditional policy gradient methods have fundamental flaws. Natural gradients converge faster and more reliably, and form the foundation of contemporary Reinforcement Learning algorithms.
Policy gradient algorithms are at the root of modern Reinforcement Learning. The idea is that, by simply following the gradient (i.e., vector of partial derivatives) of the objective function, we ultimately end up at an optimum. It is a clever way to (i) directly optimize policies (rather than learn indirect value functions) and (ii) let the reward function guide the search. However, policy gradients have fundamental flaws. This article explains the concept of natural gradients, exposing the shortcomings of traditional gradients and how to remedy them.
Although natural gradients have been surpassed in popularity by algorithms such as TRPO and PPO, a grasp of their fundamentals is essential to understanding these contemporary RL algorithms. Natural policy gradients deploy a different way of thinking, which is not always clear from just observing the loss function.
Nonetheless, a complete discussion of natural gradients is rather technical and requires many lengthy derivations. To keep this article (somewhat) compact, I focus mainly on the reasoning and intuition, providing external references to more in-depth derivations. Furthermore, a solid understanding of traditional (vanilla) policy gradients and the REINFORCE algorithm is assumed.
In traditional policy gradient methods, the gradient ∇ only gives the direction of the weight update. It does not tell how far to step in that direction. Derivatives are defined on an infinitesimal interval, meaning the gradient is valid only locally and may be completely different at another part of the function.
Because of this, we iterate between sampling (with the current policy) and updating (based on sampled data). Each new policy rollout allows us to re-compute the gradient and update the policy weights θ.
Behavior is controlled with the step size α. This gives rise to the following well-known policy gradient update function:
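In standard notation, with policy parameters θ, objective J(θ), and step size α, this update reads:

```latex
\theta_{new} = \theta_{old} + \alpha \, \nabla_\theta J(\theta)
```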
Two common problems may arise during updates:
- Overshooting: The update misses the reward peak and lands in a suboptimal policy region.
- Undershooting: Taking needlessly small steps in the gradient direction causes slow convergence.
In supervised learning problems, overshooting is not too big of a deal, as data is fixed. If we overshoot, we can correct next epoch. However, if an RL update results in a poor policy, future sample batches may not provide much meaningful information. Somewhat dramatically: we may never recover from a single poor update. A very small learning rate might remedy the problem, but results in slow convergence.
After the update, we landed in a flat, suboptimal region. The small gradients yield only small weight updates, and it will take many iterations to escape again.
An interesting observation here is that we performed a large update when we should have performed a cautious one, and vice versa. As we will see later, natural gradients do the opposite.
Let’s perform a thought experiment. To take appropriately-sized weight updates, we might decide to put a cap on the parameter changes. Suppose we define a maximum distance in parameter space as a constraint. We could define this problem as follows:
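Written out in standard constrained-optimization notation, with ϵ the maximum allowed parameter change:

```latex
\Delta\theta^* = \arg\max_{\Delta\theta} \; J(\theta + \Delta\theta) \quad \text{s.t.} \quad \|\Delta\theta\| \leq \epsilon
```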
where ||Δθ|| represents the Euclidean distance between the parameters before and after the update.
It sounds reasonable, as it should avoid overshooting while not needlessly restricting the update size. Unfortunately, it does not work as you might expect. For instance, suppose our policy is a Gaussian parameterized by θ_1=μ and θ_2=σ, and we set a cap ϵ=1. Both updates in the figure below satisfy the constraint!
In both cases, the Euclidean distance is 1: sqrt[(1–0)²+(0.3–0.3)²] and sqrt[(1–0)²+(3–3)²]. However, the effect on the distribution (i.e., the stochastic policy) is completely different.
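To make this concrete, here is a small sanity check (my own illustration, plugging in the μ and σ values quoted above) that computes both the Euclidean distance in parameter space and the analytic KL divergence between the Gaussians before and after each update:

```python
import math

def euclidean(p, q):
    # Straight-line distance between parameter vectors (mu, sigma)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kl_gaussian(mu1, sigma1, mu2, sigma2):
    # Analytic KL divergence between N(mu1, sigma1^2) and N(mu2, sigma2^2)
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)

# Update 1: mu moves 0 -> 1 under a narrow policy (sigma = 0.3)
# Update 2: mu moves 0 -> 1 under a wide policy (sigma = 3)
dist_narrow = euclidean((0.0, 0.3), (1.0, 0.3))  # 1.0
dist_wide = euclidean((0.0, 3.0), (1.0, 3.0))    # 1.0
kl_narrow = kl_gaussian(0.0, 0.3, 1.0, 0.3)      # ~5.556: drastic policy change
kl_wide = kl_gaussian(0.0, 3.0, 1.0, 3.0)        # ~0.056: barely any change
```

Both updates travel the same distance in parameter space, yet the narrow policy's distribution changes about a hundred times more than the wide one.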
The problem is that capping the parameter space does not effectively cap the statistical manifold that we operate on. Recall that policies are probability distributions, and altering the probabilities alters the expected rewards. This is the manifold we optimize over and would like to control.
I like to think of a statistical manifold as a “family” of distributions. For instance, the collective family of normal distributions (parameterized by μ and σ) constitutes a manifold. Another example would be to consider a neural network as a distribution over output values. Altering a stochastic policy can be viewed as moving over the manifold (parameterized by θ) over which the objective function is defined.
The parameter cap only works as intended if the statistical manifold is linear, which is rarely the case. To prevent the policy itself from changing too much during an update, we must consider how sensitive the distribution is to parameter changes. Traditional policy gradient algorithms do not take this curvature into account. To do that, we need to move into the realm of second-order derivatives, which is exactly what natural policy gradients do.
We established that the difference between distributions (i.e., policies parameterized by θ) is of interest, rather than the difference between the parameters θ and θ_old themselves. Fortunately, a variety of distances exist to calculate the difference between two probability distributions. This article will use the KL divergence, which is most common in the literature. Technically it is not a distance metric (as it is not symmetric), but it can be treated as one for small discrepancies (being approximately symmetric there):
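In its standard form for two policies:

```latex
D_{KL}\left(\pi_{\theta_{old}} \,\|\, \pi_{\theta}\right) = \mathbb{E}_{a \sim \pi_{\theta_{old}}} \left[ \log \pi_{\theta_{old}}(a \mid s) - \log \pi_{\theta}(a \mid s) \right]
```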
In the normal distribution example shown before, the KL divergences are approximately 5.556 and 0.056, respectively (formula via Wikipedia): the narrow distribution changes drastically, while the wide one barely changes.
At this point, it is good to introduce the link between KL divergence and the Fisher information matrix (we will see why later). The Fisher information matrix is the Riemannian metric describing the curvature of a statistical manifold, i.e., the sensitivity of the manifold to marginal parameter changes. The matrix may be viewed as a correction to the distance that accounts for the curvature — think of measuring distances on a globe rather than on a flat earth.
If we define KL divergence locally, i.e., Δθ=0, it turns out both are equivalent. Under this condition, the zeroth and first derivatives become 0 and can be stricken. The matrix of second derivatives is represented by the Hessian matrix, which in this case is equivalent to the Fisher information matrix:
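In standard form, with F(θ) denoting the Fisher information matrix:

```latex
D_{KL}\left(\pi_{\theta} \,\|\, \pi_{\theta + \Delta\theta}\right) \approx \frac{1}{2} \, \Delta\theta^\top F(\theta) \, \Delta\theta
```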
This result will prove crucial for the practical implementation, but let’s put a pin in that for now.
If the Fisher matrix is an identity matrix, the distance over the manifold is simply the Euclidean distance. In that case, traditional and natural policy gradients are equivalent. In practice, this is rarely the case though.
Similar to before, we place a constraint on the allowed change of an update. This time, however, we impose it on the KL divergence of the policy, rather than on the Euclidean distance of the parameter space. The adjusted problem looks as follows:
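In standard notation:

```latex
\Delta\theta^* = \arg\max_{\Delta\theta} \; J(\theta + \Delta\theta) \quad \text{s.t.} \quad D_{KL}\left(\pi_{\theta} \,\|\, \pi_{\theta + \Delta\theta}\right) \leq \epsilon
```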
By solving this expression, we ensure that we perform a large update in parameter space, while ensuring the policy itself does not change too much. However, computing the KL divergence requires evaluating all state-action pairs, so we need some simplifications to work with realistic RL problems.
For the upcoming section, an excellent and detailed derivation can be found in the lecture slides from Carnegie Mellon (by Katerina Fragkiadaki). To preserve focus on the intuition, I only highlight the most salient outcomes.
We are going to derive the solution for the problem. First, we use Lagrangian relaxation to transform the divergence constraint into a penalty, yielding an expression that is easier to solve:
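With Lagrange multiplier λ, the relaxed (penalized) objective reads:

```latex
\Delta\theta^* = \arg\max_{\Delta\theta} \; J(\theta_{old} + \Delta\theta) - \lambda \left( D_{KL}\left(\pi_{\theta_{old}} \,\|\, \pi_{\theta_{old} + \Delta\theta}\right) - \epsilon \right)
```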
Given that typical RL problems are way too large to compute the divergence D_KL for all states and actions, we must resort to approximation methods. With Taylor expansion — approximating a function in terms of its derivatives — we can approximate the KL divergence based on sample trajectories obtained with the rollout policy π_θ.
The Taylor expansion for the abovementioned Lagrangian relaxation looks as follows (for notational convenience, consider θ=θ_old+Δθ):
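Expanding J to first order around θ_old and the KL divergence to second order, the expansion reads (in standard notation):

```latex
\Delta\theta^* \approx \arg\max_{\Delta\theta} \; J(\theta_{old}) + \nabla_\theta J(\theta)\big|_{\theta=\theta_{old}}^\top \Delta\theta - \frac{1}{2} \lambda \, \Delta\theta^\top \nabla_\theta^2 D_{KL}\left(\pi_{\theta_{old}} \,\|\, \pi_{\theta}\right)\big|_{\theta=\theta_{old}} \Delta\theta + \lambda \epsilon
```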
In short, the loss term J(θ) is approximated with the first-order Taylor expansion (i.e., the gradient w.r.t. θ), similar to traditional policy gradients (essentially, local linearization). The KL divergence is approximated with a second-order Taylor expansion. When locally approximating KL divergence (i.e., Δθ = 0), the zeroth and first-order differences evaluate to 0, such that we can strike them. The second-order derivatives are of interest here.
To make the expression a bit less intimidating, we can (i) replace the second-order KL derivative with the Fisher information matrix and (ii) strike all terms that do not depend on Δθ. This leaves us with a slightly friendlier expression:
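Dropping the constant terms J(θ_old) and λϵ and substituting the Fisher matrix F(θ_old) for the second-order KL derivative:

```latex
\Delta\theta^* \approx \arg\max_{\Delta\theta} \; \nabla_\theta J(\theta_{old})^\top \Delta\theta - \frac{1}{2} \lambda \, \Delta\theta^\top F(\theta_{old}) \, \Delta\theta
```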
Notational compactness aside, why substitute the second-order derivative with the Fisher matrix? Well, it turns out the equivalence is very convenient. The Hessian is a |θ|⋅|θ| matrix, with each element being a second derivative, so computing it in full may be quite cumbersome. For the Fisher matrix, however, we have an alternative expression: the expected outer product of the score gradients. As we already need these gradients for traditional policy gradients anyway, there is substantially less computational overhead:
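The standard expression is:

```latex
F(\theta) = \mathbb{E}_{\pi_\theta} \left[ \nabla_\theta \log \pi_\theta(a \mid s) \, \nabla_\theta \log \pi_\theta(a \mid s)^\top \right]
```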
Thus, if we can compute the gradients as we are used to, we have all the information necessary to perform the weight update. Also note that the expectation implies we can estimate the Fisher matrix from samples.
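As a small illustration (my own sketch, not from the article), the snippet below estimates the Fisher matrix of a univariate Gaussian policy, parameterized here as (μ, log σ), from sampled actions via the outer-product expression, and compares it to the known analytic result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian policy with theta = (mu, log_sigma); an illustrative parameterization
mu, log_sigma = 0.0, 0.0
sigma = np.exp(log_sigma)

# Sample actions from the current policy
a = rng.normal(mu, sigma, size=100_000)
z = (a - mu) / sigma

# Score vectors: gradient of log pi_theta(a) w.r.t. (mu, log_sigma) per sample
grads = np.stack([z / sigma, z ** 2 - 1.0], axis=1)

# Monte Carlo estimate of the Fisher matrix: E[grad log pi * grad log pi^T]
F_hat = grads.T @ grads / len(a)
print(np.round(F_hat, 2))
```

For this parameterization the analytic Fisher matrix is diag(1/σ², 2), so with σ = 1 the sample estimate should land close to diag(1, 2).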
This is quite some information to take in, so let’s briefly recap what we accomplished so far:
- To prevent the policy from drifting too far away, we placed a constraint on the KL divergence between the new and old policy.
- Using Lagrangian relaxation, we transformed the constraint into a penalty, giving us a single (unconstrained) expression to work with.
- As we cannot directly compute KL divergence based on samples, we used Taylor expansion as an approximation for the weight update scheme.
- For small parameter changes, KL divergence is approximated using the Fisher information matrix, for which we have a readily available expression.
- The entire approximation is a local result, assuming θ=θ_old. As such, the entire rationale is only valid for small policy changes.
Now, let’s see how we can solve this problem.
Time to circle back to our Taylor expansion of the Lagrangian relaxation. How do we solve this expression, i.e., find the optimal weight update Δθ?
Well, we can find the desired update by setting the gradient w.r.t. Δθ to zero (Lagrangian method). Solving the expression (converted to a minimization problem now, and assuming θ=θ_old) yields:
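Setting the derivative of the (minimized) quadratic objective w.r.t. Δθ to zero gives the first-order condition:

```latex
-\nabla_\theta J(\theta_{old}) + \lambda \, F(\theta_{old}) \, \Delta\theta = 0
```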
The solution can be rearranged to find the weight update Δθ:
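Rearranging the first-order condition:

```latex
\Delta\theta = \frac{1}{\lambda} F(\theta_{old})^{-1} \, \nabla_\theta J(\theta_{old})
```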
Note that 1/λ is a constant that can be absorbed into the learning rate α. In fact, α can be derived analytically. From the initial constraint, we know that the KL divergence should be at most ϵ. With a fixed learning rate α, we cannot guarantee that the update α F(θ)^-1 ∇_θJ(θ) respects this bound. We can thus derive a dynamic learning rate α that ensures (again, by approximation) the size of the update equals ϵ. Abiding by this constraint yields the following learning rate:
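Requiring ½ΔθᵀF(θ)Δθ = ϵ for the update Δθ = αF(θ)⁻¹∇_θJ(θ) and solving for α gives:

```latex
\alpha = \sqrt{\frac{2 \epsilon}{\nabla_\theta J(\theta)^\top \, F(\theta)^{-1} \, \nabla_\theta J(\theta)}}
```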
Finally, from the re-arrangement, we extract the natural policy gradient, which is the gradient corrected for the curvature of the manifold:
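The natural gradient is defined as:

```latex
\tilde{\nabla}_\theta J(\theta) = F(\theta)^{-1} \, \nabla_\theta J(\theta)
```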
This natural gradient gives, within the distance constraint, the steepest descent direction in the Riemannian space, rather than in the traditionally assumed Euclidean space. Note that, compared to traditional policy gradients, the only distinction is the multiplication with the inverse Fisher matrix! In fact, if the Fisher matrix is an identity matrix (which in practice it rarely is), traditional and natural policy gradients are equivalent.
The final weight update scheme looks as follows:
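Combining the natural gradient with the dynamic learning rate:

```latex
\theta_{new} = \theta_{old} + \sqrt{\frac{2 \epsilon}{\nabla_\theta J(\theta)^\top F(\theta)^{-1} \nabla_\theta J(\theta)}} \; F(\theta)^{-1} \, \nabla_\theta J(\theta)
```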
The power of this scheme is that it always changes the policy by the same magnitude, regardless of the distribution’s representation.
The end result differs from traditional policy gradients in two ways:
- The gradient is ‘corrected’ by the inverse Fisher matrix, taking into account the sensitivity of the policy to local changes. As the matrix is inverted, updates tend to be cautious at steep slopes (high sensitivity) and larger at flat surfaces (low sensitivity). Traditional gradient methods (erroneously) assume Euclidean distances between updates.
- The update step size α has a dynamic expression that adapts to the gradients and local sensitivity, ensuring a policy change of magnitude ϵ regardless of parameterization. In traditional methods, α is a tunable parameter susceptible to mistuning, often set at some standard value like 0.1 or 0.01.
Despite the considerably different mechanisms under the hood, at surface level traditional and natural policy gradient methods are surprisingly similar.
The full outline of the natural policy gradient algorithm is summarized below. Note that in practice, we always use sample estimates for gradients and the Fisher matrix.
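To make the recipe concrete, here is a minimal self-contained sketch (an illustrative toy of my own, not the article's implementation): a one-step problem with a univariate Gaussian policy, where the reward function −(a−2)² and the (μ, log σ) parameterization are assumptions for the example. Each iteration samples actions, estimates the policy gradient and the Fisher matrix from those samples, and takes a natural gradient step with the KL-derived step size:

```python
import numpy as np

rng = np.random.default_rng(1)

def reward(a):
    # Toy one-step problem: the best action is a = 2 (an assumption for this demo)
    return -(a - 2.0) ** 2

theta = np.array([-1.0, 0.0])  # (mu, log_sigma): start at mu = -1, sigma = 1
epsilon = 0.01                 # KL divergence budget per update

for _ in range(50):
    mu, sigma = theta[0], np.exp(theta[1])
    a = rng.normal(mu, sigma, size=2000)   # rollout with the current policy
    z = (a - mu) / sigma

    # Score vectors: gradient of log pi_theta(a) per sample
    grads = np.stack([z / sigma, z ** 2 - 1.0], axis=1)

    # REINFORCE gradient estimate, with a mean baseline to reduce variance
    adv = reward(a) - reward(a).mean()
    g = (grads * adv[:, None]).mean(axis=0)

    # Sample Fisher matrix (outer products of scores) + small damping term
    F = grads.T @ grads / len(a) + 1e-3 * np.eye(2)

    nat_grad = np.linalg.solve(F, g)                   # F^-1 g
    alpha = np.sqrt(2 * epsilon / (g @ nat_grad + 1e-8))
    theta = theta + alpha * nat_grad                   # natural gradient ascent

# mu should have moved toward 2, and sigma should have shrunk
print(theta)
```

Note that the step size adapts automatically: beyond choosing the KL budget ϵ, no learning-rate tuning is involved.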
Natural gradients overcome fundamental flaws of traditional methods by taking into account how the manifold over which the objective function is defined changes with parameter updates. Concretely, natural gradients allow us to escape plateaus and cautiously approach reward peaks. In theory, natural policy gradients should therefore converge faster and more reliably than their traditional counterparts.
In their purest form, natural gradient algorithms are often not practical though. There are a number of reasons for this.
First, the Taylor expansion offers a local approximation up to the second order. Due to this, the estimated Hessian may not be positive definite. In practice, natural gradient methods are numerically brittle and do not always yield stable outcomes. The abundance of mathematical derivations may look convincing, but the Taylor expansion, sample approximations and strictly local validity (assuming θ = θ_old) substantially impact real-world performance.
Second, the Fisher information matrix occupies |θ|⋅|θ| space. Consider a neural network with 100,000 parameters, and you can imagine that a matrix with 10 billion entries will not fly on your laptop. Additionally, computing a matrix inverse is an operation of O(N³) complexity, which is rather costly. Thus, for deep RL methods, natural policy gradients typically exceed both memory and computational limits.
Finally, we are used to working with sophisticated first-order stochastic gradient optimizers, such as Adam, that adapt step sizes using gradient moment estimates and provide excellent results on a wide range of problems. Second-order optimization methods (i.e., natural gradient algorithms) do not take advantage of these optimizers.
Methods such as conjugate gradients and Kronecker-factored approximate curvature (K-FAC) may (partially) address the abovementioned problems. In practice, methods such as Trust Region Policy Optimization (TRPO) and especially Proximal Policy Optimization (PPO) have surpassed natural gradients in popularity, although they are rooted in the same mathematical foundation.
When contrasting natural policy gradients to traditional ones, the difference looks fairly limited. In the end, we only added an inverted Fisher matrix — factoring in local sensitivity — to the gradient we are familiar with. Nonetheless, the way we optimize is vastly different, considering policy distances rather than parameter distances. By ensuring policies do not drift too far when updating weights, we can perform more stable and consistent updates.
Natural policy gradients come with a series of numerical challenges, particularly when dealing with large-scale optimizations (e.g., neural networks with large numbers of parameters). Also, a substantial number of approximations and simplifications are made in the theoretical foundation; practice may be more unruly. For real-world implementations, Proximal Policy Optimization is typically preferred nowadays.
Nonetheless, an understanding of natural gradients is fundamental for those wishing to understand the state-of-the-art in Reinforcement Learning.