Accelerated stochastic gradient descent, convergence rate and empirical results

Speaker: Adam Oberman

Where: Virtual.
When: May 01, 2020 at 16:00.

Abstract

We present a coupled system of ODEs which, when discretized with a constant time step/learning rate, recovers Nesterov's accelerated gradient descent algorithm. The same ODEs, when discretized with a decreasing learning rate, lead to novel stochastic gradient descent (SGD) algorithms, one for the convex case and one for the strongly convex case. In the strongly convex case, we obtain an algorithm superficially similar to momentum SGD, but with additional terms. In the convex case, we obtain an algorithm with a novel learning rate of order k^{-3/4}. Extending the Lyapunov function approach from the full gradient case to the stochastic case, we prove that the algorithms converge at the optimal rate for the last iterate of SGD, with rate constants that improve on those previously available.
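The following Python snippet is a minimal illustrative sketch of the two ingredients the abstract mentions: Nesterov's accelerated gradient method in its standard two-sequence form (which arises from discretizing a coupled ODE system with a constant step size), and SGD with a decreasing learning rate of order k^{-3/4}. The specific coupled ODEs and the additional momentum terms developed in the talk are not reproduced here; the functions and coefficients below are standard textbook choices, not the speaker's exact scheme.

```python
import numpy as np

def nesterov_agd(grad, x0, lr, n_steps):
    """Deterministic accelerated gradient descent with a constant step size."""
    x, y = x0.copy(), x0.copy()
    for k in range(1, n_steps + 1):
        x_next = y - lr * grad(y)                       # gradient step from the extrapolated point
        y = x_next + (k - 1) / (k + 2) * (x_next - x)   # momentum/extrapolation step
        x = x_next
    return x

def sgd_decreasing_lr(stoch_grad, x0, c, n_steps):
    """Plain SGD with a learning rate of order k^{-3/4} (illustrative only)."""
    x = x0.copy()
    for k in range(1, n_steps + 1):
        lr_k = c / k ** 0.75                            # decreasing learning rate ~ k^{-3/4}
        x = x - lr_k * stoch_grad(x)
    return x

# Example: minimize f(x) = 0.5 * ||x||^2, with noisy gradients for the stochastic variant.
rng = np.random.default_rng(0)
grad = lambda x: x
stoch_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
x_det = nesterov_agd(grad, np.ones(5), lr=0.1, n_steps=200)
x_sto = sgd_decreasing_lr(stoch_grad, np.ones(5), c=0.5, n_steps=2000)
```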
