Time and Memory Efficiency. In deep learning optimization problems, the sheer dimensionality of model parameters makes existing quasi-Newton methods impractical. For example, the classic L-BFGS [1] algorithm typically has to store the iteration history of at least the previous 5 to 10 steps, which is infeasible for deep learning problems. And some existing quasi-Newton methods designed for stochastic nonconvex problems, such as SdLBFGS [3], require even more time and memory than L-BFGS.
To reduce Apollo's time and memory complexity, we follow earlier work and approximate the Hessian with a diagonal matrix, i.e., we constrain the approximation $B_{t+1}$ at every step to be diagonal. To make this constraint satisfiable, the secant equation in Eq. (5) has to be relaxed. A common relaxation is the weak secant equation [4, 5]:

$$s_t^\top B_{t+1} s_t = s_t^\top y_t,$$

where $s_t = x_{t+1} - x_t$ is the parameter difference and $y_t = g_{t+1} - g_t$ is the gradient difference.
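Since a diagonal $B_{t+1}$ has one free entry per parameter while the weak secant equation is a single scalar constraint, the least-change (minimum Frobenius-norm) diagonal update has a simple closed form. Below is a minimal NumPy sketch of that idea; the function name and the `eps` safeguard are illustrative choices, not notation taken from the paper.

```python
import numpy as np

def diagonal_weak_secant_update(B_diag, s, y, eps=1e-12):
    """Least-change diagonal update enforcing the weak secant equation
    s^T B_{t+1} s = s^T y.  (Name and eps safeguard are illustrative.)

    B_diag: diagonal entries of the current approximation B_t
    s:      parameter difference  x_{t+1} - x_t
    y:      gradient difference   g_{t+1} - g_t
    """
    # How much the current B_t violates the weak secant equation.
    residual = s @ y - (B_diag * s) @ s
    # Spreading the correction in proportion to s_i^2 is the minimum
    # Frobenius-norm change that restores the scalar constraint.
    correction = residual / (np.sum(s ** 4) + eps)
    return B_diag + correction * s ** 2

# Sanity check on random data: the constraint holds after the update.
rng = np.random.default_rng(0)
s, y = rng.normal(size=8), rng.normal(size=8)
B = np.ones(8)
B_new = diagonal_weak_secant_update(B, s, y)
assert np.isclose((B_new * s) @ s, s @ y)
```

The update touches only a vector of diagonal entries, so each step costs $O(d)$ time and memory, matching adaptive first-order methods.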
[1] Charles George Broyden. The convergence of a class of double-rank minimization algorithms. IMA Journal of Applied Mathematics, 6(1):76–90, 1970.
[2] William C Davidon. Variable metric method for minimization. SIAM Journal on Optimization, 1(1):1–17, 1991.
[3] Xiao Wang, Shiqian Ma, Donald Goldfarb, and Wei Liu. Stochastic quasi-Newton methods for nonconvex stochastic optimization. SIAM Journal on Optimization, 27(2):927–956, 2017.
[4] John E Dennis, Jr and Henry Wolkowicz. Sizing and least-change secant methods. SIAM Journal on Numerical Analysis, 30(5):1291–1314, 1993.
[5] JL Nazareth. If quasi-Newton then why not quasi-Cauchy. SIAG/OPT Views-and-News, 6:11–14, 1995.
[6] Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In International Conference on Learning Representations, 2018.
[7] X Chen, M Hong, S Liu, and R Sun. On the convergence of a class of Adam-type algorithms for non-convex optimization. In International Conference on Learning Representations, 2019.
Summary of Highlights
1. Quasi-Newton methods face three common problems in deep learning optimization: (1) time and memory efficiency; (2) stochastic variance; (3) nonconvexity.
2. Apollo's diagonal approximation of the Hessian has the same time and space complexity as adaptive first-order methods. To handle the nonconvexity of the objective function, we replace the Hessian with its rectified absolute value, which guarantees positive definiteness (see the sketch after this list).
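As a concrete illustration of point 2, here is a minimal NumPy sketch of the rectified-absolute-value idea: take the elementwise absolute value of the diagonal Hessian approximation and clip it from below by a positive threshold. The function name, the threshold value `sigma`, and the example numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rectified_absolute(B_diag, sigma=0.01):
    # Elementwise |B| clipped from below at sigma > 0, so every entry of
    # the resulting diagonal preconditioner is strictly positive, i.e.,
    # the matrix it represents is positive definite.
    return np.maximum(np.abs(B_diag), sigma)

# A diagonal approximation with negative and near-zero curvature entries:
B = np.array([0.8, -0.3, 1e-6, 2.0])
D = rectified_absolute(B)   # -> [0.8, 0.3, 0.01, 2.0]
g = np.array([0.1, -0.2, 0.05, 0.4])
step = -g / D               # preconditioned update direction is well defined
```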