From 5222bc7193cf311abe4ee25a8511137e871fa6c2 Mon Sep 17 00:00:00 2001
From: Vectornaut
Date: Wed, 22 Oct 2025 01:18:48 +0000
Subject: [PATCH] Start discussing uniform regularization

---
 Numerical-optimization.md | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/Numerical-optimization.md b/Numerical-optimization.md
index 2c7c43e..f5dde98 100644
--- a/Numerical-optimization.md
+++ b/Numerical-optimization.md
@@ -63,7 +63,14 @@ If $f$ is convex, its second derivative is positive-definite everywhere, so the
 
 #### Uniform regularization
 
-_To be added_
+Given an inner product $(\_\!\_, \_\!\_)$ on $V$, we can make the modified second derivative $f^{(2)}_p(\_\!\_, \_\!\_) + \lambda\,(\_\!\_, \_\!\_)$ positive-definite by choosing a large enough coefficient $\lambda$. We can say precisely what it means for $\lambda$ to be large enough by expressing $f^{(2)}_p$ as $(\_\!\_, \tilde{F}^{(2)}_p\_\!\_)$ and taking the lowest eigenvalue $\lambda_{\text{min}}$ of $\tilde{F}^{(2)}_p$. The modified second derivative is positive-definite exactly when $\lambda > -\lambda_{\text{min}}$, so any $\lambda > \max\{-\lambda_{\text{min}}, 0\}$ is large enough.
+
+Uniform regularization can be seen as interpolating between Newton’s method and gradient descent. To see why, consider the regularized version of the equation that defines the Newton step:
+```math
+f^{(1)}_p(\_\!\_) + f^{(2)}_p(v, \_\!\_) + \lambda (v, \_\!\_) = 0.
+```
+
+_To be continued_
 
 - Kenji Ueda and Nobuo Yamashita. [“Convergence Properties of the Regularized Newton Method for the Unconstrained Nonconvex Optimization”](https://doi.org/10.1007/s00245-009-9094-9) *Applied Mathematics and Optimization* 62, 2010.
 
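
As a concrete illustration of the recipe above, here is a minimal numerical sketch of one uniformly regularized Newton step, assuming $V = \mathbb{R}^n$ with the standard inner product, so that the Hessian matrix plays the role of $\tilde{F}^{(2)}_p$. The function name `regularized_newton_step` and the margin parameter `mu` are illustrative choices, not notation from the page.

```python
import numpy as np

def regularized_newton_step(grad, hess, mu=1e-2):
    """One uniformly regularized Newton step: solve (H + lam * I) v = -g.

    Assumes the standard inner product on R^n, so the Hessian matrix `hess`
    plays the role of the operator F~(2)_p.  The margin `mu` keeps the
    regularized matrix safely positive-definite.
    """
    g = np.asarray(grad, dtype=float)
    H = np.asarray(hess, dtype=float)
    # Lowest eigenvalue of the symmetric second-derivative matrix.
    lam_min = np.linalg.eigvalsh(H).min()
    # Any lam > max(-lam_min, 0) makes H + lam * I positive-definite.
    lam = max(-lam_min, 0.0) + mu
    # The step v solves f^(1)_p(__) + f^(2)_p(v, __) + lam * (v, __) = 0.
    v = np.linalg.solve(H + lam * np.eye(H.shape[0]), -g)
    return v, lam

# f(x, y) = x^2 - y^2 has an indefinite Hessian, so the plain Newton step
# is unreliable; the regularized step is still well defined.
grad = np.array([2.0, 2.0])                   # gradient of f at p = (1, -1)
hess = np.array([[2.0, 0.0], [0.0, -2.0]])    # Hessian of f
step, lam = regularized_newton_step(grad, hess, mu=0.5)
print(lam, step)                              # 2.5 [-0.444...  -4.0]
```

In this example the unregularized Newton step would jump straight to the saddle point at the origin, while the regularized step is a genuine descent direction.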