Start discussing uniform regularization
parent 4ee434c770
commit 5222bc7193

1 changed file with 8 additions and 1 deletion
@@ -63,7 +63,14 @@ If $f$ is convex, its second derivative is positive-definite everywhere, so the
#### Uniform regularization
Given an inner product $(\_\!\_, \_\!\_)$ on $V$, we can make the modified second derivative $f^{(2)}_p(\_\!\_, \_\!\_) + \lambda (\_\!\_, \_\!\_)$ positive-definite by choosing a large enough coefficient $\lambda$. We can say more precisely what it means for $\lambda$ to be large enough by expressing $f^{(2)}_p$ as $(\_\!\_, \tilde{F}^{(2)}_p\_\!\_)$ and taking the lowest eigenvalue $\lambda_{\text{min}}$ of $\tilde{F}^{(2)}_p$. The modified second derivative is positive-definite exactly when $\lambda > -\lambda_\text{min}$.
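
To see where this condition comes from, note that $\tilde{F}^{(2)}_p$ is self-adjoint with respect to the inner product, so any $w \in V$ expands in an orthonormal eigenbasis with eigenvalues $\lambda_\text{min} = \lambda_1 \le \dots \le \lambda_n$ (a quick derivation in the section's own notation):

```math
f^{(2)}_p(w, w) + \lambda (w, w) = \sum_i (\lambda_i + \lambda)\, w_i^2,
```

which is positive for every $w \neq 0$ exactly when $\lambda > -\lambda_\text{min}$. For example, if $\tilde{F}^{(2)}_p$ has eigenvalues $\{-2, 5\}$, any $\lambda > 2$ works.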
Uniform regularization can be seen as interpolating between Newton’s method and gradient descent. To see why, consider the regularized version of the equation that defines the Newton step:
```math
f^{(1)}_p(\_\!\_) + f^{(2)}_p(v, \_\!\_) + \lambda (v, \_\!\_) = 0.
```
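
When $\lambda = 0$, this is exactly the equation for the Newton step. As $\lambda$ grows, the term $\lambda (v, \_\!\_)$ dominates $f^{(2)}_p(v, \_\!\_)$, so the solution is approximately determined by $f^{(1)}_p(\_\!\_) + \lambda (v, \_\!\_) = 0$. Writing $g_p$ for the vector that represents $f^{(1)}_p$ through the inner product, i.e. the gradient of $f$ at $p$ (notation introduced here just for this sketch), the step becomes

```math
v \approx -\tfrac{1}{\lambda}\, g_p,
```

a gradient descent step with step size $1/\lambda$.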
_To be continued_
- Kenji Ueda and Nobuo Yamashita. [“Convergence Properties of the Regularized Newton Method for the Unconstrained Nonconvex Optimization”](https://doi.org/10.1007/s00245-009-9094-9) *Applied Mathematics and Optimization* 62, 2010.