diff --git a/Numerical-optimization.md b/Numerical-optimization.md
index 04bf8c7..057e2da 100644
--- a/Numerical-optimization.md
+++ b/Numerical-optimization.md
@@ -22,11 +22,11 @@ We saw above that the vanishingly rare gradient descent paths that lead to saddl
 
 #### Affected optimization methods
 
-Uniform regularization can be seen as an interpolation between Newton's method and gradient descent, which kicks in when lowest eigenvalue of the Hessian drops below zero and brings the search direction closer to the gradient descent direction as the lowest eigenvalue gets more negative. Since the Hessian is indefinite near a saddle point, Newton's method with uniform regularization should act at least sort of like gradient descent near a saddle point. This suggests that it could get bogged down near saddle points in the same way.
+Uniform regularization can be seen as an interpolation between Newton’s method and gradient descent: it kicks in when the lowest eigenvalue of the Hessian drops below zero and pulls the search direction closer to the gradient descent direction as that eigenvalue becomes more negative. Since the Hessian is indefinite near a saddle point, Newton’s method with uniform regularization should behave at least somewhat like gradient descent there. This suggests that it could get bogged down near saddle points in the same way.
 
 ## Methods
 
-### Newton's method
+### Newton’s method
 
 _To be added_
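
A minimal sketch of the interpolation described in the changed paragraph above, assuming the regularized step solves (H + λI) d = ∇f; the helper name and the example Hessian below are hypothetical. As λ → 0 the step approaches the Newton direction H⁻¹∇f, and as λ → ∞ the rescaled step λ·d approaches the gradient, i.e. the gradient descent direction.

```python
# Minimal sketch (hypothetical names/values): the uniformly regularized Newton
# direction d(lam) solves (H + lam*I) d = grad. As lam -> 0 it approaches the
# Newton direction H^{-1} grad; as lam -> infinity, lam * d(lam) approaches
# grad, i.e. the plain gradient descent direction.
import numpy as np

def regularized_newton_direction(H, grad, lam):
    """Solve (H + lam*I) d = grad for the search direction d."""
    return np.linalg.solve(H + lam * np.eye(H.shape[0]), grad)

# Indefinite Hessian, as near a saddle point (one negative eigenvalue).
H = np.diag([2.0, -0.5])
grad = np.array([1.0, 1.0])

for lam in [1.0, 10.0, 1000.0]:
    d = regularized_newton_direction(H, grad, lam)
    print(f"lam={lam:7.1f}  d={d}  lam*d={lam * d}")
```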