diff --git a/Numerical-optimization.md b/Numerical-optimization.md
index ff7389c..31ad4e4 100644
--- a/Numerical-optimization.md
+++ b/Numerical-optimization.md
@@ -87,3 +87,31 @@ _To be added_
 
 #### Modified Cholesky decomposition
 
+Recall from [above](#Review) that once we express the first and second derivatives of $f$ at $p$ as a vector $F^{(1)}_p \in \mathbb{R}^n$ and a matrix $F^{(2)}_p \in \operatorname{End}(\mathbb{R}^n)$ with respect to a computational basis $\mathbb{R}^n \to V$, we can find the Newton step $v$ at $p$ by solving the equation $F^{(1)}_p + F^{(2)}_p v = 0$. More abstractly, if we express the first and second derivatives of $f$ as a vector $\tilde{F}^{(1)}_p \in V$ and an operator $\tilde{F}^{(2)}_p \in \operatorname{End}(V)$ with respect to a chosen inner product on $V$, as discussed [above](#Uniform_regularization), we can find the Newton step by solving the equation $\tilde{F}^{(1)}_p + \tilde{F}^{(2)}_p v = 0$. When $f^{(2)}_p$ is positive-definite, $\tilde{F}^{(2)}_p$ is self-adjoint and positive-definite with respect to the chosen inner product, so taking its Cholesky decomposition gives an efficient and numerically stable way to solve this equation. Since the Newton step doesn’t depend on the choice of inner product, we typically use the inner product given by the computational basis, in which case $\tilde{F}^{(2)}_p$ is represented by the matrix $F^{(2)}_p$ and we simply factor the Hessian matrix.
+
+_To be continued, citing [NW, §3.4]_
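+
+In the meantime, here is a minimal sketch in Python of the Cholesky-based Newton solve in the computational basis, assuming the Hessian is positive-definite (the helper `newton_step` and the quadratic test problem are illustrative, not from the text):
+
+```python
+import numpy as np
+from scipy.linalg import cho_factor, cho_solve
+
+def newton_step(grad, hess):
+    """Solve F1 + F2 v = 0 for the Newton step v via Cholesky.
+
+    `grad` and `hess` are the first and second derivatives of f at p,
+    expressed in the computational basis. Assumes `hess` is
+    positive-definite; `cho_factor` raises LinAlgError otherwise.
+    """
+    factor = cho_factor(hess)        # triangular Cholesky factor of H
+    return cho_solve(factor, -grad)  # solve H v = -grad
+
+# Illustrative check on the convex quadratic f(x) = x^T A x / 2 - b^T x:
+# the Newton step from any point lands exactly on the minimizer A^{-1} b.
+A = np.array([[4.0, 1.0], [1.0, 3.0]])
+b = np.array([1.0, 2.0])
+p = np.zeros(2)
+v = newton_step(A @ p - b, A)  # gradient is A p - b, Hessian is A
+print(p + v)                   # agrees with np.linalg.solve(A, b)
+```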