diff --git a/Gram-matrix-parameterization.md b/Gram-matrix-parameterization.md
index dad08d9..f38e8b5 100644
--- a/Gram-matrix-parameterization.md
+++ b/Gram-matrix-parameterization.md
@@ -95,7 +95,52 @@ This matrix is stored as `neg_grad` in the Rust and Julia implementations of the
 
 #### The second derivative of the loss function
 
-*Writeup in progress. Implemented in `app-proto/src/engine.rs` and `engine-proto/gram-test/Engine.jl`.*
+Recalling that
+\[ -d\Delta = \mathcal{P}(dA^\top Q A + A^\top Q\,dA), \]
+we can express the derivative of $\operatorname{grad}(f)$ as
+\[
+\begin{aligned}
+d\operatorname{grad}(f) & = -4 Q\,dA\,\mathcal{P}(\Delta) - 4 Q A\,\mathcal{P}(d\Delta) \\
+& = -4 Q\big[dA\,\mathcal{P}(\Delta) - A\,\mathcal{P}(-d\Delta)\big].
+\end{aligned}
+\]
+In the Rust and Julia implementations of the realization routine, we express $d\operatorname{grad}(f)$ as a matrix in the standard basis for $\operatorname{End}(\mathbb{R}^n)$. We find each column of this matrix by applying $d\operatorname{grad}(f)$ to the corresponding standard basis matrix $E_{ij}$, that is, by setting the value of the matrix-valued 1-form $dA$ to $E_{ij}$.
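+
+As an illustration only (this is not the code in `app-proto/src/engine.rs` or `engine-proto/gram-test/Engine.jl`), the sketch below assembles this matrix in Rust using nalgebra; the function name `d_grad_matrix` and the argument `proj`, which stands in for $\mathcal{P}$, are hypothetical.
+
+```rust
+use nalgebra::DMatrix;
+
+// Build the matrix of d grad(f) by applying it to each standard basis matrix
+// E_ij, i.e. by setting the 1-form dA to E_ij and vectorizing the result
+// column-major to get one column of the output.
+fn d_grad_matrix(
+    a: &DMatrix<f64>,      // current configuration A
+    q: &DMatrix<f64>,      // the form Q
+    delta: &DMatrix<f64>,  // Delta = P(G - A^T Q A)
+    proj: impl Fn(&DMatrix<f64>) -> DMatrix<f64>,  // the projection P
+) -> DMatrix<f64> {
+    let (n, k) = a.shape();
+    let proj_delta = proj(delta);
+    let mut hess = DMatrix::<f64>::zeros(n * k, n * k);
+    for col in 0..k {
+        for row in 0..n {
+            // dA = E_{row, col}
+            let mut d_a = DMatrix::<f64>::zeros(n, k);
+            d_a[(row, col)] = 1.0;
+            // -dDelta = P(dA^T Q A + A^T Q dA)
+            let neg_d_delta = proj(&(d_a.transpose() * q * a + a.transpose() * q * &d_a));
+            // d grad(f) = -4 Q [dA P(Delta) - A P(-dDelta)]
+            let image = (q * (&d_a * &proj_delta - a * neg_d_delta)) * -4.0;
+            // the vectorized image is column (col*n + row) of the output matrix
+            for (idx, x) in image.iter().enumerate() {
+                hess[(idx, col * n + row)] = *x;
+            }
+        }
+    }
+    hess
+}
+```
 
 #### Finding minima
 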