Write up the minimization routine
### Elements as vectors
Take a 5d vector space $V$ with a bilinear form $(\_\!\_, \_\!\_)$ of signature $++++-$, which we'll call the *Lorentz form*. In [inversive coordinates](../src/branch/main/notes/inversive.md), points and generalized spheres are represented, respectively, by timelike and spacelike vectors in $V$. If we normalize these vectors to pseudo-length $\pm 1$, and choose a vector on the lightlike 1d subspace representing the point at infinity, a lot of the constraints we care about can be expressed by fixing the Lorentz products between vectors.
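As a minimal sketch of the convention above (assumed, not taken from `engine.rs`): the Lorentz form of signature $++++-$ with the minus sign on the last coordinate, checked on a timelike and a spacelike vector. The function name and coordinate convention are illustrative.

```rust
// Sketch of the signature-(++++-) Lorentz form on a 5d vector space,
// with the minus sign on the last coordinate. Illustrative only.
fn lorentz_product(v: &[f64; 5], w: &[f64; 5]) -> f64 {
    v[0] * w[0] + v[1] * w[1] + v[2] * w[2] + v[3] * w[3] - v[4] * w[4]
}

fn main() {
    // A timelike vector (negative self-product), like those representing points.
    let p = [0.0, 0.0, 0.0, 0.0, 1.0];
    assert!(lorentz_product(&p, &p) < 0.0);

    // A spacelike vector normalized to pseudo-length +1, like those
    // representing generalized spheres.
    let s = [1.0, 0.0, 0.0, 0.0, 0.0];
    assert!((lorentz_product(&s, &s) - 1.0).abs() < 1e-12);
    println!("ok");
}
```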
### Constraints as Gram matrix entries
#### Finding minima
We minimize the loss function using a cheap imitation of Ueda and Yamashita's regularized Newton's method with backtracking.

* Kenji Ueda and Nobuo Yamashita. ["Convergence Properties of the Regularized Newton Method for the Unconstrained Nonconvex Optimization,"](https://doi.org/10.1007/s00245-009-9094-9) 2009.

The minimization routine is implemented in [`engine.rs`](../src/branch/main/app-proto/src/engine.rs). (In the old Julia prototype of the engine, it's in [`Engine.jl`](../src/branch/main/engine-proto/gram-test/Engine.jl).) It works like this.
1. Do Newton steps, as described below, until either the loss gets tolerably close to zero or we reach the maximum allowed number of steps.
   1. Find $-\operatorname{grad}(f)$, as described in "The first derivative of the loss function."
   2. Find the Hessian $H(f) := d\operatorname{grad}(f)$, as described in "The second derivative of the loss function."
      * Recall that we express $H(f)$ as a matrix in the standard basis for $\operatorname{End}(\mathbb{R}^n)$.
   3. If the Hessian isn't positive-definite, make it positive-definite by adding $-c\,\lambda_\text{min}$ times the identity, where $\lambda_\text{min}$ is its lowest eigenvalue and $c > 1$ is a parameter of the minimization routine. In other words, find the regularized Hessian
      $$H_\text{reg}(f) := \begin{cases} H(f) & \lambda_\text{min} > 0 \\ H(f) - c\,\lambda_\text{min} I & \text{otherwise}. \end{cases}$$
      * The parameter $c$ is passed to `realize_gram` as the argument `reg_scale`.
      * Ueda and Yamashita add an extra regularization term that's proportional to a power of $\|\operatorname{grad}(f)\|$, but we don't bother.
   4. Find the base step $u$, which is defined by the property that $-\operatorname{grad}(f) = H_\text{reg}(f)\,u$.
   5. Backtrack by reducing the step size until we find a step that reduces the loss at a good fraction of the maximum possible rate.
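The loop above can be sketched in one variable, where the Hessian is its own lowest eigenvalue and the regularization rule reduces to a scalar comparison. This is an illustrative toy (not the `engine.rs` implementation), using the made-up loss $f(x) = (x^2 - 1)^2$; all names and tolerances here are invented for the example.

```rust
// One-variable sketch of the regularized Newton loop with backtracking,
// on the toy loss f(x) = (x^2 - 1)^2. Illustrative only.
fn minimize(mut x: f64) -> f64 {
    let reg_scale = 1.1; // the parameter c > 1
    let tol = 1e-12; // "tolerably close to zero"
    let good_fraction = 0.5; // sufficient-decrease fraction for backtracking

    for _ in 0..100 {
        let loss = (x * x - 1.0).powi(2);
        if loss < tol {
            break; // stopping criterion from step 1
        }
        let grad = 4.0 * x * (x * x - 1.0); // f'(x)
        let hess = 12.0 * x * x - 4.0; // f''(x)
        // Step 3: regularize a non-positive-definite Hessian.
        let hess_reg = if hess > 0.0 { hess } else { hess - reg_scale * hess };
        // Step 4: the base step u solves -grad = H_reg u.
        let u = -grad / hess_reg;
        // Step 5: backtrack until the loss drops at a good fraction of
        // the rate predicted by the gradient.
        let mut t = 1.0;
        loop {
            let x_new = x + t * u;
            let loss_new = (x_new * x_new - 1.0).powi(2);
            if loss_new <= loss + good_fraction * t * grad * u {
                x = x_new;
                break;
            }
            t *= 0.5;
        }
    }
    x
}

fn main() {
    // Starting on the concave hump near x = 0, where a plain Newton step
    // would head toward the local maximum, the regularized step still
    // descends to the minimum at x = 1.
    let x = minimize(0.1);
    assert!((x - 1.0).abs() < 1e-3);
    println!("converged to x = {x}");
}
```

The starting point is chosen where the toy Hessian is negative, so the regularization branch actually fires; without it, Newton's method would move toward the local maximum at $x = 0$.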
### Reconstructing a rigid subassembly