#### Finding minima
We minimize the loss function using a cheap imitation of Ueda and Yamashita's uniformly regularized Newton's method with backtracking [[UY](https://code.studioinfinity.org/StudioInfinity/dyna3/wiki/Numerical-optimization#uniform-regularization)]. The minimization routine is implemented in [`engine.rs`](../src/branch/main/app-proto/src/engine.rs). (In the old Julia prototype of the engine, it's in [`Engine.jl`](../src/branch/main/engine-proto/gram-test/Engine.jl).) It works like this.
1. Do Newton steps, as described below, until the loss gets tolerably close to zero. Fail out if we reach the maximum allowed number of descent steps.
   1. Find $-\operatorname{grad}(f)$, as described in ["The first derivative of the loss function."](Gram-matrix-parameterization#the-first-derivative-of-the-loss-function)
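
Here's a minimal sketch of one regularized Newton step with backtracking, written in Rust with the nalgebra crate. The regularization constant, the gradient-descent fallback, and the bare "did the loss decrease" backtracking test are illustrative assumptions, not what `engine.rs` actually does.

```rust
use nalgebra::{DMatrix, DVector};

/// One step of a uniformly regularized Newton's method with backtracking,
/// in the spirit of Ueda and Yamashita. The constants and the simple
/// "did the loss decrease" test are illustrative, not the engine's choices.
fn regularized_newton_step(
    loss: &dyn Fn(&DVector<f64>) -> f64,
    neg_grad: &DVector<f64>, // -grad f at `x`
    hess: &DMatrix<f64>,     // Hessian of f at `x`
    x: &DVector<f64>,
) -> DVector<f64> {
    let n = x.len();
    // Uniform regularization: shift the Hessian by a multiple of the identity
    // scaled by the gradient norm, so the step stays well-defined even where
    // f is nonconvex.
    let mu = 1.0;
    let reg_hess = hess + DMatrix::<f64>::identity(n, n) * (mu * neg_grad.norm());
    // Solve (H + mu*|grad f| I) d = -grad f for the step direction d.
    let dir = reg_hess
        .lu()
        .solve(neg_grad)
        .unwrap_or_else(|| neg_grad.clone()); // fall back to gradient descent
    // Backtracking: halve the step until the loss actually decreases.
    let base_loss = loss(x);
    let mut t = 1.0;
    while t > 1e-12 && loss(&(x + &dir * t)) >= base_loss {
        t *= 0.5;
    }
    x + dir * t
}
```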
#### Other minimization methods
Ge, Chou, and Gao have written about using optimization methods to solve 2d geometric constraint problems [GCG]. Instead of Newton's method, they use the BFGS method, which they say is less sensitive to the initial guess and more likely to arrive at a solution close to the initial guess.
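
The heart of BFGS is a rank-two update of an approximate inverse Hessian, which takes the place of the exact Newton solve. Here's a minimal sketch of that update with the nalgebra crate; it's the textbook formula, not code from Ge, Chou, and Gao's paper.

```rust
use nalgebra::{DMatrix, DVector};

/// The textbook BFGS update of an approximate inverse Hessian, given the
/// step `s = x_new - x_old` and the gradient change `y = grad_new - grad_old`.
fn bfgs_update(h_inv: &DMatrix<f64>, s: &DVector<f64>, y: &DVector<f64>) -> DMatrix<f64> {
    let n = s.len();
    let rho = 1.0 / y.dot(s);
    let id = DMatrix::<f64>::identity(n, n);
    // H+ = (I - rho s y^T) H (I - rho y s^T) + rho s s^T
    let left = &id - (s * y.transpose()) * rho;
    let right = &id - (y * s.transpose()) * rho;
    left * h_inv * right + (s * s.transpose()) * rho
}
```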
Julia's [Optim](https://julianlsolvers.github.io/Optim.jl) package uses an [unconventional variant](https://julianlsolvers.github.io/Optim.jl/v1.12/algo/newton/) of Newton's method whose rationale is sketched in this [issue discussion](https://github.com/JuliaNLSolvers/Optim.jl/issues/153#issuecomment-161268535). It's based on the Cholesky-like factorization implemented in the [PositiveFactorizations](https://github.com/timholy/PositiveFactorizations.jl) package.
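
Here's a minimal sketch of the generic version of that idea: shift the Hessian's diagonal until a Cholesky factorization succeeds, then solve with the factor. PositiveFactorizations takes a different, more careful approach, so this is emphatically not its algorithm.

```rust
use nalgebra::{DMatrix, DVector};

/// Compute a descent direction from a possibly indefinite Hessian by bumping
/// its diagonal until a Cholesky factorization succeeds. This illustrates the
/// general "make the Hessian positive definite, then solve" idea only.
fn positive_definite_direction(hess: &DMatrix<f64>, neg_grad: &DVector<f64>) -> DVector<f64> {
    let n = neg_grad.len();
    let mut tau = 0.0;
    for _ in 0..64 {
        let shifted = hess + DMatrix::<f64>::identity(n, n) * tau;
        if let Some(chol) = shifted.cholesky() {
            return chol.solve(neg_grad);
        }
        // Not positive definite yet: increase the diagonal shift.
        tau = if tau == 0.0 { 1e-8 } else { tau * 10.0 };
    }
    neg_grad.clone() // give up and fall back to the gradient direction
}
```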
- **[GCG]** Jian-Xin Ge, Shang-Ching Chou, and Xiao-Shan Gao. ["Geometric Constraint Satisfaction Using Optimization Methods,"](https://doi.org/10.1016/S0010-4485(99)00074-3) 1999.
### Reconstructing a rigid subassembly
Suppose we can find a set of vectors $\{a_k\}_{k \in K}$ whose Lorentz products are all known. Restricting the Gram matrix to $\mathbb{R}^K$ and projecting its output orthogonally onto $\mathbb{R}^K$ gives a submatrix $G_K \colon \mathbb{R}^K \to \mathbb{R}^K$ whose entries are all known. Suppose in addition that the set $\{a_k\}_{k \in K}$ spans $V$, implying that $G_K$ has rank five.
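
As a concrete illustration, here's a minimal sketch with the nalgebra crate that extracts $G_K$ from the full Gram matrix and checks that it has rank five. The plain index-slice interface is assumed just for the example; it says nothing about how the engine stores assemblies.

```rust
use nalgebra::DMatrix;

/// Restrict a full Gram matrix to the rows and columns indexed by `k_set`,
/// giving the submatrix G_K described above.
fn gram_submatrix(gram: &DMatrix<f64>, k_set: &[usize]) -> DMatrix<f64> {
    gram.select_rows(k_set.iter()).select_columns(k_set.iter())
}

// If the vectors indexed by `k_set` span the five-dimensional space V,
// the submatrix has full rank:
//
//     assert_eq!(gram_submatrix(&gram, &k_set).rank(1e-9), 5);
```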