Aaron Fenyes | 133519cacb | Encapsulate gradient descent code | 2024-07-02 15:02:59 -07:00
The completed Gram matrix from this commit matches the one from commit e7dde58 to six decimal places.
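A check of the kind described in this commit body might look like the following Julia sketch; `gram_new` and `gram_old` are hypothetical names, and agreement "to six decimal places" is read as an entrywise absolute difference below 1e-6.

```julia
# Hypothetical sketch of the comparison described above; `gram_new` and
# `gram_old` are placeholder names, not identifiers from the repository.
gram_new = BigFloat[1 0.5; 0.5 1]
gram_old = gram_new .+ 1e-9

max_diff = maximum(abs.(gram_new - gram_old))
println("largest entrywise difference: ", Float64(max_diff))
println("agree to six decimal places: ", max_diff < 1e-6)
```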
Aaron Fenyes | e7dde5800c | Do gradient descent entirely in BigFloat | 2024-07-02 12:35:12 -07:00
The previous version accidentally returned steps in Float64.
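The precision bug mentioned above is easy to reproduce: if the iterate is allocated with `zeros(n)`, it is a `Vector{Float64}`, and an in-place update rounds every BigFloat step back to machine precision. The sketch below is illustrative, not the repository's descent routine.

```julia
# Illustrative sketch only, not the repository's gradient descent code.
# Minimize f(x) = ||x - target||^2 with a fixed step size.
setprecision(256)   # bits of precision for BigFloat

target = BigFloat[1, 2, 3] / BigFloat(7)
grad(x) = 2 .* (x .- target)

# Pitfall: zeros(3) is a Vector{Float64}, so the in-place update below
# rounds each BigFloat step back down to Float64.
x_lossy = zeros(3)
for _ in 1:100
    x_lossy .-= 0.1 .* grad(x_lossy)
end

# Fix: allocate the iterate in BigFloat so every step stays in BigFloat.
x_big = zeros(BigFloat, 3)
for _ in 1:100
    x_big .-= 0.1 .* grad(x_big)
end

println(eltype(x_lossy))   # Float64
println(eltype(x_big))     # BigFloat
```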
Aaron Fenyes | c933e07312 | Switch to Ganja.js basis ordering | 2024-06-26 11:39:34 -07:00
Aaron Fenyes | 2b6c4f4720 | Avoid naming conflict with identity transformation | 2024-06-26 11:28:47 -07:00
Aaron Fenyes | 4a28a47520 | Update namespace of AbstractAlgebra.Rationals | 2024-06-26 01:06:27 -07:00
Aaron Fenyes | 58a5c38e62 | Try numerical low-rank factorization | 2024-05-30 00:36:03 -07:00
The best technique I've found so far is the homemade gradient descent routine in `descent-test.jl`.
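For context, a minimal gradient-descent low-rank factorization in the spirit of what this commit describes (though not the code in `descent-test.jl`) descends on the squared Frobenius error of L * R' against the target matrix; all names, step sizes, and dimensions below are illustrative.

```julia
# Illustrative sketch of low-rank factorization by gradient descent;
# not the routine in `descent-test.jl`.
using LinearAlgebra

# Descend on the squared Frobenius error ||L * R' - A||^2 in L and R.
function lowrank_descent(A, rank_guess; rate = 0.05, steps = 5000)
    L = rand(size(A, 1), rank_guess)
    R = rand(size(A, 2), rank_guess)
    for _ in 1:steps
        E = L * R' - A        # residual
        gradL = 2 * E * R     # gradient of the loss with respect to L
        gradR = 2 * E' * L    # gradient of the loss with respect to R
        L -= rate * gradL
        R -= rate * gradR
    end
    return L, R
end

# Try to recover a random rank-2 matrix.
A = rand(6, 2) * rand(2, 5)
L, R = lowrank_descent(A, 2)
println("Frobenius error: ", norm(L * R' - A))
```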
Aaron Fenyes | ef33b8ee10 | Correct signature | 2024-03-01 13:26:20 -05:00
Aaron Fenyes | 717e5a6200 | Extend Gram matrix automatically | 2024-02-21 03:00:06 -05:00
The signature of the Minkowski form on the subspace spanned by the Gram matrix should tell us what the big Gram matrix has to look like.
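The reasoning above hinges on reading off the signature of a symmetric form restricted to a subspace. As a hedged illustration (not the repository's code), the signature of a real symmetric Gram matrix can be taken from the signs of its eigenvalues; the `signature` helper and its tolerance are assumptions for this sketch.

```julia
# Illustrative sketch: read the signature (n_plus, n_minus, n_zero) of a
# real symmetric matrix from the signs of its eigenvalues.
using LinearAlgebra

function signature(G::Symmetric; tol = 1e-9)
    vals = eigvals(G)
    n_plus = count(>(tol), vals)
    n_minus = count(<(-tol), vals)
    n_zero = length(vals) - n_plus - n_minus
    return (n_plus, n_minus, n_zero)
end

# Example: a 2x2 Gram matrix with one positive and one negative direction.
G = Symmetric([1.0 0.0; 0.0 -1.0])
println(signature(G))   # (1, 1, 0)
```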
Aaron Fenyes | 16826cf07c | Try out the Gram matrix approach | 2024-02-20 22:35:24 -05:00