Compare commits

...

4 Commits

Author SHA1 Message Date
65cee1ecc2 Clean up the outline view (#19)
Clean up the source code and interface of the outline view. In addition, [fix a bug](commit/6e42681b719d7ec97c4225ca321225979bf87b56) that could cause `Assembly::realize` to react to itself under certain circumstances. Those circumstances arose while this branch was being written, which is what made the bug noticeable.

#### Source code

- Modularize the `Outline` component into smaller components.
- Switch from static iteration to dynamic Sycamore lists. This reduces the amount of re-rendering that happens when an element or constraint changes. It also allows constraint details to stay open or closed during constraint updates, rather than resetting to closed.
- Make `Element::index` private, as discussed [here](pulls/15#issuecomment-1816).

#### Interface

- Make constraints editable, updating the assembly realization on input. Flag constraints where the Lorentz product value doesn't parse (see the sketch after this list).
- Round element vector coordinates to prevent the displayed strings from overlapping.
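
Here's a minimal sketch of the parse-and-flag behavior. The signal names match the `Constraint` struct in `assembly.rs`, but the handler itself is illustrative rather than the actual outline code:

```
// hypothetical input handler, not the actual outline code: accept the text
// field's contents, update the Lorentz product signal if the text parses, and
// flag the constraint as invalid otherwise
let on_lorentz_prod_input = move |text: String| {
    match text.parse::<f64>() {
        Ok(value) => {
            lorentz_prod.set(value);
            lorentz_prod_valid.set(true);
        },
        Err(_) => lorentz_prod_valid.set(false)
    }
};
```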

Note that issue #20 was created by this PR, but it will be addressed shortly.

Co-authored-by: Aaron Fenyes <aaron.fenyes@fareycircles.ooo>
Reviewed-on: #19
Co-authored-by: Vectornaut <vectornaut@nobody@nowhere.net>
Co-committed-by: Vectornaut <vectornaut@nobody@nowhere.net>
2024-11-15 03:32:47 +00:00
707618cdd3 Integrate engine into application prototype (#15)
Port the engine prototype to Rust, integrate it into the application prototype, and use it to enforce the constraints.

### Features

To see the engine in action:

1. Add a constraint by shift-clicking to select two spheres in the outline view and then hitting the 🔗 button
2. Click a summary arrow to see the outline item for the new constraint
3. Set the constraint's Lorentz product by entering a value in the text field at the right end of the outline item
   * *The display should update as soon as you press* Enter *or focus away from the text field*

The checkbox at the left end of a constraint outline item controls whether the constraint is active. Activating a constraint triggers a solution update. (Deactivating a constraint doesn't, since the remaining active constraints are still satisfied.)
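
For reference, the effect that implements this behavior appears in `app-proto/src/add_remove.rs`, included in the diff below; trimmed of its debug logging, it reads:

```
// update the realization when the constraint becomes active and valid, or is
// edited while active and valid
create_effect(move || {
    lorentz_prod.track();
    if active.get() && lorentz_prod_valid.get() {
        state.assembly.realize();
    }
});
```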

### Precision

The Julia prototype of the engine uses a generic scalar type, so you can pass in any type the linear algebra functions are implemented for. The examples use the [adjustable-precision](https://docs.julialang.org/en/v1/base/numbers/#Base.MPFR.setprecision) `BigFloat` type.

In the Rust port of the engine, the scalar type is currently fixed at `f64`. Switching to generic scalars shouldn't be too hard, but I haven't looked into [which other types](https://www.nalgebra.org/docs/user_guide/generic_programming) the linear algebra functions are implemented for.
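
As a rough sketch only (not part of the current engine), a scalar-generic helper might be written against nalgebra's `RealField` trait like this; `gram_matrix` is an illustrative name, and whether the decompositions the engine needs (Cholesky, symmetric eigenvalues) are available for a given scalar type would still have to be checked:

```
use nalgebra::{DMatrix, RealField};

// hypothetical sketch: compute config^T * Q * config for any scalar type that
// implements RealField, rather than for f64 only
fn gram_matrix<T: RealField>(q: &DMatrix<T>, config: &DMatrix<T>) -> DMatrix<T> {
    config.transpose() * q * config
}
```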

### Testing

To confirm quantitatively that the Rust port of the engine is working, you can go to the `app-proto` folder and:

* Run some automated tests by calling `cargo test`.
* Inspect the optimization process in a few examples by calling the `run-examples` script. The first example that prints is the same as the Irisawa hexlet example from the engine prototype. If you go into `engine-proto/gram-test`, launch Julia, and then

  ```
  include("irisawa-hexlet.jl")
  for (step, scaled_loss) in enumerate(history_alt.scaled_loss)
    println(rpad(step-1, 4), " | ", scaled_loss)
  end
  ```

  you should see that it prints basically the same loss history until the last few steps, when the lower default precision of the Rust engine really starts to show.

### A small engine revision

The Rust port of the engine improves on the Julia prototype in one part of the constraint-solving routine: projecting the Hessian onto the subspace where the frozen entries stay constant. The Julia prototype does this by removing the rows and columns of the Hessian that correspond to the frozen entries, finding the Newton step from the resulting "compressed" Hessian, and then adding zero entries to the Newton step in the appropriate places. The Rust port instead replaces each frozen row and column with its corresponding standard unit vector, avoiding the finicky compressing and decompressing steps.
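
Concretely, here is that projection step as it appears in `realize_gram` (in `app-proto/src/engine.rs`, below): each frozen coordinate's gradient entry is zeroed, its row and column of the Hessian are zeroed, and a 1 is placed on the diagonal, so the Newton step is exactly zero in the frozen coordinates.

```
// project the negative gradient and negative Hessian onto the
// orthogonal complement of the frozen subspace
let zero_col = DVector::zeros(total_dim);
let zero_row = zero_col.transpose();
for &k in &frozen_stacked {
    neg_grad_stacked[k] = 0.0;
    hess.set_row(k, &zero_row);
    hess.set_column(k, &zero_col);
    hess[(k, k)] = 1.0;
}
```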

To confirm that this version of the constraint-solving routine works the same as the original, I implemented it in Julia as `realize_gram_alt_proj`. The solutions we get from this routine match the ones we get from the original `realize_gram` to very high precision, and in the simplest examples (`sphere-in-tetrahedron.jl` and `tetrahedron-radius-ratio.jl`), the descent paths also match to very high precision. In a more complicated example (`irisawa-hexlet.jl`), the descent paths diverge about a quarter of the way into the search, even though they end up in the same place.

Co-authored-by: Aaron Fenyes <aaron.fenyes@fareycircles.ooo>
Reviewed-on: #15
Co-authored-by: Vectornaut <vectornaut@nobody@nowhere.net>
Co-committed-by: Vectornaut <vectornaut@nobody@nowhere.net>
2024-11-12 00:46:16 +00:00
86fa682b31 feat: Application prototype (#14)
Creates a prototype user interface for dyna3 in the `app-proto` folder. The interface is dynamically constructed using [Sycamore](https://sycamore.dev).

The prototype includes:

  * An application state model (the `AppState` type), sketched after this list
    * A constraint problem model (the `Assembly` type), used in the application state
  * Two views
    * A 3D rendering of the assembly (the `Display` component)
    * A list of elements and constraints (the `Outline` component)
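
For orientation, here is an approximate sketch of the application state, inferred from how it's used in the source files below; the exact definition lives in `main.rs`, which is truncated in this view:

```
// approximate shape only, for orientation; see main.rs for the real definition
#[derive(Clone)]
struct AppState {
    assembly: Assembly,                       // the constraint problem model
    selection: Signal<FxHashSet<ElementKey>>  // elements selected in the views
}
```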

The following features confirm that the views can reflect and send input to the model:

  * You can select elements by clicking and shift-clicking them in the outline. The selected elements are highlighted in the display.
  * You can add elements using a button above the outline. The new elements appear in the display.

Co-authored-by: Aaron Fenyes <aaron.fenyes@fareycircles.ooo>
Reviewed-on: #14
Co-authored-by: Vectornaut <vectornaut@nobody@nowhere.net>
Co-committed-by: Vectornaut <vectornaut@nobody@nowhere.net>
2024-10-21 23:38:27 +00:00
b92be312e8 Engine prototype (#13)
This PR adds code for a Julia-language prototype of a configuration solver, in the `engine-proto` folder. It uses Julia version 1.10.0.

### Approaches
During development of this PR, we tried two broad approaches to the constraint geometry problem. Each one suggested various solution techniques. The Gram matrix approach, with the low-rank factorization technique, seems the most promising.

- **Algebraic** *(in the `alg-test` subfolder).* Write the constraints as polynomials in the inversive coordinates of the elements, and use computational algebraic geometry techniques to solve the resulting system. We tried the following techniques.
  - **Gröbner bases** *(`Engine.Algebraic.jl`).* Symbolic. Find a Gröbner basis for the ideal generated by the constraint equations. Information about the solution variety, like its codimension, is then relatively easy to extract.
  - **Homotopy continuation** *(`Engine.Numerical.jl`).* Numerical. Cut the solution set along a random hyperplane to get a generic zero-dimensional slice, and then use a fancy homotopy technique to approximate the points in that slice.

  A few notes about our experiences can be found on the [engine prototype](wiki/Engine-prototype) wiki page.
- **Gram matrix** *(in the `gram-test` subfolder).* A construction is described completely, up to conformal transformations, by the Gram matrix of the vectors representing its elements. Express the constraints as fixed entries of the Gram matrix, and use numerical linear algebra techniques to find a list of vectors whose Gram matrix fits the bill. We tried the following techniques.
  - **LDL decomposition** *(`gram-test.sage`, `gram-test.jl`, `overlap-test.jl`).* Find a cluster of up to five elements whose Gram matrix is completely filled in by the constraints. Use LDL decomposition to find a list of vectors with that Gram matrix. This technique can be made algebraic, as seen in `overlap-test.jl`.
  - **Low-rank factorization** *(source files listed in the findings section).* Write down a quadratic loss function that says how far a set of vectors is from meeting the Gram matrix constraints. Use a smooth optimization technique like Newton's method or gradient descent to find a zero of the loss function (see the sketch after this list). In addition to the polished prototype described in the findings section, we have an early prototype using an off-the-shelf factorization package (`low-rank-test.jl`) and a visualization of the loss function landscape near global minima (`basin-shapes.jl`).

  The [Gram matrix parameterization](wiki/Gram-matrix-parameterization) wiki page contains detailed notes on this approach.
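
For reference, the Rust port included elsewhere in this comparison (`app-proto/src/engine.rs`) computes exactly this loss: it projects the error between the target Gram matrix and the Gram matrix of the current configuration onto the constrained entries, then takes the squared Frobenius norm.

```
// from SearchState::from_config in the Rust port: `gram` is the partial target
// Gram matrix, Q is the Lorentz form, and `config` holds the vectors as columns
let err_proj = gram.sub_proj(&(config.tr_mul(&*Q) * &config));
let loss = err_proj.norm_squared();
```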

### Findings

With the algebraic approach, we hit a performance wall pretty quickly as our constructions grew. It was often hard to find real solutions of the polynomial system, since the techniques we use work most naturally in the complex world.

With the Gram matrix approach, on the other hand, we could solve interesting problems in acceptably short times using the low-rank factorization technique. We put the optimization routine in its own module (`Engine.jl`) and used it to solve five example problems:
- `overlapping-pyramids.jl`
- `circles-in-triangle.jl`
- `sphere-in-tetrahedron.jl`
- `tetrahedron-radius-ratio.jl`
- `irisawa-hexlet.jl`

We plan to use low-rank factorization of the Gram matrix in our first app prototype.

### Visualizations

We used the visualizer in the `ganja-test` folder to visually check our low-rank factorization results. The visualizer runs [Ganja.js](https://enkimute.github.io/ganja.js/) in an Electron app, made with [Blink](https://github.com/JuliaGizmos/Blink.jl). Although Ganja.js makes beautiful pictures under most circumstances, we found two obstacles to using it in production.

- It seems to have precision problems with low-curvature spheres.
- We couldn't figure out how to customize its clipping and transparency settings, and the default settings often obscure construction details.

Co-authored-by: Aaron Fenyes <aaron.fenyes@fareycircles.ooo>
Co-authored-by: Glen Whitney <glen@studioinfinity.org>
Reviewed-on: #13
Co-authored-by: Vectornaut <vectornaut@nobody@nowhere.net>
Co-committed-by: Vectornaut <vectornaut@nobody@nowhere.net>
2024-10-21 03:18:47 +00:00
33 changed files with 6335 additions and 21 deletions

4
app-proto/.gitignore vendored Normal file
View File

@ -0,0 +1,4 @@
target
dist
profiling
Cargo.lock

44
app-proto/Cargo.toml Normal file
View File

@ -0,0 +1,44 @@
[package]
name = "dyna3"
version = "0.1.0"
authors = ["Aaron Fenyes", "Glen Whitney"]
edition = "2021"
[features]
default = ["console_error_panic_hook"]
[dependencies]
itertools = "0.13.0"
js-sys = "0.3.70"
lazy_static = "1.5.0"
nalgebra = "0.33.0"
rustc-hash = "2.0.0"
slab = "0.4.9"
sycamore = "0.9.0-beta.3"
# The `console_error_panic_hook` crate provides better debugging of panics by
# logging them with `console.error`. This is great for development, but requires
# all the `std::fmt` and `std::panicking` infrastructure, so isn't great for
# code size when deploying.
console_error_panic_hook = { version = "0.1.7", optional = true }
[dependencies.web-sys]
version = "0.3.69"
features = [
'HtmlCanvasElement',
'HtmlInputElement',
'Performance',
'WebGl2RenderingContext',
'WebGlBuffer',
'WebGlProgram',
'WebGlShader',
'WebGlUniformLocation',
'WebGlVertexArrayObject'
]
[dev-dependencies]
wasm-bindgen-test = "0.3.34"
[profile.release]
opt-level = "s" # optimize for small code size
debug = true # include debug symbols

11
app-proto/index.html Normal file
View File

@ -0,0 +1,11 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8"/>
<title>dyna3</title>
<link data-trunk rel="css" href="main.css"/>
<link href="https://fonts.bunny.net/css?family=fira-sans:ital,wght@0,400;1,400&display=swap" rel="stylesheet">
<link href="https://fonts.bunny.net/css?family=noto-emoji:wght@400&text=%f0%9f%94%97%e2%9a%a0&display=swap" rel="stylesheet">
</head>
<body></body>
</html>

175
app-proto/main.css Normal file
View File

@ -0,0 +1,175 @@
:root {
--text: #fcfcfc; /* almost white */
--text-bright: white;
--text-invalid: #f58fc2; /* bright pink */
--border: #555; /* light gray */
--border-focus: #aaa; /* bright gray */
--border-invalid: #70495c; /* dusky pink */
--selection-highlight: #444; /* medium gray */
--page-background: #222; /* dark gray */
--display-background: #020202; /* almost black */
}
body {
margin: 0px;
color: var(--text);
background-color: var(--page-background);
font-family: 'Fira Sans', sans-serif;
}
/* sidebar */
#sidebar {
display: flex;
flex-direction: column;
float: left;
width: 450px;
height: 100vh;
margin: 0px;
padding: 0px;
border-width: 0px 1px 0px 0px;
border-style: solid;
border-color: var(--border);
}
/* add-remove */
#add-remove {
display: flex;
gap: 8px;
margin: 8px;
}
#add-remove > button {
width: 32px;
height: 32px;
font-size: large;
}
/* KLUDGE */
/*
for convenience, we're using emoji as temporary icons for some buttons. these
buttons need to be displayed in an emoji font
*/
#add-remove > button.emoji {
font-family: 'Noto Emoji', sans-serif;
}
/* outline */
#outline {
flex-grow: 1;
margin: 0px;
padding: 0px;
overflow-y: scroll;
}
li {
user-select: none;
}
summary {
display: flex;
}
summary.selected {
color: var(--text-bright);
background-color: var(--selection-highlight);
}
summary > div, .constraint {
padding-top: 4px;
padding-bottom: 4px;
}
.element, .constraint {
display: flex;
flex-grow: 1;
padding-left: 8px;
padding-right: 8px;
}
.element-switch {
width: 18px;
padding-left: 2px;
text-align: center;
}
details:has(li) .element-switch::after {
content: '▸';
}
details[open]:has(li) .element-switch::after {
content: '▾';
}
.element-label {
flex-grow: 1;
}
.constraint-label {
flex-grow: 1;
}
.element-representation {
display: flex;
}
.element-representation > div {
padding: 2px 0px 0px 0px;
font-size: 10pt;
font-variant-numeric: tabular-nums;
text-align: right;
width: 56px;
}
.constraint {
font-style: italic;
}
.constraint.invalid {
color: var(--text-invalid);
}
.constraint > input[type=checkbox] {
margin: 0px 8px 0px 0px;
}
.constraint > input[type=text] {
color: inherit;
background-color: inherit;
border: 1px solid var(--border);
border-radius: 2px;
}
.constraint.invalid > input[type=text] {
border-color: var(--border-invalid);
}
.status {
width: 20px;
padding-left: 4px;
text-align: center;
font-family: 'Noto Emoji';
font-style: normal;
}
.invalid > .status::after, details:has(.invalid):not([open]) .status::after {
content: '⚠';
color: var(--text-invalid);
}
/* display */
canvas {
float: left;
margin-left: 20px;
margin-top: 20px;
background-color: var(--display-background);
border: 1px solid var(--border);
border-radius: 16px;
}
canvas:focus {
border-color: var(--border-focus);
}

8
app-proto/run-examples Executable file
View File

@ -0,0 +1,8 @@
# based on "Enabling print statements in Cargo tests", by Jon Almeida
#
# https://jonalmeida.com/posts/2015/01/23/print-cargo/
#
cargo test -- --nocapture engine::tests::irisawa_hexlet_test
cargo test -- --nocapture engine::tests::three_spheres_example
cargo test -- --nocapture engine::tests::point_on_sphere_example

240
app-proto/src/add_remove.rs Normal file
View File

@ -0,0 +1,240 @@
use sycamore::prelude::*;
use web_sys::{console, wasm_bindgen::JsValue};
use crate::{engine, AppState, assembly::{Assembly, Constraint, Element}};
/* DEBUG */
// load an example assembly for testing. this code will be removed once we've
// built a more formal test assembly system
fn load_gen_assemb(assembly: &Assembly) {
let _ = assembly.try_insert_element(
Element::new(
String::from("gemini_a"),
String::from("Castor"),
[1.00_f32, 0.25_f32, 0.00_f32],
engine::sphere(0.5, 0.5, 0.0, 1.0)
)
);
let _ = assembly.try_insert_element(
Element::new(
String::from("gemini_b"),
String::from("Pollux"),
[0.00_f32, 0.25_f32, 1.00_f32],
engine::sphere(-0.5, -0.5, 0.0, 1.0)
)
);
let _ = assembly.try_insert_element(
Element::new(
String::from("ursa_major"),
String::from("Ursa major"),
[0.25_f32, 0.00_f32, 1.00_f32],
engine::sphere(-0.5, 0.5, 0.0, 0.75)
)
);
let _ = assembly.try_insert_element(
Element::new(
String::from("ursa_minor"),
String::from("Ursa minor"),
[0.25_f32, 1.00_f32, 0.00_f32],
engine::sphere(0.5, -0.5, 0.0, 0.5)
)
);
let _ = assembly.try_insert_element(
Element::new(
String::from("moon_deimos"),
String::from("Deimos"),
[0.75_f32, 0.75_f32, 0.00_f32],
engine::sphere(0.0, 0.15, 1.0, 0.25)
)
);
let _ = assembly.try_insert_element(
Element::new(
String::from("moon_phobos"),
String::from("Phobos"),
[0.00_f32, 0.75_f32, 0.50_f32],
engine::sphere(0.0, -0.15, -1.0, 0.25)
)
);
}
/* DEBUG */
// load an example assembly for testing. this code will be removed once we've
// built a more formal test assembly system
fn load_low_curv_assemb(assembly: &Assembly) {
let a = 0.75_f64.sqrt();
let _ = assembly.try_insert_element(
Element::new(
"central".to_string(),
"Central".to_string(),
[0.75_f32, 0.75_f32, 0.75_f32],
engine::sphere(0.0, 0.0, 0.0, 1.0)
)
);
let _ = assembly.try_insert_element(
Element::new(
"assemb_plane".to_string(),
"Assembly plane".to_string(),
[0.75_f32, 0.75_f32, 0.75_f32],
engine::sphere_with_offset(0.0, 0.0, 1.0, 0.0, 0.0)
)
);
let _ = assembly.try_insert_element(
Element::new(
"side1".to_string(),
"Side 1".to_string(),
[1.00_f32, 0.00_f32, 0.25_f32],
engine::sphere_with_offset(1.0, 0.0, 0.0, 1.0, 0.0)
)
);
let _ = assembly.try_insert_element(
Element::new(
"side2".to_string(),
"Side 2".to_string(),
[0.25_f32, 1.00_f32, 0.00_f32],
engine::sphere_with_offset(-0.5, a, 0.0, 1.0, 0.0)
)
);
let _ = assembly.try_insert_element(
Element::new(
"side3".to_string(),
"Side 3".to_string(),
[0.00_f32, 0.25_f32, 1.00_f32],
engine::sphere_with_offset(-0.5, -a, 0.0, 1.0, 0.0)
)
);
let _ = assembly.try_insert_element(
Element::new(
"corner1".to_string(),
"Corner 1".to_string(),
[0.75_f32, 0.75_f32, 0.75_f32],
engine::sphere(-4.0/3.0, 0.0, 0.0, 1.0/3.0)
)
);
let _ = assembly.try_insert_element(
Element::new(
"corner2".to_string(),
"Corner 2".to_string(),
[0.75_f32, 0.75_f32, 0.75_f32],
engine::sphere(2.0/3.0, -4.0/3.0 * a, 0.0, 1.0/3.0)
)
);
let _ = assembly.try_insert_element(
Element::new(
String::from("corner3"),
String::from("Corner 3"),
[0.75_f32, 0.75_f32, 0.75_f32],
engine::sphere(2.0/3.0, 4.0/3.0 * a, 0.0, 1.0/3.0)
)
);
}
#[component]
pub fn AddRemove() -> View {
/* DEBUG */
let assembly_name = create_signal("general".to_string());
create_effect(move || {
// get name of chosen assembly
let name = assembly_name.get_clone();
console::log_1(
&JsValue::from(format!("Showing assembly \"{}\"", name.clone()))
);
batch(|| {
let state = use_context::<AppState>();
let assembly = &state.assembly;
// clear state
assembly.elements.update(|elts| elts.clear());
assembly.elements_by_id.update(|elts_by_id| elts_by_id.clear());
state.selection.update(|sel| sel.clear());
// load assembly
match name.as_str() {
"general" => load_gen_assemb(assembly),
"low-curv" => load_low_curv_assemb(assembly),
_ => ()
};
});
});
view! {
div(id="add-remove") {
button(
on:click=|_| {
let state = use_context::<AppState>();
state.assembly.insert_new_element();
/* DEBUG */
// print updated list of elements by identifier
console::log_1(&JsValue::from("elements by identifier:"));
for (id, key) in state.assembly.elements_by_id.get_clone().iter() {
console::log_3(
&JsValue::from(" "),
&JsValue::from(id),
&JsValue::from(*key)
);
}
}
) { "+" }
button(
class="emoji", /* KLUDGE */ // for convenience, we're using an emoji as a temporary icon for this button
disabled={
let state = use_context::<AppState>();
state.selection.with(|sel| sel.len() != 2)
},
on:click=|_| {
let state = use_context::<AppState>();
let subjects = state.selection.with(
|sel| {
let subject_vec: Vec<_> = sel.into_iter().collect();
(subject_vec[0].clone(), subject_vec[1].clone())
}
);
let lorentz_prod = create_signal(0.0);
let lorentz_prod_valid = create_signal(false);
let active = create_signal(true);
state.assembly.insert_constraint(Constraint {
subjects: subjects,
lorentz_prod: lorentz_prod,
lorentz_prod_text: create_signal(String::new()),
lorentz_prod_valid: lorentz_prod_valid,
active: active,
});
state.selection.update(|sel| sel.clear());
/* DEBUG */
// print updated constraint list
console::log_1(&JsValue::from("Constraints:"));
state.assembly.constraints.with(|csts| {
for (_, cst) in csts.into_iter() {
console::log_5(
&JsValue::from(" "),
&JsValue::from(cst.subjects.0),
&JsValue::from(cst.subjects.1),
&JsValue::from(":"),
&JsValue::from(cst.lorentz_prod.get_untracked())
);
}
});
// update the realization when the constraint becomes active
// and valid, or is edited while active and valid
create_effect(move || {
console::log_1(&JsValue::from(
format!("Constraint ({}, {}) updated", subjects.0, subjects.1)
));
lorentz_prod.track();
if active.get() && lorentz_prod_valid.get() {
state.assembly.realize();
}
});
}
) { "🔗" }
select(bind:value=assembly_name) { /* DEBUG */ // example assembly chooser
option(value="general") { "General" }
option(value="low-curv") { "Low-curvature" }
option(value="empty") { "Empty" }
}
}
}
}

210
app-proto/src/assembly.rs Normal file
View File

@ -0,0 +1,210 @@
use nalgebra::{DMatrix, DVector};
use rustc_hash::FxHashMap;
use slab::Slab;
use std::collections::BTreeSet;
use sycamore::prelude::*;
use web_sys::{console, wasm_bindgen::JsValue}; /* DEBUG */
use crate::engine::{realize_gram, PartialMatrix};
// the types of the keys we use to access an assembly's elements and constraints
pub type ElementKey = usize;
pub type ConstraintKey = usize;
pub type ElementColor = [f32; 3];
#[derive(Clone, PartialEq)]
pub struct Element {
pub id: String,
pub label: String,
pub color: ElementColor,
pub representation: Signal<DVector<f64>>,
pub constraints: Signal<BTreeSet<ConstraintKey>>,
// the configuration matrix column index that was assigned to this element
// last time the assembly was realized
column_index: usize
}
impl Element {
pub fn new(
id: String,
label: String,
color: ElementColor,
representation: DVector<f64>
) -> Element {
Element {
id: id,
label: label,
color: color,
representation: create_signal(representation),
constraints: create_signal(BTreeSet::default()),
column_index: 0
}
}
}
#[derive(Clone)]
pub struct Constraint {
pub subjects: (ElementKey, ElementKey),
pub lorentz_prod: Signal<f64>,
pub lorentz_prod_text: Signal<String>,
pub lorentz_prod_valid: Signal<bool>,
pub active: Signal<bool>
}
// a complete, view-independent description of an assembly
#[derive(Clone)]
pub struct Assembly {
// elements and constraints
pub elements: Signal<Slab<Element>>,
pub constraints: Signal<Slab<Constraint>>,
// indexing
pub elements_by_id: Signal<FxHashMap<String, ElementKey>>
}
impl Assembly {
pub fn new() -> Assembly {
Assembly {
elements: create_signal(Slab::new()),
constraints: create_signal(Slab::new()),
elements_by_id: create_signal(FxHashMap::default())
}
}
// --- inserting elements and constraints ---
// insert an element into the assembly without checking whether we already
// have an element with the same identifier. any element that does have the
// same identifier will get kicked out of the `elements_by_id` index
fn insert_element_unchecked(&self, elt: Element) {
let id = elt.id.clone();
let key = self.elements.update(|elts| elts.insert(elt));
self.elements_by_id.update(|elts_by_id| elts_by_id.insert(id, key));
}
pub fn try_insert_element(&self, elt: Element) -> bool {
let can_insert = self.elements_by_id.with_untracked(
|elts_by_id| !elts_by_id.contains_key(&elt.id)
);
if can_insert {
self.insert_element_unchecked(elt);
}
can_insert
}
pub fn insert_new_element(&self) {
// find the next unused identifier in the default sequence
let mut id_num = 1;
let mut id = format!("sphere{}", id_num);
while self.elements_by_id.with_untracked(
|elts_by_id| elts_by_id.contains_key(&id)
) {
id_num += 1;
id = format!("sphere{}", id_num);
}
// create and insert a new element
self.insert_element_unchecked(
Element::new(
id,
format!("Sphere {}", id_num),
[0.75_f32, 0.75_f32, 0.75_f32],
DVector::<f64>::from_column_slice(&[0.0, 0.0, 0.0, 0.5, -0.5])
)
);
}
pub fn insert_constraint(&self, constraint: Constraint) {
let subjects = constraint.subjects;
let key = self.constraints.update(|csts| csts.insert(constraint));
let subject_constraints = self.elements.with(
|elts| (elts[subjects.0].constraints, elts[subjects.1].constraints)
);
subject_constraints.0.update(|csts| csts.insert(key));
subject_constraints.1.update(|csts| csts.insert(key));
}
// --- realization ---
pub fn realize(&self) {
// index the elements
self.elements.update_silent(|elts| {
for (index, (_, elt)) in elts.into_iter().enumerate() {
elt.column_index = index;
}
});
// set up the Gram matrix and the initial configuration matrix
let (gram, guess) = self.elements.with_untracked(|elts| {
// set up the off-diagonal part of the Gram matrix
let mut gram_to_be = PartialMatrix::new();
self.constraints.with_untracked(|csts| {
for (_, cst) in csts {
if cst.active.get_untracked() && cst.lorentz_prod_valid.get_untracked() {
let subjects = cst.subjects;
let row = elts[subjects.0].column_index;
let col = elts[subjects.1].column_index;
gram_to_be.push_sym(row, col, cst.lorentz_prod.get_untracked());
}
}
});
// set up the initial configuration matrix and the diagonal of the
// Gram matrix
let mut guess_to_be = DMatrix::<f64>::zeros(5, elts.len());
for (_, elt) in elts {
let index = elt.column_index;
gram_to_be.push_sym(index, index, 1.0);
guess_to_be.set_column(index, &elt.representation.get_clone_untracked());
}
(gram_to_be, guess_to_be)
});
/* DEBUG */
// log the Gram matrix
console::log_1(&JsValue::from("Gram matrix:"));
gram.log_to_console();
/* DEBUG */
// log the initial configuration matrix
console::log_1(&JsValue::from("Old configuration:"));
for j in 0..guess.nrows() {
let mut row_str = String::new();
for k in 0..guess.ncols() {
row_str.push_str(format!(" {:>8.3}", guess[(j, k)]).as_str());
}
console::log_1(&JsValue::from(row_str));
}
// look for a configuration with the given Gram matrix
let (config, success, history) = realize_gram(
&gram, guess, &[],
1.0e-12, 0.5, 0.9, 1.1, 200, 110
);
/* DEBUG */
// report the outcome of the search
console::log_1(&JsValue::from(
if success {
"Target accuracy achieved!"
} else {
"Failed to reach target accuracy"
}
));
console::log_2(&JsValue::from("Steps:"), &JsValue::from(history.scaled_loss.len() - 1));
console::log_2(&JsValue::from("Loss:"), &JsValue::from(*history.scaled_loss.last().unwrap()));
if success {
// read out the solution
for (_, elt) in self.elements.get_clone_untracked() {
elt.representation.update(
|rep| rep.set_column(0, &config.column(elt.column_index))
);
}
}
}
}

464
app-proto/src/display.rs Normal file
View File

@ -0,0 +1,464 @@
use core::array;
use nalgebra::{DMatrix, Rotation3, Vector3};
use sycamore::{prelude::*, motion::create_raf};
use web_sys::{
console,
window,
KeyboardEvent,
WebGl2RenderingContext,
WebGlProgram,
WebGlShader,
WebGlUniformLocation,
wasm_bindgen::{JsCast, JsValue}
};
use crate::AppState;
fn compile_shader(
context: &WebGl2RenderingContext,
shader_type: u32,
source: &str,
) -> WebGlShader {
let shader = context.create_shader(shader_type).unwrap();
context.shader_source(&shader, source);
context.compile_shader(&shader);
shader
}
fn get_uniform_array_locations<const N: usize>(
context: &WebGl2RenderingContext,
program: &WebGlProgram,
var_name: &str,
member_name_opt: Option<&str>
) -> [Option<WebGlUniformLocation>; N] {
array::from_fn(|n| {
let name = match member_name_opt {
Some(member_name) => format!("{var_name}[{n}].{member_name}"),
None => format!("{var_name}[{n}]")
};
context.get_uniform_location(&program, name.as_str())
})
}
// load the given data into the vertex input of the given name
fn bind_vertex_attrib(
context: &WebGl2RenderingContext,
index: u32,
size: i32,
data: &[f32]
) {
// create a data buffer and bind it to ARRAY_BUFFER
let buffer = context.create_buffer().unwrap();
context.bind_buffer(WebGl2RenderingContext::ARRAY_BUFFER, Some(&buffer));
// load the given data into the buffer. the function `Float32Array::view`
// creates a raw view into our module's `WebAssembly.Memory` buffer.
// allocating more memory will change the buffer, invalidating the view.
// that means we have to make sure we don't allocate any memory until the
// view is dropped
unsafe {
context.buffer_data_with_array_buffer_view(
WebGl2RenderingContext::ARRAY_BUFFER,
&js_sys::Float32Array::view(&data),
WebGl2RenderingContext::STATIC_DRAW,
);
}
// allow the target attribute to be used
context.enable_vertex_attrib_array(index);
// take whatever's bound to ARRAY_BUFFER---here, the data buffer created
// above---and bind it to the target attribute
//
// https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/vertexAttribPointer
//
context.vertex_attrib_pointer_with_i32(
index,
size,
WebGl2RenderingContext::FLOAT,
false, // don't normalize
0, // zero stride
0, // zero offset
);
}
#[component]
pub fn Display() -> View {
let state = use_context::<AppState>();
// canvas
let display = create_node_ref();
// navigation
let pitch_up = create_signal(0.0);
let pitch_down = create_signal(0.0);
let yaw_right = create_signal(0.0);
let yaw_left = create_signal(0.0);
let roll_ccw = create_signal(0.0);
let roll_cw = create_signal(0.0);
let zoom_in = create_signal(0.0);
let zoom_out = create_signal(0.0);
let turntable = create_signal(false); /* BENCHMARKING */
// change listener
let scene_changed = create_signal(true);
create_effect(move || {
state.assembly.elements.with(|elts| {
for (_, elt) in elts {
elt.representation.track();
}
});
state.selection.track();
scene_changed.set(true);
});
/* INSTRUMENTS */
const SAMPLE_PERIOD: i32 = 60;
let mut last_sample_time = 0.0;
let mut frames_since_last_sample = 0;
let mean_frame_interval = create_signal(0.0);
on_mount(move || {
// timing
let mut last_time = 0.0;
// viewpoint
const ROT_SPEED: f64 = 0.4; // in radians per second
const ZOOM_SPEED: f64 = 0.15; // multiplicative rate per second
const TURNTABLE_SPEED: f64 = 0.1; /* BENCHMARKING */
let mut orientation = DMatrix::<f64>::identity(5, 5);
let mut rotation = DMatrix::<f64>::identity(5, 5);
let mut location_z: f64 = 5.0;
// display parameters
const OPACITY: f32 = 0.5; /* SCAFFOLDING */
const HIGHLIGHT: f32 = 0.2; /* SCAFFOLDING */
const LAYER_THRESHOLD: i32 = 0; /* DEBUG */
const DEBUG_MODE: i32 = 0; /* DEBUG */
/* INSTRUMENTS */
let performance = window().unwrap().performance().unwrap();
// get the display canvas
let canvas = display.get().unchecked_into::<web_sys::HtmlCanvasElement>();
let ctx = canvas
.get_context("webgl2")
.unwrap()
.unwrap()
.dyn_into::<WebGl2RenderingContext>()
.unwrap();
// compile and attach the vertex and fragment shaders
let vertex_shader = compile_shader(
&ctx,
WebGl2RenderingContext::VERTEX_SHADER,
include_str!("identity.vert"),
);
let fragment_shader = compile_shader(
&ctx,
WebGl2RenderingContext::FRAGMENT_SHADER,
include_str!("inversive.frag"),
);
let program = ctx.create_program().unwrap();
ctx.attach_shader(&program, &vertex_shader);
ctx.attach_shader(&program, &fragment_shader);
ctx.link_program(&program);
let link_status = ctx
.get_program_parameter(&program, WebGl2RenderingContext::LINK_STATUS)
.as_bool()
.unwrap();
let link_msg = if link_status {
"Linked successfully"
} else {
"Linking failed"
};
console::log_1(&JsValue::from(link_msg));
ctx.use_program(Some(&program));
/* DEBUG */
// print the maximum number of vectors that can be passed as
// uniforms to a fragment shader. the OpenGL ES 3.0 standard
// requires this maximum to be at least 224, as discussed in the
// documentation of the GL_MAX_FRAGMENT_UNIFORM_VECTORS parameter
// here:
//
// https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glGet.xhtml
//
// there are also other size limits. for example, on Aaron's
// machine, the length of a float or genType array seems to be
// capped at 1024 elements
console::log_2(
&ctx.get_parameter(WebGl2RenderingContext::MAX_FRAGMENT_UNIFORM_VECTORS).unwrap(),
&JsValue::from("uniform vectors available")
);
// find indices of vertex attributes and uniforms
const SPHERE_MAX: usize = 200;
let position_index = ctx.get_attrib_location(&program, "position") as u32;
let sphere_cnt_loc = ctx.get_uniform_location(&program, "sphere_cnt");
let sphere_sp_locs = get_uniform_array_locations::<SPHERE_MAX>(
&ctx, &program, "sphere_list", Some("sp")
);
let sphere_lt_locs = get_uniform_array_locations::<SPHERE_MAX>(
&ctx, &program, "sphere_list", Some("lt")
);
let color_locs = get_uniform_array_locations::<SPHERE_MAX>(
&ctx, &program, "color_list", None
);
let highlight_locs = get_uniform_array_locations::<SPHERE_MAX>(
&ctx, &program, "highlight_list", None
);
let resolution_loc = ctx.get_uniform_location(&program, "resolution");
let shortdim_loc = ctx.get_uniform_location(&program, "shortdim");
let opacity_loc = ctx.get_uniform_location(&program, "opacity");
let layer_threshold_loc = ctx.get_uniform_location(&program, "layer_threshold");
let debug_mode_loc = ctx.get_uniform_location(&program, "debug_mode");
// create a vertex array and bind it to the graphics context
let vertex_array = ctx.create_vertex_array().unwrap();
ctx.bind_vertex_array(Some(&vertex_array));
// set the vertex positions
const VERTEX_CNT: usize = 6;
let positions: [f32; 3*VERTEX_CNT] = [
// northwest triangle
-1.0, -1.0, 0.0,
-1.0, 1.0, 0.0,
1.0, 1.0, 0.0,
// southeast triangle
-1.0, -1.0, 0.0,
1.0, 1.0, 0.0,
1.0, -1.0, 0.0
];
bind_vertex_attrib(&ctx, position_index, 3, &positions);
// set up a repainting routine
let (_, start_animation_loop, _) = create_raf(move || {
// get the time step
let time = performance.now();
let time_step = 0.001*(time - last_time);
last_time = time;
// get the navigation state
let pitch_up_val = pitch_up.get();
let pitch_down_val = pitch_down.get();
let yaw_right_val = yaw_right.get();
let yaw_left_val = yaw_left.get();
let roll_ccw_val = roll_ccw.get();
let roll_cw_val = roll_cw.get();
let zoom_in_val = zoom_in.get();
let zoom_out_val = zoom_out.get();
let turntable_val = turntable.get(); /* BENCHMARKING */
// update the assembly's orientation
let ang_vel = {
let pitch = pitch_up_val - pitch_down_val;
let yaw = yaw_right_val - yaw_left_val;
let roll = roll_ccw_val - roll_cw_val;
if pitch != 0.0 || yaw != 0.0 || roll != 0.0 {
ROT_SPEED * Vector3::new(-pitch, yaw, roll).normalize()
} else {
Vector3::zeros()
}
} /* BENCHMARKING */ + if turntable_val {
Vector3::new(0.0, TURNTABLE_SPEED, 0.0)
} else {
Vector3::zeros()
};
let mut rotation_sp = rotation.fixed_view_mut::<3, 3>(0, 0);
rotation_sp.copy_from(
Rotation3::from_scaled_axis(time_step * ang_vel).matrix()
);
orientation = &rotation * &orientation;
// update the assembly's location
let zoom = zoom_out_val - zoom_in_val;
location_z *= (time_step * ZOOM_SPEED * zoom).exp();
if scene_changed.get() {
/* INSTRUMENTS */
// measure mean frame interval
frames_since_last_sample += 1;
if frames_since_last_sample >= SAMPLE_PERIOD {
mean_frame_interval.set((time - last_sample_time) / (SAMPLE_PERIOD as f64));
last_sample_time = time;
frames_since_last_sample = 0;
}
// find the map from assembly space to world space
let location = {
let u = -location_z;
DMatrix::from_column_slice(5, 5, &[
1.0, 0.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0, u,
0.0, 0.0, 2.0*u, 1.0, u*u,
0.0, 0.0, 0.0, 0.0, 1.0
])
};
let assembly_to_world = &location * &orientation;
// get the assembly
let (
elt_cnt,
reps_world,
colors,
highlights
) = state.assembly.elements.with(|elts| {
(
// number of elements
elts.len() as i32,
// representation vectors in world coordinates
elts.iter().map(
|(_, elt)| elt.representation.with(|rep| &assembly_to_world * rep)
).collect::<Vec<_>>(),
// colors
elts.iter().map(|(key, elt)| {
if state.selection.with(|sel| sel.contains(&key)) {
elt.color.map(|ch| 0.2 + 0.8*ch)
} else {
elt.color
}
}).collect::<Vec<_>>(),
// highlight levels
elts.iter().map(|(key, _)| {
if state.selection.with(|sel| sel.contains(&key)) {
1.0_f32
} else {
HIGHLIGHT
}
}).collect::<Vec<_>>()
)
});
// set the resolution
let width = canvas.width() as f32;
let height = canvas.height() as f32;
ctx.uniform2f(resolution_loc.as_ref(), width, height);
ctx.uniform1f(shortdim_loc.as_ref(), width.min(height));
// pass the assembly
ctx.uniform1i(sphere_cnt_loc.as_ref(), elt_cnt);
for n in 0..reps_world.len() {
let v = &reps_world[n];
ctx.uniform3f(
sphere_sp_locs[n].as_ref(),
v[0] as f32, v[1] as f32, v[2] as f32
);
ctx.uniform2f(
sphere_lt_locs[n].as_ref(),
v[3] as f32, v[4] as f32
);
ctx.uniform3fv_with_f32_array(
color_locs[n].as_ref(),
&colors[n]
);
ctx.uniform1f(
highlight_locs[n].as_ref(),
highlights[n]
);
}
// pass the display parameters
ctx.uniform1f(opacity_loc.as_ref(), OPACITY);
ctx.uniform1i(layer_threshold_loc.as_ref(), LAYER_THRESHOLD);
ctx.uniform1i(debug_mode_loc.as_ref(), DEBUG_MODE);
// draw the scene
ctx.draw_arrays(WebGl2RenderingContext::TRIANGLES, 0, VERTEX_CNT as i32);
// clear the scene change flag
scene_changed.set(
pitch_up_val != 0.0
|| pitch_down_val != 0.0
|| yaw_left_val != 0.0
|| yaw_right_val != 0.0
|| roll_cw_val != 0.0
|| roll_ccw_val != 0.0
|| zoom_in_val != 0.0
|| zoom_out_val != 0.0
|| turntable_val /* BENCHMARKING */
);
} else {
frames_since_last_sample = 0;
mean_frame_interval.set(-1.0);
}
});
start_animation_loop();
});
let set_nav_signal = move |event: KeyboardEvent, value: f64| {
let mut navigating = true;
let shift = event.shift_key();
match event.key().as_str() {
"ArrowUp" if shift => zoom_in.set(value),
"ArrowDown" if shift => zoom_out.set(value),
"ArrowUp" => pitch_up.set(value),
"ArrowDown" => pitch_down.set(value),
"ArrowRight" if shift => roll_cw.set(value),
"ArrowLeft" if shift => roll_ccw.set(value),
"ArrowRight" => yaw_right.set(value),
"ArrowLeft" => yaw_left.set(value),
_ => navigating = false
};
if navigating {
scene_changed.set(true);
event.prevent_default();
}
};
view! {
/* TO DO */
// switch back to integer-valued parameters when that becomes possible
// again
canvas(
ref=display,
width="600",
height="600",
tabindex="0",
on:keydown=move |event: KeyboardEvent| {
if event.key() == "Shift" {
roll_cw.set(yaw_right.get());
roll_ccw.set(yaw_left.get());
zoom_in.set(pitch_up.get());
zoom_out.set(pitch_down.get());
yaw_right.set(0.0);
yaw_left.set(0.0);
pitch_up.set(0.0);
pitch_down.set(0.0);
} else {
if event.key() == "Enter" { /* BENCHMARKING */
turntable.set_fn(|turn| !turn);
scene_changed.set(true);
}
set_nav_signal(event, 1.0);
}
},
on:keyup=move |event: KeyboardEvent| {
if event.key() == "Shift" {
yaw_right.set(roll_cw.get());
yaw_left.set(roll_ccw.get());
pitch_up.set(zoom_in.get());
pitch_down.set(zoom_out.get());
roll_cw.set(0.0);
roll_ccw.set(0.0);
zoom_in.set(0.0);
zoom_out.set(0.0);
} else {
set_nav_signal(event, 0.0);
}
},
on:blur=move |_| {
pitch_up.set(0.0);
pitch_down.set(0.0);
yaw_right.set(0.0);
yaw_left.set(0.0);
roll_ccw.set(0.0);
roll_cw.set(0.0);
}
)
}
}

540
app-proto/src/engine.rs Normal file
View File

@ -0,0 +1,540 @@
use lazy_static::lazy_static;
use nalgebra::{Const, DMatrix, DVector, Dyn};
use web_sys::{console, wasm_bindgen::JsValue}; /* DEBUG */
// --- elements ---
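// the point with the given coordinates, represented by a lightlike vector
// (its Lorentz product with itself is zero)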
#[cfg(test)]
pub fn point(x: f64, y: f64, z: f64) -> DVector<f64> {
DVector::from_column_slice(&[x, y, z, 0.5, 0.5*(x*x + y*y + z*z)])
}
// the sphere with the given center and radius, with inward-pointing normals
pub fn sphere(center_x: f64, center_y: f64, center_z: f64, radius: f64) -> DVector<f64> {
let center_norm_sq = center_x * center_x + center_y * center_y + center_z * center_z;
DVector::from_column_slice(&[
center_x / radius,
center_y / radius,
center_z / radius,
0.5 / radius,
0.5 * (center_norm_sq / radius - radius)
])
}
// the sphere of curvature `curv` whose closest point to the origin has position
// `off * dir` and normal `dir`, where `dir` is a unit vector. setting the
// curvature to zero gives a plane
pub fn sphere_with_offset(dir_x: f64, dir_y: f64, dir_z: f64, off: f64, curv: f64) -> DVector<f64> {
let norm_sp = 1.0 + off * curv;
DVector::from_column_slice(&[
norm_sp * dir_x,
norm_sp * dir_y,
norm_sp * dir_z,
0.5 * curv,
off * (1.0 + 0.5 * off * curv)
])
}
// --- partial matrices ---
struct MatrixEntry {
index: (usize, usize),
value: f64
}
pub struct PartialMatrix(Vec<MatrixEntry>);
impl PartialMatrix {
pub fn new() -> PartialMatrix {
PartialMatrix(Vec::<MatrixEntry>::new())
}
pub fn push_sym(&mut self, row: usize, col: usize, value: f64) {
let PartialMatrix(entries) = self;
entries.push(MatrixEntry { index: (row, col), value: value });
if row != col {
entries.push(MatrixEntry { index: (col, row), value: value });
}
}
/* DEBUG */
pub fn log_to_console(&self) {
let PartialMatrix(entries) = self;
for ent in entries {
let ent_str = format!(" {} {} {}", ent.index.0, ent.index.1, ent.value);
console::log_1(&JsValue::from(ent_str.as_str()));
}
}
fn proj(&self, a: &DMatrix<f64>) -> DMatrix<f64> {
let mut result = DMatrix::<f64>::zeros(a.nrows(), a.ncols());
let PartialMatrix(entries) = self;
for ent in entries {
result[ent.index] = a[ent.index];
}
result
}
fn sub_proj(&self, rhs: &DMatrix<f64>) -> DMatrix<f64> {
let mut result = DMatrix::<f64>::zeros(rhs.nrows(), rhs.ncols());
let PartialMatrix(entries) = self;
for ent in entries {
result[ent.index] = ent.value - rhs[ent.index];
}
result
}
}
// --- descent history ---
pub struct DescentHistory {
pub config: Vec<DMatrix<f64>>,
pub scaled_loss: Vec<f64>,
pub neg_grad: Vec<DMatrix<f64>>,
pub min_eigval: Vec<f64>,
pub base_step: Vec<DMatrix<f64>>,
pub backoff_steps: Vec<i32>
}
impl DescentHistory {
fn new() -> DescentHistory {
DescentHistory {
config: Vec::<DMatrix<f64>>::new(),
scaled_loss: Vec::<f64>::new(),
neg_grad: Vec::<DMatrix<f64>>::new(),
min_eigval: Vec::<f64>::new(),
base_step: Vec::<DMatrix<f64>>::new(),
backoff_steps: Vec::<i32>::new(),
}
}
}
// --- gram matrix realization ---
// the Lorentz form
lazy_static! {
static ref Q: DMatrix<f64> = DMatrix::from_row_slice(5, 5, &[
1.0, 0.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, -2.0,
0.0, 0.0, 0.0, -2.0, 0.0
]);
}
struct SearchState {
config: DMatrix<f64>,
err_proj: DMatrix<f64>,
loss: f64
}
impl SearchState {
fn from_config(gram: &PartialMatrix, config: DMatrix<f64>) -> SearchState {
let err_proj = gram.sub_proj(&(config.tr_mul(&*Q) * &config));
let loss = err_proj.norm_squared();
SearchState {
config: config,
err_proj: err_proj,
loss: loss
}
}
}
fn basis_matrix(index: (usize, usize), nrows: usize, ncols: usize) -> DMatrix<f64> {
let mut result = DMatrix::<f64>::zeros(nrows, ncols);
result[index] = 1.0;
result
}
// use backtracking line search to find a better configuration
fn seek_better_config(
gram: &PartialMatrix,
state: &SearchState,
base_step: &DMatrix<f64>,
base_target_improvement: f64,
min_efficiency: f64,
backoff: f64,
max_backoff_steps: i32
) -> Option<(SearchState, i32)> {
let mut rate = 1.0;
for backoff_steps in 0..max_backoff_steps {
let trial_config = &state.config + rate * base_step;
let trial_state = SearchState::from_config(gram, trial_config);
let improvement = state.loss - trial_state.loss;
if improvement >= min_efficiency * rate * base_target_improvement {
return Some((trial_state, backoff_steps));
}
rate *= backoff;
}
None
}
// seek a matrix `config` for which `config' * Q * config` matches the partial
// matrix `gram`. use gradient descent starting from `guess`
pub fn realize_gram(
gram: &PartialMatrix,
guess: DMatrix<f64>,
frozen: &[(usize, usize)],
scaled_tol: f64,
min_efficiency: f64,
backoff: f64,
reg_scale: f64,
max_descent_steps: i32,
max_backoff_steps: i32
) -> (DMatrix<f64>, bool, DescentHistory) {
// start the descent history
let mut history = DescentHistory::new();
// find the dimension of the search space
let element_dim = guess.nrows();
let assembly_dim = guess.ncols();
let total_dim = element_dim * assembly_dim;
// scale the tolerance
let scale_adjustment = (gram.0.len() as f64).sqrt();
let tol = scale_adjustment * scaled_tol;
// convert the frozen indices to stacked format
let frozen_stacked: Vec<usize> = frozen.into_iter().map(
|index| index.1*element_dim + index.0
).collect();
// use Newton's method with backtracking and gradient descent backup
let mut state = SearchState::from_config(gram, guess);
for _ in 0..max_descent_steps {
// stop if the loss is tolerably low
history.config.push(state.config.clone());
history.scaled_loss.push(state.loss / scale_adjustment);
if state.loss < tol { break; }
// find the negative gradient of the loss function
let neg_grad = 4.0 * &*Q * &state.config * &state.err_proj;
let mut neg_grad_stacked = neg_grad.clone().reshape_generic(Dyn(total_dim), Const::<1>);
history.neg_grad.push(neg_grad.clone());
// find the negative Hessian of the loss function
let mut hess_cols = Vec::<DVector<f64>>::with_capacity(total_dim);
for col in 0..assembly_dim {
for row in 0..element_dim {
let index = (row, col);
let basis_mat = basis_matrix(index, element_dim, assembly_dim);
let neg_d_err =
basis_mat.tr_mul(&*Q) * &state.config
+ state.config.tr_mul(&*Q) * &basis_mat;
let neg_d_err_proj = gram.proj(&neg_d_err);
let deriv_grad = 4.0 * &*Q * (
-&basis_mat * &state.err_proj
+ &state.config * &neg_d_err_proj
);
hess_cols.push(deriv_grad.reshape_generic(Dyn(total_dim), Const::<1>));
}
}
let mut hess = DMatrix::from_columns(hess_cols.as_slice());
// regularize the Hessian
let min_eigval = hess.symmetric_eigenvalues().min();
if min_eigval <= 0.0 {
hess -= reg_scale * min_eigval * DMatrix::identity(total_dim, total_dim);
}
history.min_eigval.push(min_eigval);
// project the negative gradient and negative Hessian onto the
// orthogonal complement of the frozen subspace
let zero_col = DVector::zeros(total_dim);
let zero_row = zero_col.transpose();
for &k in &frozen_stacked {
neg_grad_stacked[k] = 0.0;
hess.set_row(k, &zero_row);
hess.set_column(k, &zero_col);
hess[(k, k)] = 1.0;
}
// compute the Newton step
/*
we need to either handle or eliminate the case where the minimum
eigenvalue of the Hessian is zero, so the regularized Hessian is
singular. right now, this causes the Cholesky decomposition to return
`None`, leading to a panic when we unwrap
*/
let base_step_stacked = hess.cholesky().unwrap().solve(&neg_grad_stacked);
let base_step = base_step_stacked.reshape_generic(Dyn(element_dim), Dyn(assembly_dim));
history.base_step.push(base_step.clone());
// use backtracking line search to find a better configuration
match seek_better_config(
gram, &state, &base_step, neg_grad.dot(&base_step),
min_efficiency, backoff, max_backoff_steps
) {
Some((better_state, backoff_steps)) => {
state = better_state;
history.backoff_steps.push(backoff_steps);
},
None => return (state.config, false, history)
};
}
(state.config, state.loss < tol, history)
}
// --- tests ---
#[cfg(test)]
mod tests {
use std::{array, f64::consts::PI};
use super::*;
#[test]
fn sub_proj_test() {
let target = PartialMatrix(vec![
MatrixEntry { index: (0, 0), value: 19.0 },
MatrixEntry { index: (0, 2), value: 39.0 },
MatrixEntry { index: (1, 1), value: 59.0 },
MatrixEntry { index: (1, 2), value: 69.0 }
]);
let attempt = DMatrix::<f64>::from_row_slice(2, 3, &[
1.0, 2.0, 3.0,
4.0, 5.0, 6.0
]);
let expected_result = DMatrix::<f64>::from_row_slice(2, 3, &[
18.0, 0.0, 36.0,
0.0, 54.0, 63.0
]);
assert_eq!(target.sub_proj(&attempt), expected_result);
}
#[test]
fn zero_loss_test() {
let gram = PartialMatrix({
let mut entries = Vec::<MatrixEntry>::new();
for j in 0..3 {
for k in 0..3 {
entries.push(MatrixEntry {
index: (j, k),
value: if j == k { 1.0 } else { -1.0 }
});
}
}
entries
});
let config = {
let a: f64 = 0.75_f64.sqrt();
DMatrix::from_columns(&[
sphere(1.0, 0.0, 0.0, a),
sphere(-0.5, a, 0.0, a),
sphere(-0.5, -a, 0.0, a)
])
};
let state = SearchState::from_config(&gram, config);
assert!(state.loss.abs() < f64::EPSILON);
}
// this problem is from a sangaku by Irisawa Shintarō Hiroatsu. the article
// below includes a nice translation of the problem statement, which was
// recorded in Uchida Itsumi's book _Kokon sankan_ (_Mathematics, Past and
// Present_)
//
// "Japan's 'Wasan' Mathematical Tradition", by Abe Haruki
// https://www.nippon.com/en/japan-topics/c12801/
//
#[test]
fn irisawa_hexlet_test() {
let gram = PartialMatrix({
let mut entries = Vec::<MatrixEntry>::new();
for s in 0..9 {
// each sphere is represented by a spacelike vector
entries.push(MatrixEntry { index: (s, s), value: 1.0 });
// the circumscribing sphere is tangent to all of the other
// spheres, with matching orientation
if s > 0 {
entries.push(MatrixEntry { index: (0, s), value: 1.0 });
entries.push(MatrixEntry { index: (s, 0), value: 1.0 });
}
if s > 2 {
// each chain sphere is tangent to the "sun" and "moon"
// spheres, with opposing orientation
for n in 1..3 {
entries.push(MatrixEntry { index: (s, n), value: -1.0 });
entries.push(MatrixEntry { index: (n, s), value: -1.0 });
}
// each chain sphere is tangent to the next chain sphere,
// with opposing orientation
let s_next = 3 + (s-2) % 6;
entries.push(MatrixEntry { index: (s, s_next), value: -1.0 });
entries.push(MatrixEntry { index: (s_next, s), value: -1.0 });
}
}
entries
});
let guess = DMatrix::from_columns(
[
sphere(0.0, 0.0, 0.0, 15.0),
sphere(0.0, 0.0, -9.0, 5.0),
sphere(0.0, 0.0, 11.0, 3.0)
].into_iter().chain(
(1..=6).map(
|k| {
let ang = (k as f64) * PI/3.0;
sphere(9.0 * ang.cos(), 9.0 * ang.sin(), 0.0, 2.5)
}
)
).collect::<Vec<_>>().as_slice()
);
let frozen: [(usize, usize); 4] = array::from_fn(|k| (3, k));
const SCALED_TOL: f64 = 1.0e-12;
let (config, success, history) = realize_gram(
&gram, guess, &frozen,
SCALED_TOL, 0.5, 0.9, 1.1, 200, 110
);
let entry_tol = SCALED_TOL.sqrt();
let solution_diams = [30.0, 10.0, 6.0, 5.0, 15.0, 10.0, 3.75, 2.5, 2.0 + 8.0/11.0];
for (k, diam) in solution_diams.into_iter().enumerate() {
assert!((config[(3, k)] - 1.0 / diam).abs() < entry_tol);
}
print!("\nCompleted Gram matrix:{}", config.tr_mul(&*Q) * &config);
if success {
println!("Target accuracy achieved!");
} else {
println!("Failed to reach target accuracy");
}
println!("Steps: {}", history.scaled_loss.len() - 1);
println!("Loss: {}", history.scaled_loss.last().unwrap());
if success {
println!("\nChain diameters:");
println!(" {} sun (given)", 1.0 / config[(3, 3)]);
for k in 4..9 {
println!(" {} sun", 1.0 / config[(3, k)]);
}
}
println!("\nStep │ Loss\n─────┼────────────────────────────────");
for (step, scaled_loss) in history.scaled_loss.into_iter().enumerate() {
println!("{:<4}{}", step, scaled_loss);
}
}
// --- process inspection examples ---
// these tests are meant for human inspection, not automated use. run them
// one at a time in `--nocapture` mode and read through the results and
// optimization histories that they print out. the `run-examples` script
// will run all of them
#[test]
fn three_spheres_example() {
let gram = PartialMatrix({
let mut entries = Vec::<MatrixEntry>::new();
for j in 0..3 {
for k in 0..3 {
entries.push(MatrixEntry {
index: (j, k),
value: if j == k { 1.0 } else { -1.0 }
});
}
}
entries
});
let guess = {
let a: f64 = 0.75_f64.sqrt();
DMatrix::from_columns(&[
sphere(1.0, 0.0, 0.0, 1.0),
sphere(-0.5, a, 0.0, 1.0),
sphere(-0.5, -a, 0.0, 1.0)
])
};
println!();
let (config, success, history) = realize_gram(
&gram, guess, &[],
1.0e-12, 0.5, 0.9, 1.1, 200, 110
);
print!("\nCompleted Gram matrix:{}", config.tr_mul(&*Q) * &config);
if success {
println!("Target accuracy achieved!");
} else {
println!("Failed to reach target accuracy");
}
println!("Steps: {}", history.scaled_loss.len() - 1);
println!("Loss: {}", history.scaled_loss.last().unwrap());
println!("\nStep │ Loss\n─────┼────────────────────────────────");
for (step, scaled_loss) in history.scaled_loss.into_iter().enumerate() {
println!("{:<4}{}", step, scaled_loss);
}
}
#[test]
fn point_on_sphere_example() {
let gram = PartialMatrix({
let mut entries = Vec::<MatrixEntry>::new();
for j in 0..2 {
for k in 0..2 {
entries.push(MatrixEntry {
index: (j, k),
value: if (j, k) == (1, 1) { 1.0 } else { 0.0 }
});
}
}
entries
});
let guess = DMatrix::from_columns(&[
point(0.0, 0.0, 2.0),
sphere(0.0, 0.0, 0.0, 1.0)
]);
let frozen = [(3, 0)];
println!();
let (config, success, history) = realize_gram(
&gram, guess, &frozen,
1.0e-12, 0.5, 0.9, 1.1, 200, 110
);
print!("\nCompleted Gram matrix:{}", config.tr_mul(&*Q) * &config);
print!("Configuration:{}", config);
if success {
println!("Target accuracy achieved!");
} else {
println!("Failed to reach target accuracy");
}
println!("Steps: {}", history.scaled_loss.len() - 1);
println!("Loss: {}", history.scaled_loss.last().unwrap());
println!("\nStep │ Loss\n─────┼────────────────────────────────");
for (step, scaled_loss) in history.scaled_loss.into_iter().enumerate() {
println!("{:<4}{}", step, scaled_loss);
}
}
/* TO DO */
// --- new test placed here to avoid merge conflict ---
// at the frozen indices, the optimization steps should have exact zeros,
// and the realized configuration should match the initial guess
#[test]
fn frozen_entry_test() {
let gram = {
let mut gram_to_be = PartialMatrix::new();
for j in 0..2 {
for k in j..2 {
gram_to_be.push_sym(j, k, if (j, k) == (1, 1) { 1.0 } else { 0.0 });
}
}
gram_to_be
};
let guess = DMatrix::from_columns(&[
point(0.0, 0.0, 2.0),
sphere(0.0, 0.0, 0.0, 1.0)
]);
let frozen = [(3, 0), (3, 1)];
println!();
let (config, success, history) = realize_gram(
&gram, guess.clone(), &frozen,
1.0e-12, 0.5, 0.9, 1.1, 200, 110
);
assert_eq!(success, true);
for base_step in history.base_step.into_iter() {
for index in frozen {
assert_eq!(base_step[index], 0.0);
}
}
for index in frozen {
assert_eq!(config[index], guess[index]);
}
}
}

7
app-proto/src/identity.vert Normal file
View File

@ -0,0 +1,7 @@
#version 300 es
in vec4 position;
void main() {
gl_Position = position;
}

234
app-proto/src/inversive.frag Normal file
View File

@ -0,0 +1,234 @@
#version 300 es
precision highp float;
out vec4 outColor;
// --- inversive geometry ---
struct vecInv {
vec3 sp;
vec2 lt;
};
// --- uniforms ---
// assembly
const int SPHERE_MAX = 200;
uniform int sphere_cnt;
uniform vecInv sphere_list[SPHERE_MAX];
uniform vec3 color_list[SPHERE_MAX];
uniform float highlight_list[SPHERE_MAX];
// view
uniform vec2 resolution;
uniform float shortdim;
// controls
uniform float opacity;
uniform int layer_threshold;
uniform bool debug_mode;
// light and camera
const float focal_slope = 0.3;
const vec3 light_dir = normalize(vec3(2., 2., 1.));
const float ixn_threshold = 0.005;
const float INTERIOR_DIMMING = 0.7;
// --- sRGB ---
// map colors from RGB space to sRGB space, as specified in the sRGB standard
// (IEC 61966-2-1:1999)
//
// https://www.color.org/sRGB.pdf
// https://www.color.org/chardata/rgb/srgb.xalter
//
// in RGB space, color value is proportional to light intensity, so linear
// color-vector interpolation corresponds to physical light mixing. in sRGB
// space, the color encoding used by many monitors, we use more of the value
// interval to represent low intensities, and less of the interval to represent
// high intensities. this improves color quantization
float sRGB(float t) {
if (t <= 0.0031308) {
return 12.92*t;
} else {
return 1.055*pow(t, 5./12.) - 0.055;
}
}
vec3 sRGB(vec3 color) {
return vec3(sRGB(color.r), sRGB(color.g), sRGB(color.b));
}
// --- shading ---
struct Fragment {
vec3 pt;
vec3 normal;
vec4 color;
};
Fragment sphere_shading(vecInv v, vec3 pt, vec3 base_color) {
// the expression for normal needs to be checked. it's supposed to give the
// negative gradient of the lorentz product between the impact point vector
// and the sphere vector with respect to the coordinates of the impact
// point. i calculated it in my head and decided that the result looked good
// enough for now
vec3 normal = normalize(-v.sp + 2.*v.lt.s*pt);
float incidence = dot(normal, light_dir);
float illum = mix(0.4, 1.0, max(incidence, 0.0));
return Fragment(pt, normal, vec4(illum * base_color, opacity));
}
float intersection_dist(Fragment a, Fragment b) {
float intersection_sin = length(cross(a.normal, b.normal));
vec3 disp = a.pt - b.pt;
return max(
abs(dot(a.normal, disp)),
abs(dot(b.normal, disp))
) / intersection_sin;
}
// --- ray-casting ---
struct TaggedDepth {
float depth;
float dimming;
int id;
};
// if `a/b` is less than this threshold, we approximate `a*u^2 + b*u + c` by
// the linear function `b*u + c`
const float DEG_THRESHOLD = 1e-9;
// the depths, represented as multiples of `dir`, where the line generated by
// `dir` hits the sphere represented by `v`. if both depths are positive, the
// smaller one is returned in the first component. if only one depth is
// positive, it could be returned in either component
vec2 sphere_cast(vecInv v, vec3 dir) {
float a = -v.lt.s * dot(dir, dir);
float b = dot(v.sp, dir);
float c = -v.lt.t;
float adjust = 4.*a*c/(b*b);
if (adjust < 1.) {
// as long as `b` is non-zero, the linear approximation of
//
// a*u^2 + b*u + c
//
// at `u = 0` will reach zero at a finite depth `u_lin`. the root of the
// quadratic adjacent to `u_lin` is stored in `lin_root`. if both roots
// have the same sign, `lin_root` will be the one closer to `u = 0`
float square_rect_ratio = 1. + sqrt(1. - adjust);
float lin_root = -(2.*c)/b / square_rect_ratio;
if (abs(a) > DEG_THRESHOLD * abs(b)) {
return vec2(lin_root, -b/(2.*a) * square_rect_ratio);
} else {
return vec2(lin_root, -1.);
}
} else {
// the line through `dir` misses the sphere completely
return vec2(-1., -1.);
}
}
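// to see where the coefficients above come from: a point at depth `u` along
// the ray is `u*dir`, and (under the product convention noted in
// `sphere_shading`) its Lorentz product with `v` is the quadratic
//
//   -v.lt.s*dot(dir, dir)*u*u + dot(v.sp, dir)*u - v.lt.t
//
// the roots are returned in the forms -2.*c/(b + sign(b)*sqrt(b*b - 4.*a*c))
// and -(b + sign(b)*sqrt(b*b - 4.*a*c))/(2.*a); the first form stays finite as
// `a` approaches zero, which is why it is the root kept in the near-degenerate
// branch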
void main() {
vec2 scr = (2.*gl_FragCoord.xy - resolution) / shortdim;
vec3 dir = vec3(focal_slope * scr, -1.);
// cast rays through the spheres
const int LAYER_MAX = 12;
TaggedDepth top_hits [LAYER_MAX];
int layer_cnt = 0;
for (int id = 0; id < sphere_cnt; ++id) {
// find out where the ray hits the sphere
vec2 hit_depths = sphere_cast(sphere_list[id], dir);
// insertion-sort the points we hit into the hit list
float dimming = 1.;
for (int side = 0; side < 2; ++side) {
float depth = hit_depths[side];
if (depth > 0.) {
for (int layer = layer_cnt; layer >= 0; --layer) {
if (layer < 1 || top_hits[layer-1].depth <= depth) {
// we're not as close to the screen as the hit before
// the empty slot, so insert here
if (layer < LAYER_MAX) {
top_hits[layer] = TaggedDepth(depth, dimming, id);
}
break;
} else {
// we're closer to the screen than the hit before the
// empty slot, so move that hit into the empty slot
top_hits[layer] = top_hits[layer-1];
}
}
layer_cnt = min(layer_cnt + 1, LAYER_MAX);
dimming = INTERIOR_DIMMING;
}
}
}
/* DEBUG */
// in debug mode, show the layer count instead of the shaded image
if (debug_mode) {
// at the bottom of the screen, show the color scale instead of the
// layer count
if (gl_FragCoord.y < 10.) layer_cnt = int(16. * gl_FragCoord.x / resolution.x);
// convert number to color
ivec3 bits = layer_cnt / ivec3(1, 2, 4);
vec3 color = mod(vec3(bits), 2.);
if (layer_cnt % 16 >= 8) {
color = mix(color, vec3(0.5), 0.5);
}
outColor = vec4(color, 1.);
return;
}
// composite the sphere fragments from back to front
vec3 color = vec3(0.);
int layer = layer_cnt - 1;
TaggedDepth hit = top_hits[layer];
Fragment frag_next = sphere_shading(
sphere_list[hit.id],
hit.depth * dir,
hit.dimming * color_list[hit.id]
);
float highlight_next = highlight_list[hit.id];
--layer;
for (; layer >= layer_threshold; --layer) {
// load the current fragment
Fragment frag = frag_next;
float highlight = highlight_next;
// shade the next fragment
hit = top_hits[layer];
frag_next = sphere_shading(
sphere_list[hit.id],
hit.depth * dir,
hit.dimming * color_list[hit.id]
);
highlight_next = highlight_list[hit.id];
// highlight intersections
float ixn_dist = intersection_dist(frag, frag_next);
float max_highlight = max(highlight, highlight_next);
float ixn_highlight = 0.5 * max_highlight * (1. - smoothstep(2./3.*ixn_threshold, 1.5*ixn_threshold, ixn_dist));
frag.color = mix(frag.color, vec4(1.), ixn_highlight);
frag_next.color = mix(frag_next.color, vec4(1.), ixn_highlight);
// highlight cusps
float cusp_cos = abs(dot(dir, frag.normal));
float cusp_threshold = 2.*sqrt(ixn_threshold * sphere_list[hit.id].lt.s);
float cusp_highlight = highlight * (1. - smoothstep(2./3.*cusp_threshold, 1.5*cusp_threshold, cusp_cos));
frag.color = mix(frag.color, vec4(1.), cusp_highlight);
// composite the current fragment
color = mix(color, frag.color.rgb, frag.color.a);
}
color = mix(color, frag_next.color.rgb, frag_next.color.a);
outColor = vec4(sRGB(color), 1.);
}

42
app-proto/src/main.rs Normal file

@ -0,0 +1,42 @@
mod add_remove;
mod assembly;
mod display;
mod engine;
mod outline;
use rustc_hash::FxHashSet;
use sycamore::prelude::*;
use add_remove::AddRemove;
use assembly::{Assembly, ElementKey};
use display::Display;
use outline::Outline;
#[derive(Clone)]
struct AppState {
assembly: Assembly,
selection: Signal<FxHashSet<ElementKey>>
}
impl AppState {
fn new() -> AppState {
AppState {
assembly: Assembly::new(),
selection: create_signal(FxHashSet::default())
}
}
}
fn main() {
sycamore::render(|| {
provide_context(AppState::new());
view! {
div(id="sidebar") {
AddRemove {}
Outline {}
}
Display {}
}
});
}

207
app-proto/src/outline.rs Normal file

@ -0,0 +1,207 @@
use itertools::Itertools;
use sycamore::prelude::*;
use web_sys::{
Event,
HtmlInputElement,
KeyboardEvent,
MouseEvent,
wasm_bindgen::JsCast
};
use crate::{AppState, assembly, assembly::{Constraint, ConstraintKey, ElementKey}};
// an editable view of the Lorentz product representing a constraint
#[component(inline_props)]
fn LorentzProductInput(constraint: Constraint) -> View {
view! {
input(
r#type="text",
bind:value=constraint.lorentz_prod_text,
on:change=move |event: Event| {
let target: HtmlInputElement = event.target().unwrap().unchecked_into();
match target.value().parse::<f64>() {
Ok(lorentz_prod) => batch(|| {
constraint.lorentz_prod.set(lorentz_prod);
constraint.lorentz_prod_valid.set(true);
}),
Err(_) => constraint.lorentz_prod_valid.set(false)
};
}
)
}
}
// a list item that shows a constraint in an outline view of an element
#[component(inline_props)]
fn ConstraintOutlineItem(constraint_key: ConstraintKey, element_key: ElementKey) -> View {
let state = use_context::<AppState>();
let assembly = &state.assembly;
let constraint = assembly.constraints.with(|csts| csts[constraint_key].clone());
let other_subject = if constraint.subjects.0 == element_key {
constraint.subjects.1
} else {
constraint.subjects.0
};
let other_subject_label = assembly.elements.with(|elts| elts[other_subject].label.clone());
let class = constraint.lorentz_prod_valid.map(
|&lorentz_prod_valid| if lorentz_prod_valid { "constraint" } else { "constraint invalid" }
);
view! {
li(class=class.get()) {
input(r#type="checkbox", bind:checked=constraint.active)
div(class="constraint-label") { (other_subject_label) }
LorentzProductInput(constraint=constraint)
div(class="status")
}
}
}
// a list item that shows an element in an outline view of an assembly
#[component(inline_props)]
fn ElementOutlineItem(key: ElementKey, element: assembly::Element) -> View {
let state = use_context::<AppState>();
let class = state.selection.map(
move |sel| if sel.contains(&key) { "selected" } else { "" }
);
let label = element.label.clone();
let rep_components = element.representation.map(
|rep| rep.iter().map(
|u| format!("{:.3}", u).replace("-", "\u{2212}")
).collect()
);
let constrained = element.constraints.map(|csts| csts.len() > 0);
let constraint_list = element.constraints.map(
|csts| csts.clone().into_iter().collect()
);
let details_node = create_node_ref();
view! {
li {
details(ref=details_node) {
summary(
class=class.get(),
on:keydown={
move |event: KeyboardEvent| {
match event.key().as_str() {
"Enter" => {
if event.shift_key() {
state.selection.update(|sel| {
if !sel.remove(&key) {
sel.insert(key);
}
});
} else {
state.selection.update(|sel| {
sel.clear();
sel.insert(key);
});
}
event.prevent_default();
},
"ArrowRight" if constrained.get() => {
let _ = details_node
.get()
.unchecked_into::<web_sys::Element>()
.set_attribute("open", "");
},
"ArrowLeft" => {
let _ = details_node
.get()
.unchecked_into::<web_sys::Element>()
.remove_attribute("open");
},
_ => ()
}
}
}
) {
div(
class="element-switch",
on:click=|event: MouseEvent| event.stop_propagation()
)
div(
class="element",
on:click={
move |event: MouseEvent| {
if event.shift_key() {
state.selection.update(|sel| {
if !sel.remove(&key) {
sel.insert(key);
}
});
} else {
state.selection.update(|sel| {
sel.clear();
sel.insert(key);
});
}
event.stop_propagation();
event.prevent_default();
}
}
) {
div(class="element-label") { (label) }
div(class="element-representation") {
Indexed(
list=rep_components,
view=|coord_str| view! {
div { (coord_str) }
}
)
}
div(class="status")
}
}
ul(class="constraints") {
Keyed(
list=constraint_list,
view=move |cst_key| view! {
ConstraintOutlineItem(
constraint_key=cst_key,
element_key=key
)
},
key=|cst_key| cst_key.clone()
)
}
}
}
}
}
// a component that lists the elements of the current assembly, showing the
// constraints on each element as a collapsible sub-list. its implementation
// is based on Kate Morley's HTML + CSS tree views:
//
// https://iamkate.com/code/tree-views/
//
#[component]
pub fn Outline() -> View {
let state = use_context::<AppState>();
// list the elements alphabetically by ID
let element_list = state.assembly.elements.map(
|elts| elts
.clone()
.into_iter()
.sorted_by_key(|(_, elt)| elt.id.clone())
.collect()
);
view! {
ul(
id="outline",
on:click={
let state = use_context::<AppState>();
move |_| state.selection.update(|sel| sel.clear())
}
) {
Keyed(
list=element_list,
view=|(key, elt)| view! {
ElementOutlineItem(key=key, element=elt)
},
key=|(key, _)| key.clone()
)
}
}
}


@ -0,0 +1,223 @@
module Viewer
using Blink
using Colors
using Printf
using Main.Engine
export ConstructionViewer, display!, opentools!, closetools!
# === Blink utilities ===
append_to_head!(w, type, content) = @js w begin
@var element = document.createElement($type)
element.appendChild(document.createTextNode($content))
document.head.appendChild(element)
end
style!(w, stylesheet) = append_to_head!(w, "style", stylesheet)
script!(w, code) = append_to_head!(w, "script", code)
# === construction viewer ===
mutable struct ConstructionViewer
win::Window
function ConstructionViewer()
# create window and open developer console
win = Window(Blink.Dict(:width => 620, :height => 830))
# set stylesheet
style!(win, """
body {
background-color: #ccc;
}
/* the maximum dimensions keep Ganja from blowing up the canvas */
#view {
display: block;
width: 600px;
height: 600px;
margin-top: 10px;
margin-left: 10px;
border-radius: 10px;
background-color: #f0f0f0;
}
#control-panel {
width: 600px;
height: 200px;
box-sizing: border-box;
padding: 5px 10px 5px 10px;
margin-top: 10px;
margin-left: 10px;
overflow-y: scroll;
border-radius: 10px;
background-color: #f0f0f0;
}
#control-panel > div {
margin-top: 5px;
padding: 4px;
border-radius: 5px;
border: solid;
font-family: monospace;
}
""")
# load Ganja.js. for an automatically updated web-hosted version, load from
#
# https://unpkg.com/ganja.js
#
# instead
loadjs!(win, "http://localhost:8000/ganja-1.0.204.js")
# create global functions and variables
script!(win, """
// create algebra
var CGA3 = Algebra(4, 1);
// initialize element list and palette
var elements = [];
var palette = [];
// declare handles for the view and its options
var view;
var viewOpt;
// declare handles for the controls
var controlPanel;
var visToggles;
// create scene function
function scene() {
commands = [];
for (let n = 0; n < elements.length; ++n) {
if (visToggles[n].checked) {
commands.push(palette[n], elements[n]);
}
}
return commands;
}
function updateView() {
requestAnimationFrame(view.update.bind(view, scene));
}
""")
@js win begin
# create view
viewOpt = Dict(
:conformal => true,
:gl => true,
:devicePixelRatio => window.devicePixelRatio
)
view = CGA3.graph(scene, viewOpt)
view.setAttribute(:id, "view")
view.removeAttribute(:style)
document.body.replaceChildren(view)
# create control panel
controlPanel = document.createElement(:div)
controlPanel.setAttribute(:id, "control-panel")
document.body.appendChild(controlPanel)
end
new(win)
end
end
mprod(v, w) =
v[1]*w[1] + v[2]*w[2] + v[3]*w[3] + v[4]*w[4] - v[5]*w[5]
function display!(viewer::ConstructionViewer, elements::Matrix)
# load elements
elements_full = []
for elt in eachcol(Engine.unmix * elements)
if mprod(elt, elt) < 0.5
elt_full = [0; elt; fill(0, 26)]
else
# `elt` is a spacelike vector, representing a generalized sphere, so we
# take its Hodge dual before passing it to Ganja.js. the dual represents
# the same generalized sphere, but Ganja.js only displays planes when
# they're represented by vectors in grade 4 rather than grade 1
elt_full = [fill(0, 26); -elt[5]; -elt[4]; elt[3]; -elt[2]; elt[1]; 0]
end
push!(elements_full, elt_full)
end
@js viewer.win elements = $elements_full.map((elt) -> @new CGA3(elt))
# generate palette. this is Gadfly's `default_discrete_colors` palette,
# available under the MIT license
palette = distinguishable_colors(
length(elements_full),
[LCHab(70, 60, 240)],
transform = c -> deuteranopic(c, 0.5),
lchoices = Float64[65, 70, 75, 80],
cchoices = Float64[0, 50, 60, 70],
hchoices = range(0, stop=330, length=24)
)
palette_packed = [RGB24(c).color for c in palette]
@js viewer.win palette = $palette_packed
# create visibility toggles
@js viewer.win begin
controlPanel.replaceChildren()
visToggles = []
end
for (elt, c) in zip(eachcol(elements), palette)
vec_str = join(map(t -> @sprintf("%.3f", t), elt), ", ")
color_str = "#$(hex(c))"
style_str = "background-color: $color_str; border-color: $color_str;"
@js viewer.win begin
@var toggle = document.createElement(:div)
toggle.setAttribute(:style, $style_str)
toggle.checked = true
toggle.addEventListener(
"click",
() -> begin
toggle.checked = !toggle.checked
toggle.style.backgroundColor = toggle.checked ? $color_str : "inherit";
updateView()
end
)
toggle.appendChild(document.createTextNode($vec_str))
visToggles.push(toggle);
controlPanel.appendChild(toggle);
end
end
# update view
@js viewer.win updateView()
end
function opentools!(viewer::ConstructionViewer)
size(viewer.win, 1240, 830)
opentools(viewer.win)
end
function closetools!(viewer::ConstructionViewer)
closetools(viewer.win)
size(viewer.win, 620, 830)
end
end
# ~~~ sandbox setup ~~~
elements = let
a = sqrt(BigFloat(3)/2)
sqrt(0.5) * BigFloat[
1 1 -1 -1 0
1 -1 1 -1 0
1 -1 -1 1 0
0.5 0.5 0.5 0.5 1+a
0.5 0.5 0.5 0.5 1-a
]
end
# show construction
viewer = Viewer.ConstructionViewer()
Viewer.display!(viewer, elements)


@ -0,0 +1,203 @@
module Algebraic
export
codimension, dimension,
Construction, realize,
Element, Point, Sphere,
Relation, LiesOn, AlignsWithBy, mprod
import Subscripts
using LinearAlgebra
using AbstractAlgebra
using Groebner
using ...HittingSet
# --- commutative algebra ---
# as of version 0.36.6, AbstractAlgebra only supports ideals in multivariate
# polynomial rings when the coefficients are integers. we use Groebner to extend
# support to rationals and to finite fields of prime order
Generic.reduce_gens(I::Generic.Ideal{U}) where {T <: FieldElement, U <: MPolyRingElem{T}} =
Generic.Ideal{U}(base_ring(I), groebner(gens(I)))
function codimension(I::Generic.Ideal{U}, maxdepth = Inf) where {T <: RingElement, U <: MPolyRingElem{T}}
leading = [exponent_vector(f, 1) for f in gens(I)]
targets = [Set(findall(.!iszero.(exp_vec))) for exp_vec in leading]
length(HittingSet.solve(HittingSetProblem(targets), maxdepth))
end
dimension(I::Generic.Ideal{U}, maxdepth = Inf) where {T <: RingElement, U <: MPolyRingElem{T}} =
length(gens(base_ring(I))) - codimension(I, maxdepth)
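# the hitting set computation in `codimension` makes sense because, when
# `gens(I)` form a Gröbner basis (as arranged by `reduce_gens` above), the
# ideal of leading monomials has the same codimension as `I`, and the
# codimension of a monomial ideal is the smallest number of variables needed to
# "hit" the support of every generator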
# --- primitive elements ---
abstract type Element{T} end
mutable struct Point{T} <: Element{T}
coords::Vector{MPolyRingElem{T}}
vec::Union{Vector{MPolyRingElem{T}}, Nothing}
rel::Nothing
## [to do] constructor argument never needed?
Point{T}(
coords::Vector{MPolyRingElem{T}} = MPolyRingElem{T}[],
vec::Union{Vector{MPolyRingElem{T}}, Nothing} = nothing
) where T = new(coords, vec, nothing)
end
function buildvec!(pt::Point)
coordring = parent(pt.coords[1])
pt.vec = [one(coordring), dot(pt.coords, pt.coords), pt.coords...]
end
mutable struct Sphere{T} <: Element{T}
coords::Vector{MPolyRingElem{T}}
vec::Union{Vector{MPolyRingElem{T}}, Nothing}
rel::Union{MPolyRingElem{T}, Nothing}
## [to do] constructor argument never needed?
Sphere{T}(
coords::Vector{MPolyRingElem{T}} = MPolyRingElem{T}[],
vec::Union{Vector{MPolyRingElem{T}}, Nothing} = nothing,
rel::Union{MPolyRingElem{T}, Nothing} = nothing
) where T = new(coords, vec, rel)
end
function buildvec!(sph::Sphere)
coordring = parent(sph.coords[1])
sph.vec = sph.coords
sph.rel = mprod(sph.coords, sph.coords) + one(coordring)
end
const coordnames = IdDict{Symbol, Vector{Union{Symbol, Nothing}}}(
nameof(Point) => [nothing, nothing, :xₚ, :yₚ, :zₚ],
nameof(Sphere) => [:rₛ, :sₛ, :xₛ, :yₛ, :zₛ]
)
coordname(elt::Element, index) = coordnames[nameof(typeof(elt))][index]
function pushcoordname!(coordnamelist, indexed_elt::Tuple{Any, Element}, coordindex)
eltindex, elt = indexed_elt
name = coordname(elt, coordindex)
if !isnothing(name)
subscript = Subscripts.sub(string(eltindex))
push!(coordnamelist, Symbol(name, subscript))
end
end
function takecoord!(coordlist, indexed_elt::Tuple{Any, Element}, coordindex)
elt = indexed_elt[2]
if !isnothing(coordname(elt, coordindex))
push!(elt.coords, popfirst!(coordlist))
end
end
# --- primitive relations ---
abstract type Relation{T} end
mprod(v, w) = (v[1]*w[2] + w[1]*v[2]) / 2 - dot(v[3:end], w[3:end])
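# with the point vectors built by `buildvec!` above, two points p and q satisfy
# mprod(vec_p, vec_q) == dot(p - q, p - q) / 2, so in particular the product of
# a point vector with itself vanishes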
# elements: point, sphere
struct LiesOn{T} <: Relation{T}
elements::Vector{Element{T}}
LiesOn{T}(pt::Point{T}, sph::Sphere{T}) where T = new{T}([pt, sph])
end
equation(rel::LiesOn) = mprod(rel.elements[1].vec, rel.elements[2].vec)
# elements: sphere, sphere
struct AlignsWithBy{T} <: Relation{T}
elements::Vector{Element{T}}
cos_angle::T
AlignsWithBy{T}(sph1::Sphere{T}, sph2::Sphere{T}, cos_angle::T) where T = new{T}([sph1, sph2], cos_angle)
end
equation(rel::AlignsWithBy) = mprod(rel.elements[1].vec, rel.elements[2].vec) - rel.cos_angle
# --- constructions ---
mutable struct Construction{T}
points::Vector{Point{T}}
spheres::Vector{Sphere{T}}
relations::Vector{Relation{T}}
function Construction{T}(; elements = Vector{Element{T}}(), relations = Vector{Relation{T}}()) where T
allelements = union(elements, (rel.elements for rel in relations)...)
new{T}(
filter(elt -> isa(elt, Point), allelements),
filter(elt -> isa(elt, Sphere), allelements),
relations
)
end
end
function Base.push!(ctx::Construction{T}, elt::Point{T}) where T
push!(ctx.points, elt)
end
function Base.push!(ctx::Construction{T}, elt::Sphere{T}) where T
push!(ctx.spheres, elt)
end
function Base.push!(ctx::Construction{T}, rel::Relation{T}) where T
push!(ctx.relations, rel)
for elt in rel.elements
push!(ctx, elt)
end
end
function realize(ctx::Construction{T}) where T
# collect coordinate names
coordnamelist = Symbol[]
eltenum = enumerate(Iterators.flatten((ctx.spheres, ctx.points)))
for coordindex in 1:5
for indexed_elt in eltenum
pushcoordname!(coordnamelist, indexed_elt, coordindex)
end
end
# construct coordinate ring
coordring, coordqueue = polynomial_ring(parent_type(T)(), coordnamelist, ordering = :degrevlex)
# retrieve coordinates
for (_, elt) in eltenum
empty!(elt.coords)
end
for coordindex in 1:5
for indexed_elt in eltenum
takecoord!(coordqueue, indexed_elt, coordindex)
end
end
# construct coordinate vectors
for (_, elt) in eltenum
buildvec!(elt)
end
# turn relations into equations
eqns = vcat(
equation.(ctx.relations),
[elt.rel for (_, elt) in eltenum if !isnothing(elt.rel)]
)
# add relations to center, orient, and scale the construction
# [to do] the scaling constraint, as written, can be impossible to satisfy
# when all of the spheres have to go through the origin
if !isempty(ctx.points)
append!(eqns, [sum(pt.coords[k] for pt in ctx.points) for k in 1:3])
end
if !isempty(ctx.spheres)
append!(eqns, [sum(sph.coords[k] for sph in ctx.spheres) for k in 3:4])
end
n_elts = length(ctx.points) + length(ctx.spheres)
if n_elts > 0
push!(eqns, sum(elt.vec[2] for elt in Iterators.flatten((ctx.points, ctx.spheres))) - n_elts)
end
(Generic.Ideal(coordring, eqns), eqns)
end
end


@ -0,0 +1,53 @@
module Numerical
using Random: default_rng
using LinearAlgebra
using AbstractAlgebra
using HomotopyContinuation:
Variable, Expression, AbstractSystem, System, LinearSubspace,
nvariables, isreal, witness_set, results
import GLMakie
using ..Algebraic
# --- polynomial conversion ---
# hat tip Sascha Timme
# https://github.com/JuliaHomotopyContinuation/HomotopyContinuation.jl/issues/520#issuecomment-1317681521
function Base.convert(::Type{Expression}, f::MPolyRingElem)
variables = Variable.(symbols(parent(f)))
f_data = zip(coefficients(f), exponent_vectors(f))
sum(cf * prod(variables .^ exp_vec) for (cf, exp_vec) in f_data)
end
# create a ModelKit.System from an ideal in a multivariate polynomial ring. the
# variable ordering is taken from the polynomial ring
function System(I::Generic.Ideal)
eqns = Expression.(gens(I))
variables = Variable.(symbols(base_ring(I)))
System(eqns, variables = variables)
end
# --- sampling ---
function real_samples(F::AbstractSystem, dim; rng = default_rng())
# choose a random real hyperplane of codimension `dim` by intersecting
# hyperplanes whose normal vectors are uniformly distributed over the unit
# sphere
# [to do] guard against the unlikely event that one of the normals is zero
normals = transpose(hcat(
(normalize(randn(rng, nvariables(F))) for _ in 1:dim)...
))
cut = LinearSubspace(normals, fill(0., dim))
filter(isreal, results(witness_set(F, cut, seed = 0x1974abba)))
end
AbstractAlgebra.evaluate(pt::Point, vals::Vector{<:RingElement}) =
GLMakie.Point3f([evaluate(u, vals) for u in pt.coords])
function AbstractAlgebra.evaluate(sph::Sphere, vals::Vector{<:RingElement})
radius = 1 / evaluate(sph.coords[1], vals)
center = radius * [evaluate(u, vals) for u in sph.coords[3:end]]
GLMakie.Sphere(GLMakie.Point3f(center), radius)
end
end


@ -0,0 +1,76 @@
include("HittingSet.jl")
module Engine
include("Engine.Algebraic.jl")
include("Engine.Numerical.jl")
using .Algebraic
using .Numerical
export Construction, mprod, codimension, dimension
end
# ~~~ sandbox setup ~~~
using Random
using Distributions
using LinearAlgebra
using AbstractAlgebra
using HomotopyContinuation
using GLMakie
CoeffType = Rational{Int64}
spheres = [Engine.Sphere{CoeffType}() for _ in 1:3]
tangencies = [
Engine.AlignsWithBy{CoeffType}(
spheres[n],
spheres[mod1(n+1, length(spheres))],
CoeffType(1)
)
for n in 1:3
]
ctx_tan_sph = Engine.Construction{CoeffType}(elements = spheres, relations = tangencies)
ideal_tan_sph, eqns_tan_sph = Engine.realize(ctx_tan_sph)
freedom = Engine.dimension(ideal_tan_sph)
println("Three mutually tangent spheres: $freedom degrees of freedom")
# --- test rational cut ---
coordring = base_ring(ideal_tan_sph)
vbls = Variable.(symbols(coordring))
# test a random witness set
system = CompiledSystem(System(eqns_tan_sph, variables = vbls))
norm2 = vec -> real(dot(conj.(vec), vec))
rng = MersenneTwister(6071)
n_planes = 6
samples = []
for _ in 1:n_planes
real_solns = solution.(Engine.Numerical.real_samples(system, freedom, rng = rng))
for soln in real_solns
if all(norm2(soln - samp) > 1e-4*length(gens(coordring)) for samp in samples)
push!(samples, soln)
end
end
end
println("Found $(length(samples)) sample solutions")
# show a sample solution
function show_solution(ctx, vals)
# evaluate elements
real_vals = real.(vals)
disp_points = [Engine.Numerical.evaluate(pt, real_vals) for pt in ctx.points]
disp_spheres = [Engine.Numerical.evaluate(sph, real_vals) for sph in ctx.spheres]
# create scene
scene = Scene()
cam3d!(scene)
scatter!(scene, disp_points, color = :green)
for sph in disp_spheres
mesh!(scene, sph, color = :gray)
end
scene
end


@ -0,0 +1,111 @@
module HittingSet
export HittingSetProblem, solve
HittingSetProblem{T} = Pair{Set{T}, Vector{Pair{T, Set{Set{T}}}}}
# `targets` should be a collection of Set objects
function HittingSetProblem(targets, chosen = Set())
wholeset = union(targets...)
T = eltype(wholeset)
unsorted_moves = [
elt => Set(filter(s -> elt ∉ s, targets))
for elt in wholeset
]
moves = sort(unsorted_moves, by = pair -> length(pair.second))
Set{T}(chosen) => moves
end
function Base.display(problem::HittingSetProblem{T}) where T
println("HittingSetProblem{$T}")
chosen = problem.first
println(" {", join(string.(chosen), ", "), "}")
moves = problem.second
for (choice, missed) in moves
println(" | ", choice)
for s in missed
println(" | | {", join(string.(s), ", "), "}")
end
end
println()
end
function solve(pblm::HittingSetProblem{T}, maxdepth = Inf) where T
problems = Dict(pblm)
while length(first(problems).first) < maxdepth
subproblems = typeof(problems)()
for (chosen, moves) in problems
if isempty(moves)
return chosen
else
for (choice, missed) in moves
to_be_chosen = union(chosen, Set([choice]))
if isempty(missed)
return to_be_chosen
elseif !haskey(subproblems, to_be_chosen)
push!(subproblems, HittingSetProblem(missed, to_be_chosen))
end
end
end
end
problems = subproblems
end
problems
end
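# for example, `solve(HittingSetProblem(targets))` with the first target list
# from the `test` function below returns a minimum-size hitting set: no single
# element meets all five targets there, so the result is some two-element set
# such as Set([1, 4])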
function test(n = 1)
T = [Int64, Int64, Symbol, Symbol][n]
targets = Set{T}.([
[
[1, 3, 5],
[2, 3, 4],
[1, 4],
[2, 3, 4, 5],
[4, 5]
],
# example from Amit Chakrabarti's graduate-level algorithms class (CS 105)
# notes by Valika K. Wan and Khanh Do Ba, Winter 2005
# https://www.cs.dartmouth.edu/~ac/Teach/CS105-Winter05/
[
[1, 3], [1, 4], [1, 5],
[1, 3], [1, 2, 4], [1, 2, 5],
[4, 3], [ 2, 4], [ 2, 5],
[6, 3], [6, 4], [ 5]
],
[
[:w, :x, :y],
[:x, :y, :z],
[:w, :z],
[:x, :y]
],
# Wikipedia showcases this as an example of a problem where the greedy
# algorithm performs especially poorly
[
[:a, :x, :t1],
[:a, :y, :t2],
[:a, :y, :t3],
[:a, :z, :t4],
[:a, :z, :t5],
[:a, :z, :t6],
[:a, :z, :t7],
[:b, :x, :t8],
[:b, :y, :t9],
[:b, :y, :t10],
[:b, :z, :t11],
[:b, :z, :t12],
[:b, :z, :t13],
[:b, :z, :t14]
]
][n])
problem = HittingSetProblem(targets)
if isa(problem, HittingSetProblem{T})
println("Correct type")
else
println("Wrong type: ", typeof(problem))
end
problem
end
end


@ -0,0 +1,96 @@
<!DOCTYPE html>
<html>
<head>
<style>
body {
background-color: #ffe0f0;
}
/* needed to keep Ganja canvas from blowing up */
canvas {
min-width: 600px;
max-width: 600px;
min-height: 600px;
max-height: 600px;
}
</style>
<script src="https://unpkg.com/ganja.js"></script>
</head>
<body>
<p><button onclick="flip()">Flip</button></p>
<script>
// in the default view, e4 + e5 is the point at infinity
let CGA3 = Algebra(4, 1);
let elements = [
CGA3.inline(() => Math.sqrt(0.5)*( 1e1 + 1e2 + 1e3 + 1e5))(),
CGA3.inline(() => Math.sqrt(0.5)*( 1e1 - 1e2 - 1e3 + 1e5))(),
CGA3.inline(() => Math.sqrt(0.5)*(-1e1 + 1e2 - 1e3 + 1e5))(),
CGA3.inline(() => Math.sqrt(0.5)*(-1e1 - 1e2 + 1e3 + 1e5))(),
CGA3.inline(() => -Math.sqrt(3)*1e4 + Math.sqrt(2)*1e5)()
];
/*
these blocks of commented-out code can be used to confirm that a spacelike
vector and its Hodge dual represent the same generalized sphere
*/
/*let elements = [
CGA3.inline(() => Math.sqrt(0.5)*!( 1e1 + 1e2 + 1e3 + 1e5))(),
CGA3.inline(() => Math.sqrt(0.5)*!( 1e1 - 1e2 - 1e3 + 1e5))(),
CGA3.inline(() => Math.sqrt(0.5)*!(-1e1 + 1e2 - 1e3 + 1e5))(),
CGA3.inline(() => Math.sqrt(0.5)*!(-1e1 - 1e2 + 1e3 + 1e5))(),
CGA3.inline(() => !(-Math.sqrt(3)*1e4 + Math.sqrt(2)*1e5))()
];*/
/*let elements = [
CGA3.inline(() => 1e1 + 1e5)(),
CGA3.inline(() => 1e2 + 1e5)(),
CGA3.inline(() => 1e3 + 1e5)(),
CGA3.inline(() => -1e4 + 1e5)(),
CGA3.inline(() => Math.sqrt(0.5)*(1e1 + 1e2 + 1e3 + 1e5))(),
CGA3.inline(() => Math.sqrt(0.5)*!(1e1 + 1e2 + 1e3 - 0.01e4 + 1e5))()
];*/
// set up palette
var colorIndex;
var palette = [0xff00b0, 0x00ffb0, 0x00b0ff, 0x8040ff, 0xc0c0c0];
function nextColor() {
colorIndex = (colorIndex + 1) % palette.length;
return palette[colorIndex];
}
function resetColorCycle() {
colorIndex = palette.length - 1;
}
resetColorCycle();
// create scene function
function scene() {
commands = [];
resetColorCycle();
elements.forEach((elt) => commands.push(nextColor(), elt));
return commands;
}
// initialize graph
let graph = CGA3.graph(
scene,
{
conformal: true, gl: true, grid: true
}
)
document.body.appendChild(graph);
function flip() {
let last = elements.length - 1;
for (let n = 0; n < last; ++n) {
// reflect
elements[n] = CGA3.Mul(CGA3.Mul(elements[last], elements[n]), elements[last]);
// de-noise
for (let k = 6; k < elements[n].length; ++k) {
/*for (let k = 0; k < 26; ++k) {*/
elements[n][k] = 0;
}
}
requestAnimationFrame(graph.update.bind(graph, scene));
}
</script>
</body>
</html>


@ -0,0 +1,127 @@
using Blink
using Colors
# === utilities ===
append_to_head!(w, type, content) = @js w begin
@var element = document.createElement($type)
element.appendChild(document.createTextNode($content))
document.head.appendChild(element)
end
style!(w, stylesheet) = append_to_head!(w, "style", stylesheet)
script!(w, code) = append_to_head!(w, "script", code)
function add_element!(vec)
# add element
full_vec = [0; vec; fill(0, 26)]
n = @js win elements.push(@new CGA3($full_vec))
# generate palette. this is Gadfly's `default_discrete_colors` palette,
# available under the MIT license
palette = distinguishable_colors(
n,
[LCHab(70, 60, 240)],
transform = c -> deuteranopic(c, 0.5),
lchoices = Float64[65, 70, 75, 80],
cchoices = Float64[0, 50, 60, 70],
hchoices = range(0, stop=330, length=24)
)
palette_packed = [RGB24(c).color for c in palette]
@js win palette = $palette_packed
end
# === build page ===
# create window and open developer console
win = Window()
opentools(win)
# set stylesheet
style!(win, """
body {
background-color: #ffe0f0;
}
/* needed to keep Ganja canvas from blowing up */
canvas {
min-width: 600px;
max-width: 600px;
min-height: 600px;
max-height: 600px;
}
""")
# load Ganja.js
loadjs!(win, "https://unpkg.com/ganja.js")
# create global functions and variables
script!(win, """
// create algebra
var CGA3 = Algebra(4, 1);
// initialize element list and palette
var elements = [];
var palette = [];
// declare visualization handle
var graph;
// create scene function
function scene() {
commands = [];
for (let n = 0; n < elements.length; ++n) {
commands.push(palette[n], elements[n]);
}
return commands;
}
function flip() {
let last = elements.length - 1;
for (let n = 0; n < last; ++n) {
// reflect
elements[n] = CGA3.Mul(CGA3.Mul(elements[last], elements[n]), elements[last]);
// de-noise
for (let k = 6; k < elements[n].length; ++k) {
elements[n][k] = 0;
}
}
requestAnimationFrame(graph.update.bind(graph, scene));
}
""")
# set up controls
body!(win, """
<p><button id="flip-button" onclick="flip()">Flip</button></p>
""", async = false)
# === set up visualization ===
# list elements. in the default view, e4 + e5 is the point at infinity
elements = sqrt(0.5) * BigFloat[
1 1 -1 -1 0;
1 -1 1 -1 0;
1 -1 -1 1 0;
0 0 0 0 -sqrt(6);
1 1 1 1 2
]
# load elements
for vec in eachcol(elements)
add_element!(vec)
end
# initialize visualization
@js win begin
graph = CGA3.graph(
scene,
Dict(
"conformal" => true,
"gl" => true,
"grid" => true
)
)
document.body.appendChild(graph)
end


@ -0,0 +1,573 @@
module Engine
using LinearAlgebra
using GenericLinearAlgebra
using SparseArrays
using Random
using Optim
export
rand_on_shell, Q, DescentHistory,
realize_gram_gradient, realize_gram_newton, realize_gram_optim,
realize_gram_alt_proj, realize_gram
# === guessing ===
sconh(t, u) = 0.5*(exp(t) + u*exp(-t))
function rand_on_sphere(rng::AbstractRNG, ::Type{T}, n) where T
out = randn(rng, T, n)
tries_left = 2
while dot(out, out) < 1e-6 && tries_left > 0
out = randn(rng, T, n)
tries_left -= 1
end
normalize(out)
end
##[TO DO] write a test to confirm that the outputs are on the correct shells
function rand_on_shell(rng::AbstractRNG, shell::T) where T <: Number
space_part = rand_on_sphere(rng, T, 4)
rapidity = randn(rng, T)
sig = sign(shell)
nullmix * [sconh(rapidity, sig)*space_part; sconh(rapidity, -sig)]
end
rand_on_shell(rng::AbstractRNG, shells::Array{T}) where T <: Number =
hcat([rand_on_shell(rng, sh) for sh in shells]...)
rand_on_shell(shells::Array{<:Number}) = rand_on_shell(Random.default_rng(), shells)
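# a minimal sketch of the shell test suggested in the to-do note above,
# assuming the intended invariant is that each sample `v` satisfies
# `dot(v, Q*v) ≈ sign(shell)`, with `Q` the Lorentz form defined below:
#
#   v = rand_on_shell(Random.default_rng(), BigFloat(-1))
#   @assert isapprox(dot(v, Q*v), -1)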
# === elements ===
point(pos) = [pos; 0.5; 0.5 * dot(pos, pos)]
plane(normal, offset) = [-normal; 0; -offset]
function sphere(center, radius)
dist_sq = dot(center, center)
[
center / radius;
0.5 / radius;
0.5 * (dist_sq / radius - radius)
]
end
# === Gram matrix realization ===
# basis changes
nullmix = [Matrix{Int64}(I, 3, 3) zeros(Int64, 3, 2); zeros(Int64, 2, 3) [-1 1; 1 1]//2]
unmix = [Matrix{Int64}(I, 3, 3) zeros(Int64, 3, 2); zeros(Int64, 2, 3) [-1 1; 1 1]]
# the Lorentz form
Q = [Matrix{Int64}(I, 3, 3) zeros(Int64, 3, 2); zeros(Int64, 2, 3) [0 -2; -2 0]]
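# under this form, the element vectors defined above satisfy (by direct
# calculation, so treat these as sanity checks rather than established
# conventions):
#
#   dot(point(p), Q*point(q))         == -dot(p - q, p - q) / 2
#   dot(point(p), Q*plane(n, off))    == off - dot(p, n)
#   dot(point(p), Q*sphere(c, r))     == (r^2 - dot(p - c, p - c)) / (2*r)
#   dot(sphere(c, r), Q*sphere(c, r)) == 1
#
# so a point lies on a plane or sphere exactly when the corresponding product
# vanishes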
# project a matrix onto the subspace of matrices whose entries vanish away from
# the given indices
function proj_to_entries(mat, indices)
result = zeros(size(mat))
for (j, k) in indices
result[j, k] = mat[j, k]
end
result
end
# the difference between the matrices `target` and `attempt`, projected onto the
# subspace of matrices whose entries vanish at each empty index of `target`
function proj_diff(target::SparseMatrixCSC{T, <:Any}, attempt::Matrix{T}) where T
J, K, values = findnz(target)
result = zeros(size(target))
for (j, k, val) in zip(J, K, values)
result[j, k] = val - attempt[j, k]
end
result
end
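# for example, if `target` has explicit entries only at (1, 1) and (2, 1), the
# result is zero everywhere except those two positions, which hold
# target[j, k] - attempt[j, k]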
# a type for keeping track of gradient descent history
struct DescentHistory{T}
scaled_loss::Array{T}
neg_grad::Array{Matrix{T}}
base_step::Array{Matrix{T}}
hess::Array{Hermitian{T, Matrix{T}}}
slope::Array{T}
stepsize::Array{T}
positive::Array{Bool}
backoff_steps::Array{Int64}
last_line_L::Array{Matrix{T}}
last_line_loss::Array{T}
function DescentHistory{T}(
scaled_loss = Array{T}(undef, 0),
neg_grad = Array{Matrix{T}}(undef, 0),
hess = Array{Hermitian{T, Matrix{T}}}(undef, 0),
base_step = Array{Matrix{T}}(undef, 0),
slope = Array{T}(undef, 0),
stepsize = Array{T}(undef, 0),
positive = Bool[],
backoff_steps = Int64[],
last_line_L = Array{Matrix{T}}(undef, 0),
last_line_loss = Array{T}(undef, 0)
) where T
new(scaled_loss, neg_grad, hess, base_step, slope, stepsize, positive, backoff_steps, last_line_L, last_line_loss)
end
end
# seek a matrix `L` for which `L'QL` matches the sparse matrix `gram` at every
# explicit entry of `gram`. use gradient descent starting from `guess`
function realize_gram_gradient(
gram::SparseMatrixCSC{T, <:Any},
guess::Matrix{T};
scaled_tol = 1e-30,
min_efficiency = 0.5,
init_stepsize = 1.0,
backoff = 0.9,
max_descent_steps = 600,
max_backoff_steps = 110
) where T <: Number
# start history
history = DescentHistory{T}()
# scale tolerance
scale_adjustment = sqrt(T(nnz(gram)))
tol = scale_adjustment * scaled_tol
# initialize variables
stepsize = init_stepsize
L = copy(guess)
# do gradient descent
Δ_proj = proj_diff(gram, L'*Q*L)
loss = dot(Δ_proj, Δ_proj)
for _ in 1:max_descent_steps
# stop if the loss is tolerably low
if loss < tol
break
end
# find the negative gradient of the loss function
neg_grad = 4*Q*L*Δ_proj
slope = norm(neg_grad)
dir = neg_grad / slope
# store current position, loss, and slope
L_last = L
loss_last = loss
push!(history.scaled_loss, loss / scale_adjustment)
push!(history.neg_grad, neg_grad)
push!(history.slope, slope)
# find a good step size using backtracking line search
push!(history.stepsize, 0)
push!(history.backoff_steps, max_backoff_steps)
empty!(history.last_line_L)
empty!(history.last_line_loss)
for backoff_steps in 0:max_backoff_steps
history.stepsize[end] = stepsize
L = L_last + stepsize * dir
Δ_proj = proj_diff(gram, L'*Q*L)
loss = dot(Δ_proj, Δ_proj)
improvement = loss_last - loss
push!(history.last_line_L, L)
push!(history.last_line_loss, loss / scale_adjustment)
if improvement >= min_efficiency * stepsize * slope
history.backoff_steps[end] = backoff_steps
break
end
stepsize *= backoff
end
# [DEBUG] if we've hit a wall, quit
if history.backoff_steps[end] == max_backoff_steps
break
end
end
# return the factorization and its history
push!(history.scaled_loss, loss / scale_adjustment)
L, history
end
function basis_matrix(::Type{T}, j, k, dims) where T
result = zeros(T, dims)
result[j, k] = one(T)
result
end
# seek a matrix `L` for which `L'QL` matches the sparse matrix `gram` at every
# explicit entry of `gram`. use Newton's method starting from `guess`
function realize_gram_newton(
gram::SparseMatrixCSC{T, <:Any},
guess::Matrix{T};
scaled_tol = 1e-30,
rate = 1,
max_steps = 100
) where T <: Number
# start history
history = DescentHistory{T}()
# find the dimension of the search space
dims = size(guess)
element_dim, construction_dim = dims
total_dim = element_dim * construction_dim
# list the constrained entries of the gram matrix
J, K, _ = findnz(gram)
constrained = zip(J, K)
# scale the tolerance
scale_adjustment = sqrt(T(length(constrained)))
tol = scale_adjustment * scaled_tol
# use Newton's method
L = copy(guess)
for step in 0:max_steps
# evaluate the loss function
Δ_proj = proj_diff(gram, L'*Q*L)
loss = dot(Δ_proj, Δ_proj)
# store the current loss
push!(history.scaled_loss, loss / scale_adjustment)
# stop if the loss is tolerably low
if loss < tol || step > max_steps
break
end
# find the negative gradient of the loss function
neg_grad = 4*Q*L*Δ_proj
# find the negative Hessian of the loss function
hess = Matrix{T}(undef, total_dim, total_dim)
indices = [(j, k) for k in 1:construction_dim for j in 1:element_dim]
for (j, k) in indices
basis_mat = basis_matrix(T, j, k, dims)
neg_dΔ = basis_mat'*Q*L + L'*Q*basis_mat
neg_dΔ_proj = proj_to_entries(neg_dΔ, constrained)
deriv_grad = 4*Q*(-basis_mat*Δ_proj + L*neg_dΔ_proj)
hess[:, (k-1)*element_dim + j] = reshape(deriv_grad, total_dim)
end
hess = Hermitian(hess)
push!(history.hess, hess)
# compute the Newton step
step = hess \ reshape(neg_grad, total_dim)
L += rate * reshape(step, dims)
end
# return the factorization and its history
L, history
end
LinearAlgebra.eigen!(A::Symmetric{BigFloat, Matrix{BigFloat}}; sortby::Nothing) =
eigen!(Hermitian(A))
function convertnz(type, mat)
J, K, values = findnz(mat)
sparse(J, K, type.(values))
end
function realize_gram_optim(
gram::SparseMatrixCSC{T, <:Any},
guess::Matrix{T}
) where T <: Number
# find the dimension of the search space
dims = size(guess)
element_dim, construction_dim = dims
total_dim = element_dim * construction_dim
# list the constrained entries of the gram matrix
J, K, _ = findnz(gram)
constrained = zip(J, K)
# scale the loss function
scale_adjustment = length(constrained)
function loss(L_vec)
L = reshape(L_vec, dims)
Δ_proj = proj_diff(gram, L'*Q*L)
dot(Δ_proj, Δ_proj) / scale_adjustment
end
function loss_grad!(storage, L_vec)
L = reshape(L_vec, dims)
Δ_proj = proj_diff(gram, L'*Q*L)
storage .= reshape(-4*Q*L*Δ_proj, total_dim) / scale_adjustment
end
function loss_hess!(storage, L_vec)
L = reshape(L_vec, dims)
Δ_proj = proj_diff(gram, L'*Q*L)
indices = [(j, k) for k in 1:construction_dim for j in 1:element_dim]
for (j, k) in indices
basis_mat = basis_matrix(T, j, k, dims)
neg_dΔ = basis_mat'*Q*L + L'*Q*basis_mat
neg_dΔ_proj = proj_to_entries(neg_dΔ, constrained)
deriv_grad = 4*Q*(-basis_mat*Δ_proj + L*neg_dΔ_proj) / scale_adjustment
storage[:, (k-1)*element_dim + j] = reshape(deriv_grad, total_dim)
end
end
optimize(
loss, loss_grad!, loss_hess!,
reshape(guess, total_dim),
Newton()
)
end
# seek a matrix `L` for which `L'QL` matches the sparse matrix `gram` at every
# explicit entry of `gram`. use gradient descent starting from `guess`, with an
# alternate technique for finding the projected base step from the unprojected
# Hessian
function realize_gram_alt_proj(
gram::SparseMatrixCSC{T, <:Any},
guess::Matrix{T},
frozen = CartesianIndex[];
scaled_tol = 1e-30,
min_efficiency = 0.5,
backoff = 0.9,
reg_scale = 1.1,
max_descent_steps = 200,
max_backoff_steps = 110
) where T <: Number
# start history
history = DescentHistory{T}()
# find the dimension of the search space
dims = size(guess)
element_dim, construction_dim = dims
total_dim = element_dim * construction_dim
# list the constrained entries of the gram matrix
J, K, _ = findnz(gram)
constrained = zip(J, K)
# scale the tolerance
scale_adjustment = sqrt(T(length(constrained)))
tol = scale_adjustment * scaled_tol
# convert the frozen indices to stacked format
frozen_stacked = [(index[2]-1)*element_dim + index[1] for index in frozen]
# initialize search state
L = copy(guess)
Δ_proj = proj_diff(gram, L'*Q*L)
loss = dot(Δ_proj, Δ_proj)
# use Newton's method with backtracking and gradient descent backup
for step in 1:max_descent_steps
# stop if the loss is tolerably low
if loss < tol
break
end
# find the negative gradient of the loss function
neg_grad = 4*Q*L*Δ_proj
# find the negative Hessian of the loss function
hess = Matrix{T}(undef, total_dim, total_dim)
indices = [(j, k) for k in 1:construction_dim for j in 1:element_dim]
for (j, k) in indices
basis_mat = basis_matrix(T, j, k, dims)
neg_dΔ = basis_mat'*Q*L + L'*Q*basis_mat
neg_dΔ_proj = proj_to_entries(neg_dΔ, constrained)
deriv_grad = 4*Q*(-basis_mat*Δ_proj + L*neg_dΔ_proj)
hess[:, (k-1)*element_dim + j] = reshape(deriv_grad, total_dim)
end
hess_sym = Hermitian(hess)
push!(history.hess, hess_sym)
# regularize the Hessian
min_eigval = minimum(eigvals(hess_sym))
push!(history.positive, min_eigval > 0)
if min_eigval <= 0
hess -= reg_scale * min_eigval * I
end
# compute the Newton step
neg_grad_stacked = reshape(neg_grad, total_dim)
for k in frozen_stacked
neg_grad_stacked[k] = 0
hess[k, :] .= 0
hess[:, k] .= 0
hess[k, k] = 1
end
base_step_stacked = Hermitian(hess) \ neg_grad_stacked
base_step = reshape(base_step_stacked, dims)
push!(history.base_step, base_step)
# store the current position, loss, and slope
L_last = L
loss_last = loss
push!(history.scaled_loss, loss / scale_adjustment)
push!(history.neg_grad, neg_grad)
push!(history.slope, norm(neg_grad))
# find a good step size using backtracking line search
push!(history.stepsize, 0)
push!(history.backoff_steps, max_backoff_steps)
empty!(history.last_line_L)
empty!(history.last_line_loss)
rate = one(T)
step_success = false
base_target_improvement = dot(neg_grad, base_step)
for backoff_steps in 0:max_backoff_steps
history.stepsize[end] = rate
L = L_last + rate * base_step
Δ_proj = proj_diff(gram, L'*Q*L)
loss = dot(Δ_proj, Δ_proj)
improvement = loss_last - loss
push!(history.last_line_L, L)
push!(history.last_line_loss, loss / scale_adjustment)
if improvement >= min_efficiency * rate * base_target_improvement
history.backoff_steps[end] = backoff_steps
step_success = true
break
end
rate *= backoff
end
# if we've hit a wall, quit
if !step_success
return L_last, false, history
end
end
# return the factorization and its history
push!(history.scaled_loss, loss / scale_adjustment)
L, loss < tol, history
end
# seek a matrix `L` for which `L'QL` matches the sparse matrix `gram` at every
# explicit entry of `gram`. use gradient descent starting from `guess`
function realize_gram(
gram::SparseMatrixCSC{T, <:Any},
guess::Matrix{T},
frozen = nothing;
scaled_tol = 1e-30,
min_efficiency = 0.5,
backoff = 0.9,
reg_scale = 1.1,
max_descent_steps = 200,
max_backoff_steps = 110
) where T <: Number
# start history
history = DescentHistory{T}()
# find the dimension of the search space
dims = size(guess)
element_dim, construction_dim = dims
total_dim = element_dim * construction_dim
# list the constrained entries of the gram matrix
J, K, _ = findnz(gram)
constrained = zip(J, K)
# scale the tolerance
scale_adjustment = sqrt(T(length(constrained)))
tol = scale_adjustment * scaled_tol
# list the un-frozen indices
has_frozen = !isnothing(frozen)
if has_frozen
is_unfrozen = fill(true, size(guess))
is_unfrozen[frozen] .= false
unfrozen = findall(is_unfrozen)
unfrozen_stacked = reshape(is_unfrozen, total_dim)
end
# initialize search state
L = copy(guess)
Δ_proj = proj_diff(gram, L'*Q*L)
loss = dot(Δ_proj, Δ_proj)
# use Newton's method with backtracking and gradient descent backup
for step in 1:max_descent_steps
# stop if the loss is tolerably low
if loss < tol
break
end
# find the negative gradient of the loss function
neg_grad = 4*Q*L*Δ_proj
# find the negative Hessian of the loss function
hess = Matrix{T}(undef, total_dim, total_dim)
indices = [(j, k) for k in 1:construction_dim for j in 1:element_dim]
for (j, k) in indices
basis_mat = basis_matrix(T, j, k, dims)
neg_dΔ = basis_mat'*Q*L + L'*Q*basis_mat
neg_dΔ_proj = proj_to_entries(neg_dΔ, constrained)
deriv_grad = 4*Q*(-basis_mat*Δ_proj + L*neg_dΔ_proj)
hess[:, (k-1)*element_dim + j] = reshape(deriv_grad, total_dim)
end
hess = Hermitian(hess)
push!(history.hess, hess)
# regularize the Hessian
min_eigval = minimum(eigvals(hess))
push!(history.positive, min_eigval > 0)
if min_eigval <= 0
hess -= reg_scale * min_eigval * I
end
# compute the Newton step
neg_grad_stacked = reshape(neg_grad, total_dim)
if has_frozen
hess = hess[unfrozen_stacked, unfrozen_stacked]
neg_grad_compressed = neg_grad_stacked[unfrozen_stacked]
else
neg_grad_compressed = neg_grad_stacked
end
base_step_compressed = hess \ neg_grad_compressed
if has_frozen
base_step_stacked = zeros(total_dim)
base_step_stacked[unfrozen_stacked] .= base_step_compressed
else
base_step_stacked = base_step_compressed
end
base_step = reshape(base_step_stacked, dims)
push!(history.base_step, base_step)
# store the current position, loss, and slope
L_last = L
loss_last = loss
push!(history.scaled_loss, loss / scale_adjustment)
push!(history.neg_grad, neg_grad)
push!(history.slope, norm(neg_grad))
# find a good step size using backtracking line search
push!(history.stepsize, 0)
push!(history.backoff_steps, max_backoff_steps)
empty!(history.last_line_L)
empty!(history.last_line_loss)
rate = one(T)
step_success = false
base_target_improvement = dot(neg_grad, base_step)
for backoff_steps in 0:max_backoff_steps
history.stepsize[end] = rate
L = L_last + rate * base_step
Δ_proj = proj_diff(gram, L'*Q*L)
loss = dot(Δ_proj, Δ_proj)
improvement = loss_last - loss
push!(history.last_line_L, L)
push!(history.last_line_loss, loss / scale_adjustment)
if improvement >= min_efficiency * rate * base_target_improvement
history.backoff_steps[end] = backoff_steps
step_success = true
break
end
rate *= backoff
end
# if we've hit a wall, quit
if !step_success
return L_last, false, history
end
end
# return the factorization and its history
push!(history.scaled_loss, loss / scale_adjustment)
L, loss < tol, history
end
end
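# a minimal usage sketch, for a script that has already done
# `include("Engine.jl")` the way the test scripts in this folder do: ask two
# spheres to be externally tangent (off-diagonal Gram entry -1, as in the
# tangency examples) and let `realize_gram` adjust a rough guess
#
#   using SparseArrays
#   gram = sparse([1, 2, 1, 2], [1, 2, 2, 1], BigFloat[1, 1, -1, -1])
#   guess = hcat(
#     Engine.sphere(BigFloat[0, 0, 0], BigFloat(1)),
#     Engine.sphere(BigFloat[3, 0, 1], BigFloat(2))
#   )
#   L, success, history = Engine.realize_gram(gram, guess)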


@ -0,0 +1,99 @@
include("Engine.jl")
using LinearAlgebra
using SparseArrays
function sphere_in_tetrahedron_shape()
# initialize the partial gram matrix for a sphere inscribed in a regular
# tetrahedron
J = Int64[]
K = Int64[]
values = BigFloat[]
for j in 1:5
for k in 1:5
push!(J, j)
push!(K, k)
if j == k
push!(values, 1)
elseif (j <= 4 && k <= 4)
push!(values, -1/BigFloat(3))
else
push!(values, -1)
end
end
end
gram = sparse(J, K, values)
# plot loss along a slice
loss_lin = []
loss_sq = []
mesh = range(0.9, 1.1, 101)
for t in mesh
L = hcat(
Engine.plane(normalize(BigFloat[ 1, 1, 1]), BigFloat(1)),
Engine.plane(normalize(BigFloat[ 1, -1, -1]), BigFloat(1)),
Engine.plane(normalize(BigFloat[-1, 1, -1]), BigFloat(1)),
Engine.plane(normalize(BigFloat[-1, -1, 1]), BigFloat(1)),
Engine.sphere(BigFloat[0, 0, 0], BigFloat(t))
)
Δ_proj = Engine.proj_diff(gram, L'*Engine.Q*L)
push!(loss_lin, norm(Δ_proj))
push!(loss_sq, dot(Δ_proj, Δ_proj))
end
mesh, loss_lin, loss_sq
end
function circles_in_triangle_shape()
# initialize the partial gram matrix for a circle inscribed in a triangle,
# surrounded by three smaller circles tucked into the triangle's corners
J = Int64[]
K = Int64[]
values = BigFloat[]
for j in 1:8
for k in 1:8
filled = false
if j == k
push!(values, 1)
filled = true
elseif (j == 1 || k == 1)
push!(values, 0)
filled = true
elseif (j == 2 || k == 2)
push!(values, -1)
filled = true
end
#=elseif (j <= 5 && j != 2 && k == 9 || k == 9 && k <= 5 && k != 2)
push!(values, 0)
filled = true
end=#
if filled
push!(J, j)
push!(K, k)
end
end
end
append!(J, [6, 4, 6, 5, 7, 5, 7, 3, 8, 3, 8, 4])
append!(K, [4, 6, 5, 6, 5, 7, 3, 7, 3, 8, 4, 8])
append!(values, fill(-1, 12))
# plot loss along a slice
loss_lin = []
loss_sq = []
mesh = range(0.99, 1.01, 101)
for t in mesh
L = hcat(
Engine.plane(BigFloat[0, 0, 1], BigFloat(0)),
Engine.sphere(BigFloat[0, 0, 0], BigFloat(t)),
Engine.plane(BigFloat[1, 0, 0], BigFloat(1)),
Engine.plane(BigFloat[cos(2pi/3), sin(2pi/3), 0], BigFloat(1)),
Engine.plane(BigFloat[cos(-2pi/3), sin(-2pi/3), 0], BigFloat(1)),
Engine.sphere(4//3*BigFloat[-1, 0, 0], BigFloat(1//3)),
Engine.sphere(4//3*BigFloat[cos(-pi/3), sin(-pi/3), 0], BigFloat(1//3)),
Engine.sphere(4//3*BigFloat[cos(pi/3), sin(pi/3), 0], BigFloat(1//3))
)
Δ_proj = Engine.proj_diff(gram, L'*Engine.Q*L)
push!(loss_lin, norm(Δ_proj))
push!(loss_sq, dot(Δ_proj, Δ_proj))
end
mesh, loss_lin, loss_sq
end


@ -0,0 +1,76 @@
include("Engine.jl")
using SparseArrays
using Random
# initialize the partial gram matrix for a circle inscribed in a triangle,
# surrounded by three smaller circles tucked into the triangle's corners
J = Int64[]
K = Int64[]
values = BigFloat[]
for j in 1:9
for k in 1:9
filled = false
if j == 9
if k <= 5 && k != 2
push!(values, 0)
filled = true
end
elseif k == 9
if j <= 5 && j != 2
push!(values, 0)
filled = true
end
elseif j == k
push!(values, 1)
filled = true
elseif j == 1 || k == 1
push!(values, 0)
filled = true
elseif j == 2 || k == 2
push!(values, -1)
filled = true
end
if filled
push!(J, j)
push!(K, k)
end
end
end
append!(J, [6, 4, 6, 5, 7, 5, 7, 3, 8, 3, 8, 4])
append!(K, [4, 6, 5, 6, 5, 7, 3, 7, 3, 8, 4, 8])
append!(values, fill(-1, 12))
#= make construction rigid
append!(J, [3, 4, 4, 5])
append!(K, [4, 3, 5, 4])
append!(values, fill(-0.5, 4))
=#
gram = sparse(J, K, values)
# set initial guess
Random.seed!(58271)
guess = hcat(
Engine.plane(BigFloat[0, 0, 1], BigFloat(0)),
Engine.sphere(BigFloat[0, 0, 0], BigFloat(1//2)) + 0.1*Engine.rand_on_shell([BigFloat(-1)]),
Engine.plane(-BigFloat[1, 0, 0], BigFloat(-1)) + 0.1*Engine.rand_on_shell([BigFloat(-1)]),
Engine.plane(-BigFloat[cos(2pi/3), sin(2pi/3), 0], BigFloat(-1)) + 0.1*Engine.rand_on_shell([BigFloat(-1)]),
Engine.plane(-BigFloat[cos(-2pi/3), sin(-2pi/3), 0], BigFloat(-1)) + 0.1*Engine.rand_on_shell([BigFloat(-1)]),
Engine.sphere(BigFloat[-1, 0, 0], BigFloat(1//5)) + 0.1*Engine.rand_on_shell([BigFloat(-1)]),
Engine.sphere(BigFloat[cos(-pi/3), sin(-pi/3), 0], BigFloat(1//5)) + 0.1*Engine.rand_on_shell([BigFloat(-1)]),
Engine.sphere(BigFloat[cos(pi/3), sin(pi/3), 0], BigFloat(1//5)) + 0.1*Engine.rand_on_shell([BigFloat(-1)]),
BigFloat[0, 0, 0, 0, 1]
)
frozen = [CartesianIndex(j, 9) for j in 1:5]
# complete the gram matrix using Newton's method with backtracking
L, success, history = Engine.realize_gram(gram, guess, frozen)
completed_gram = L'*Engine.Q*L
println("Completed Gram matrix:\n")
display(completed_gram)
if success
println("\nTarget accuracy achieved!")
else
println("\nFailed to reach target accuracy")
end
println("Steps: ", size(history.scaled_loss, 1))
println("Loss: ", history.scaled_loss[end], "\n")

File diff suppressed because it is too large


@ -0,0 +1,85 @@
using LinearAlgebra
using AbstractAlgebra
function printgood(msg)
printstyled("✓", color = :green)
println(" ", msg)
end
function printbad(msg)
printstyled("✗", color = :red)
println(" ", msg)
end
F, gens = rational_function_field(AbstractAlgebra.Rationals{BigInt}(), ["a₁", "a₂", "b₁", "b₂", "c₁", "c₂"])
a = gens[1:2]
b = gens[3:4]
c = gens[5:6]
# three mutually tangent spheres which are all perpendicular to the x, y plane
gram = [
-1 1 1;
1 -1 1;
1 1 -1
]
eig = eigen(gram)
n_pos = count(eig.values .> 0.5)
n_neg = count(eig.values .< -0.5)
if n_pos + n_neg == size(gram, 1)
printgood("Non-degenerate subspace")
else
printbad("Degenerate subspace")
end
sig_rem = Int64[ones(1-n_pos); -ones(4-n_neg)]
unk = hcat(a, b, c)
M = matrix_space(F, 5, 5)
big_gram = M(F.([
diagm(sig_rem) unk;
transpose(unk) gram
]))
r, p, L, U = lu(big_gram)
if isone(p)
printgood("Found a solution")
else
printbad("Didn't find a solution")
end
solution = transpose(L)
mform = U * inv(solution)
vals = [0, 0, 0, 1, 0, -3//4]
solution_ex = [evaluate(entry, vals) for entry in solution]
mform_ex = [evaluate(entry, vals) for entry in mform]
std_basis = [
0 0 0 1 1;
0 0 0 1 -1;
1 0 0 0 0;
0 1 0 0 0;
0 0 1 0 0
]
std_solution = M(F.(std_basis)) * solution
std_solution_ex = std_basis * solution_ex
println("Minkowski form:")
display(mform_ex)
big_gram_recovered = transpose(solution_ex) * mform_ex * solution_ex
valid = all(iszero.(
[evaluate(entry, vals) for entry in big_gram] - big_gram_recovered
))
if valid
printgood("Recovered Gram matrix:")
else
printbad("Didn't recover Gram matrix. Instead, got:")
end
display(big_gram_recovered)
# this should be a solution
hand_solution = [0 0 1 0 0; 0 0 -1 2 2; 0 0 0 1 -1; 1 0 0 0 0; 0 1 0 0 0]
unmix = Rational{Int64}[[1//2 1//2; 1//2 -1//2] zeros(Int64, 2, 3); zeros(Int64, 3, 2) Matrix{Int64}(I, 3, 3)]
hand_solution_diag = unmix * hand_solution
big_gram_hand_recovered = transpose(hand_solution_diag) * diagm([1; -ones(Int64, 4)]) * hand_solution_diag
println("Gram matrix from hand-written solution:")
display(big_gram_hand_recovered)


@ -0,0 +1,27 @@
F = QQ['a', 'b', 'c'].fraction_field()
a, b, c = F.gens()
# three mutually tangent spheres which are all perpendicular to the x, y plane
gram = matrix([
[-1, 0, 0, 0, 0],
[0, -1, a, b, c],
[0, a, -1, 1, 1],
[0, b, 1, -1, 1],
[0, c, 1, 1, -1]
])
P, L, U = gram.LU()
solution = (P * L).transpose()
mform = U * L.transpose().inverse()
concrete = solution.subs({a: 0, b: 1, c: -3/4})
std_basis = matrix([
[0, 0, 0, 1, 1],
[0, 0, 0, 1, -1],
[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0]
])
std_solution = std_basis * solution
std_concrete = std_basis * concrete


@ -0,0 +1,86 @@
include("Engine.jl")
using SparseArrays
# this problem is from a sangaku by Irisawa Shintarō Hiroatsu. the article below
# includes a nice translation of the problem statement, which was recorded in
# Uchida Itsumi's book _Kokon sankan_ (_Mathematics, Past and Present_)
#
# "Japan's 'Wasan' Mathematical Tradition", by Abe Haruki
# https://www.nippon.com/en/japan-topics/c12801/
#
# initialize the partial gram matrix
J = Int64[]
K = Int64[]
values = BigFloat[]
for s in 1:9
# each sphere is represented by a spacelike vector
push!(J, s)
push!(K, s)
push!(values, 1)
# the circumscribing sphere is internally tangent to all of the other spheres
if s > 1
append!(J, [1, s])
append!(K, [s, 1])
append!(values, [1, 1])
end
if s > 3
# each chain sphere is externally tangent to the "sun" and "moon" spheres
for n in 2:3
append!(J, [s, n])
append!(K, [n, s])
append!(values, [-1, -1])
end
# each chain sphere is externally tangent to the next chain sphere
s_next = 4 + mod(s-3, 6)
append!(J, [s, s_next])
append!(K, [s_next, s])
append!(values, [-1, -1])
end
end
gram = sparse(J, K, values)
# make an initial guess
guess = hcat(
Engine.sphere(BigFloat[0, 0, 0], BigFloat(15)),
Engine.sphere(BigFloat[0, 0, -9], BigFloat(5)),
Engine.sphere(BigFloat[0, 0, 11], BigFloat(3)),
(
Engine.sphere(9*BigFloat[cos(k*π/3), sin(k*π/3), 0], BigFloat(2.5))
for k in 1:6
)...
)
frozen = [CartesianIndex(4, k) for k in 1:4]
# complete the gram matrix using Newton's method with backtracking
L, success, history = Engine.realize_gram(gram, guess, frozen)
completed_gram = L'*Engine.Q*L
println("Completed Gram matrix:\n")
display(completed_gram)
if success
println("\nTarget accuracy achieved!")
else
println("\nFailed to reach target accuracy")
end
println("Steps: ", size(history.scaled_loss, 1))
println("Loss: ", history.scaled_loss[end], "\n")
if success
println("Chain diameters:")
println(" ", 1 / L[4,4], " sun (given)")
for k in 5:9
println(" ", 1 / L[4,k], " sun")
end
end
# test an alternate technique for finding the projected base step from the
# unprojected Hessian
L_alt, success_alt, history_alt = Engine.realize_gram_alt_proj(gram, guess, frozen)
completed_gram_alt = L_alt'*Engine.Q*L_alt
println("\nDifference in result using alternate projection:\n")
display(completed_gram_alt - completed_gram)
println("\nDifference in steps: ", size(history_alt.scaled_loss, 1) - size(history.scaled_loss, 1))
println("Difference in loss: ", history_alt.scaled_loss[end] - history.scaled_loss[end], "\n")


@ -0,0 +1,49 @@
using LowRankModels
using LinearAlgebra
using SparseArrays
# testing Gram matrix recovery using the LowRankModels package
# initialize the partial gram matrix for an arrangement of seven spheres in
# which spheres 1 through 5 are mutually tangent, and spheres 3 through 7 are
# also mutually tangent
I = Int64[]
J = Int64[]
values = Float64[]
for i in 1:7
for j in 1:7
if (i <= 5 && j <= 5) || (i >= 3 && j >= 3)
push!(I, i)
push!(J, j)
push!(values, i == j ? 1 : -1)
end
end
end
gram = sparse(I, J, values)
# in this initial guess, the mutual tangency condition is satisfied for spheres
# 1 through 5
X₀ = sqrt(0.5) * [
1 0 1 1 1;
1 0 1 -1 -1;
1 0 -1 1 -1;
1 0 -1 -1 1;
2 -sqrt(6) 0 0 0;
0.2 0.3 -0.1 -0.2 0.1;
0.1 -0.2 0.3 0.4 -0.1
]'
Y₀ = diagm([-1, 1, 1, 1, 1]) * X₀
# search parameters
search_params = ProxGradParams(
1.0;
max_iter = 100,
inner_iter = 1,
abs_tol = 1e-16,
rel_tol = 1e-9,
min_stepsize = 0.01
)
# complete gram matrix
model = GLRM(gram, QuadLoss(), ZeroReg(), ZeroReg(), 5, X = X₀, Y = Y₀)
X, Y, history = fit!(model, search_params)


@ -0,0 +1,37 @@
using LinearAlgebra
using AbstractAlgebra
function printgood(msg)
printstyled("✓", color = :green)
println(" ", msg)
end
function printbad(msg)
printstyled("✗", color = :red)
println(" ", msg)
end
F, gens = rational_function_field(AbstractAlgebra.Rationals{BigInt}(), ["x", "t₁", "t₂", "t₃"])
x = gens[1]
t = gens[2:4]
# three mutually tangent spheres which are all perpendicular to the x, y plane
M = matrix_space(F, 7, 7)
gram = M(F[
1 -1 -1 -1 -1 t[1] t[2];
-1 1 -1 -1 -1 x t[3]
-1 -1 1 -1 -1 -1 -1;
-1 -1 -1 1 -1 -1 -1;
-1 -1 -1 -1 1 -1 -1;
t[1] x -1 -1 -1 1 -1;
t[2] t[3] -1 -1 -1 -1 1
])
r, p, L, U = lu(gram)
if isone(p)
printgood("Found a solution")
else
printbad("Didn't find a solution")
end
solution = transpose(L)
mform = U * inv(solution)

View File

@ -0,0 +1,90 @@
include("Engine.jl")
using SparseArrays
using AbstractAlgebra
using PolynomialRoots
using Random
# initialize the partial gram matrix for an arrangement of seven spheres in
# which spheres 1 through 5 are mutually tangent, and spheres 3 through 7 are
# also mutually tangent
J = Int64[]
K = Int64[]
values = BigFloat[]
for j in 1:7
for k in 1:7
if (j <= 5 && k <= 5) || (j >= 3 && k >= 3)
push!(J, j)
push!(K, k)
push!(values, j == k ? 1 : -1)
end
end
end
gram = sparse(J, K, values)
# set the independent variable
indep_val = -9//5
gram[6, 1] = BigFloat(indep_val)
gram[1, 6] = gram[6, 1]
# in this initial guess, the mutual tangency condition is satisfied for spheres
# 1 through 5
Random.seed!(50793)
guess = let
a = sqrt(BigFloat(3)/2)
hcat(
sqrt(1/BigFloat(2)) * BigFloat[
1 1 -1 -1 0
1 -1 1 -1 0
1 -1 -1 1 0
0.5 0.5 0.5 0.5 1+a
0.5 0.5 0.5 0.5 1-a
] + 0.2*Engine.rand_on_shell(fill(BigFloat(-1), 5)),
Engine.rand_on_shell(fill(BigFloat(-1), 2))
)
end
# complete the gram matrix using Newton's method with backtracking
L, success, history = Engine.realize_gram(gram, guess)
completed_gram = L'*Engine.Q*L
println("Completed Gram matrix:\n")
display(completed_gram)
if success
println("\nTarget accuracy achieved!")
else
println("\nFailed to reach target accuracy")
end
println("Steps: ", size(history.scaled_loss, 1))
println("Loss: ", history.scaled_loss[end], "\n")
# === algebraic check ===
#=
R, gens = polynomial_ring(AbstractAlgebra.Rationals{BigInt}(), ["x", "t₁", "t₂", "t₃"])
x = gens[1]
t = gens[2:4]
S, u = polynomial_ring(AbstractAlgebra.Rationals{BigInt}(), "u")
M = matrix_space(R, 7, 7)
gram_symb = M(R[
1 -1 -1 -1 -1 t[1] t[2];
-1 1 -1 -1 -1 x t[3];
-1 -1 1 -1 -1 -1 -1;
-1 -1 -1 1 -1 -1 -1;
-1 -1 -1 -1 1 -1 -1;
t[1] x -1 -1 -1 1 -1;
t[2] t[3] -1 -1 -1 -1 1
])
rank_constraints = det.([
gram_symb[1:6, 1:6],
gram_symb[2:7, 2:7],
gram_symb[[1, 3, 4, 5, 6, 7], [1, 3, 4, 5, 6, 7]]
])
# solve for x and t
x_constraint = 25//16 * to_univariate(S, evaluate(rank_constraints[1], [2], [indep_val]))
t₂_constraint = 25//16 * to_univariate(S, evaluate(rank_constraints[3], [2], [indep_val]))
x_vals = PolynomialRoots.roots(x_constraint.coeffs)
t₂_vals = PolynomialRoots.roots(t₂_constraint.coeffs)
=#

View File

@ -0,0 +1,76 @@
include("Engine.jl")
using SparseArrays
using Random
# initialize the partial gram matrix for a sphere inscribed in a regular
# tetrahedron
J = Int64[]
K = Int64[]
values = BigFloat[]
for j in 1:6
for k in 1:6
filled = false
if j == 6
if k <= 4
push!(values, 0)
filled = true
end
elseif k == 6
if j <= 4
push!(values, 0)
filled = true
end
elseif j == k
push!(values, 1)
filled = true
elseif j <= 4 && k <= 4
push!(values, -1/BigFloat(3))
filled = true
else
push!(values, -1)
filled = true
end
if filled
push!(J, j)
push!(K, k)
end
end
end
gram = sparse(J, K, values)
# set initial guess
Random.seed!(99230)
guess = hcat(
sqrt(1/BigFloat(3)) * BigFloat[
1 1 -1 -1 0
1 -1 1 -1 0
1 -1 -1 1 0
0 0 0 0 1.5
1 1 1 1 -0.5
] + 0.2*Engine.rand_on_shell(fill(BigFloat(-1), 5)),
BigFloat[0, 0, 0, 0, 1]
)
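# freeze every coordinate of element 6, which is held at the point at infinity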
frozen = [CartesianIndex(j, 6) for j in 1:5]
# complete the gram matrix using Newton's method with backtracking
L, success, history = Engine.realize_gram(gram, guess, frozen)
completed_gram = L'*Engine.Q*L
println("Completed Gram matrix:\n")
display(completed_gram)
if success
println("\nTarget accuracy achieved!")
else
println("\nFailed to reach target accuracy")
end
println("Steps: ", size(history.scaled_loss, 1))
println("Loss: ", history.scaled_loss[end], "\n")
# test an alternate technique for finding the projected base step from the
# unprojected Hessian
L_alt, success_alt, history_alt = Engine.realize_gram_alt_proj(gram, guess, frozen)
completed_gram_alt = L_alt'*Engine.Q*L_alt
println("\nDifference in result using alternate projection:\n")
display(completed_gram_alt - completed_gram)
println("\nDifference in steps: ", size(history_alt.scaled_loss, 1) - size(history.scaled_loss, 1))
println("Difference in loss: ", history_alt.scaled_loss[end] - history.scaled_loss[end], "\n")

View File

@ -0,0 +1,105 @@
include("Engine.jl")
using LinearAlgebra
using SparseArrays
using Random
# initialize the partial gram matrix for a sphere inscribed in a regular
# tetrahedron, along with the tetrahedron's circumscribed sphere, its four
# vertices, and the point at infinity
J = Int64[]
K = Int64[]
values = BigFloat[]
for j in 1:11
for k in 1:11
filled = false
if j == 11
if k <= 4
push!(values, 0)
filled = true
end
elseif k == 11
if j <= 4
push!(values, 0)
filled = true
end
elseif j == k
push!(values, j <= 6 ? 1 : 0)
filled = true
elseif j <= 4
if k <= 4
push!(values, -1/BigFloat(3))
filled = true
elseif k == 5
push!(values, -1)
filled = true
elseif 7 <= k <= 10 && k - j != 6
push!(values, 0)
filled = true
end
elseif k <= 4
if j == 5
push!(values, -1)
filled = true
elseif 7 <= j <= 10 && j - k != 6
push!(values, 0)
filled = true
end
elseif j == 6 && 7 <= k <= 10 || k == 6 && 7 <= j <= 10
push!(values, 0)
filled = true
end
if filled
push!(J, j)
push!(K, k)
end
end
end
gram = sparse(J, K, values)
# set initial guess
Random.seed!(99230)
guess = hcat(
sqrt(1/BigFloat(3)) * BigFloat[
1 1 -1 -1 0 0
1 -1 1 -1 0 0
1 -1 -1 1 0 0
0 0 0 0 1.5 0.5
1 1 1 1 -0.5 -1.5
] + 0.0*Engine.rand_on_shell(fill(BigFloat(-1), 6)),
Engine.point([-0.5, -0.5, -0.5] + 0.3*randn(3)),
Engine.point([-0.5, 0.5, 0.5] + 0.3*randn(3)),
Engine.point([ 0.5, -0.5, 0.5] + 0.3*randn(3)),
Engine.point([ 0.5, 0.5, -0.5] + 0.3*randn(3)),
BigFloat[0, 0, 0, 0, 1]
)
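# freeze the normalization (fourth) coordinate of the vertex points, elements
# 7 through 10, and every coordinate of element 11, the point at infinity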
frozen = vcat(
[CartesianIndex(4, k) for k in 7:10],
[CartesianIndex(j, 11) for j in 1:5]
)
# complete the gram matrix using Newton's method with backtracking
L, success, history = Engine.realize_gram(gram, guess, frozen)
completed_gram = L'*Engine.Q*L
println("Completed Gram matrix:\n")
display(completed_gram)
if success
println("\nTarget accuracy achieved!")
else
println("\nFailed to reach target accuracy")
end
println("Steps: ", size(history.scaled_loss, 1))
println("Loss: ", history.scaled_loss[end])
if success
infty = BigFloat[0, 0, 0, 0, 1]
radius_ratio = dot(infty, Engine.Q * L[:,5]) / dot(infty, Engine.Q * L[:,6])
println("\nCircumradius / inradius: ", radius_ratio)
end
# test an alternate technique for finding the projected base step from the
# unprojected Hessian
L_alt, success_alt, history_alt = Engine.realize_gram_alt_proj(gram, guess, frozen)
completed_gram_alt = L_alt'*Engine.Q*L_alt
println("\nDifference in result using alternate projection:\n")
display(completed_gram_alt - completed_gram)
println("\nDifference in steps: ", size(history_alt.scaled_loss, 1) - size(history.scaled_loss, 1))
println("Difference in loss: ", history_alt.scaled_loss[end] - history.scaled_loss[end], "\n")

View File

@ -2,28 +2,29 @@
(proposed by Alex Kontorovich as a practical system for doing 3D geometric calculations)
These coordinates are of form $I=(c, r, x, y, z)$ where we think of $c$ as the co-radius, $r$ as the radius, and $x, y, z$ as the "Euclidean" part, which we abbreviate $E_I$. There is an underlying basic quadratic form $Q(I_1,I_2) = (c_1r_2+c_2r_1)/2 - x_1x_2 -y_1y_2-z_1z_2$ which aids in calculation/verification of coordinates in this representation. We have:
These coordinates are of form $I=(c, b, x, y, z)$ where we think of $c$ as the co-radius, $b$ as the "bend" (reciprocal radius), and $x, y, z$ as the "Euclidean" part, which we abbreviate $E_I$. There is an underlying basic quadratic form $Q(I_1,I_2) = (c_1b_2+c_2b_1)/2 - x_1x_2 -y_1y_2-z_1z_2$ which aids in calculation/verification of coordinates in this representation. We have:
| Entity or Relationship | Representation | Comments/questions |
| ------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Sphere s with radius r>0 centered on P = (x,y,z) | $I_s = (1/c, 1/r, x/r, y/r, z/r)$ satisfying $Q(I_s,I_s) = -1$, i.e., $c = r/(\|P\|^2 - r^2)$. | Can also write $I_s = (\|P\|^2/r - r, 1/r, x/r, y/r, z/r)$ -- so there is no trouble if $\|E_{I_s}\| = r$, just get first coordinate to be 0. |
| Plane p with unit normal (x,y,z), a distance s from origin | $I_p = (2s, 0, x, y, z)$ | Note $Q(I_p, I_p)$ is still -1. Also, there are two representations for each plane through the origin, namely $(0,0,x,y,z)$ and $(0,0,-x,-y,-z)$ |
| Point P with Euclidean coordinates (x,y,z) | $I_P = (\|P\|^2, 1, x, y, z)$ | Note $Q(I_P,I_P) = 0$. Because of this we might choose some other scaling of the inversive coordinates, say $(\|P\|,1/\|P\|,x/\|P\|,y/\|P\|,z/\|P\|)$ instead, but that fails at the origin, and likely won't have some of the other nice properties listed below. Note that scaling just the co-radius by $s$ and the radius by $1/s$ (which still preserves $Q=0$) dilates by a factor of $s$ about the origin, so that $(\|P\|, \|P\|, x, y, z)$, which might look symmetric, would actually have to represent the Euclidean point $(x/\|P\|, y/\|P\|, z/\|P\|)$. |
| ∞, the "point at infinity" | $I_\infty = (1,0,0,0,0)$ | The only solution to $Q(I,I) = 0$ not covered by the above case. |
| P lies on sphere or plane given by I | $Q(I_P, I) = 0$ | |
| Sphere/planes represented by I and J are tangent | $Q(I,J) = 1$ (??, see note at right) | Seems as though this must be $Q(I,J) = \pm1$  ? For example, the $xy$ plane represented by (0,0,0,0,1)  is tangent to the unit circle centered at (0,0,1) rep'd by (0,1,0,0,1), but their Q-product is -1. And in general you can reflect any sphere tangent to any plane through the plane and it should flip the sign of $Q(I,J)$, if I am not mistaken. |
| Sphere/planes represented by I and J intersect (respectively, don't intersect) | $\|Q(I,J)\| < (\text{resp. }>)\; 1$ | Follows from the angle formula, at least conceptually. |
| P is center of sphere represented by I | Well, $Q(I_P, I)$ comes out to be $(\|P\|^2/r - r + \|P\|^2/r)/2 - \|P\|^2/r$ or just $-r/2$ . | Is it if and only if ?   No this probably doesn't work because center is not conformal quantity. |
| Distance between P and R is d | $Q(I_P, I_R) = d^2/2$ | |
| Distance between P and sphere/plane rep by I | | In the very simple case of a plane $I$ rep'd by $(2s, 0, x, y, z)$ and a point $P$ that lies on its perpendicular through the origin, rep'd by $(r^2, 1, rx, ry, rz)$ we get $Q(I, I_p) = s-r$, which is indeed the signed distance between $I$ and $P$. Not sure if this generalizes to other combinations? |
| Distance between sphere/planes rep by I and J | Note that for any two Euclidean-concentric spheres rep by $I$ and $J$ with radius $r$ and $s,$ $Q(I,J) = -\frac12\left(\frac rs  + \frac sr\right)$ depends only on the ratio of $r$ and $s$. So this can't give something that determines the Euclidean distance between the two spheres, which presumably grows as the two spheres are blown up proportionally. For another example, for any two parallel planes, $Q(I,J) = \pm1$. | Alex had said: Q(I,J)=cosh^2 (d/2) maybe where d is distance in usual hyperbolic metric. Or maybe cosh d. That may be right depending on what's meant by the hyperbolic metric there, but it seems like it won't determine a reasonable Euclidean distance between planes, which should differ between different pairs of parallel planes. |
| Sphere centered on P through R | | Probably just calculate distance etc. |
| Plane rep'd by I goes through center of sphere rep'd by J | I think this is equivalent to the plane being perpendicular to the sphere, i.e.$Q(I,J) = 0$. | |
| Dihedral angle between planes (or spheres?) rep by I and J | $\theta = \arccos(Q(I,J))$ | Aaron Fenyes points out: The angle between spheres in $S^3$ matches the angle between the planes they bound in $R^{(1,4)}$, which matches the angle between the spacelike vectors perpendicular to those planes. So we should have $Q(I,J) = \cos\theta$. Note that when the spheres do not intersect, we can interpret this as the "imaginary angle" between them, via $\cosh t = \cos it$. |
| R, P, S are collinear | Maybe just cross product of two differences is 0. Or, $R,P,S,\infty$ lie on a circle, or equivalently, $I_R,I_P,I_S,I_\infty$ span a plane (rather than a three-space). | Not a conformal property, but $R,P,S,\infty$ lying on a circle _is_. |
| Plane through noncollinear R, P, S | Should be, just solve Q(I, I_R) = 0 etc. | |
| ---------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Sphere $s$ with radius $r>0$ centered on $P = (x,y,z)$ | $I_s = (\frac1{c}, \frac1{r}, \frac{x}{r}, \frac{y}{r}, \frac{z}{r})$ satisfying $Q(I_s,I_s) = -1,$ i.e., $c = r/(\|P\|^2 - r^2)$. | Note that $1/c = \|P\|^2/r - r$, so there is no trouble if $\|P\| = r$; we just get first coordinate to be 0. Using the point representation $I_P$ from below, let's orient the sphere so that its normals point into the "positive side," where $Q(I_P, I_s) > 0$. The vector $I_s$ then represents a sphere with outward normals, while $-I_s$ represents one with inward normals. |
| Plane $p$ with unit normal $(x,y,z)$ through the (Euclidean) point $(sx,sy,sz)$ | $I_p = (-2s, 0, -x, -y, -z)$ | Note that $Q(I_p, I_p)$ is still $-1$. We orient planes using the same convention we use for spheres. For example, $(-2, 0, -1/\sqrt3, -1/\sqrt3, -1/\sqrt3)$ and $(2, 0, 1/\sqrt3, 1/\sqrt3, 1/\sqrt3)$ represent planes that coincide in space, which have their normals pointing away from and toward the origin, respectively. Note that the ray from $(sx, sy, sz) \in p$ in direction $(-x, -y, -z)$ is the ray perpendicular to the plane through the origin; since $(-x, -y, -z)$ is a unit vector, $(sx, sy, sz)$ and hence $p$ is at distance $s$ from the origin. These coordinates are essentially the limit of a sphere's coordinates as its radius goes to infinity, or equivalently, as its bend goes to 0. |
| Point $P$ with Euclidean coordinates $(x,y,z)$ | $I_P = (\|P\|^2, 1, x, y, z)$ | Note $Q(I_P,I_P) = 0$. This gives us the freedom to choose a different normalization. For example, we could scale the representation shown here by $(\|P\|^2+1)^{-1}$, putting it on the sphere where the light cone intersects the plane where the first two coordinates sum to $1$. |
| ∞, the "point at infinity" | $I_\infty = (1,0,0,0,0)$ | The only solution to $Q(I,I) = 0$ not covered by (some normalization of) the above case. |
| Point $P$ lies on sphere or plane given by $I$ | $Q(I_P, I) = 0$ | Actually also works if $I$ is the coordinates of a point, in which case "lies on" simply means "coincides with". |
| Sphere/planes represented by $I$ and $J$ are tangent | If $I$ and $J$ have the same orientation where they touch, $Q(I,J) = -1$. If they have opposing orientations, $Q(I,J) = 1$. | For example, the $xy$ plane with normal $-e_z$, represented by $(0,0,0,0,1)$, is tangent with matching orientation to the unit sphere centered at $(0,0,1)$ with outward normals, represented by $(0,1,0,0,1).$ Accordingly, their $Q$ - product is $-1$. |
| Sphere/planes represented by $I$ and $J$ intersect (respectively, don't intersect) | $\lvert Q(I,J)\rvert \le (\text{resp. }>)\; 1$ | Follows from the angle formula and the tangency condition, at least conceptually. One subtlety: parallel planes have $Q$ - product $\pm 1$, because they intersect at infinity (and in fact, are "tangent" there)! |
| $P$ is center of sphere rep'd by $I$ | $Q(I, I_P) = -r/2$, where $1/r = 2Q(I_\infty, I)$ is the signed bend of the sphere, and $I_P$ is normalized in the standard way, which is to set $Q(I_\infty, I_P) = 1/2$ | This relationship is equivalent to both of the following. (1) The point $P$ has signed distance $-r$ from the sphere. (2) Inversion across the sphere maps $\infty$ to $P$. |
| Distance between points $P$ and $R$ is $d$ | $Q(I_P, I_R) = d^2/2$ | If $P$ and $R$ are represented by non-normalized vectors $V_P$ and $V_R$, the relation becomes $Q(V_P, V_R) = 2\,Q(V_P, I_\infty)\,Q(V_R, I_\infty)\,d^2$. This version of the relation makes it easier to see why $d$ goes to infinity as $P$ or $R$ approaches the point at infinity. |
| Signed distance between point rep'd by $V$ and sphere/plane rep'd by $I$ is $d$ | In general, $\frac{Q(I, V)}{2Q(I_\infty, V)} = Q(I_\infty, I)\,d^2 + d$. When $V$ is normalized in the usual way, this simplifies to $Q(I, V) = d^2/2r + d$ for a sphere of radius $r$, and to $Q(I, V) = d$ for a plane. | We can use a Euclidean motion, represented linearly by a Lorentz transformation that fixes $I_\infty$, to put the point on the $z$ axis and put the nearest point on the sphere/plane at the origin with its normal pointing in the positive $z$ direction. Then the sphere/plane is represented by $I = (0, 1/r, 0, 0, -1)$, and the point can be represented by any multiple of $I_P = (d^2, 1, 0, 0, d)$, giving $Q(I, I_P) = d^2/2r + d.$ We turn this into a general expression by writing it in terms of Lorentz-invariant quantities and making it independent of the normalization of $I_P$. |
| Distance between sphere/planes rep by $I$ and $J$ | Note that for any two Euclidean-concentric spheres rep by $I$ and $J$ with radius $r$ and $s,$ $Q(I,J) = -\frac12\left(\frac rs + \frac sr\right)$ depends only on the ratio of $r$ and $s$. So this can't give something that determines the Euclidean distance between the two spheres, which presumably grows as the two spheres are blown up proportionally. For another example, for any two parallel planes, $Q(I,J) = \pm1$. | Alex had said: $Q(I,J)=\cosh(d/2)^2$ maybe where d is distance in usual hyperbolic metric. Or maybe $\cosh(d)$. That may be right depending on what's meant by the hyperbolic metric there, but it seems like it won't determine a reasonable Euclidean distance between planes, which should differ between different pairs of parallel planes. |
| Sphere centered on point $P$ through point $R$ | | Probably just calculate distance etc. |
| Plane rep'd by $I$ goes through center of sphere rep'd by $J$ | This is equivalent to the plane being perpendicular to the sphere: that is, $Q(I, J) = 0$. | |
| Dihedral angle between planes or spheres rep by $I$ and $J$ | $\theta = \arccos(Q(I,J))$ | Aaron Fenyes points out: The angle between spheres in $S^3$ matches the angle between the planes they bound in $R^{(1,4)}$, which matches the angle between the spacelike vectors perpendicular to those planes. So we should have $Q(I,J) = \cos(\theta)$. Note that when the spheres do not intersect, we can interpret this as the "imaginary angle" between them, via $\cosh(t) = \cos(it)$. |
| Points $R, P, S$ are collinear | Maybe just cross product of two differences is 0. Or, $R,P,S,\infty$ lie on a circle, or equivalently, $I_R,I_P,I_S,I_\infty$ span a plane (rather than a three-space). Or we can add two planes constrained to be perpendicular with one constrained to contain the origin, and all three points constrained to lie on both. But that's a lot of auxiliary entities and constraints... | $R,P,S$ lying on a line isn't a conformal property, but $R,P,S,\infty$ lying on a circle is. |
| Plane through noncollinear $R, P, S$ | Should be, just solve $Q(I, I_R) = 0$ etc. | |
| circle | Maybe concentric sphere and the containing plane? Note it is easy to constrain the relationship between those two: they must be perpendicular. | Defn: circle is intersection of two spheres. That does cover lines. But you lose the canonicalness |
| line | Maybe two perpendicular containing planes? Maybe the plane perpendicular to the line and through origin, together with the point of the line on that plane? Or maybe just as a bag of collinear points? | The first is the limiting case of the possible circle rep, but it is not canonical. The second appears to be canonical, but I don't see a circle rep that corresponds to it. |
| line | Maybe two perpendicular containing planes? Maybe the plane perpendicular to the line and through origin, together with the point of the line on that plane? Or maybe just as a bag of collinear points? | The first is the limiting case of the possible circle rep, but it is not canonical. However, there is a distinguished "standard" choice we could make: always choose one plane to contain the origin and the line, and the other to be the perpendicular plane containing the line. That choice or Plücker coordinates might be the best we can do. If we use the standardized perpendicular planes choice, then adding a line would be equivalent to adding two planes and the two constraints that one contains the origin and the other is perpendicular to it. That doesn't seem so bad. The second convention (perpendicular plane through the origin and a point on it) appears to be canonical, but there doesn't seem to be a circle representation that tends to it in the limit. |
| Inversion of entity represented by $v$ across sphere $s$, rep'd by $I_s$ | $v \mapsto v + 2Q(I_s, v)\,I_s$ | This is just an educated guess, but its behavior is consistent with inversion in at least two ways. (1) It fixes points on $s$ and spheres perpendicular to $s$. (2) It preserves dihedral angles with $s$. |
The unification of spheres/planes is indeed attractive for a project like Dyna3. The relationship between this representation and Geometric Algebras is a bit murky; likely it somehow fits under the Geometric Algebra umbrella.
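As a quick numerical sanity check on a few of the rows above (using the updated orientation conventions), here is a small Julia sketch. The helper `Q_form` is ad hoc, encoding the quadratic form $Q$ in the $(c, b, x, y, z)$ ordering of these notes; it is not anything from the engine.

```
# Q in the (c, b, x, y, z) ordering used in these notes
Q_form(v, w) = (v[1]*w[2] + w[1]*v[2])/2 - v[3]*w[3] - v[4]*w[4] - v[5]*w[5]

plane_xy = [0, 0, 0, 0, 1]   # xy plane with normal -e_z
sphere_s = [0, 1, 0, 0, 1]   # unit sphere centered at (0, 0, 1), outward normals
origin   = [0, 1, 0, 0, 0]   # the point (0, 0, 0), where the two are tangent
point_R  = [5, 1, 1, 0, 2]   # the point (1, 0, 2), at distance √5 from the origin

@assert Q_form(plane_xy, sphere_s) == -1   # tangent with matching orientation
@assert Q_form(origin, plane_xy) == 0      # the origin lies on the plane...
@assert Q_form(origin, sphere_s) == 0      # ...and on the sphere
@assert Q_form(origin, point_R) == 2.5     # Q(I_P, I_R) = d²/2 with d = √5
```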
@ -40,3 +41,25 @@ I will have to work out formulas for the Euclidean distance between two entities
In this vein, it seems as though if J1 and J2 are the reps of two points, then Q(J1,J2) = d^2/2. So then the sphere centered at J1 through J2 is (J1-(2Q(J1,J2),0,0,0,0))/sqrt(2Q(J1,J2)). Ugh, it has a sqrt in it. Similarly for the sphere centered at J3 through J2, (J3-(2Q(J3,J2),0,0,0,0))/sqrt(2Q(J3,J2)). J1,J2,J3 are collinear if these spheres are tangent, i.e. if those vectors have Q-inner-product 1, which is to say Q(J1,J3) - Q(J1,J2) - Q(J3,J2) = 2sqrt(Q(J1,J2)Q(J2,J3)). But maybe that's not the simplest way of putting it. After all, we can just say that the cross-product of the two differences is 0; that has no square roots in it.
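To double-check the sphere-centered-at-J1-through-J2 expression above, here is a small numerical sketch (again with an ad-hoc `Q_form` helper and a hypothetical `point_rep`, not engine code):

```
Q_form(v, w) = (v[1]*w[2] + w[1]*v[2])/2 - v[3]*w[3] - v[4]*w[4] - v[5]*w[5]
point_rep(P) = [sum(P.^2), 1, P...]   # I_P = (|P|², 1, x, y, z)

J1 = point_rep([1.0, 2.0, 3.0])       # the center
J2 = point_rep([1.0, 2.0, 7.0])       # a point at distance 4 from the center
s = (J1 - [2*Q_form(J1, J2), 0, 0, 0, 0]) / sqrt(2*Q_form(J1, J2))

@assert Q_form(s, s) ≈ -1   # normalized like a sphere
@assert Q_form(s, J2) ≈ 0   # J2 lies on the sphere
@assert 1 / s[2] ≈ 4        # its radius is |J1 - J2| = 4
```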
One conceivable way to canonicalize lines is to use the *perpendicular* plane that goes through the origin, that's uniquely defined, and anyway just amounts to I = (0,0,d) where d is the ordinary direction vector of the line; and a point J in that plane that the line goes through, which just amounts to J=(r^2,1,E) with Q(I,J) = 0, i.e. E\dot d = 0. It's also the point on the line closest to the origin. The reason that we don't usually use that point as the companion to the direction vector is that the resulting set of six coordinates is not homogeneous. But here that's not an issue, since we have our standard point coordinates and plane coordinates; and for a plane through the origin, only two of the direction coordinates are really free, and then we have the one dot-product relation, so only two of the point coordinates are really free, giving us the correct dimensionality of 4 for the set of lines. So in some sense this says that we could take naively as coordinates for a line the projection of the unit direction vector to the xy plane and the projection of the line's closest point to the origin to the xy plane. That doesn't seem to have any weird gimbal locks or discontinuities or anything. And with these coordinates, you can test if the point E=x,y,z is on the line (dx,dy,cx,cy) by extending (dx,dy) to d via dz = sqrt(1-dx^2 - dy^2), extending (cx,cy) to c by determining cz via d\dot c = 0, and then checking if d\cross(E-c) = 0. And you can see if two lines are parallel just by checking if they have the same direction vector, and if not, you can see if they are coplanar by projecting both of their closest points perpendicularly onto the line in the direction of the cross product of their directions, and if the projections match they are coplanar.
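And here is a rough sketch of the point-on-line test described in the previous paragraph. The packaging (a function `on_line` taking the four line coordinates) is ad hoc, and the direction is extended by taking the $d_z \ge 0$ square root:

```
using LinearAlgebra

# test whether the point E lies on the line with coordinates (dx, dy, cx, cy):
# the xy projections of its unit direction vector and of its closest point to
# the origin
function on_line(E, dx, dy, cx, cy; tol = 1e-9)
    dz = sqrt(1 - dx^2 - dy^2)       # extend the direction to a unit vector
    d = [dx, dy, dz]
    cz = -(dx*cx + dy*cy) / dz       # extend c so that d ⋅ c = 0
    c = [cx, cy, cz]
    norm(cross(d, E - c)) < tol      # E is on the line iff d × (E - c) = 0
end

# the vertical line through (1, 1, 0): direction (0, 0, 1), closest point (1, 1, 0)
@assert on_line([1, 1, 5], 0, 0, 1, 1)
@assert !on_line([1, 2, 5], 0, 0, 1, 1)
```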
#### Engine Conventions
The coordinate conventions used in the engine are different from the ones used in these notes. Marking the engine vectors and coordinates with $'$, we have
$$I' = (x', y', z', b', c'),$$
where
$$
\begin{align*}
x' & = x & b' & = b/2 \\
y' & = y & c' & = c/2. \\
z' & = z
\end{align*}
$$
The engine uses the quadratic form $Q' = -Q$, which is expressed in engine coordinates as
$$Q'(I'_1, I'_2) = x'_1 x'_2 + y'_1 y'_2 + z'_1 z'_2 - 2(b'_1c'_2 + c'_1 b'_2).$$
In the `engine` module, the matrix of $Q'$ is encoded in the lazy static variable `Q`.
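For reference, written out in the $(x', y', z', b', c')$ ordering, the matrix of $Q'$ would look like the following. This is a sketch derived from the formula above, not a copy of the engine source.

```
# matrix of Q' in the (x', y', z', b', c') ordering, so that transpose(v) * Q * w
# reproduces x'₁x'₂ + y'₁y'₂ + z'₁z'₂ - 2(b'₁c'₂ + c'₁b'₂)
Q = [
  1  0  0  0  0;
  0  1  0  0  0;
  0  0  1  0  0;
  0  0  0  0 -2;
  0  0  0 -2  0
]
```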
In the engine's coordinate conventions, a sphere with radius $r > 0$ centered on $P = (P_x, P_y, P_z)$ is represented by the vector
$$I'_s = \left(\frac{P_x}{r}, \frac{P_y}{r}, \frac{P_z}{r}, \frac1{2r}, \frac{\|P\|^2 - r^2}{2r}\right),$$
which has the normalization $Q'(I'_s, I'_s) = 1$. The point $P$ is represented by the vector
$$I'_P = \left(P_x, P_y, P_z, \frac{1}{2}, \frac{\|P\|^2}{2}\right).$$
In the `engine` module, these formulas are encoded in the `sphere` and `point` functions.
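As a cross-check of those formulas, here is a minimal stand-alone Julia sketch; `sphere_vec` and `point_vec` are hypothetical helpers mirroring the formulas above, not the engine's actual `sphere` and `point` functions:

```
# engine-convention coordinates (x', y', z', b', c') and the matrix of Q'
Q_engine = [
  1 0 0  0  0;
  0 1 0  0  0;
  0 0 1  0  0;
  0 0 0  0 -2;
  0 0 0 -2  0
]
sphere_vec(P, r) = [P[1]/r, P[2]/r, P[3]/r, 1/(2r), (sum(P.^2) - r^2)/(2r)]
point_vec(P) = [P[1], P[2], P[3], 1/2, sum(P.^2)/2]

P = [1.0, 2.0, 3.0]
s = sphere_vec(P, 2.0)                 # sphere of radius 2 centered on P
v = point_vec(P + [0.0, 0.0, 2.0])     # a point on that sphere

@assert s' * Q_engine * s ≈ 1          # Q'(I'_s, I'_s) = 1
@assert v' * Q_engine * v == 0         # Q'(I'_P, I'_P) = 0
@assert v' * Q_engine * s == 0         # the point lies on the sphere
```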