Tidy up URLs, wording, and formatting

Vectornaut 2025-05-29 18:27:42 +00:00
parent 121caf27c8
commit a94a7b9860

@@ -25,7 +25,7 @@ One strategy for exploring a positive-dimensional solution variety is to enumera
### Best order
-The size of the Gröbner basis depends a lot on the variable order. For the examples below, I've gotten the best performance with the degree reverse lexicographic monomial order and the following variable order.
+The size of the Gröbner basis depends a lot on the variable order. For the examples below, I've gotten the best performance with the *degree reverse lexicographic* monomial order and the following variable order.
* First, sort by coordinate: $r$, $s$, $x$, $y$, $z$
* Within each coordinate, put spheres before points: $r_\text{s1}, r_\text{p1}, r_\text{p2}$
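As a concrete illustration of how a monomial order is specified (using SymPy here; the polynomial system below is a toy example chosen for brevity, not the actual dyna3 constraint system or variable set):

```python
# Sketch: computing a Groebner basis under the degree reverse
# lexicographic ("grevlex") order with SymPy. The system and the
# variable order passed to groebner() are illustrative only.
from sympy import symbols, groebner

x, y, z = symbols('x y z')
system = [x**2 + y**2 + z**2 - 1,  # a sphere-like quadratic constraint
          x*y - z,
          x - y*z]

# The generator order (x, y, z) fixes the variable order; the monomial
# order is chosen independently via the `order` keyword.
g_grevlex = groebner(system, x, y, z, order='grevlex')
g_lex = groebner(system, x, y, z, order='lex')
print(len(g_grevlex.exprs), len(g_lex.exprs))
```

Comparing basis sizes under different orders, as above, is one quick way to check empirically which order behaves best for a given family of systems.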
@@ -40,7 +40,7 @@ The size of the Gröbner basis depends a lot on the variable order. For the exam
## General simplifications
-Some of these are obvious, but we may as well write them down. Note we may want to simultaneously run different solvers/heuristics and update the display as they each return new/useful information, killing ones that become redundant.
+Some of these are obvious, but we may as well write them down. Note we may want to run different solvers and heuristics simultaneously and update the display as they each return new or useful information, killing ones that become redundant.
* Keep the network of entities (or variables?) and constraints. Only ever work on one connected component of this at a time.
* Conversely, track rigid subnetworks, and replace them with fewer working variables. [What do we do about/how do we display over-constrained subnetworks? Do we display the configuration "nearest" to satisfying all the constraints in some sense but show in some bright alarm state the violated constraints, perhaps graded by "how badly" the constraint fails? If we can do this well -- and numeric near-solutions would suffice in over-determined cases, I don't think there's any point in trying to find exact algebraic near-solutions -- then dyna3 could actually be useful working with over-determined systems; it would be doing some sort of optimization that could actually be telling us interesting things.]
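A minimal sketch of the connected-component bookkeeping mentioned above, using a union-find structure (the class and the entity names are hypothetical, not dyna3's actual data model):

```python
# Sketch: track connected components of the entity/constraint network
# with union-find, so a solver only ever works on one component.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            # Path halving keeps the trees shallow.
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

# Each constraint links the entities it mentions into one component.
uf = UnionFind()
uf.union('sphere1', 'point1')  # e.g. "point1 lies on sphere1"
uf.union('point2', 'point3')   # e.g. "point2 and point3 coincide"
print(uf.find('sphere1') == uf.find('point1'))  # same component
print(uf.find('point1') == uf.find('point2'))   # independent components
```

Replacing a rigid subnetwork with fewer working variables would then amount to collapsing one of these components into a single aggregate node.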
@@ -50,13 +50,13 @@ Some of these are obvious, but we may as well write them down. Note we may want
## Overall solver approaches
-* Find the Gröbner basis of the constraints. This definitely tells us a lot about the possible configurations. For example, the basis will be 1 if and only if there are no solutions (correct?). And we should be able to tell if the system is rigid, i.e., has only finitely many solutions (correct?). In this case, can we find all of the algebraic number solutions? (I am worried that is in general a high-complexity problem that may become intractable in practice for realistic constructions??) And hopefully in the case of nonzero dimension components, we will have ways to find some rational points if they exist; we could certainly hit a variety with no rational points, correct?. We're of course also fine with finding say quadratic or other low-degree algebraic numbers that satisfy the constraints. I assume for any fixed degree, it could have only finitely many algebraic points of that degree over Q. I think that's true, but according to https://math.stackexchange.com/questions/21979/points-of-bounded-degree-on-varieties if it has infinitely many points, then there is a least degree for which it has infinitely many points. Are we interested in that degree? Can we find it from the Gröbner basis? If we could generate points of a given degree well, then we would know what degree to go to in order to allow a "smooth drag" among exact solutions. E.g, for a circle/sphere of quadratic algebraic radius there are already infinitely many rational points, if I am not mistaken, see https://mathoverflow.net/questions/125224/rational-points-on-a-sphere-in-mathbbrd. Except for the occasional "holes" around particularly low-denominator solutions, this would allow for a smooth drag just among rational points. )
-* Use Newton's method to minimize the sum of the squares of the constraints. If we do find an approximate zero this way, is there a way to find a nearby algebraic zero? Sadly, if we don't find a zero this way, we may have just gotten stuck at a local minimum, so that's no proof the constraints are inconsistent. Note that if we suspect that we are near a quadratic algebraic solution, then the approach described at https://mathoverflow.net/questions/2861/how-should-i-approximate-real-numbers-by-algebraic-ones could well be fruitful, if we take the approximation to high accuracy.
-* Use homotopy continuation (see https://www.juliahomotopycontinuation.org/) to get numerical information about the solution variety (and generate numerical approximations to points on it, correct?) If I understand correctly, an advantage of this method is that we get an almost-sure proof that the system is inconsistent (if it is), correct? And become almost sure whether it is rigid (has only isolated solutions), correct? As with Newton's method, we still have the question of whether having approximate numerical solutions helps us find exact algebraic ones, with presumably the same set of possible techniques. Also, if I understand correctly, this approach comes with a natural way to update (numerical) solutions as one element is dragged, is that right? Actually, homotopy continuation in general aside, shouldn't we be able to, at least numerically, find the tangent space to the variety at a point on it and simply ensure that at least locally we always proceed within that tangent space?
+* Find the Gröbner basis of the constraints. This definitely tells us a lot about the possible configurations. For example, the basis will be 1 if and only if there are no solutions (correct?). And we should be able to tell if the system is rigid, i.e., has only finitely many solutions (correct?). In this case, can we find all of the algebraic number solutions? (I am worried that is in general a high-complexity problem that may become intractable in practice for realistic constructions.) And hopefully in the case of nonzero-dimensional components, we will have ways to find some rational points if they exist; we could certainly hit a variety with no rational points, correct? We're of course also fine with finding, say, quadratic or other low-degree algebraic numbers that satisfy the constraints. I assume for any fixed degree, it could have only finitely many algebraic points of that degree over $\mathbb{Q}$. I think that's true, but according to [this Stack Exchange question](https://math.stackexchange.com/questions/21979/points-of-bounded-degree-on-varieties), if it has infinitely many points, then there is a least degree for which it has infinitely many points. Are we interested in that degree? Can we find it from the Gröbner basis? If we could generate points of a given degree well, then we would know what degree to go to in order to allow a "smooth drag" among exact solutions. For example, for a circle or sphere of quadratic algebraic radius, there are already infinitely many rational points, if I am not mistaken: see [this Math Overflow question](https://mathoverflow.net/questions/125224/rational-points-on-a-sphere-in-mathbbrd). Except for the occasional "holes" around particularly low-denominator solutions, this would allow for a smooth drag just among rational points.
+* Use Newton's method to minimize the sum of the squares of the constraints. If we do find an approximate zero this way, is there a way to find a nearby algebraic zero? Sadly, if we don't find a zero this way, we may have just gotten stuck at a local minimum, so that's no proof the constraints are inconsistent. Note that if we suspect that we are near a quadratic algebraic solution, then the approach described [here](https://mathoverflow.net/questions/2861/how-should-i-approximate-real-numbers-by-algebraic-ones) could well be fruitful, if we take the approximation to high accuracy.
+* Use [homotopy continuation](https://www.juliahomotopycontinuation.org/) to get numerical information about the solution variety (and generate numerical approximations to points on it, correct?). If I understand correctly, an advantage of this method is that we get an almost-sure proof that the system is inconsistent (if it is), correct? And become almost sure whether it is rigid (has only isolated solutions), correct? As with Newton's method, we still have the question of whether having approximate numerical solutions helps us find exact algebraic ones, with presumably the same set of possible techniques. Also, if I understand correctly, this approach comes with a natural way to update (numerical) solutions as one element is dragged, is that right? Actually, homotopy continuation in general aside, shouldn't we be able to, at least numerically, find the tangent space to the variety at a point on it and simply ensure that at least locally we always proceed within that tangent space?
* Since, generally speaking, at least for most of the starting list of constraints, each constraint is quadratic, there _might_ be some useful information here:
* [Paper on deciding the existence of solutions](http://arxiv.org/abs/2106.08119)
* [Relevant Math.StackExchange question](https://math.stackexchange.com/questions/2119007/how-do-you-solve-a-system-of-quadratic-equations)
-* If say we are seeking the closest satisfying point to some input, it might be that we are trying to minimize a nonnegative polynomial, and then some of the information at https://en.wikipedia.org/wiki/Positive_polynomial might be relevant.
+* If say we are seeking the closest satisfying point to some input, it might be that we are trying to minimize a nonnegative polynomial, and then some of the information [about those](https://en.wikipedia.org/wiki/Positive_polynomial) might be relevant.
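On the "basis will be 1" question above: over the complex numbers this is exactly the weak Nullstellensatz, so the reduced Gröbner basis is $\{1\}$ if and only if the system has no complex solutions. A system with no *real* solutions can still have a nontrivial basis, so this test alone does not settle real consistency. A toy check in SymPy (the systems are illustrative, not dyna3 constraints):

```python
# Sketch: using a Groebner basis to detect (complex) inconsistency.
from sympy import symbols, groebner

x, y = symbols('x y')

# Inconsistent system: x = 0 and x = 1 cannot both hold.
g_bad = groebner([x, x - 1], x, y, order='grevlex')
print(g_bad.exprs)  # [1]: the basis collapses, so there are no solutions

# Consistent system: a circle intersected with a line.
g_ok = groebner([x**2 + y**2 - 1, x - y], x, y, order='grevlex')
print(g_ok.exprs)   # a nontrivial basis
```

For real-only emptiness (e.g. $x^2 + y^2 + 1 = 0$, whose basis is not $\{1\}$), one would need additional tools such as the numerical or real-algebraic methods discussed in the other bullets.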
Feel free to add in other ideas in any of these categories.
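As a sketch of the sum-of-squared-constraints minimization idea, here is a version using SciPy's `least_squares` (a Gauss-Newton-style trust-region method) in place of a hand-rolled Newton iteration; the two constraints are illustrative stand-ins, not actual dyna3 constraints:

```python
# Sketch: numerically satisfy constraints by least-squares minimization.
# Constraints: place a point (x, y) at distance 1 from the origin
# and on the line y = x.
import numpy as np
from scipy.optimize import least_squares

def constraints(v):
    x, y = v
    return [x**2 + y**2 - 1,  # unit distance from the origin
            x - y]            # on the line y = x

# Start from a guess that violates the second constraint.
result = least_squares(constraints, x0=[1.0, 0.0])
print(result.x)  # approximately (0.7071, 0.7071)
```

As noted above, a small final residual is evidence of a nearby solution, but a large one proves nothing: the iteration may simply have stalled at a local minimum of the sum of squares.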