Engine prototype
Glen Whitney edited this page 2024-02-28 23:59:06 +00:00

Witness sets

  • Cutting along a hyperplane with real coefficients and offset has been doing a good job of finding real solutions.
  • Hyperplanes that go through the origin have been performing better than hyperplanes with other offsets.
    • Ones that go through the trivial solution seem the worst.
    • In tests with no scale constraint (based on commit 8e33987), hyperplanes through the origin tend to hit around 1.5 times as many real solutions as ones through the trivial solution.
    • In tests with a scale constraint (based on commit af1d31f), the hit ratio drops to 1.0 to 1.2, and becomes more variable.
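
As a toy illustration of the slicing idea (not the project's actual constraint system), the sketch below cuts the unit circle, a one-dimensional variety, with a hyperplane (here, a line) through the origin and collects the real intersection points:

```python
from sympy import Rational, solve, symbols

# Toy stand-in for a witness-set slice: intersect the unit circle
# (a one-dimensional variety) with a line through the origin and
# keep the real points. Real symbols restrict solve() to real roots.
x, y = symbols('x y', real=True)
variety = x**2 + y**2 - 1
hyperplane = Rational(2)*x + Rational(3)*y   # passes through the origin

witness_points = solve([variety, hyperplane], [x, y])
print(len(witness_points))  # every line through the origin hits the circle in 2 real points
```

For this variety, any slice through the origin yields two real witness points, consistent with origin slices doing well in the tests above.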

Challenging examples

With three mutually tangent spheres, HomotopyContinuation doesn't seem to find real solutions, even though we know there are some. One solution, listing each element in [r, s, x, y, z] coordinates, is:

  • [0, 0, 0, 0, 1] (plane perpendicular to z axis)
  • [1, 0, 0, 0, 1] (unit sphere above plane, touching at [x, y] = [0, 0])
  • [1, 4, 0, 2, 1] (unit sphere above plane, touching at [x, y] = [0, 2])
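
Reading that solution in plain Euclidean terms (the plane z = 0, plus unit spheres centered at (0, 0, 1) and (0, 2, 1)), the three tangencies are easy to verify numerically. The check below is just that Euclidean sanity check, not a computation in the [r, s, x, y, z] representation itself:

```python
import numpy as np

# Euclidean reading of the claimed solution: plane z = 0 plus two
# unit spheres centered at (0, 0, 1) and (0, 2, 1).
c1, r1 = np.array([0.0, 0.0, 1.0]), 1.0
c2, r2 = np.array([0.0, 2.0, 1.0]), 1.0

plane_tangent_1 = np.isclose(c1[2], r1)   # sphere 1 touches z = 0
plane_tangent_2 = np.isclose(c2[2], r2)   # sphere 2 touches z = 0
sphere_tangent = np.isclose(np.linalg.norm(c1 - c2), r1 + r2)  # external tangency

print(plane_tangent_1, plane_tangent_2, sphere_tangent)  # True True True
```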

Rational points

One strategy for exploring a positive-dimensional solution variety is to enumerate rational points. When the user drags the display, we move numerically along the solution variety, snapping to the nearest known rational point when the drag ends.

  • Sage and Macaulay2 can enumerate rational points over various fields. For \mathbb{Q}, however, they turn out to use mostly brute force.
  • Let's try the classic strategy of taking rational-slope slices through a known point. In our case, the known point is the totally degenerate solution where all the points coincide and all the spheres coincide.
    • The OP of this question suggests this strategy for finding rational points on a sphere.
  • We discussed in person birational isomorphisms of a single quadric surface, obtained by projecting from a known point, and the possible limitations of that approach for attacking simultaneous solutions of multiple quadrics. But when all the quadrics happen to share a solution, as in our current toy case of incidence-and-tangency-only constraints, is there any chance it would work to parametrize the solutions of one quadric, and then project all the other conditions into the parameter space via the chosen isomorphism? Would they remain quadratic in the parameter space, so that we could repeat? Does this question even make sense?
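
For the sphere, the rational-slope strategy is concrete enough to sketch: a line of rational slope through a known rational point such as (0, 0, 1) on the unit sphere meets it in exactly one more point, which is again rational. This is just the standard stereographic-projection formula, not anything specific to our constraint system:

```python
from fractions import Fraction

def rational_sphere_point(p, q):
    """Second intersection of the unit sphere x^2 + y^2 + z^2 = 1 with
    the line through the known rational point (0, 0, 1) and (p, q, 0).
    Rational p, q give a rational point (stereographic projection)."""
    d = p*p + q*q + 1
    return (2*p/d, 2*q/d, (p*p + q*q - 1)/d)

pt = rational_sphere_point(Fraction(1, 2), Fraction(1, 3))
print(pt)  # exact rational coordinates; lies exactly on the sphere
```

Varying the slope (p, q) over the rationals sweeps out a dense set of rational points, which is the kind of supply a snap-to-rational drag would need.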

Basis optimization

Best order

The size of the Gröbner basis depends a lot on the variable order. For the examples below, I've gotten the best performance with the degree reverse lexicographic monomial order and the following variable order.

  • First, sort by coordinate: r, s, x, y, z
  • Within each coordinate, put spheres before points: r_\text{s1}, r_\text{p1}, r_\text{p2}
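
In sympy, for instance, both choices are explicit arguments to the Gröbner basis routine: the variable order is the order in which the generators are listed, and the monomial order is named by keyword. The toy system below (a circle and a line, not our constraint system) just shows where those choices plug in:

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Variable order is the order the symbols are listed; the monomial
# order is named explicitly (degree reverse lexicographic here).
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='grevlex')

# Membership check: an input polynomial reduces to zero against the basis.
print(G.reduce(x**2 + y**2 - 1)[1])  # 0
```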

Test suite

  • One point on a sphere
  • Two points on a sphere
  • Three mutually tangent spheres
  • Two spheres, and a bunch of points which are each on both spheres
    • With the current system for turning a construction into a system of equations, the Gröbner basis calculation becomes overwhelmingly big with three points and two spheres. Removing one of the points makes the calculation totally manageable.

General simplifications

Some of these are obvious, but we may as well write them down. Note we may want to simultaneously run different solvers/heuristics and update the display as they each return new/useful information, killing ones that become redundant.

  • Keep the network of entities (or variables?) and constraints. Only ever work on one connected component of this at a time.
  • Conversely, track rigid subnetworks, and replace them with fewer working variables. [What do we do about/how do we display over-constrained subnetworks? Do we display the configuration "nearest" to satisfying all the constraints in some sense but show in some bright alarm state the violated constraints, perhaps graded by "how badly" the constraint fails? If we can do this well -- and numeric near-solutions would suffice in over-determined cases, I don't think there's any point in trying to find exact algebraic near-solutions -- then dyna3 could actually be useful working with over-determined systems; it would be doing some sort of optimization that could actually be telling us interesting things.]
  • Track the current displayed witnesses of satisfying the constraints (after all, we need those at least numerically to update the screen). Hopefully we can actually get exact algebraic current witnesses, at least when we're not in the midst of fast motion. If so, then in lightly constrained networks, we can temporarily add constraints fixing all the entities except for the ones that are being moved, backing off on some temporary constraints by distance in the constraint network until we get a low-dimensional component that we are on, and move just along that component. This will have consequences in the "feel" of the interface: it will make the system "conservative" in the sense of trying to move a small number of entities to keep constraints satisfied in a drag, leaving as much as possible of the construction fixed. One potential problem with that conservative approach is that in many cases it will break symmetry. For example, if M is defined as the midpoint of AB, and you drag M along line AB, you could at most fix one point, but it could be either A or B, so how do you choose? "Least-squares" motion will mean you simply drag the whole segment AB along its line -- is that the "intuitive" motion of the system when its midpoint is dragged? Or is some arbitrary symmetry-breaking OK to maximize the number of stationary entities?
  • We may be able to take good advantage of the fact that fixing some of the entities will linearize the remaining constraints, so we can very rapidly check if there are any overall solutions with those points fixed, and even tell something about conditions on the values of those entities that do or don't allow the linearized system to be solved. We might want to try to keep track of minimal sets of entities which, when fixed, linearize everything else (in their connected component, per the general simplification).
  • Try to emulate Geometry Expressions to some extent and convert some (sets of) constraints into constructions, e.g. if A,B,C collinear and AB=BC, we can just construct B as the midpoint of A and C, and it never need have any variables associated to it. These are basically hand-picked examples of the second bullet, i.e., we might want to have both heuristic and automatic means of detecting rigid subnetworks. Some limitations of this approach, at least as implemented by Geometry Expressions, can be found on pp 18-21 of the Geometry Expressions manual.
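
One hedged sketch of the linearization point above: if each constraint is bilinear in a pair of entity vectors (as tangency-type conditions expressed through a bilinear form would be; the form Q and the frozen vectors below are made-up stand-ins), then freezing one side of each pair leaves an ordinary linear system, whose solvability we can test with plain linear algebra:

```python
import numpy as np

# Made-up bilinear form and frozen entities, standing in for constraints
# of the shape  a^T Q b = c  that become linear in b once a is fixed.
Q = np.diag([1.0, 1.0, -1.0])
frozen = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
targets = np.array([0.5, -0.5])

A = np.vstack([a @ Q for a in frozen])        # linearized constraint matrix
b, residual, rank, _ = np.linalg.lstsq(A, targets, rcond=None)

# rank tells us whether the frozen choice over- or under-determines b,
# and the residual (or an exact check) tells us whether solutions exist.
print(rank, np.allclose(A @ b, targets))
```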

Overall solver approaches

  • Find the Gröbner basis of the constraints. This definitely tells us a lot about the possible configurations. For example, the reduced basis will be {1} if and only if there are no solutions (correct?). And we should be able to tell if the system is rigid, i.e., has only finitely many solutions (correct?). In this case, can we find all of the algebraic-number solutions? (I am worried that is in general a high-complexity problem that may become intractable in practice for realistic constructions.) And hopefully in the case of components of nonzero dimension, we will have ways to find some rational points if they exist; we could certainly hit a variety with no rational points, correct? We're of course also fine with finding, say, quadratic or other low-degree algebraic numbers that satisfy the constraints. I assume for any fixed degree, it could have only finitely many algebraic points of that degree over \mathbb{Q}. I think that's true, but according to https://math.stackexchange.com/questions/21979/points-of-bounded-degree-on-varieties, if it has infinitely many points, then there is a least degree for which it has infinitely many points. Are we interested in that degree? Can we find it from the Gröbner basis? If we could generate points of a given degree well, then we would know what degree to go to in order to allow a "smooth drag" among exact solutions. E.g., for a circle/sphere of quadratic algebraic radius there are already infinitely many rational points, if I am not mistaken; see https://mathoverflow.net/questions/125224/rational-points-on-a-sphere-in-mathbbrd. Except for the occasional "holes" around particularly low-denominator solutions, this would allow for a smooth drag just among rational points.
  • Use Newton's method to minimize the sum of the squares of the constraints. If we do find an approximate zero this way, is there a way to find a nearby algebraic zero? Sadly, if we don't find a zero this way, we may have just gotten stuck at a local minimum, so that's no proof the constraints are inconsistent. Note that if we suspect that we are near a quadratic algebraic solution, then the approach described at https://mathoverflow.net/questions/2861/how-should-i-approximate-real-numbers-by-algebraic-ones could well be fruitful, if we take the approximation to high accuracy.
  • Use homotopy continuation (see https://www.juliahomotopycontinuation.org/) to get numerical information about the solution variety (and generate numerical approximations to points on it, correct?). If I understand correctly, an advantage of this method is that we get an almost-sure proof that the system is inconsistent (if it is), correct? And become almost sure whether it is rigid (has only isolated solutions), correct? As with Newton's method, we still have the question of whether having approximate numerical solutions helps us find exact algebraic ones, presumably with the same set of possible techniques. Also, if I understand correctly, this approach comes with a natural way to update (numerical) solutions as one element is dragged, is that right? Actually, homotopy continuation aside, shouldn't we be able, at least numerically, to find the tangent space to the variety at a point on it and simply ensure that, at least locally, we always proceed within that tangent space?
  • Since, generally speaking, at least for most of the starting list of constraints, each constraint is quadratic, there might be some useful information here.
  • If say we are seeking the closest satisfying point to some input, it might be that we are trying to minimize a nonnegative polynomial, and then some of the information at https://en.wikipedia.org/wiki/Positive_polynomial might be relevant.
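
As a tiny sketch of the least-squares idea (Gauss-Newton rather than plain Newton, on a hand-rolled toy version of the tangent-spheres constraints, not the project's actual solver):

```python
import numpy as np

CENTER1 = np.array([0.0, 0.0, 1.0])  # fixed unit sphere resting on z = 0

def residuals(c):
    """Constraints on a second unit sphere with center c: tangent to the
    plane z = 0, and externally tangent to the fixed sphere."""
    return np.array([c[2] - 1.0,
                     (c - CENTER1) @ (c - CENTER1) - 4.0])

def jacobian(c):
    J = np.zeros((2, 3))
    J[0, 2] = 1.0
    J[1] = 2.0 * (c - CENTER1)
    return J

c = np.array([0.3, 1.5, 0.7])  # rough starting guess
for _ in range(25):
    # Gauss-Newton step; pinv gives the minimum-norm step since the
    # system is underdetermined (2 constraints, 3 unknowns).
    c = c - np.linalg.pinv(jacobian(c)) @ residuals(c)

print(residuals(c))  # both residuals driven to ~0
```

As the notes above caution, a run that stalls at a nonzero residual would not by itself prove the constraints inconsistent; it might just be a local minimum.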

Feel free to add in other ideas in any of these categories.

General references