
vinberg's Introduction

๐•๐ข๐ง๐ง๐ฒ ๐ญ๐ก๐ž ๐œ๐จ๐ฑ๐›๐จ๐ฒ

Learning to herd Coxeter diagrams in the Hyperbolic Plains since 2021


Port of aperep/vinberg-algorithm to Julia, with some changes.

Warning. The code is still being tested, and surely has many bugs!

Description

  • The finite-volume check for Coxeter diagrams is implemented "from scratch", using Guglielmetti's thesis as the theoretical resource and CoxIter as a comparison point (plus a majority of the tests from there); thank you!
  • The bulk of Vinberg's algorithm (and associated procedures) is ported (modulo errors introduced in the process) from B&P.
  • The code uses static bitsets for hopefully efficient storage of small integer sets. The code for this comes from here. TODO. Contact the author to ask whether we can use their code, and under what licence (see here).

How to use

  • If the required packages are not already installed, run julia install_packages.jl to install them.
  • Go to the root folder of the project (the one containing the subfolders lattices, graphs, src).
  • Launch julia with julia -t auto to enable multithreading (it might be useful, but on small examples it doesn't look like it is).
  • Call include("src/vinberg.jl").
  • Calling toggle(true) or toggle(false) respectively enables/disables a majority of the asserts in the code.

Now everything needed is loaded.

  • To run the Vinberg Algorithm on a lattice given by a matrix M, call Vinberg_Algorithm(M).
  • Some matrices are accessible through the dictionary L; for instance, L["Vin07_L1"] contains one of Vinberg's matrices.


vinberg's Issues

Doing computations with a diagonal basis versus the standard one.

So, we have P'*G*P = D as usual.
Let L be the lattice ZZ^n and L_P the sublattice spanned by the columns of P.
Since P is invertible, there exists some big enough integer M so that M*L is a subset of L_P, right?
Since the columns of P are pairwise orthogonal (with respect to G), it's way easier to reason about this lattice, I think, and I'm wondering if it's a good idea to do the computations on L_P and translate back as necessary… I guess the problem is that there will be many points of L_P that, when divided by M, aren't integral: superfluous points.
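The first point should indeed work with M = |det P|: by Cramer's rule, P⁻¹ = adj(P)/det(P), so M·P⁻¹ is an integer matrix and M·v = P·(M·P⁻¹·v) lies in L_P. A minimal 2×2 sketch (the matrix values are just an illustration):

```python
from fractions import Fraction

# Claim: for integral invertible P and M = |det P|, M * Z^n ⊆ L_P,
# because M * P^{-1} is an integer matrix (Cramer: P^{-1} = adj(P)/det P).
P = [[3, 1],
     [1, 1]]          # det = 2, so L_P is a strict sublattice of Z^2

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    d = det2(A)
    return [[Fraction(A[1][1], d), Fraction(-A[0][1], d)],
            [Fraction(-A[1][0], d), Fraction(A[0][0], d)]]

M = abs(det2(P))
MPinv = [[M * x for x in row] for row in inv2(P)]
# every entry of M * P^{-1} is an integer, so M * e_i is in L_P
assert all(x.denominator == 1 for row in MPinv for x in row)
```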

I'm not sure what I'm trying to say, but I'd like to write the routine that generates all roots now, before doing the fundamental cone computation, and I'm quite confused by how it all goes: Guglielmetti's thesis does things differently from B&P, and B&P have a hard-to-read implementation which isn't very well synchronized with their paper.

More questions: (no need to answer quickly, I'm writing them down for the record)

  1. Say the transformation P has columns v0,…,v_n with v0 of negative norm.
    Then B&P talk about the orthogonal complement of v0: if we were working with vector spaces, we could just take v1,…,v_n as a basis and be happy, but in our case v1,…,v_n only span a sublattice of the orthogonal complement of v0:
    Is there a known process to modify those vectors to get a basis for the complement itself?
    In the notation of B&P, this would be a basis of the space L'.
  2. Another not-so-related one: When they build the fundamental cone P_0, they test for each candidate root whether the hyperplane is actually necessary by testing if it intersects the previous cone non-trivially.
    To do so, they call some sage functions which do the check for them.
    Is it obvious how to do this check by hand?
  3. What's a good reference book/resource talking about what I need to know for these lattice/quad forms business?
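For question 1, one standard approach (a sketch, not the B&P code; all names here are hypothetical): the complement of v0 in the lattice is the integer kernel of the single row vector c = G·v0, and a Z-basis of that kernel can be built with unimodular column operations driven by the extended Euclidean algorithm:

```python
def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def kernel_basis(c):
    # Z-basis of { x in Z^n : sum(c[i]*x[i]) == 0 }, for c != 0.
    # Column-reduce c to (g, 0, ..., 0) by unimodular operations; the
    # columns landing on the zero entries then form a kernel basis.
    n = len(c)
    c = list(c)
    U = [[int(i == j) for i in range(n)] for j in range(n)]  # U[j] = column j
    for j in range(1, n):
        g, x, y = ext_gcd(c[0], c[j])
        if g == 0:
            continue
        a0, aj = c[0] // g, c[j] // g
        col0 = [x * U[0][i] + y * U[j][i] for i in range(n)]
        colj = [-aj * U[0][i] + a0 * U[j][i] for i in range(n)]  # det = 1 step
        U[0], U[j] = col0, colj
        c[0], c[j] = g, 0
    return [U[j] for j in range(1, n)]

# e.g. c would be G*v0 for the Gram matrix G; here a toy row vector:
for k in kernel_basis([2, 3, 4]):
    assert 2 * k[0] + 3 * k[1] + 4 * k[2] == 0
```

Because the operations are unimodular, the resulting basis spans the full (saturated) kernel lattice, i.e. L' itself rather than a finite-index sublattice.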

Diagram in the output

More of the output should be human-readable: e.g. one should be able to extract the diagram / Coxeter matrix from Vinberg_Algorithm. It should return some kind of encapsulated object that has:

  1. the quad form we started with
  2. v0
  3. fund cone
  4. the whole set of roots (if finite volume)
  5. the diagram in some format suitable for later drawing, e.g. by applying Draw_Diagram(x), where x = Vinberg_Algorithm(y) and y is the matrix of the quad form.
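A hypothetical sketch of such a return object (the field names are suggestions, not an existing API):

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical container mirroring the five items listed above.
@dataclass
class VinbergOutput:
    quad_form: Any       # 1. the quad form we started with
    v0: Any              # 2. the basepoint
    cone_roots: list     # 3. the fundamental cone
    roots: list          # 4. the whole set of roots (if finite volume)
    diagram: Any         # 5. diagram data in a drawable format

out = VinbergOutput(quad_form=[[-2]], v0=[1], cone_roots=[], roots=[], diagram=None)
assert out.v0 == [1]
```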

Is it possible to enumerate all roots without care for distance, efficiently?

I tried to generate random roots, but that seems rather inefficient actually, since I guess a "random" vector won't be a root.

I would like to try a strategy consisting of generating roots at increasing distance, but with no care for whether the ones closest to v0 are chosen.
The idea would be that, perhaps, random roots will generally be sufficient to cover a finite polyhedron if one exists.
Maybe that could even be made into a formal probabilistic argument: what's the probability that a set of n roots defines a finite polyhedron, if the lattice is reflective?

Opinions, @sashakolpakov ?

Is the matrix `P` given by a diagonalization always a basis of the lattice, or can it span a strict sublattice ?

More precisely, we start with the integral lattice Z^d, and an integral bilinear form on it, defined by a symmetric integral matrix A.
The diagonalization process yields an invertible matrix P and a diagonal matrix D satisfying P'AP = D with P' the transpose of P (all have integral coefficients).
Now, if we were talking about vector spaces (say over R), it would be clear that the columns of P can be viewed as a basis of R^d because P is invertible.
If I'm recalling correctly, a set of d vectors in Z^d spans Z^d as a lattice iff the determinant of the matrix they define is ±1.
So the question is: is the determinant always ±1?
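The answer seems to be no in general. Take the A2 Gram matrix A = [[2,1],[1,2]]: it is positive definite with determinant 3, so a unimodular P would make it congruent to diag(1,3), which represents 1, while A's minimum nonzero value is 2. A non-unimodular diagonalizing P does exist, though, and its columns span a strict sublattice:

```python
# Counterexample sketch: P'AP = D with |det P| != 1.
A = [[2, 1], [1, 2]]
P = [[2, -1], [0, 2]]   # columns (2,0) and (-1,2)

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Pt = [[P[j][i] for j in range(2)] for i in range(2)]  # transpose
D = mul(mul(Pt, A), P)
assert D == [[8, 0], [0, 6]]                          # diagonal, as wanted
assert P[0][0] * P[1][1] - P[0][1] * P[1][0] == 4     # det P = 4, not ±1
```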

Redefinitions of some constants?

Calling include("src/vinberg.jl") always gives: "WARNING: redefinition of constant deg_seq_a1. This may fail, cause incorrect answers, or produce other errors."

Breaks down on a big(ish) input

This input

[ 4 -2 0 0 0 0 0 0 0 0 0 0 0 0 0;
-2 4 -2 2 0 0 0 0 0 0 0 0 -1 0 0;
0 -2 4 0 0 2 0 0 0 0 0 0 2 0 0;
0 2 0 4 2 2 0 0 0 0 0 0 0 0 0;
0 0 0 2 4 2 0 0 2 1 0 0 0 0 0;
0 0 2 2 2 4 2 2 1 2 0 0 1 0 0;
0 0 0 0 0 2 4 2 0 2 0 0 0 0 0;
0 0 0 0 0 2 2 4 0 2 0 0 1 0 0;
0 0 0 0 2 1 0 0 4 2 0 0 0 0 0;
0 0 0 0 1 2 2 2 2 4 2 2 1 0 0;
0 0 0 0 0 0 0 0 0 2 4 2 2 0 0;
0 0 0 0 0 0 0 0 0 2 2 4 1 0 0;
0 -1 2 0 0 1 0 1 0 1 2 1 4 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 2;
0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 ]

produces the following output

[ Info: Initialized root iterator.
[ Info: Got all roots at distance zero.
[ Info: Starting with 28 candidate roots (at distance zero) for fundamental cone.
[ Info: Returning 14 cone roots.
[ Info: Got the roots of a fundamental cone.
[ Info: : [-1, -2, -4, 0, -2, 4, -1, -2, 0, 0, -1, 0, 2, 0, 0].
[ Info: : [-1, -2, -3, 0, -2, 4, -1, -2, 0, 0, 0, 0, 0, 0, 0].
[ Info: : [-1, -2, -3, 0, -2, 4, 0, 0, 2, -4, 1, 2, 0, 0, 0].
[ Info: : [-1, -2, -2, 1, -2, 2, -1, 0, 0, 0, 0, 0, 0, 0, 0].
[ Info: : [-1, -2, -2, 1, -2, 2, 0, 0, 2, -2, 1, 0, 0, 0, 0].
[ Info: : [-1, -2, -1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0].
[ Info: : [-1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0].
[ Info: : [0, 0, -1, -1, 0, 2, -1, 0, 0, 0, 0, 0, 0, 0, 0].
[ Info: : [0, 0, -1, -1, 0, 2, 0, 0, 0, -2, 1, 0, 0, 0, 0].
[ Info: : [0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0].
[ Info: : [0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0].
[ Info: : [0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0].
[ Info: : [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0].
[ Info: : [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1].
[ Info: Built the corresponding Coxeter diagram.
[ Info: Found new satisfying root: [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0].
[ Info: Found new satisfying root: [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0].
[ Info: Found new satisfying root: [0, 0, 1, 1, 0, -2, 1, 0, 0, 0, 0, 0, 0, 1, 0].
[ Info: Found new satisfying root: [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0].
[ Info: Found new satisfying root: [0, 0, 1, 1, 0, -2, 0, 0, 0, 2, -1, 0, 0, 1, 0].
[ Info: Found new satisfying root: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0].
[ Info: Found new satisfying root: [1, 2, 4, 0, 2, -4, 1, 2, 0, 0, 1, 0, -2, 1, 0].
[ Info: Found new satisfying root: [1, 2, 3, 0, 2, -4, 0, 0, -2, 4, -1, -2, 0, 1, 0].
[ Info: Found new satisfying root: [1, 2, 2, -1, 2, -2, 0, 0, -2, 2, -1, 0, 0, 1, 0].
[ Info: Found new satisfying root: [1, 2, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0].
[ Info: Found new satisfying root: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0].
[ Info: Found new satisfying root: [1, 2, 2, -1, 2, -2, 1, 0, 0, 0, 0, 0, 0, 1, 0].

and then the process breaks off.

ERROR ON ROOTS in test_suite()

I used this lattice: (call it BP_D4D4-1.lat).

2 0 1 0 0 0 0 0 0
0 2 -1 0 0 0 0 0 0
1 -1 2 -1 0 0 0 0 0
0 0 -1 2 0 0 0 0 0
0 0 0 0 2 0 1 0 0
0 0 0 0 0 2 -1 0 0
0 0 0 0 1 -1 2 -1 0
0 0 0 0 0 0 -1 2 0
0 0 0 0 0 0 0 0 -1

After I added it to known_values.json, it raised "ERROR on roots" several times.

Roots it found first time (as written in my json file): SArray{Tuple{9},Int64,1,9}[[-1, 2, 2, 1, 0, 0, 0, 0, 0], [0, -1, 0, 1, 0, 0, 0, 0, 0], [0, 0, -1, 0, 0, 0, 0, 0, 0], [0, 0, 0, -1, 0, 0, 0, 0, 0], [0, 0, 0, 0, -1, 2, 2, 1, 0], [0, 0, 0, 0, 0, -1, 0, 1, 0], [0, 0, 0, 0, 0, 0, -1, 0, 0], [0, 0, 0, 0, 0, 0, 0, -1, 0], [1, 0, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 1, 0, 0, 0, 1], [2, -1, -2, -1, 2, -1, -2, -1, 2]]

Second time:
[[-1, 2, 2, 1, 0, 0, 0, 0, 0], [0, -1, 0, 1, 0, 0, 0, 0, 0], [0, 0, -1, 0, 0, 0, 0, 0, 0], [0, 0, 0, -1, 0, 0, 0, 0, 0], [0, 0, 0, 0, -1, 2, 2, 1, 0], [0, 0, 0, 0, 0, -1, 0, 1, 0], [0, 0, 0, 0, 0, 0, -1, 0, 0], [0, 0, 0, 0, 0, 0, 0, -1, 0], [2, -1, -2, -1, 2, -1, -2, -1, 2], [1, 0, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 1, 0, 0, 0, 1]]

Looks like they coincide as sets. Can you check how the roots are ordered by distance? Why would they be permuted? Sometimes they come out exactly as stored, and sometimes permuted.
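If only the ordering differs (e.g. because roots at equal distance come out in a nondeterministic order under multithreading), comparing as sets in the test suite would sidestep the spurious failures. A quick check on the two runs above confirms they agree as sets:

```python
# The two outputs quoted above, compared as sorted lists.
run1 = [[-1,2,2,1,0,0,0,0,0], [0,-1,0,1,0,0,0,0,0], [0,0,-1,0,0,0,0,0,0],
        [0,0,0,-1,0,0,0,0,0], [0,0,0,0,-1,2,2,1,0], [0,0,0,0,0,-1,0,1,0],
        [0,0,0,0,0,0,-1,0,0], [0,0,0,0,0,0,0,-1,0], [1,0,0,0,0,0,0,0,1],
        [0,0,0,0,1,0,0,0,1], [2,-1,-2,-1,2,-1,-2,-1,2]]
run2 = [[-1,2,2,1,0,0,0,0,0], [0,-1,0,1,0,0,0,0,0], [0,0,-1,0,0,0,0,0,0],
        [0,0,0,-1,0,0,0,0,0], [0,0,0,0,-1,2,2,1,0], [0,0,0,0,0,-1,0,1,0],
        [0,0,0,0,0,0,-1,0,0], [0,0,0,0,0,0,0,-1,0], [2,-1,-2,-1,2,-1,-2,-1,2],
        [1,0,0,0,0,0,0,0,1], [0,0,0,0,1,0,0,0,1]]
assert run1 != run2                     # the ordering differs
assert sorted(run1) == sorted(run2)     # but they coincide as sets
```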

Use as few computations as possible

E.g. when enumerating roots, the norm of a root returned by roots_decomposed_into is already known, so we could ship the norm along with the root and avoid recomputing it later; but is that actually useful?
Probably other such duplicate computations could be eliminated elsewhere.
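The idea can be sketched as follows (names hypothetical, not the current code): have the enumeration yield (root, norm) pairs so the norm travels with the root.

```python
# Pair each root with its norm at generation time, so downstream code
# never recomputes it.
def norm(G, v):
    n = len(v)
    return sum(v[i] * G[i][j] * v[j] for i in range(n) for j in range(n))

def roots_with_norms(G, roots):
    for r in roots:
        yield r, norm(G, r)    # computed once, shipped with the root

G = [[-1, 0], [0, 1]]
pairs = list(roots_with_norms(G, [[0, 1], [1, 2]]))
assert pairs == [([0, 1], 1), ([1, 2], 3)]
```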

Performance drop!

I see a horrible 10× performance drop between commit 513abb39fcf58d87d54f260a316568759c82d3d8 and what I had before (say 10 commits earlier).
I need to understand the reason, but I don't see it, since no big code change happened that I can see…
Suspects are:

  • Using channels (but the commit introducing them said the impact was not big either way)
  • Making the channel's buffer bigger (but the profiler doesn't say the problem lies here as far as I can see)
  • Changes in qsolve (the profiler seems to say that qsolve_iterative is very resource-hungry, but that's to be expected, and I changed almost nothing there!)
  • Setting warmstart = true for the cbc optimizer (but the profiler doesn't seem to say it's that)

Check that once a hyperplane is added and deemed necessary, it won't ever become redundant

Start with the following:

Let's say it's a distance comparison.
One needs to check something like: if H_1, H_2 are hyperplanes such that v_0 belongs to their negative half-spaces and, moreover, H_1 is contained in the negative half-space of H_2, then the distance from v_0 to H_1 is less than the distance from v_0 to H_2 (for any choice of v_0 as described).
I think this would be the "formal" argument; it certainly seems very intuitive.
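If I recall correctly (worth double-checking against Guglielmetti's thesis), the distance from v_0 to the hyperplane H_e of a root e satisfies sinh² d = -(v_0,e)² / ((v_0,v_0)(e,e)), so the comparison can be phrased on exact rational quantities. A sketch, assuming that formula:

```python
from fractions import Fraction

# Assumed formula: sinh^2 dist(v0, H_e) = -(v0,e)^2 / ((v0,v0)*(e,e)),
# with (v0,v0) < 0 and (e,e) > 0, so the value is a nonnegative rational.
def inner(G, u, v):
    n = len(u)
    return sum(u[i] * G[i][j] * v[j] for i in range(n) for j in range(n))

def sinh2_dist(G, v0, e):
    return Fraction(-inner(G, v0, e) ** 2,
                    inner(G, v0, v0) * inner(G, e, e))

G = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]   # Lorentzian diag(-1, 1, 1)
v0 = [1, 0, 0]
assert sinh2_dist(G, v0, [0, 1, 0]) == 0              # v0 lies on H_e
assert sinh2_dist(G, v0, [1, 2, 0]) == Fraction(1, 3)
```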

Break down in dimension 14

For the diagonal quadratic form [1]*14+[-2], I get the following:

[ Info: Initialized root iterator.
[ Info: Got all roots at distance zero.
[ Info: Creating a channel for root enumeration (buffer size 16)
[ Info: Starting the root enumeration process.
[ Info: Found candidate roots for fundamental cone (numbering 392).
[ Info: Returning cone roots, numbering 14.
[ Info: Got the roots of a fundamental cone.
[ Info: Built the corresponding Coxeter diagram.
[ Info: Found new satisfying root: [2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1].
[ Info: Found new satisfying root: [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1].
[ Info: Found new satisfying root: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 2].

Julia has exited.

It finds new roots extremely slowly compared to Guglielmetti's code, and also exits unexpectedly at some point. I have no idea what causes this issue.

Other number fields

The main difficulties arise at two places:

  • The root enumeration procedure must be overhauled, since the current one is tailor-made for ZZ.
  • The hyperplane necessity check works only in ZZ.

Maybe instead of adapting the root enumeration procedure, we could just enumerate roots in any order: we can probably still see if some finite-volume polyhedron appears, but it won't be the minimal one yet…

For the hyperplane necessity check, maybe we can actually do precise enough "float" arithmetic to get a first guess, then take the "witness" the solver gives us and try to verify that it, or some close enough element, is a valid formal witness?
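One way to make that concrete (a sketch with hypothetical names; the rounding bound is an arbitrary choice): round each coordinate of the float witness to a nearby rational and re-check the strict inequalities exactly.

```python
from fractions import Fraction

# Round a float witness to rationals and verify a_i . x < 0 exactly.
def verify_witness(rows, x_float, max_den=10**6):
    x = [Fraction(c).limit_denominator(max_den) for c in x_float]
    return all(sum(a * xi for a, xi in zip(row, x)) < 0 for row in rows)

rows = [[1, 0], [0, 1], [1, 1]]          # x < 0, y < 0, x + y < 0
assert verify_witness(rows, [-0.3333333333, -0.25])
assert not verify_witness(rows, [0.5, -0.25])
```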

Decide cone roots "combinatorially"

Say we build a graph all of whose vertices are the roots at distance zero from the basepoint, and with an edge between two roots if the angle defined by their hyperplanes is acute.

Is there a relation between cliques in this graph and choices of fundamental cones?

I don't understand FundCone in B&P

See the implementation: here.

What they seem to do is:

  1. Take all roots in v0^⊥ = V1.
  2. Add them to the cone as necessary.

I have a problem with step 1.
The idea is that any element of the lattice can be represented as a0*v0 + v1 + w, where v1 is in V1 and w is a coset representative for the sublattice v0 ⊕ V1 of our big lattice.
Then the procedure s.RootsDecomposedInto(s.V([0]*s.n), k) returns elements where both w and a0 are zero, so that what remains is the v1 component, which is indeed an element of V1.
But it could also happen that an element of V1 cannot be written as 0*v0 + v1 + 0, and thus does not come from the RootsDecomposedInto procedure.
The condition for an element of the form a0*v0 + v1 + w to lie in V1 is that (a0*v0 + v1 + w | v0) == 0, which only requires a0 == -(v0|w) / (v0|v0).

Another thing that I don't understand:
Afaik, the condition that the roots defining the polytope have pairwise non-positive products should hold for all of them, "cone" roots included: but B&P do not check this condition, and adding it actually changes the results! What is the reason?
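The pairwise condition itself is cheap to check after the fact (a sketch with hypothetical names, using the Gram matrix for the products):

```python
# Flag pairs of roots whose product is positive, i.e. whose hyperplanes
# meet at an acute angle and so cannot both bound a fundamental polyhedron.
def inner(G, u, v):
    n = len(u)
    return sum(u[i] * G[i][j] * v[j] for i in range(n) for j in range(n))

def violating_pairs(G, roots):
    return [(i, j)
            for i in range(len(roots)) for j in range(i + 1, len(roots))
            if inner(G, roots[i], roots[j]) > 0]

G = [[2, 0], [0, 2]]
assert violating_pairs(G, [[1, 0], [0, 1]]) == []        # orthogonal: fine
assert violating_pairs(G, [[1, 0], [1, 1]]) == [(0, 1)]  # acute: flagged
```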

Am I mistaken? pinging @sashakolpakov

Reduce memory usage

Memory usage shouldn't be too high, yet it quickly gets into the order of gigabytes…
