
spectral_inference_networks's Issues

up to date code? results look wrong when I update by myself..

In order to run hydrogen.py I had to make some changes, mainly renames like:

tf.random_uniform ==> tf.random.uniform

tf.<some_matrix_operation> ==> tf.compat.v1.<some_matrix_operation>

and at the top of hydrogen.py I added -
tf.compat.v1.disable_eager_execution()
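For what it's worth, renames like the ones above can be applied mechanically. A minimal sketch (my own illustration, not from the repo), run here on a throwaway file rather than the real hydrogen.py, with the mapping covering only the rename mentioned above:

```python
import pathlib
import tempfile

# TF1 -> TF2 renames to apply; extend this dict for other renamed symbols.
RENAMES = {
    "tf.random_uniform": "tf.random.uniform",
}

def migrate(text: str) -> str:
    """Apply each old -> new rename to the source text."""
    for old, new in RENAMES.items():
        text = text.replace(old, new)
    return text

with tempfile.TemporaryDirectory() as d:
    # Demo file standing in for hydrogen.py.
    f = pathlib.Path(d) / "hydrogen_snippet.py"
    f.write_text("samples = tf.random_uniform(shape, minval=-1.0, maxval=1.0)\n")
    f.write_text(migrate(f.read_text()))
    print(f.read_text().strip())
    # -> samples = tf.random.uniform(shape, minval=-1.0, maxval=1.0)
```

A plain string replace like this is fine for one-off fixes; for a whole repo, something syntax-aware would be safer.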

Now it works, but the result I get seems quite different from yours; it looks more like your beta=1 results. I guess some of the changes I made caused that.

Do you have an already updated working version?

spin_test.py unexpected behavior?

Hi, I was running spin_test.py to get a basic understanding of how SpIN works, and I noticed that it seems to consistently find the smallest-magnitude eigenvalues rather than the largest. Is this expected behavior? I thought the paper specified that the goal of SpIN was to find the top N eigenfunctions. If it is expected behavior, is there a way to reverse the problem formulation to get the largest eigenvalues rather than the smallest?

I was running spin_test with random seed 0, matrices of size 10, and 5 eigenfunctions.

Actual eigenvalues: (screenshot not recoverable)

Calculated eigenvalues: (screenshot not recoverable)
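If the solver really is converging to the bottom of the spectrum, one generic way to reverse the formulation (my own suggestion, not something from the repo) is to run it on the negated operator: negating a symmetric matrix keeps the eigenvectors and flips the ordering of the eigenvalues, so the smallest eigenvalues of -A correspond to the largest of A. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))
A = (A + A.T) / 2  # symmetric 10x10 test matrix, as in the spin_test setup

# eigh returns eigenvalues in ascending order, so the bottom of -A's
# spectrum is the top of A's spectrum with the sign flipped.
w_flipped = np.linalg.eigh(-A)[0]
largest_of_A = -w_flipped[:5]  # 5 largest eigenvalues of A, descending

reference = np.sort(np.linalg.eigvalsh(A))[::-1][:5]
assert np.allclose(largest_of_A, reference)
```

Whether this interacts cleanly with SpIN's training dynamics is a separate question, but as a problem formulation it is just a sign change on the operator.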

Jax implementation and stability of training

Hey, we implemented the algorithm in JAX here: https://github.com/Binbose/SpIN-Jax

We also made a Colab notebook with some extra visualizations, like animated training and phase diagrams, here.

Generally, the algorithm is really cool and runs well: we can reliably recover the first 4 eigenfunctions of hydrogen. However, we noticed that the variance of the energies in our implementation (and also in your TensorFlow implementation here, which behaves virtually identically to ours) is much larger than in the graph in your paper. We are not quite sure how you achieved such stable training; do you remember whether you used any additional tricks?
(The variance in the colab notebook is divided by 15 to make the graphs readable.)
The same holds for the case of 9 eigenfunctions, where training is generally much less stable than what we can see in the paper, and the eigenfunctions, while somewhat recognizable, are still far off.

Things we tried:
- Reducing the learning rate down to 1e-6
- Increasing the batch size to 512
- Increasing and decreasing beta
- Playing around with different 'sparsifying' K (we couldn't find the value you used in your work)
- Different decay rates for RMSProp and Adam
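For context on the beta experiments: as I read the paper, beta is the decay rate of the exponential moving averages kept for the covariance and its gradient, so values closer to 1 average over more batches and should suppress the variance of the smoothed estimate at the cost of slower adaptation. A standalone NumPy sketch of that trade-off (the `ema` helper and the numbers are illustrative, not from the repo):

```python
import numpy as np

def ema(values, beta):
    """Exponential moving average with decay beta (closer to 1 = smoother)."""
    avg, out = values[0], []
    for v in values:
        avg = beta * avg + (1.0 - beta) * v
        out.append(avg)
    return np.array(out)

rng = np.random.default_rng(0)
# Stand-in for noisy per-batch energy/covariance estimates around a true
# value of 1.0.
noisy = 1.0 + 0.5 * rng.standard_normal(5000)

# After the initial transient, the more aggressive decay fluctuates far less.
smooth_fast = ema(noisy, 0.9)
smooth_slow = ema(noisy, 0.999)
assert smooth_slow[-2500:].std() < smooth_fast[-2500:].std()
```

Of course this only explains why beta matters for the *reported* variance; it doesn't by itself explain the gap to the paper's curves.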

Can you offer some additional insights?
