
Comments (6)

moble commented on May 18, 2024

Seems like a good idea, as long as the code still uses these big tables. A few questions / comments:

  1. I can't quite follow the logic. Are the big tables stored, or just the sparse ones if you pull in Hugo's modifications? Is there anything else that's really slow about constructing the layout? If not, would it be better to just store the sparse objects and hook them into the layout initializer?

  2. I'm a little fuzzy on pickle, but I think one of its advantages/problems is that it reconstitutes the object as it was at the time it was stored. So if you change the Layout class between the time the pickle is created and the time it is read, there will be an inconsistency: it might be missing some feature, have an incompatible data format, etc. This might not be a problem when people are just installing once and using the code as is, but I imagine it would be a huge pain when you're trying to develop the package.

  3. I do something similar in one of my projects. I store and retrieve the data using numpy's own save and load functions because I found them to be the fastest option. As I recall, the compression could have been a bit better, but it wasn't really worth fussing about. In particular, the loading part is very fast, and versioning isn't a problem. The data are then distributed and installed with my code. Especially if you're using the sparse matrices, this shouldn't be a problem. My steps were these (a sketch of the pattern follows the list):

    a. I generated the data once on my own machine.
    b. I then copy them over during installation. They wind up in the installation directory with the rest of the __init__.py files and such.
    c. They then get loaded during import. As you mentioned, you'll want to do this dynamically, only when a particular data set is needed, rather than loading them all during import. But the key point here is that I use the __file__ constant to get the path to the __init__.py file, which I then use to figure out where the data files are.
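
A minimal sketch of the save/load-plus-__file__ pattern described above, assuming numpy arrays stored as .npy files in a data directory shipped inside the package; the function and file names here are illustrative, not clifford's actual code:

    import os
    import numpy as np

    # __file__ points at this module inside the installed package, so data files
    # copied in at install time can be found no matter where the package lives.
    _DATA_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "data")

    def save_table(name, array):
        # Run once, offline: write a precomputed table next to the package data.
        np.save(os.path.join(_DATA_DIR, name + ".npy"), array)

    def load_table(name):
        # Called lazily, only when a particular algebra actually needs its table.
        return np.load(os.path.join(_DATA_DIR, name + ".npy"))

Because np.load reads a versioned, documented binary format, this sidesteps the pickle compatibility concern in point 2.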


arsenovic commented on May 18, 2024

ok, my usage scenario was a little different.

on my machine init'ing a 6D GA is slow, and an 8D one is really slow. also, i like having fixed GAs, like from clifford import g300 for euclidean 3-space. currently these are implemented via hand-written sub-module stubs. perhaps this method is good enough, but the idea behind a cache was to

  1. dynamically generate predefined GA submodules
  2. speed up [user-defined] big algebra inits

i am now working on a config file, so that a user could add their own predefined signatures to a list of fixed ones, kind of like matplotlib.styles. so, there will be

  • static: g200, g300, g400, g310, g130, g520
  • user-defined: g600, g800, [whatever]

reloading the cache should take seconds to minutes, so reloading is no big deal (making pickle ok), but you don't want to do it in every notebook.

all that being said, if 99% of people just want G2, G3 and their CGAs, this effort is a waste of time. however, i think supporting arbitrary GAs will be very beneficial in the future.
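
A rough sketch of what such a registry of fixed and user-defined signatures might look like; the names PREDEFINED_SIGNATURES, _layout_cache and get_algebra are hypothetical, only clifford.Cl is real:

    from clifford import Cl

    # Fixed signatures shipped with the package plus user-defined entries read
    # from a config file could both end up in one registry like this.
    PREDEFINED_SIGNATURES = {
        "g300": (3, 0),   # euclidean 3-space
        "g310": (3, 1),
        "g600": (6, 0),   # a user-defined big algebra
    }

    _layout_cache = {}

    def get_algebra(name):
        # Build the layout and blades at most once per signature, then reuse them.
        if name not in _layout_cache:
            p, q = PREDEFINED_SIGNATURES[name]
            _layout_cache[name] = Cl(p, q)   # the slow initialisation step
        return _layout_cache[name]

A dynamically generated submodule stub (g300, g600, ...) could then just call get_algebra with its own name instead of being hand-written.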


hugohadfield commented on May 18, 2024

@arsenovic I've just pushed another change into hugo/performance that JITs some of the generation code; on my laptop clifford.Cl(8) goes from 33.7s on master to 7.5s.
The overhead of jitting might cause it to be slower for low-dimension algebras, though.
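
For illustration only (this is not the actual code in hugo/performance): the general idea of JIT-compiling the hot table-generation loop with numba looks roughly like this, where build_table and its sign rule are placeholders.

    import numpy as np
    from numba import njit

    @njit(cache=True)
    def build_table(n):
        # Placeholder for the real sign/index computation over 2**n basis blades;
        # the point is only that the O(4**n) double loop runs as compiled code.
        size = 2 ** n
        table = np.zeros((size, size), dtype=np.int8)
        for i in range(size):
            for j in range(size):
                k = i & j
                bits = 0
                while k:          # popcount of the overlap (stand-in sign rule)
                    bits += k & 1
                    k >>= 1
                table[i, j] = -1 if bits % 2 else 1
        return table

The first call pays the compilation cost, which is why small algebras can come out slower; cache=True at least limits that cost to once per machine.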


arsenovic commented on May 18, 2024

this feature has stalled due to lack of need, and thus low priority. i think it's a decent idea and worth keeping open for the future.


hugohadfield commented on May 18, 2024

I think this is a high-priority issue: lots of people are using high-dimension algebras and the slow initialisation really hits hard. The slowness comes from generating the sparse multiplication tables and storing these in memory. I think we need to do a couple of things here:

  • Remove all references to the multiplication tables themselves
  • Pre-compute the sparse tables for a wide range of algebras, filter them to keep only the non-zero elements, and store these, probably with the numpy file format or something similar
  • Load the saved sparse objects on algebra creation
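
A sketch of the last two steps under those assumptions, storing each algebra's product table with scipy.sparse.save_npz; the cache location and function names are illustrative, it assumes the table has been flattened to a 2-D sparse matrix, and a real version would also need to record the table's full shape and metric metadata:

    import os
    import scipy.sparse as sp

    CACHE_DIR = os.path.expanduser("~/.clifford_cache")

    def _cache_path(p, q):
        return os.path.join(CACHE_DIR, "gmt_%d_%d.npz" % (p, q))

    def save_sparse_gmt(p, q, gmt):
        # Store only the non-zero entries of the geometric product table.
        os.makedirs(CACHE_DIR, exist_ok=True)
        sp.save_npz(_cache_path(p, q), sp.csr_matrix(gmt))

    def load_sparse_gmt(p, q):
        # Return the cached table, or None so the caller falls back to computing it.
        path = _cache_path(p, q)
        return sp.load_npz(path) if os.path.exists(path) else None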


arsenovic commented on May 18, 2024

i accidentally added this to the oo PR, but this was my first attempt at the caching.

https://github.com/arsenovic/clifford/blob/oo_cga/clifford/caching.py

