
Comments (14)

gmaze commented on June 7, 2024

I would certainly be interested in working on this, but I have no availability before the end of February, and I would need a lot of help to follow the dask-ml code logic.

At this point, I don't quite understand yet where the need for a specific distribution method arises, i.e. why people publish papers on new GMM algorithms rather than distributing the bottleneck operation of the classic EM algorithm for a GMM (which is, as you pointed out, the Cholesky factorization of the covariance matrices).

A first step may be to benchmark the regular GMM EM algorithm with and without dask-ml optimized operators.
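Something as simple as the following could serve as the baseline half of that benchmark, assuming scikit-learn is installed; the array shapes and component count are arbitrary choices for illustration:

import time

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
for n in (50_000, 100_000, 200_000):
    X = rng.standard_normal((n, 10))
    start = time.perf_counter()
    # Plain scikit-learn EM on an in-memory array: the reference timing
    # that any dask-based variant would need to beat at scale.
    GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(X)
    print(f"n={n}: EM fit took {time.perf_counter() - start:.2f}s")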


gmaze commented on June 7, 2024

I haven't had the time to work on this yet because I wanted to focus on releasing a clean version of http://github.com/obidam/pyxpcm, which now implements a choice of two statistics backends (scikit-learn or dask_ml).
Now that this is done, I plan to focus on optimization, hence this issue of having EM/GMM optimized for dask_ml.
But I can't guarantee any timeline.


remiadon commented on June 7, 2024

Hi,

I did a bit of a literature search on my side.

IMO the resource mentioned by @gmaze is a good start, but it's basically a re-implementation designed to reduce data exchange on a cluster of machines. Quoting page 2:

we developed a new framework called Distrim from scratch, aiming to minimize space and communication overheads as much as possible, and to maximize the usage of computational power of multicore clusters as much as possible

I suggest using a different methodology. One concept I find particularly interesting is the coreset.
A coreset is a subset of the original data that comes with theoretical guarantees on its shape (the shape of a coreset stays close to the shape of the original data).

Coresets have already proven useful for large-scale modeling of Gaussian Mixtures, as well as for k-means and k-median clustering (see the references below).
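As an illustration, here is a minimal sketch of one possible construction, a "lightweight" importance-sampling coreset in the spirit of this literature; the function name, the 50/50 mix of uniform and distance-based probabilities, and the chunking assumptions are mine, not code from the papers cited below:

import numpy as np
import dask.array as da

def lightweight_coreset(X, m, seed=0):
    """Draw m weighted points from a dask array X (chunked along rows).

    Sampling probabilities mix a uniform term with a term proportional
    to the squared distance from the dataset mean; the returned weights
    are inverse scaled probabilities, so weighted sums over the coreset
    approximate sums over the full data.
    """
    n = X.shape[0]
    dist2 = ((X - X.mean(axis=0)) ** 2).sum(axis=1)
    q = (0.5 / n + 0.5 * dist2 / dist2.sum()).compute()  # one float per row
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=m, replace=True, p=q)
    points = np.asarray(X[idx].compute())  # the coreset fits in memory
    weights = 1.0 / (m * q[idx])           # importance weights
    return points, weights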

Proposed solution

  • implement a Coreset class (or method) using dask arrays. This would return a subsample of the original dask.array as a numpy.array, along with the associated weights for those points, also as a numpy array.
  • tweak sklearn.GaussianMixture to accept weighted datasets, and run the clustering via this sklearn model (a sketch follows this list).
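On the second point, sklearn.GaussianMixture.fit currently has no sample_weight parameter, so as a stopgap one could approximate a weighted fit by resampling the coreset proportionally to its weights; a rough sketch, with a hypothetical helper name:

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_on_coreset(points, weights, n_components, seed=0):
    # Weighted bootstrap: high-weight points are drawn more often, which
    # approximates (but is not identical to) an exact weighted EM fit.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=len(points), replace=True,
                     p=weights / weights.sum())
    return GaussianMixture(n_components=n_components,
                           random_state=seed).fit(points[idx])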

I believe this methodology is compatible with the current philosophy of dask-ml ("re-implement at scale if required, or simply allow sklearn estimators to scale with a different methodology"). It can also benefit other methods, not only GMMs.

Regards,
Rémi

References

  • Feldman, D., Faulkner, M., Krause, A. (2011). Scalable Training of Mixture Models via Coresets. NIPS.
  • Har-Peled, S., Mazumdar, S. (2004). Coresets for k-Means and k-Median Clustering and their Applications. STOC.


remiadon commented on June 7, 2024

@TomAugspurger, a Coreset meta-estimator would be great!

Another way of achieving an equivalent goal would be to implement a CoresetTransformer that returns the data fully transformed (the set of points, weighted); a sketch follows below. But as far as I know, sklearn prohibits having a different number of rows between the input and output of a transformer...
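For illustration, a very rough sketch of what such a transformer could look like (all names hypothetical, with a uniform subsample standing in for a real coreset):

import numpy as np

class CoresetTransformer:
    def __init__(self, m=10_000, random_state=0):
        self.m = m
        self.random_state = random_state

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # Placeholder selection: a uniform subsample with equal weights
        # (X assumed to be an in-memory array here, for simplicity).
        # Returning (m rows, weights) breaks the usual "same number of
        # rows in and out" scikit-learn transformer contract.
        rng = np.random.default_rng(self.random_state)
        idx = rng.choice(X.shape[0], size=self.m, replace=False)
        weights = np.full(self.m, X.shape[0] / self.m)
        return X[idx], weights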

Either of those solutions suits me; I can try submitting a PR.


TomAugspurger commented on June 7, 2024

Yes, the Transformer would also work well, but it would, I think, require scikit-learn/enhancement_proposals#15. I haven't read through that in a while, and I don't know how it proposes to deal with weights.

Anyway, I think for now an implementation using a meta-estimator would be most welcome. I think the logic of selecting the coreset is likely to be the most difficult part, regardless of the API :)


TomAugspurger commented on June 7, 2024

That'd certainly be in scope. I likely won't have time to work on this until the end of the month, but may be able to after that.

Do you have any other references for parallel or distributed GMM? That paper doesn't seem to be publicly available.


gmaze commented on June 7, 2024

The paper is here:
Yang_et_al.IEEE2012.pdf
But I'm not sure that this is the most relevant implementation for dask-ml; more literature review is needed.


TomAugspurger commented on June 7, 2024

I gave a quick skim of scikit-learn's implementation. Translating it to work on dask arrays doesn't look too difficult. Unless I missed something, the fanciest thing was a Cholesky decomposition, which is implemented in dask.array.
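For example, a minimal sketch of that step, factorizing each component's covariance with dask.array (the shapes and component count are illustrative assumptions):

import numpy as np
import dask.array as da

n_components, n_features = 4, 10
rng = np.random.default_rng(0)
A = rng.standard_normal((n_components, n_features, n_features))
covariances = A @ A.transpose(0, 2, 1) + np.eye(n_features)  # SPD per component

# dask's blocked Cholesky operates on 2-D arrays with square chunks, so
# factor each component's covariance matrix separately.
chols = [
    da.linalg.cholesky(da.from_array(c, chunks=c.shape), lower=True)
    for c in covariances
]
L0 = chols[0].compute()
assert np.allclose(L0 @ L0.T, covariances[0])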

@gmaze do you have any interest in working on this?


DaniJonesOcean commented on June 7, 2024

Hi all. I would like to flag my interest in this project as well. It doesn't look like there has been much activity in this area lately.

Does anyone have plans to work on this issue in the near term? I would be interested in contributing, but like gmaze I would need help getting started.


TomAugspurger commented on June 7, 2024

Thanks for the update @gmaze.


TomAugspurger commented on June 7, 2024

Thanks for sharing @remiadon. One API question around your proposed Coreset class.

This would return a subsample of the original dask.array as a numpy.array, along with associated weights for those points, also as a numpy array

I see the suggestion of a method like coreset(*arrays) that handles all the logic of extracting a coreset from a dask Array. But for an end-user API, I instead think of some kind of meta-estimator like

>>> model = Coreset(sklearn.mixture.GaussianMixture())
>>> model.fit(big_X, big_y)  # extracts the coreset, fits the weighted(?) sklearn GMM on the coreset (small, in memory)
>>> model.predict(big_X)  # Dask Array of predictions
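A rough, hypothetical skeleton of that meta-estimator, with a uniform subsample standing in for the real coreset selection (none of these names come from dask-ml):

import numpy as np
from sklearn.base import BaseEstimator, clone

class Coreset(BaseEstimator):
    def __init__(self, estimator, m=10_000, random_state=0):
        self.estimator = estimator
        self.m = m
        self.random_state = random_state

    def fit(self, X, y=None):
        # Placeholder selection: a uniform subsample of the dask array.
        # A real implementation would extract an importance-weighted
        # coreset here and pass the weights along.
        rng = np.random.default_rng(self.random_state)
        idx = rng.choice(X.shape[0], size=self.m, replace=False)
        self.estimator_ = clone(self.estimator).fit(np.asarray(X[idx].compute()))
        return self

    def predict(self, X):
        # Block-wise prediction keeps the output lazy; assumes X is a
        # dask array chunked along rows only.
        return X.map_blocks(self.estimator_.predict, dtype=int, drop_axis=1)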


remiadon commented on June 7, 2024

I created a PR here: #799

It is a work in progress for now, as most of the sampling methods were designed for KMeans, and their usage with Gaussian Mixtures is still a bit obscure to me.

