Comments (14)
I would certainly have interest in working on this, but I have no timeline before the end of February and would need a lot of help following the dask-ml code logic.
At this point, I don't quite understand where the need for a specific distributed method arises, i.e. why people publish papers on new GMM algorithms rather than distributing the bottleneck operation of the classic EM algorithm for a GMM (which is, as you pointed out, the Cholesky factorization of the covariance matrices).
The first step may be to benchmark the regular GMM EM algorithm with and without dask-ml-optimized operators.
from dask-ml.
I haven't had time to work on this yet because I wanted to focus on releasing a clean version of http://github.com/obidam/pyxpcm , which now implements a choice of two stats backends (scikit-learn or dask_ml).
Now that it's done, I plan to focus on optimisation, hence this issue of having EM/GMM optimized for dask_ml.
But I can't guarantee any timeline.
Hi,
I did a bit of a literature search on my side.
IMO the resource mentioned by @gmaze is a good start, but it's basically a re-implementation designed to reduce data exchange on clusters of machines. Quoting page 2:
"we developed a new framework called Distrim from scratch, aiming to minimize space and communication overheads as much as possible, and to maximize the usage of computational power of multicore clusters as much as possible"
I suggest a different methodology. One concept I find particularly interesting is the coreset.
A coreset is a subset of the original data that comes with theoretical guarantees on shape (the shape of a coreset is close to the shape of the original data).
Coresets have already proven useful for large-scale modeling of Gaussian mixtures, as well as for k-means and k-median clustering.
Proposed solution:
- implement a Coreset class (or method) using dask.arrays. This would return a subsample of the original dask.array as a numpy.array, along with associated weights for those points, also as a numpy array.
- tweak sklearn.GaussianMixture to accept weighted datasets, and run the clustering via this sklearn model
I believe this methodology is compatible with the current philosophy of dask-ml ("re-implement at scale if required, or simply allow sklearn estimators to scale with a different methodology"). It can also benefit other methods, not only GMMs.
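To make the proposal concrete, here is a minimal sketch of what such a coreset extractor could look like. It uses a simple importance-sampling scheme (a uniform term plus squared distance to the data mean) as an illustration; the `coreset` function name, its signature, and the sampling distribution are all assumptions for this sketch, not dask-ml or coreset-paper API.

```python
import dask.array as da
import numpy as np

def coreset(X, m, random_state=0):
    """Illustrative coreset sampler (hypothetical, not dask-ml API):
    draw m points from the dask array X with probability proportional
    to a uniform term plus squared distance to the data mean, and
    return the points and their importance weights as numpy arrays."""
    n = X.shape[0]
    mu = X.mean(axis=0)
    sq_dist = ((X - mu) ** 2).sum(axis=1)
    # q_i = 1/(2n) + d(x_i, mu)^2 / (2 * sum_j d(x_j, mu)^2)
    q = (0.5 / n + 0.5 * sq_dist / sq_dist.sum()).compute()
    q /= q.sum()  # guard against floating-point drift
    rng = np.random.default_rng(random_state)
    idx = rng.choice(n, size=m, p=q)
    points = X[idx].compute()        # the coreset fits in memory
    weights = 1.0 / (m * q[idx])     # unbiased importance weights
    return points, weights

X = da.random.random((10_000, 3), chunks=(1_000, 3))
points, weights = coreset(X, m=200)
```

The weighted `(points, weights)` pair is exactly what a weight-aware sklearn.GaussianMixture would then consume in step two of the proposal.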
Regards,
Rémi
References
- Scalable Training of Mixture Models via Coresets
- Coresets for k-Means and k-Median Clustering and their Applications
@TomAugspurger, a Coreset meta-estimator would be great!
Another way of achieving an equivalent goal would be to implement a CoresetTransformer that returns the data fully transformed (the set of points, weighted). But as far as I know, sklearn prohibits having a different number of rows between the input and output of a transformer...
Either solution suits me; I can try submitting a PR.
Yes, the Transformer would also work well, but I think it would require scikit-learn/enhancement_proposals#15. I haven't read through that in a while, and I don't know how it proposes to deal with weights.
Anyway, for now an implementation using a meta-estimator would be most welcome. I think the logic of selecting the coreset is likely to be the most difficult part, regardless of the API :)
That'd certainly be in scope. I likely won't have time to work on this until the end of the month, but may be able to after that.
Do you have any other references for parallel or distributed GMMs? That paper doesn't seem to be publicly available.
The paper is here:
Yang_et_al.IEEE2012.pdf
But I'm not sure this is the most relevant implementation for dask-ml; more literature review is needed.
Gave a quick skim of scikit-learn's implementation. Translating it to work on dask arrays doesn't look too difficult. Unless I missed something, the fanciest thing is a Cholesky decomposition, which is implemented in dask.array.
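For reference, the Cholesky factorization mentioned here is indeed available as `da.linalg.cholesky`, which works on square-chunked dask arrays. A small sketch (the matrix construction is just illustrative):

```python
import dask.array as da
import numpy as np

# Build a symmetric positive-definite matrix (a valid covariance matrix)
rng = np.random.default_rng(0)
a = rng.standard_normal((100, 100))
cov = a @ a.T + 100 * np.eye(100)

# Wrap it in a dask array; dask's cholesky requires square chunks
d_cov = da.from_array(cov, chunks=(50, 50))
L = da.linalg.cholesky(d_cov, lower=True)
factor = L.compute()

# The lower-triangular factor reconstructs the original matrix
assert np.allclose(factor @ factor.T, cov)
```

This is the per-component operation the EM loop would call when updating covariance matrices, so the translation from scikit-learn's code is mostly a matter of swapping numpy/scipy calls for their dask.array counterparts.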
@gmaze do you have any interest in working on this?
Hi all. I would like to flag my interest in this project as well; it doesn't look like there has been much activity in this area lately.
Does anyone have plans to work on this issue in the near term? I would be interested in contributing, but like @gmaze I would need help getting started.
Thanks for the update @gmaze.
Thanks for sharing @remiadon. One API question about your proposed Coreset class:
"This would return a subsample of the original dask.array as a numpy.array, along with associated weights for those points, also as a numpy array"
I see the suggestion of a method like coreset(*arrays) that handles all the logic of extracting a coreset from a dask Array. But for an end-user API, I instead think of some kind of meta-estimator like
>>> model = Coreset(sklearn.mixture.GaussianMixture())
>>> model.fit(big_X, big_y) # extracts the coreset, fits the weighted(?) sklearn GMM on the coreset (small, in memory)
>>> model.predict(big_X) # Dask Array of predictions
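A minimal sketch of what that meta-estimator could look like. The `Coreset` class here is hypothetical (not dask-ml API), and because sklearn's GaussianMixture does not accept sample weights, this sketch falls back to uniform subsampling rather than true weighted coreset fitting:

```python
import numpy as np
import dask.array as da
from sklearn.base import BaseEstimator, clone
from sklearn.mixture import GaussianMixture

class Coreset(BaseEstimator):
    """Hypothetical meta-estimator: pull a small subsample of a dask
    array into memory, fit the wrapped sklearn estimator on it, and
    predict back over the full dask array blockwise."""

    def __init__(self, estimator, m=1_000, random_state=0):
        self.estimator = estimator
        self.m = m
        self.random_state = random_state

    def fit(self, X, y=None):
        n = X.shape[0]
        rng = np.random.default_rng(self.random_state)
        idx = np.sort(rng.choice(n, size=min(self.m, n), replace=False))
        X_small = X[idx].compute()  # the "coreset" lives in memory
        self.estimator_ = clone(self.estimator).fit(X_small)
        return self

    def predict(self, X):
        # Map the fitted estimator's predict over each block lazily;
        # each (rows, features) block yields a 1-d array of labels
        return X.map_blocks(self.estimator_.predict, dtype=int, drop_axis=1)

model = Coreset(GaussianMixture(n_components=3, random_state=0), m=500)
X = da.random.random((20_000, 2), chunks=(5_000, 2))
labels = model.fit(X).predict(X)  # lazy dask array of cluster labels
```

Swapping the uniform subsample for an importance-sampled coreset (and an estimator that honors the weights) is the part the selection logic discussed above would fill in.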
I created a PR here: #799
This is a work in progress for now, as most of the sampling methods were designed for KMeans, and usage with Gaussian mixtures is still a bit obscure to me.