
pygmmis's Introduction


pyGMMis

Need a simple and powerful Gaussian-mixture code in pure python? It can be as easy as this:

import pygmmis
gmm = pygmmis.GMM(K=K, D=D)      # K components, D dimensions
logL, U = pygmmis.fit(gmm, data) # logL = log-likelihood, U = association of data to components

However, pyGMMis has a few extra tricks up its sleeve.

  • It can account for independent multivariate normal measurement errors for each of the observed samples, and then recover an estimate of the error-free distribution. This technique is known as "Extreme Deconvolution" (Bovy, Hogg & Roweis 2011).
  • It works with missing data (features) by setting the respective elements of the covariance matrix to a very large value, thus effectively giving the missing features zero weight; see the sketch after this list.
  • It can deal with gaps (aka "truncated data") and variable sample completeness as long as
    • you know the incompleteness over the entire feature space,
    • and the incompleteness does not depend on the sample density (missing at random).
  • It can incorporate a "background" distribution (a uniform one is implemented) and separate signal from background, with the former being fit by the GMM.
  • It keeps track of which components need to be evaluated in which regions of the feature space, thereby substantially increasing the performance for fragmented data.
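
A minimal sketch of the missing-data trick from the list above. It assumes data is the (N,D) sample array with NaNs marking missing features, and that pygmmis.fit accepts per-sample covariances via a covar argument (cf. step 3 under "How to run the code"):

import numpy as np
import pygmmis

huge = 1e10                           # effectively infinite variance
missing = np.isnan(data)              # (N, D) mask of missing features
data_ = np.where(missing, 0, data)    # placeholder values; they carry ~0 weight
covar = np.zeros((len(data), D, D))   # per-sample covariance matrices
i, d = np.nonzero(missing)
covar[i, d, d] = huge                 # missing features become uninformative

gmm = pygmmis.GMM(K=K, D=D)
logL, U = pygmmis.fit(gmm, data_, covar=covar)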

If you want more context and details on those capabilities, have a look at this blog post.

Under the hood, pyGMMis uses the Expectation-Maximization procedure. When dealing with sample incompleteness, it generates its best guess of the unobserved samples on the fly, given the current model fit to the observed samples.

[figure: example of pyGMMis]

In the example above, the true distribution is shown as contours in the left panel. We then draw 400 samples from it (red), add Gaussian noise to them (1,2,3 sigma contours shown in blue), and select only samples within the box but outside of the circle (blue).
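
A sketch of what the selection in this example could look like as a completeness callback (the box and circle parameters below are made up for illustration):

import numpy as np

def sel_box_minus_circle(coords, center=np.zeros(2), radius=2, lo=-5, hi=5):
    """Observe samples inside the box [lo, hi]^2 but outside the circle."""
    in_box = np.all((coords >= lo) & (coords <= hi), axis=1)
    in_circle = np.sum((coords - center)**2, axis=1) < radius**2
    return in_box & ~in_circle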

The code is written in pure python (developed and tested on 2.7), parallelized with multiprocessing, and capable of performing density estimation with millions of samples and thousands of model components on machines with sufficient memory.

More details are in the paper listed in the file CITATION.cff.

Installation and Prerequisites

You can either clone the repo and install it via python setup.py install, or get the latest release with

pip install pygmmis

Dependencies:

  • numpy
  • scipy
  • multiprocessing (part of the standard library in Python 3)
  • parmap

How to run the code

  1. Create a GMM object with the desired component number K and data dimensionality D: gmm = pygmmis.GMM(K=K, D=D)

  2. Define a callback for the completeness function. It will be called with data of shape (N,D) and must return the probability of each sample being observed. Two simple examples:

    def cutAtSix(coords):
        """Select all samples whose first coordinate is < 6"""
        return (coords[:,0] < 6)

    def selSlope(coords, rng=np.random):
        """Select probabilistically according to first coordinate x:
        Omega = 1    for x < 0
              = 1-x  for x = 0 .. 1
              = 0    for x > 1
        """
        return np.maximum(0, np.minimum(1, 1 - coords[:,0]))
  3. If the samples are noisy (i.e. they have positional uncertainties), you need to provide the covariance matrix of each data sample, or a single one for all samples in the case of i.i.d. noise; see the sketch below.
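
    A minimal sketch of the two cases (it assumes the covariances are passed to pygmmis.fit through a covar argument):

    # i.i.d. noise: a single (D,D) covariance applies to every sample
    covar = np.eye(D) * 0.5**2

    # alternatively: one covariance per sample, shape (N, D, D)
    covar = np.tile(np.eye(D) * 0.5**2, (len(data), 1, 1))

    logL, U = pygmmis.fit(gmm, data, covar=covar)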

  4. If the samples are noisy and the completeness function isn't constant, you need to provide a callback function that returns an estimate of the covariance at arbitrary locations:

    # example 1: simply using the same covariance for all samples
    dispersion = 1
    default_covar = np.eye(D) * dispersion**2
    covar_cb = lambda coords: default_covar

    # example 2: use the covariance of the nearest neighbor in the data
    def covar_tree_cb(coords, tree, covar):
        """Return the covariance of the nearest neighbor of coords in data."""
        dist, ind = tree.query(coords, k=1)
        return covar[ind.flatten()]

    from sklearn.neighbors import KDTree
    tree = KDTree(data, leaf_size=100)

    from functools import partial
    covar_cb = partial(covar_tree_cb, tree=tree, covar=covar)  # covar: per-sample (N,D,D) array from step 3
  5. If there is a uniform background signal, you need to define it. Because a uniform distribution is normalizable only if its support is finite, you need to decide on the footprint over which the background model is present, e.g.:

    footprint = data.min(axis=0), data.max(axis=0)
    amp = 0.3
    bg = pygmmis.Background(footprint, amp=amp)
    
    # fine tuning, if desired
    bg.amp_min = 0.1
    bg.amp_max = 0.5
    bg.adjust_amp = False # freezes bg.amp at current value
  6. Select an initialization method. This tells the GMM what initial parameters it should assume. The options are 'minmax', 'random', 'kmeans', 'none'. See the respective functions for details:

    • pygmmis.initFromDataMinMax()
    • pygmmis.initFromDataAtRandom()
    • pygmmis.initFromKMeans()

    For difficult situations, or if you are not happy with the convergence, you may want to experiment with your own initialization. All you have to do is set gmm.amp, gmm.mean, and gmm.covar to desired values and use init_method='none'.
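
    For instance, a purely illustrative sketch (the values below are arbitrary starting guesses, not recommendations):

    gmm.amp = np.ones(K) / K                    # equal amplitudes, summing to 1
    gmm.mean = np.random.uniform(data.min(axis=0), data.max(axis=0), size=(K, D))
    gmm.covar = np.tile(np.eye(D), (K, 1, 1))   # unit covariance for every component
    logL, U = pygmmis.fit(gmm, data, init_method='none')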

  7. Decide whether to freeze any components. This makes sense if you already know some of the parameters of the components. You can freeze the amplitude, mean, or covariance of any component by listing them in a dictionary, e.g.:

    frozen={"amp": [1,2], "mean": [], "covar": [1]}

    This freezes the amplitudes of components 1 and 2 (NOTE: counting starts at 0) and the covariance of component 1.

  8. Run the fitter:

    w = 0.1    # minimum covariance regularization, same units as data
    cutoff = 5 # segment the data set into neighborhoods within 5 sigma around components
    tol = 1e-3 # tolerance on logL to terminate EM
    
    # define RNG for deterministic behavior
    from numpy.random import RandomState
    seed = 42
    rng = RandomState(seed)
    
    # run EM
    logL, U = pygmmis.fit(gmm, data, init_method='random',
                          sel_callback=cb, covar_callback=covar_cb, w=w, cutoff=cutoff,
                          background=bg, tol=tol, frozen=frozen, rng=rng)

    This runs the EM procedure until tolerance is reached and returns the final mean log-likelihood of all samples, and the neighborhood of each component (indices of data samples that are within cutoff of a GMM component).

  9. Evaluate the model:

    # p(x) of the model; use as_log=True for ln p(x)
    p = gmm(test_coords, as_log=False)
    N_s = 1000
    # draw samples from GMM
    samples = gmm.draw(N_s)
    
    # draw samples from the model with noise, background, and selection:
    # if you want to get the missing samples instead, set invert_sel=True.
    # N_orig is the estimated number of samples prior to selection
    obs_size = len(data)
    samples, covar_samples, N_orig = pygmmis.draw(gmm, obs_size, sel_callback=cb,
                                                  invert_sel=False, orig_size=None,
                                                  covar_callback=covar_cb, background=bg)

For a complete example, have a look at the test script. For requests and bug reports, please open an issue.

pygmmis's People

Contributors

adrn, pmelchior, zeehio


pygmmis's Issues

Code is unlicensed

Hi @pmelchior,

great paper and your code looks really useful, I'm excited about trying to apply this for classification in transient astronomy.

However, I just realized you haven't stated a license. I'm guessing this is an oversight (since the readme says to 'cite the paper if you use the code'), but it effectively implies the code is "look, don't touch without explicit permission" in its current state. BSD, MIT, Apache, or GPL would be great :)

Cheers!
-Tim

Can't install via pip: multiprocessing is already present in Python 3

$ pip install pygmmis
Collecting pygmmis
  Using cached pygmmis-1.1.3-py2.py3-none-any.whl (22 kB)
Requirement already satisfied: numpy in /home/gabriel/miniconda3/envs/py3/lib/python3.7/site-packages (from pygmmis) (1.18.1)
Collecting parmap
  Using cached parmap-1.5.2-py2.py3-none-any.whl (15 kB)
Requirement already satisfied: scipy in /home/gabriel/miniconda3/envs/py3/lib/python3.7/site-packages (from pygmmis) (1.4.1)
Collecting multiprocessing
  Using cached multiprocessing-2.6.2.1.tar.gz (108 kB)
    ERROR: Command errored out with exit status 1:
     command: /home/gabriel/miniconda3/envs/py3/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-redkuhhn/multiprocessing/setup.py'"'"'; __file__='"'"'/tmp/pip-install-redkuhhn/multiprocessing/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-redkuhhn/multiprocessing/pip-egg-info
         cwd: /tmp/pip-install-redkuhhn/multiprocessing/
    Complete output (6 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-redkuhhn/multiprocessing/setup.py", line 94
        print 'Macros:'
                      ^
    SyntaxError: Missing parentheses in call to 'print'. Did you mean print('Macros:')?
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

According to this SO post, multiprocessing is already included in Python 3.

Fitting two simple Gaussians

Hi, I have another issue. I'm trying to fit 2 Gaussian components in 2D onto samples obtained from a combination of two simple Gaussians. The example is pretty basic and the expectation is for pygmmis to find the two Gaussian centers easily. However, that is not the case.

Please check the minimal example below. There are two Gaussian components centered at (-1.5, 0) and (1.5, 0). pygmmis fits a single Gaussian at (-0.34, 0).

Is this expected? Am I doing something wrong?

Thanks!

import numpy as np
import pygmmis

def gaus2d(xy, amplitude=1, mx=0, my=0, sx=1, sy=1, offset=0):
    x, y = xy
    g = offset + amplitude / (2. * np.pi * sx * sy) * np.exp(-((x - mx)**2. / (2. * sx**2.) + (y - my)**2. / (2. * sy**2.)))
    return g.ravel()

NPOINTS=201
x = np.linspace(-5, 5, NPOINTS)
y = np.linspace(-5, 5, NPOINTS)
x, y = np.meshgrid(x, y)
z = gaus2d((x, y), 15, -1.5, 0, 1, 1, 0)
g2dx2 = z.reshape(NPOINTS, NPOINTS)
z = gaus2d((x, y), 10, 1.5, 0, 1, 1, 0)
g2dx2 += z.reshape(NPOINTS, NPOINTS)

def normalize(img):
    positive = img - np.min(img)
    return positive / np.max(positive)

lkimghist = normalize(g2dx2) * 100
xmin, xmax, ymin, ymax = (x.min(), x.max(), y.min(), y.max())
samples = []
for i in range(NPOINTS):
    for j in range(NPOINTS):
        for n in range(int(lkimghist[i, j])):
            samples.append([x[i,j], y[i,j]])
samples = np.array(samples)
gmm = pygmmis.GMM(K=2, D=2)
mins = np.min(samples, axis=0)
maxs = np.max(samples, axis=0)
def selcallback(smpls):
    return np.all(np.all([smpls <= maxs, smpls >= mins], axis=0), axis=1)
pygmmis.fit(gmm, samples, sel_callback=selcallback, maxiter=50)

Divide by zero warning?

Hi Peter,

Thank you for developing this powerful tool. Unfortunately I have an issue while applying it to my data; could you please take a look at it? My Python version is 3.6.2 and all my site-packages are up to date.

gmm_logL(simu,20)

The function gmm_logL is:

import time
import numpy as np
import matplotlib.pyplot as plt
import pygmmis

def gmm_logL(simu, K=10):     # calculate log-likelihood of GMM fitting for different K
    logL = []
    if type(K) is list:
        K_list = K
    else:
        K_list = list(np.arange(1, K+1))
    for i in K_list:
        start = time.time()
        try:
            gmm = pygmmis.GMM(K=i, D=len(simu[0]))
            logl, U = pygmmis.fit(gmm, simu, w=1e-6)
            logL.append(logl)
        except (Exception, RuntimeWarning):
            print('There is an error with K = %d' % (i))
            K_list.remove(i)

        print("Time used for K = %d is %gs, min amplitude is %g" % (i, (time.time() - start), np.min(gmm.amp)))
    plt.plot(K_list, logL, linewidth=5)
    plt.title('Log-likelihood of GMM fitting')
    plt.xlabel('$K$')
    plt.ylabel(r'log-likelihood $\ln U$')
    plt.show()

and the file for simu is as below:
simulation100000.txt

You can simply use simu = np.loadtxt('simulation100000.txt') to load it.

I ran the code as above in an IPython console, and the traceback log is below:

D:\Anaconda\lib\site-packages\pygmmis.py:910: RuntimeWarning: divide by zero encountered in log
  log_S[H] = np.log(log_S[H])

D:\Anaconda\lib\site-packages\pygmmis.py:153: RuntimeWarning: invalid value encountered in add
  return np.log(np.exp(logX + c[c_shape]).sum(axis=axis)) - c

D:\Anaconda\lib\site-packages\pygmmis.py:766: RuntimeWarning: invalid value encountered in greater
  moved = np.flatnonzero(shift2 > shift_cutoff)

D:\Anaconda\lib\site-packages\numpy\linalg\linalg.py:1741: RuntimeWarning: invalid value encountered in slogdet
  sign, logdet = _umath_linalg.slogdet(a, signature=signature)
D:\Anaconda\lib\site-packages\numpy\linalg\linalg.py:1741: RuntimeWarning: invalid value encountered in slogdet
  sign, logdet = _umath_linalg.slogdet(a, signature=signature)
D:\Anaconda\lib\site-packages\numpy\linalg\linalg.py:1741: RuntimeWarning: invalid value encountered in slogdet
  sign, logdet = _umath_linalg.slogdet(a, signature=signature)
D:\Anaconda\lib\site-packages\numpy\linalg\linalg.py:1741: RuntimeWarning: invalid value encountered in slogdet
  sign, logdet = _umath_linalg.slogdet(a, signature=signature)

Above are the runtime warnings I got while running gmm_logL(simu, 20). It then seemed to fall into an infinite loop for more than 2 hours, so I pressed Ctrl+C to stop it.
It didn't stop immediately, but only after generating this output:

^C
Process SpawnPoolWorker-7:
Process SpawnPoolWorker-6:
Process SpawnPoolWorker-5:
Process SpawnPoolWorker-3:
Process SpawnPoolWorker-2:
Process SpawnPoolWorker-1:
Process SpawnPoolWorker-4:
Process SpawnPoolWorker-8:
Traceback (most recent call last):
  File "D:\Anaconda\lib\multiprocessing\process.py", line 249, in _bootstrap
    self.run()
  File "D:\Anaconda\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Anaconda\lib\multiprocessing\pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "D:\Anaconda\lib\multiprocessing\pool.py", line 44, in mapstar
    return list(map(*args))
  File "D:\Anaconda\lib\site-packages\parmap\parmap.py", line 116, in _func_star_many
    **func_items_args[3])
  File "D:\Anaconda\lib\site-packages\pygmmis.py", line 932, in _Esum
    dx = d_ - gmm.mean[k]
KeyboardInterrupt
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
  File "D:\Anaconda\lib\multiprocessing\process.py", line 249, in _bootstrap
    self.run()
  File "D:\Anaconda\lib\multiprocessing\process.py", line 249, in _bootstrap
    self.run()
Traceback (most recent call last):
  File "D:\Anaconda\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Anaconda\lib\multiprocessing\process.py", line 249, in _bootstrap
    self.run()
  File "D:\Anaconda\lib\multiprocessing\process.py", line 249, in _bootstrap
    self.run()
  File "D:\Anaconda\lib\multiprocessing\process.py", line 249, in _bootstrap
    self.run()
  File "D:\Anaconda\lib\multiprocessing\pool.py", line 108, in worker
    task = get()
  File "D:\Anaconda\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Anaconda\lib\multiprocessing\queues.py", line 341, in get
    with self._rlock:
  File "D:\Anaconda\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Anaconda\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Anaconda\lib\multiprocessing\pool.py", line 108, in worker
    task = get()
  File "D:\Anaconda\lib\multiprocessing\process.py", line 249, in _bootstrap
    self.run()
  File "D:\Anaconda\lib\multiprocessing\synchronize.py", line 96, in __enter__
    return self._semlock.__enter__()
  File "D:\Anaconda\lib\multiprocessing\pool.py", line 108, in worker
    task = get()
  File "D:\Anaconda\lib\multiprocessing\pool.py", line 108, in worker
    task = get()
  File "D:\Anaconda\lib\multiprocessing\queues.py", line 341, in get
    with self._rlock:
  File "D:\Anaconda\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Anaconda\lib\multiprocessing\pool.py", line 108, in worker
    task = get()
  File "D:\Anaconda\lib\multiprocessing\queues.py", line 341, in get
    with self._rlock:
  File "D:\Anaconda\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Anaconda\lib\multiprocessing\queues.py", line 341, in get
    with self._rlock:
  File "D:\Anaconda\lib\multiprocessing\queues.py", line 341, in get
    with self._rlock:
  File "D:\Anaconda\lib\multiprocessing\synchronize.py", line 96, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
  File "D:\Anaconda\lib\multiprocessing\synchronize.py", line 96, in __enter__
    return self._semlock.__enter__()
  File "D:\Anaconda\lib\multiprocessing\pool.py", line 108, in worker
    task = get()
  File "D:\Anaconda\lib\multiprocessing\synchronize.py", line 96, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
  File "D:\Anaconda\lib\multiprocessing\synchronize.py", line 96, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
KeyboardInterrupt
KeyboardInterrupt
  File "D:\Anaconda\lib\multiprocessing\queues.py", line 341, in get
    with self._rlock:
  File "D:\Anaconda\lib\multiprocessing\synchronize.py", line 96, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
  File "D:\Anaconda\lib\multiprocessing\process.py", line 249, in _bootstrap
    self.run()
  File "D:\Anaconda\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Anaconda\lib\multiprocessing\pool.py", line 108, in worker
    task = get()
  File "D:\Anaconda\lib\multiprocessing\queues.py", line 342, in get
    res = self._reader.recv_bytes()
  File "D:\Anaconda\lib\multiprocessing\connection.py", line 216, in recv_bytes
    buf = self._recv_bytes(maxlength)
  File "D:\Anaconda\lib\multiprocessing\connection.py", line 306, in _recv_bytes
    [ov.event], False, INFINITE)
KeyboardInterrupt

Thank you for your time and patience!

Background amplitude fitting as part of normal EM, not hierarchical

Currently, we make a distinction between samples belonging to the background vs. the signal, and only the signal samples are fed to the GMM. This is suboptimal for samples that have comparable probabilities of being from the background or the signal.
A more natural option is to compute the q_ik for the GMM and the background, and then renormalize the amplitudes of background and signal together. This could reduce some of the code complexity and give better results for the GMM means and covariances.
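
A rough sketch of the proposed joint E-step (hypothetical, not the current implementation; p_k and p_bg are placeholder functions for the per-sample densities of component k and the uniform background):

# joint responsibilities: K signal components plus the background as row K
q = np.empty((K + 1, N))
for k in range(K):
    q[k] = gmm.amp[k] * p_k(data, k)   # p_k: density of component k (placeholder)
q[K] = bg.amp * p_bg(data)             # p_bg: uniform background density (placeholder)
q /= q.sum(axis=0)                     # each sample's responsibilities sum to 1

# the M-step amplitude update then renormalizes signal and background together
amps = q.sum(axis=1) / N               # sums to 1 across components + background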

[Errno 12] Cannot allocate memory

Hi, this library works fine when I run it for a single group of data for curve fitting.
But when I iterate through multiple groups of data and try to fit a curve for each group, it works fine until ~300 iterations; after that it raises a memory error.
Any fixes for this? I know the problem is happening somewhere in multiprocessing.

Early termination with truncated samples

When running some tests on simple models (2 dimensions and 3/4 components), I find that the untruncated fit (i.e. sel_callback=None) performs better on these models than the fit with the selection function (i.e. sel_callback=selection_function).

I've created a gist with my script here

The likelihood seems to decrease immediately when moving away from the k-means estimate.
However, if I disable convergence detection and let them run forever, then we see convergence for some split-merge runs. In fact, the log_L jumps around quite a bit before finally converging (sometimes).

The figures below show my problem; the ellipses are 1-sigma. Observed data is in green, unobserved data in red.
Red ellipses are the k-means estimate, black ellipses are the pygmmis estimate, and blue ellipses are the truth.

With tolerance: [figure: pygmmis]

Without tolerance: [figure: pygmmis-converged]

Log-likelihood: [figure: log_like]

We need to detect convergence better, otherwise it'll get stuck in a local minimum. Maybe a t-test for flat lines with tolerances?

Arbitrary termination conceals unstable EM steps?

I have found that, for some truncated models, the log-likelihood decreases immediately from the starting point, and the EM steps taken by GMM.fit can make things worse.
If I remove the convergence conditions for these models and just run until maxiter is reached, the EM steps always result in a lower logL.
I was under the impression that EM is supposed to guarantee that the likelihood increases.
For the model I use here (where green is the ground truth):

[figure: aggressive]

  • If I keep the convergence checks, the model doesn't move away from the k-means result (i.e. no steps are taken, since the logL decreases). This is shown in black.
  • If I remove the convergence checks but set maxiter=300, then I get something that appears reasonable (shown in red). But this is arbitrary: if I allow it to continue, it degrades into rubbish (the result after 700 more steps is shown in blue).

So EM appears not to work for this model: the likelihood always decreases, despite the fact that I gave it the k-means estimate based on all data, not just the observed samples.

This doesn't happen for all models. If I use a less aggressive truncation, then the model stabilises as it should:

[figure: not-aggressive]

My model is here and produces the plots above.

================
I discovered this effect after implementing my own convergence detection technique to deal with problems such as the one I've seen in #11.
The log-likelihood decreases but then goes back up higher than before, so the original convergence checks would terminate early.

[figure: logL]

My plan was to fix this with a gradient test on my feature/convergence branch (#12), which also includes tools to visualise what's happening and a backend to store the EM steps. This method fixed that problem, but it can't solve this one.

Comments and suggestions would be really helpful!

Thanks

numpy.linalg.linalg.LinAlgError: Singular matrix

Hi,
I'm using this library and restricting the mean update. I had initialized gmm.mean, gmm.amp, and gmm.covar.
My data is 1D and integer-valued, and I'm trying to fit a GMM to it.
This is the error I'm getting: LinAlgError: Singular matrix

import numpy as np
import pygmmis

gmm = pygmmis.GMM(K=3, D=1) # K components, D dimensions
gmm.amp = np.array([0.33, 0.33, 0.33])
gmm.mean = np.array([[7], [14], [30]])
gmm.covar = np.array([[[2]], [[3]], [[5]]])
logL, U = pygmmis.fit(gmm, t_data, init_method='none', frozen={"amp": [], "mean": [0,1,2], "covar": []})

Handling Missing Data

Hello Peter,

I was wondering if I could use pyGMMis to handle missing data in a dataset. For example, I've constructed a dummy dataset that has a few missing features, and I'm trying to use pyGMMis to estimate them, but I can't find out how.

array([[  1.,   5.],
       [  2.,   5.],
       [  3.,   5.],
       [  4.,   5.],
       [  5.,   5.],
       [  6.,  nan],
       [ nan,   5.]], dtype=float32)
I'm trying to follow the example in the README or the test script.

Thank you.

Empty "U" / responsibilities from "fit"

Hi!

When running pygmmis.fit on my data, it fits the data nicely but I do not get any resulting responsibilities for the components.
logL, U = pygmmis.fit(gmm, coords)

results in
U = [None, None]

Do you have any idea what that could be, how to debug it, or how to work around it?

macOS 10.15, Python 3.7, pygmmis 1.1.4

Cutoff: explanation/documentation

In performing some tests of pygmmis I have found that varying the cutoff argument drastically changes the end result of the fit, even with split-and-merge turned on (and exhaustive).

My understanding of EM is that the responsibilities r_ik are calculated for all data and all components. Why, then, does pygmmis use a cutoff to fit only those data in the neighbourhood of each component? As far as I can understand, cutoff != inf simply means that some data will be labelled as not belonging to any component.

Is the reason something to do with the background or is it just to avoid outliers?

Thanks

P.S. This code is very cool!

fix component

Hello,

Great software and paper! I was wondering if you have implemented the ability to fix some of the parameters of one of the components (mean and covariance) while estimating the parameters of the others via EM. This would be very useful for one of my research projects.

see jobovy/extreme-deconvolution#9

Thank you,

Joe

Troubled by 1 instance of np.flatnonzero

Hi,

I am troubled by one instance of np.flatnonzero. In _Esum, the lines:

    if U_k is None:
        U_k = np.flatnonzero(indices)

Removing the 0 index makes U_k one element shorter than chi2, T_inv_k, and log_p[k]. I don't see why the zero index is removed.

Sincerely,
William Gandler

Test script: ModuleNotFoundError: No module named 'parmap'

Running the test script results in:

initializing spheres with s=3.13 near data points
ITER    SAMPLES LOG_L   STTraceback (most recent call last):
  File "/home/gabriel/Descargas/pygmmis_test.py", line 254, in <module>
    l[r], _ = pygmmis.fit(gmms[r], data, w=w, cutoff=cutoff, background=bg, rng=rng)
  File "/home/gabriel/miniconda3/envs/py3/lib/python3.7/site-packages/pygmmis.py", line 667, in fit
    log_L, N, N2 = _EM(gmm, log_p, U, T_inv, log_S, H, data_, covar=covar_, R=R, sel_callback=sel_callback, oversampling=oversampling, covar_callback=covar_callback, w=w, pool=pool, chunksize=chunksize, cutoff=cutoff, background=background, p_bg=p_bg, changeable=changeable, maxiter=maxiter, tol=tol, rng=rng)
  File "/home/gabriel/miniconda3/envs/py3/lib/python3.7/site-packages/pygmmis.py", line 770, in _EM
    log_L_, N, N2, N0 = _EMstep(gmm, log_p, U, T_inv, log_S, H, N0, data, covar=covar, R=R,  sel_callback=sel_callback, oversampling=oversampling, covar_callback=covar_callback, background=background, p_bg=p_bg , w=w, pool=pool, chunksize=chunksize, cutoff=cutoff_nd, tol=tol, changeable=changeable, it=it, rng=rng)
  File "/home/gabriel/miniconda3/envs/py3/lib/python3.7/site-packages/pygmmis.py", line 828, in _EMstep
    log_L = _Estep(gmm, log_p, U, T_inv, log_S, H, data, covar=covar, R=R, background=background, p_bg=p_bg, pool=pool, chunksize=chunksize, cutoff=cutoff, it=it)
  File "/home/gabriel/miniconda3/envs/py3/lib/python3.7/site-packages/pygmmis.py", line 886, in _Estep
    import parmap
ModuleNotFoundError: No module named 'parmap'

Installing parmap obviously solves the issue.

High memory usage

I'm trying to use PyGMMis with 5 clusters on a dataset with about 20k samples of 7 dimensions. The result is that the fit method eats up all the available memory on the machine (almost 60GB). I wasn't able to call fit on a dataset with more than 6000 samples.

Is this a known problem? Is PyGMMis capable of processing somewhat larger datasets? Is there a configuration option I am missing?

I am running this on Python 3.7.6 with something like:

import numpy as np
from pygmmis import GMM, fit

def selcallback(smpls):
    return np.all(np.all([smpls <= maxs, smpls >= mins], axis=0), axis=1)

gmm = GMM(K=5, D=7)
fit(gmm, samples, sel_callback=selcallback, maxiter=20, tol=0.001)

Thanks!

Symmetric Gaussian?

Hello, thank you for this very useful package. It looks like it can do very nearly what I need for a project, but I am hoping that it may be possible to constrain the shape (but not scale) of a Gaussian component. Specifically, I'd like a Gaussian to be circular, with covariance proportional, but not equal, to [[1, 0],[0,1]]. Doesn't look like pygmmis can do that, but thought I'd ask since I saw in #1 that there was some built-in functionality that was just not exposed to the fit function. Cheers!
