empca's Introduction

empca: Weighted Expectation Maximization Principal Component Analysis

Classic PCA is great but it doesn't know how to handle noisy or missing data properly. This module provides Weighted Expectation Maximization PCA, an iterative method for solving PCA while properly weighting data. Missing data is simply the limit of weight=0.

Given data[nobs, nvar] and weights[nobs, nvar],

m = empca(data, weights, options...)

That returns a Model object m, from which you can inspect the eigenvectors, coefficients, and reconstructed model, e.g.

pylab.plot( m.eigvec[0] )
pylab.plot( m.data[0] )
pylab.plot( m.model[0] )
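
A minimal end-to-end sketch on synthetic data (the niter/nvec values are arbitrary, and the shapes noted in the comments are inferred from the usage above):

import numpy as np
from empca import empca

# Toy data: a rank-3 signal in 200 observations of 50 variables, plus noise
nobs, nvar, rank = 200, 50, 3
t = np.linspace(0, 1, nvar)
true_vecs = np.array([np.sin(2 * np.pi * (k + 1) * t) for k in range(rank)])
signal = np.random.normal(size=(nobs, rank)).dot(true_vecs)
sigma = 0.1
data = signal + np.random.normal(scale=sigma, size=signal.shape)
weights = np.full(data.shape, 1.0 / sigma**2)   # inverse-variance weights

m = empca(data, weights, niter=25, nvec=3)

print(m.eigvec.shape)   # expected (nvec, nvar): the eigenvectors
print(m.coeff.shape)    # expected (nobs, nvec): per-observation coefficients
print(m.model.shape)    # expected (nobs, nvar): the reconstruction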

If you want to apply the model to new data:

m.set_data(new_data, new_weights)

and then it will recalculate m.coeff, m.model, m.rchi2, etc. for the new data.

m.R2() is the fraction of data variance explained by the model, while m.R2vec(i) is the amount of variance explained by eigenvector i.
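
Continuing the sketch above, applying the fitted model to a fresh noise realization and inspecting the variance breakdown might look like:

new_data = signal + np.random.normal(scale=sigma, size=signal.shape)
m.set_data(new_data, weights)     # recomputes m.coeff, m.model, m.rchi2

print(m.R2())                     # fraction of data variance explained overall
for i in range(3):
    print(i, m.R2vec(i))          # variance explained by eigenvector i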

This implementation of EMPCA does not subtract the mean from the data. If you don't subtract the mean yourself, it will still work, with the first eigenvector likely being something similar to the mean.
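
If you prefer to remove the mean yourself, a simple sketch (using an unweighted per-variable mean; a weighted mean may be more appropriate when the weights vary strongly) is:

mean = data.mean(axis=0)                            # per-variable mean
m = empca(data - mean, weights, niter=25, nvec=3)
reconstruction = m.model + mean                     # back in the original units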

For comparison, two alternative methods are implemented that also return a Model object:

m = lower_rank(data, weights, options...)
m = classic_pca(data)  #- but no weights or even options...
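
For a quick side-by-side comparison on the same data (a sketch; whether lower_rank accepts the same niter/nvec options is an assumption, and classic_pca ignores the weights entirely):

from empca import empca, lower_rank, classic_pca

m_empca   = empca(data, weights, niter=25, nvec=3)
m_lowrank = lower_rank(data, weights, nvec=3)   # assumed to take similar options
m_classic = classic_pca(data)                   # unweighted

print('empca       R2 =', m_empca.R2())
print('lower_rank  R2 =', m_lowrank.R2())
print('classic_pca R2 =', m_classic.R2())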

Everything is self-contained in empca.py. Just put that into your PYTHONPATH and run "pydoc empca" for more details. For a quick test on toy example data, run

python empca.py

This requires numpy and scipy; it will make plots if you have pylab installed.

The paper S. Bailey 2012, PASP, 124, 1015 describes the underlying math and is available as a pre-print at:

http://arxiv.org/abs/1208.4122

If you use this code in an academic paper, please include a citation as described in CITATION.txt, and optionally an acknowledgement such as:

This work uses the Weighted EMPCA code by Stephen Bailey, available at https://github.com/sbailey/empca/

The examples in the paper were prepared with version v0.2 of the code.

Stephen Bailey, Summer 2012

empca's Issues

Dealing with large data set

I am trying to run with a large data set, ~200,000 eBOSS spectra, and stumbled upon an issue with memory.
What would be the best strategy to deal with that?
Is there a float32 option, or should I split the spectra I am working with in half along lambdaRF and stitch the results back together as best I can afterwards?

INFO: Starting EMPCA
       iter        R2             rchi2
Traceback (most recent call last):
  File "<HOME>/redvsblue/bin//redvsblue_compute_PCA.py", line 205, in <module>
    model = empca.empca(pcaflux, weights=pcaivar, niter=args.niter, nvec=args.nvec)
  File "<HOME>/Programs/sbailey/empca/empca.py", line 307, in empca
    model.solve_eigenvectors(smooth=smooth)
  File "<HOME>/Programs/sbailey/empca/empca.py", line 142, in solve_eigenvectors
    data -= np.outer(self.coeff[:,k], self.eigvec[k])    
  File "<HOME>/.local/lib/python3.6/site-packages/numpy/core/numeric.py", line 1203, in outer
    return multiply(a.ravel()[:, newaxis], b.ravel()[newaxis, :], out)
MemoryError
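
One possible workaround, offered only as a sketch (empca's internals may still allocate float64 temporaries, so downcasting the inputs may or may not reduce peak memory, and fitting on a subsample changes the fit itself; the niter/nvec values are illustrative):

import numpy as np
import empca

# pcaflux, pcaivar: the arrays from the traceback above
# Downcast the inputs to halve their own memory footprint
pcaflux = pcaflux.astype(np.float32)
pcaivar = pcaivar.astype(np.float32)

# Solve the eigenvectors on a random subsample of the ~200,000 spectra...
idx = np.random.choice(len(pcaflux), size=20000, replace=False)
m = empca.empca(pcaflux[idx], weights=pcaivar[idx], niter=10, nvec=4)

# ...then apply the model to the full set (recomputes coeff, model, rchi2),
# which avoids the per-eigenvector temporaries built during the fit itself
m.set_data(pcaflux, pcaivar)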

NaN not allowed in input data even if weight=0

As reported by Lingfeng Cheng at Cornell, input data cannot have NaN values even if the corresponding values are masked with weights=0. Normally masked data (weights=0) are effectively ignored via line 247:

b = A.T.dot( w*b )

But if any of the data in b are NaN, this produces a NaN rather than 0 for that element even when w (=weights) is 0, which leads to an error like:

ValueError: On entry to DGELSD parameter number 6 had an illegal value

At minimum, catch NaNs in the input and report a meaningful error. Explore whether masked NaNs can be ignored without rewriting or recopying the input array.
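
Until that is handled inside empca, a hedged caller-side workaround is to zero out the masked NaNs (and force weight=0 wherever the data are non-finite) before fitting; the variable names here are placeholders:

import numpy as np
import empca

# data, weights: your input arrays of equal shape
finite = np.isfinite(data)
clean_weights = np.where(finite, weights, 0.0)   # force weight=0 at NaNs
clean_data    = np.where(finite, data, 0.0)      # value is irrelevant where weight=0

m = empca.empca(clean_data, weights=clean_weights, niter=25, nvec=5)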

RuntimeWarning: line 129

Hi,

I just started using empca and it works great. However, in a few instances I get the following warning, which seems to make the PCA stop:
empca.py:129: RuntimeWarning: invalid value encountered in double_scalars
self.eigvec[k, j] = x.dot(cw) / c.dot(cw)

Any advice on how to get rid of this message?

Thanks in advance.
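
Without seeing the data this is only a guess, but a 0/0 at that line would be consistent with some variable having zero total weight. A quick diagnostic sketch (the zero-weight explanation is an assumption) to run on the weights array before calling empca:

import numpy as np

# Columns (variables) that no observation constrains, and fully masked rows
print('zero-weight columns:', np.where(weights.sum(axis=0) == 0)[0])
print('zero-weight rows   :', np.where(weights.sum(axis=1) == 0)[0])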

missing data imputation

Hi, can you please tell me how I can use this program for missing data imputation? It is written that

Missing data is simply the limit of weight=0.

But I do not understand where to set weight = 0. When I did

m0 = empca(noisy_data, weights=0, niter=20)

it gives an error:

File "empca.py", line 290, in empca
assert data.shape == weights.shape

Can you please help me? I want to use your code for an imputation problem. Thank you.
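
For imputation, weights must be an array with the same shape as the data: 0 at the missing entries and a positive value (for example 1, or an inverse variance) elsewhere. The reconstruction m.model then supplies values at the missing positions. A minimal sketch, assuming missing entries in noisy_data are marked with NaN and that niter=20, nvec=5 are reasonable choices:

import numpy as np
from empca import empca

missing = ~np.isfinite(noisy_data)
weights = np.where(missing, 0.0, 1.0)          # 0 = missing, 1 = observed
filled  = np.where(missing, 0.0, noisy_data)   # placeholder where weight is 0

m0 = empca(filled, weights=weights, niter=20, nvec=5)

imputed = noisy_data.copy()
imputed[missing] = m0.model[missing]           # read imputed values off the model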

warning when using weights

I get the following warning when using weights; I bet it can simply be corrected:

<me>/Programs/sbailey/empca/empca.py:256: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  x = np.linalg.lstsq(A, b)[0]

Here is a minimal example of code:

import scipy as sp
import empca

flux = sp.loadtxt('exemple_flux.txt')
weights = sp.loadtxt('exemple_weight.txt')
model = empca.empca(flux, niter=1, nvec=1, weights=weights)

(Attached files: exemple_flux.txt, exemple_weight.txt)
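
The warning text itself points to the fix: pass rcond explicitly to np.linalg.lstsq inside empca.py. A one-line patch sketch (rcond=-1 keeps the old behaviour, rcond=None opts into the new default):

# In empca.py, at the line quoted in the warning:
x = np.linalg.lstsq(A, b, rcond=-1)[0]   # or rcond=None for the new default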
