
gptk's Issues

Method naming in CovarianceFunction

The CovarianceFunction class provides two methods: 
computeSymmetric(mat C, vec A)
computeCovariance(mat C, vec A, vec B)

The names are a little confusing - I expected computeCovariance(C, A, A)
to return the same result as computeSymmetric(C, A), but this is not
always the case. I suspect computeCovariance was intended for computing
cross-covariances *only*, meaning we always have A != B.

I am not sure this is always going to be the case: a given location could
appear in both the training and test sets. Should the cross-covariance for
A==B be the same as the covariance? In GPML, the nugget term (WhiteNoiseCF)
contributes no correlation between points when computing cross-covariances,
regardless of whether the two points coincide. We could argue that if A==B,
the two locations certainly aren't uncorrelated! I am not sure what the
effects on the computations are. This needs to be checked, as it can affect
the results quite strongly.
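
A minimal self-contained sketch of the ambiguity (plain C++, not the gptk
API; the kernel and nugget value below are made up for illustration):

#include <cmath>
#include <cstdio>

// Toy squared-exponential kernel plus a white-noise nugget s2
// (hypothetical stand-ins for GaussianCF + WhiteNoiseCF).
const double s2 = 0.1;

double kernel(double a, double b) {
    return std::exp(-0.5 * (a - b) * (a - b));
}

// Analogue of computeSymmetric(C, A): nugget added on the diagonal.
double symmetricCov(double a, double b) {
    return kernel(a, b) + (a == b ? s2 : 0.0);
}

// Analogue of computeCovariance(C, A, B) if it is meant for
// cross-covariances only: the nugget is never added.
double crossCov(double a, double b) {
    return kernel(a, b);
}

int main() {
    double x = 1.0;
    // Same location in both sets: the two notions of covariance
    // differ by the nugget, which is exactly the question above.
    std::printf("symmetric: %g\n", symmetricCov(x, x)); // 1.1
    std::printf("cross:     %g\n", crossCov(x, x));     // 1.0
    return 0;
}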

Original issue reported on code.google.com by [email protected] on 29 Jun 2009 at 10:12

Split the code into library, examples and tests

Split code into:
* a core library (/libgptk)
* a set of examples (/examples)
* a set of tests (/tests)
* required dependencies (/lib)

There is a Makefile in the root folder (gptk) to build everything. Separate
Makefiles are also available in each folder to build part of the code only
(e.g. only the library or only the examples).

The Makefile instructions common to all folders have been put in the
gptk/Makefile.common file.

Original issue reported on code.google.com by [email protected] on 29 Jun 2009 at 10:20

Bug in ModelTrainer::checkGradient

==What steps will reproduce the problem?==
Repeated calls of ModelTrainer::checkGradient()

==What is the expected output? What do you see instead?==
Output should be the same for each call, i.e. all calls should output the
same gradient/finite differences. However, there is a slight change in
gradient after each call.

==What is the cause of the problem==
The error function and its gradient, as required by checkGradient(), rely
on stored state (i.e. the covariance function parameters held in the
covariance function object) rather than on parameters passed as arguments.
To compute the error/gradient at a given set of parameters X, we need to:
# set the parameters in the model to X (overriding the current parameters)
# call the error/gradient function (which takes no arguments)

This means computing the error/gradient actually changes the state of the
model. This is very likely to break SCG, which needs to compute gradients
without changing the parameters (the parameters only get changed if a step
is taken, which depends on the scale and gradient value). In the current
setting, the model's parameters get changed even if the step is discarded.

==Directions for fix==
I think we should rethink the design slightly here. The error/gradient
methods should take a vector of parameter values as argument and base their
computations on these values, not on the current state of the model. We
want to replace:

 model.setParameters(x);
 g = model.gradient();

with

 g = model.gradient(x);

leaving the parameters of the model unchanged (same for the error function).

Because this requires some thought (it is not straightforward to implement,
as parameters need to be passed to the model, which passes them on to the
covariance function, etc. - this could get messy), I suggest the following
temporary fix. Replace:

 model.setParameters(x);
 g = model.gradient();

with

 xOld = model.getParameters();  // save the current parameters
 model.setParameters(x);
 g = model.gradient();
 model.setParameters(xOld);     // restore them afterwards

I will implement this for now - and will suggest further changes (for an
improved design) at a later stage.
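
For reference, a sketch of the save/restore pattern inside a
finite-difference gradient check (plain C++; the Model interface below is
a made-up stand-in, not the actual gptk classes):

#include <cstddef>
#include <vector>

// Minimal hypothetical stand-in for the model interface described above.
struct Model {
    std::vector<double> params;
    std::vector<double> getParameters() const { return params; }
    void setParameters(const std::vector<double> &p) { params = p; }
    // Toy error function: sum of squares of the stored parameters.
    double error() const {
        double e = 0.0;
        for (double p : params) e += p * p;
        return e;
    }
};

// Central finite differences at x, leaving the model's state unchanged.
std::vector<double> finiteDiffGradient(Model &model, std::vector<double> x) {
    const double eps = 1e-6;
    std::vector<double> xOld = model.getParameters();  // save current state
    std::vector<double> g(x.size());
    for (std::size_t i = 0; i < x.size(); i++) {
        double xi = x[i];
        x[i] = xi + eps; model.setParameters(x); double ep = model.error();
        x[i] = xi - eps; model.setParameters(x); double em = model.error();
        x[i] = xi;
        g[i] = (ep - em) / (2.0 * eps);
    }
    model.setParameters(xOld);  // restore, so repeated calls agree
    return g;
}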


Original issue reported on code.google.com by [email protected] on 29 Jun 2009 at 11:38

Add support for multiple outputs (cokriging)

=New feature=

==Description==
It would be nice to offer support for multiple outputs in the psgp framework.

==Classes/Files affected==
Need to modify the Gaussian process classes and, most likely, the
covariance functions.


Original issue reported on code.google.com by [email protected] on 13 Oct 2009 at 10:34

Superclass for stationary covariance functions

=New feature=

==Description==
Add an IsotropicCF class as a superclass for all isotropic (i.e.
distance-based) covariance functions.

==Classes/Files affected==
Classes/Files: IsotropicCF
Location:      covarianceFunctions/
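
A sketch of what such a superclass might look like (plain C++; the
interface below is made up for illustration and is not the actual gptk
CovarianceFunction API):

#include <cmath>
#include <vector>

// Isotropic kernels depend on the inputs only through the distance
// r = |a - b|, so the matrix-filling logic can live in one superclass
// and subclasses just supply k(r).
class IsotropicCF {
public:
    virtual ~IsotropicCF() {}
    virtual double covariance(double r) const = 0;  // k(r), subclass-specific

    // Shared implementation of the symmetric covariance matrix.
    void computeSymmetric(std::vector<std::vector<double> > &C,
                          const std::vector<double> &A) const {
        std::size_t n = A.size();
        C.assign(n, std::vector<double>(n));
        for (std::size_t i = 0; i < n; i++)
            for (std::size_t j = 0; j < n; j++)
                C[i][j] = covariance(std::fabs(A[i] - A[j]));
    }
};

// Example subclass: squared-exponential kernel with length scale l.
class GaussianCF : public IsotropicCF {
public:
    explicit GaussianCF(double l) : l_(l) {}
    double covariance(double r) const override {
        return std::exp(-0.5 * r * r / (l_ * l_));
    }
private:
    double l_;
};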


Original issue reported on code.google.com by [email protected] on 13 Oct 2009 at 3:33

SequentialGP::gradientEvidenceUpperBound - Still wrong!

==Description of the problem==
After applying the fix, we get the correct gradient at the beginning of the
optimisation. However, as soon as the parameters change, the gradients
become wrong again.

==How do I reproduce it==
Create a SequentialGP with covariance function parameters x0. Compute the
gradient at x=x0 (works fine) and then move away from x0. The gradient
becomes more and more wrong as x gets further away from x0.
I have put a test in TestSequentialGP to reproduce the error.

==What is the cause of the problem (optional)==
It seems to be a numerical issue with the way the gradient is computed. I
found another way of computing the gradient which gives the correct value
even for x!=x0 (see fix below).

==Directions for fix (optional)==
In the computation of the gradient, use:

// Update W from the active-set quantities
W = W - (eye(sizeActiveSet) + KB * (C + outer_product(Alpha, Alpha)));
// Solve KB_new * U = KB once, outside the loop
mat U = backslash(KB_new, KB);

for (int i = 0; i < covFunc.getNumberParameters(); i++)
{
  // Partial derivative of the active-set covariance w.r.t. parameter i
  covFunc.getParameterPartialDerivative(partialDeriv, i, ActiveSet);
  // Keep the derivative inside the solve for numerical robustness
  mat V = backslash(KB_new, partialDeriv * U).transpose();
  grads(i) = elem_mult_sum(W, V) / 2.0;
}

It seems including the partial derivative *inside* the computation of the
inverse is a lot more robust than computing the inverse and then
multiplying it by the derivative inside the loop (as done before). This way
is also more computationally demanding, though. We might need to look for
more efficient ways to compute this (maybe using Cholesky decompositions).

Original issue reported on code.google.com by [email protected] on 29 Jun 2009 at 5:15

Header files missing (e.g. SamplingLikelihood.h)

==Description of the problem==
The current download package (linux v0.2) seems to be missing a few
header files in the following directories, causing the compiler to
complain:

tests/
examples/
src/likelihood_models

I fixed this in an ad-hoc manner by checking out the svn version and
copying the files over.

Original issue reported on code.google.com by [email protected] on 25 Feb 2012 at 5:06

Class for CSV file input/output

=New feature=

==Description==
Added a CSV input/output class to read from and write to CSV files.

==Classes/Files affected==
Class:    csvstream
Location: libgptk/io

==Usage/main methods==
csvstream::read(mat &A, string filename)
csvstream::write(mat A, string filename)
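
A hedged usage sketch, assuming IT++ matrix types and that read/write are
instance methods (the include path and constructor are guesses):

#include <itpp/itbase.h>
#include "io/csvstream.h"   // hypothetical include path

using namespace itpp;

int main() {
    csvstream csv;                 // assuming a default constructor
    mat A = "1.0 2.0; 3.0 4.0";    // IT++ matrix literal
    csv.write(A, "data.csv");      // write A to data.csv
    mat B;
    csv.read(B, "data.csv");       // read it back into B
    return 0;
}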

==Test class==
TestCSVStream


Original issue reported on code.google.com by [email protected] on 29 Jun 2009 at 10:35

Exception handling

Improve exception handling (at the moment, error messages are printed to
standard output): throw proper exceptions instead.
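
A hedged example of the intended change (the function name, error message
and exception type are illustrative placeholders):

#include <stdexcept>

void checkPositiveDefinite(bool ok) {
    if (!ok) {
        // Before: cerr << "Error: matrix is not positive definite" << endl;
        // After: throw a proper exception that callers can catch
        throw std::runtime_error("matrix is not positive definite");
    }
}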

Original issue reported on code.google.com by [email protected] on 13 Oct 2009 at 3:40

Bug in GaussianProcess::makePredictions()

==Description of the problem==
When making predictions with a Gaussian covariance function + noise, the
predicted variance is wrong (noisy sine test, 50 training points taken from
100 test points).

==How do I reproduce it==
Make prediction using Gaussian covariance function + noise.

==What is the cause of the problem (optional)==
The computation of:

  mat v = ls_solve(computeCholesky(Sigma), Cpred);

should use the transpose of the Cholesky decomposition.

==Directions for fix (optional)==
Replace above with:

  mat v = ls_solve(computeCholesky(Sigma).transpose(), Cpred);
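
For reference, this is consistent with the standard GP predictive
variance, assuming computeCholesky wraps IT++'s chol and therefore returns
an upper-triangular factor R with Sigma = R^T R:

  v   = R^{-T} Cpred                     (the triangular solve above)
  var = Kss - v^T v
      = Kss - Cpred^T (R^T R)^{-1} Cpred
      = Kss - Cpred^T Sigma^{-1} Cpred

where Kss denotes the prior covariance at the test points. Solving with R
instead of R^T computes Cpred^T (R R^T)^{-1} Cpred, which is not
Cpred^T Sigma^{-1} Cpred, hence the wrong predicted variance.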


Original issue reported on code.google.com by [email protected] on 1 Jul 2009 at 9:07

Error in SequentialGP::gradientEvidenceUpperBound (wrong length scales gradient)

==Description of the problem==
In SequentialGP, the gradient of the evidence upper bound seems wrong for
the length scale parameter of the Gaussian covariance function.

==How to reproduce the problem==
Run ModelTrainer::checkGradient() with a SequentialGP object.

==What is the expected output? What do you see instead?==
The analytical gradients do not agree with the finite difference estimate
(length scales are wrong - the rest looks fine).




Original issue reported on code.google.com by [email protected] on 29 Jun 2009 at 11:50

Improve parameter masking in optimisation

We can revert to Ben's initial implementation of parameter masking during
optimisation, i.e. extract the parameters to be optimised rather than
setting the gradient/error of the fixed parameters to 0. A sketch of the
extraction approach follows.
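
A sketch of masking by extraction (plain C++; the function names and
types are illustrative, not the actual gptk interface):

#include <cstddef>
#include <vector>

// Gather the free (optimised) subset of the full parameter vector.
std::vector<double> extractFree(const std::vector<double> &all,
                                const std::vector<std::size_t> &freeIdx) {
    std::vector<double> sub(freeIdx.size());
    for (std::size_t i = 0; i < freeIdx.size(); i++)
        sub[i] = all[freeIdx[i]];
    return sub;
}

// Scatter optimised values back, leaving fixed parameters untouched.
void insertFree(std::vector<double> &all,
                const std::vector<double> &sub,
                const std::vector<std::size_t> &freeIdx) {
    for (std::size_t i = 0; i < freeIdx.size(); i++)
        all[freeIdx[i]] = sub[i];
}

The optimiser then only ever sees the reduced vector, so fixed parameters
cannot drift and their gradients never need to be zeroed.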

Original issue reported on code.google.com by [email protected] on 27 Jul 2009 at 12:00
