
lmfit-py's Introduction

LMfit-py


Overview

The lmfit Python library provides tools for non-linear least-squares minimization and curve fitting. The goal is to make these optimization algorithms more flexible, more comprehensible, and easier to use well, with the key feature of casting variables in minimization and fitting routines as named parameters that can have many attributes besides just a current value.

LMfit is a pure Python package, built on top of SciPy and NumPy, and is therefore easy to install with pip install lmfit.

For questions, comments, and suggestions, please use the LMfit Google mailing list or GitHub Discussions. For software issues and bugs, use GitHub Issues, but please read Contributing.md before creating an Issue.

Parameters and Minimization

LMfit provides optimization routines similar to (and based on) those from scipy.optimize, but with a simple, flexible approach to parameterizing a model for fitting to data using named parameters. These named Parameters can be held fixed or freely adjusted in the fit, or held between lower and upper bounds. Parameters can also be constrained as a simple mathematical expression of other Parameters.

A Parameters object (which acts like a Python dictionary) contains named parameters, and can be built with:

import lmfit
fit_params = lmfit.Parameters()
fit_params['amp'] = lmfit.Parameter(value=1.2)
fit_params['cen'] = lmfit.Parameter(value=40.0, vary=False)
fit_params['wid'] = lmfit.Parameter(value=4, min=0)
fit_params['fwhm'] = lmfit.Parameter(expr='wid*2.355')

or using the equivalent:

fit_params = lmfit.create_params(amp=1.2,
                                 cen={'value':40, 'vary':False},
                                 wid={'value': 4, 'min':0},
                                 fwhm={'expr': 'wid*2.355'})

In the general minimization case (see below for Curve-fitting), the user will also write an objective function to be minimized (in the least-squares sense) with its first argument being this Parameters object, and additional positional and keyword arguments as desired:

def myfunc(params, x, data, someflag=True):
    amp = params['amp'].value
    cen = params['cen'].value
    wid = params['wid'].value
    ...
    return residual_array

For each call of this function, the values for the params may have changed, subject to the bounds and constraint settings for each Parameter. The function should return the residual (i.e., data-model) array to be minimized.

The advantage here is that the function to be minimized does not have to change if different bounds or constraints are placed on the fitting Parameters. The fitting model (as described in myfunc) is instead written in terms of the physical parameters of the system, and remains independent of what is actually varied in the fit. In addition, the choice of which parameters are adjusted and which are fixed happens at run-time, so the user can easily change what is varied and what constraints are placed on the parameters during real-time data analysis.

To perform the fit, the user calls:

result = lmfit.minimize(myfunc, fit_params, args=(x, data), kws={'someflag': True}, ...)

After the fit, a MinimizerResult class is returned that holds the results of the fit (e.g., fitting statistics and optimized parameters). The dictionary result.params contains the best-fit values, estimated standard deviations, and correlations with other variables in the fit.

By default, the underlying fit algorithm is the Levenberg-Marquardt algorithm with numerically-calculated derivatives from MINPACK's lmdif function, as used by scipy.optimize.leastsq. Most other solvers that are present in scipy (e.g., Nelder-Mead, differential_evolution, basin-hopping, and more) are also supported.
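
For example, a different solver can be selected with the method argument (a minimal sketch, reusing myfunc and fit_params from above):

result = lmfit.minimize(myfunc, fit_params, args=(x, data),
                        kws={'someflag': True}, method='nelder')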

Curve-Fitting with lmfit.Model

One of the most common uses of least-squares minimization is curve fitting, where one minimizes data-model or (data-model)*weights. Using lmfit.minimize as above, the objective function would take data and weights, calculate the model, and then return the value of (data-model)*weights.

To simplify this, and make curve-fitting more flexible, lmfit provides a Model class that wraps a model function that represents the model (without the data or weights). Parameters are then automatically found from the named arguments of the model function. In addition, simple model functions can be readily combined and reused, and several common model functions are included in lmfit.
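
As a minimal, self-contained sketch (the gaussian model function here is our own example, not one of lmfit's built-in models):

import numpy as np
import lmfit

def gaussian(x, amp, cen, wid):
    # Model infers the parameter names amp, cen, wid from the signature;
    # the first argument, x, is the independent variable.
    return amp * np.exp(-(x - cen)**2 / (2 * wid**2))

model = lmfit.Model(gaussian)
params = model.make_params(amp=5.0, cen=5.0, wid=1.0)

x = np.linspace(0, 10, 201)
data = gaussian(x, 3.0, 6.0, 1.2) + np.random.normal(scale=0.1, size=x.size)
result = model.fit(data, params, x=x)
print(result.fit_report())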

Exploration of Confidence Intervals

Lmfit always tries to estimate uncertainties in fitting parameters and correlations between them, even for those methods where the corresponding scipy.optimize routines do not estimate uncertainties. Lmfit also provides methods to explicitly explore and evaluate the confidence intervals in fit results.
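
For example, continuing the minimization example above with an explicit Minimizer (a short sketch):

mini = lmfit.Minimizer(myfunc, fit_params, fcn_args=(x, data))
result = mini.minimize()
ci = lmfit.conf_interval(mini, result)
lmfit.printfuncs.report_ci(ci)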

lmfit-py's People

Contributors

aaristov, allanlrh, andyfaff, arunpersaud, caldwellshane, cdeil, danielballan, dimapu, eendebakpt, faustincarter, gitj, gpasquev, jcjaskula-aws, jenshnielsen, leonfoks, lneuhaus, matpompili, mgunyho, mpmdean, newville, oliver-frost, openafox, rawlik, rayosborn, reneeotten, s-weigand, stuermer, tillsten, tritemio, zobristnicholas


lmfit-py's Issues

leastsq keyword passing

Hi!

Thank you very much for lmfit. I discovered it just last week, but it seems it will be extremely useful!

However, I have trouble passing keywords to leastsq. I hope I didn't misread something, but here are my attempts, none of which works (the rest of the parameters pass correctly and the fitting works, but for debugging I'd like it to stop after 2 evaluations):

leastsq_kws={'xtol': 1.0e-7, 'ftol': 1.0e-7, 'maxfev': 2}

# attempt 1
result = lm.minimize(myfunction,
                     pars,
                     args=nonfit_args,
                     **leastsq_kws
                     )

# attempt 2
result = lm.Minimizer(myfunction,
                         pars,
                         fcn_args=nonfit_args,
                         **leastsq_kws
                         )
result.leastsq()


# attempt 3
result = lm.Minimizer(myfunction,
                         pars,
                         fcn_args=nonfit_args
                         )
result.leastsq(**leastsq_kws)

Could you please clarify how to correctly pass maxfev etc.?

Thank you very much!

Add confidence interval method

It would be nice to have a method to compute profile likelihood confidence intervals to get asymmetric errors.
Similar to what minuit.minos() or sherpa.conf() does, i.e., find the points x_min, x_max where the fit statistic increases by a given amount with respect to the best-fit statistic, re-optimizing all other parameters for each tested x.

I have a prototype implementation based on scipy.optimize.brentq, which I have to test some more and then could add.
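
As a rough sketch of the idea (profile_ci_bound and its arguments are hypothetical, and the objective function is assumed to take only a Parameters object):

import lmfit
from scipy.optimize import brentq

def profile_ci_bound(best, objective, pname, dchi2_target, lo, hi):
    # best is a fitted result with .params and .chisqr; brentq needs
    # a sign change of the excess between lo and hi.
    def excess(x):
        params = best.params.copy()
        params[pname].set(value=x, vary=False)   # fix the scanned parameter
        out = lmfit.minimize(objective, params)  # re-optimize all the others
        return (out.chisqr - best.chisqr) - dchi2_target
    return brentq(excess, lo, hi)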

improve tests

  • add a 2-d data-set example
  • add a test with known uncertainties in the data

Allow precision in report

It's not immediately obvious whether this exists, but in several cases the report lists parameters as 0.000000 because they are so small. Should there be a way to specify a precision value, or to have the output use scientific notation?

Parameters should have a scale

When fitting two parameters with very different orders of magnitude (e.g., 1e-12 and 1), the estimated covariances will often be completely incorrect.

This problem can be solved by adding a scale attribute to the Parameter class, so that true_value = scale * value.
The default would be scale=1, so true_value = value unless explicitly reset by the user.
The optimizer only sees value and reports the best-fit values and covariance matrix without scale applied.
The user sees scale and value in printed output.
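
A minimal sketch of the proposed semantics (ScaledParameter is purely illustrative, not lmfit API):

class ScaledParameter:
    def __init__(self, value, scale=1.0):
        self.value = value   # what the optimizer sees and adjusts
        self.scale = scale   # fixed multiplier, set only by the user

    @property
    def true_value(self):
        return self.scale * self.value

p = ScaledParameter(value=2.5588953699, scale=1e-12)
print(p.value)       # 2.5588953699, well-scaled for the optimizer
print(p.true_value)  # 2.5588953699e-12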

One example where this is implemented is in the Fermi software
http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#xmlModelDefinitions
as can be seen e.g. in their xml parameter interface (min and max are relative to value, not true_value):
<parameter free="1" max="1e4" min="1e-4" name="Prefactor" scale="1e-12" value="2.5588953699"/>
<parameter free="1" max="5.0" min="0.0" name="Index" scale="-1.0" value="1.66533"/>

I can implement this and add a test case that shows the problem if you agree it is a useful feature to add.

Fixing independent_vars in specified_models.py?

Right now, our models allow the freedom to choose the name of the independent variable. I would prefer to drop that feature in exchange for simpler code.

Example:

class Linear(BaseModel):
    __doc__ = linear.__doc__ + COMMON_DOC

    def __init__(self, independent_vars, missing='none', suffix=None):
        _validate_1d(independent_vars)
        var_name, = independent_vars
        self.suffix = suffix
        self._param_names = ['slope', 'intercept']
        p = self._parse_params()

        def func(**kwargs):
            slope = kwargs[p['slope']]
            intercept = kwargs[p['intercept']]
            var = kwargs[var_name]
            return linear(var, slope, intercept)

        super(Linear, self).__init__(func, independent_vars, missing)

If we fix the independent variable name, we can just use the following (note that BaseModel is no longer necessary either):

class Linear(Model):
    __doc__ = linear.__doc__ + COMMON_DOC

    def __init__(self, missing='none', suffix=None):
        independent_vars = ['x']
        self.suffix = suffix

        def func(x, slope, intercept):
            return linear(x, slope, intercept)

        super(Linear, self).__init__(func, independent_vars, missing)

Add tests for Model-specific features

We need unit tests for the new features in #58. Especially:

  • data alignment
  • parameter checking (example: Model constructor raises if the model function argument is named 'sigma')
  • partial parameter assignment

I did some basic checks, many of which are in the example notebook, but we need real unit tests.

Minor fixes to make scalar minimization more usable

I've needed to make three minor fixes to lmfit, listed below, to get scalar minimization with the conjugate gradient (cg) algorithm working. All changes are to lmfit/minimizer.py:
#1. Rationale: As scalar_minimize is declared and used, the tol=None parameter seems to override whatever is passed in, and it cannot be set explicitly through minimize(). With this change and the first part of #2 below, a tol= parameter on the minimize() command line is properly passed through to the scipy minimize code. (Perhaps hess= should be removed as well.)

Patch:

@@ -343,7 +343,7 @@ or set  leastsq_kws['maxfev']  to increase this maximum."""
         self.unprepare_fit()
         return

-    def scalar_minimize(self, method='Nelder-Mead', hess=None, tol=None, **kws):
+    def scalar_minimize(self, method='Nelder-Mead', hess=None, **kws):
         """use one of the scaler minimization methods from scipy.
         Available methods include:
           Nelder-Mead

#2. Rationale: The first line of this change completes the fix for the tolerance parameter. The remainder of the change allows passing a Jacobian to minimizers (such as cg) that can make use of one. Before this fix, while jac can be supplied, it does not receive the user's arguments, and its parameters are not scaled according to the algorithm used to make variables bounded.

Patch:

@@ -376,10 +376,20 @@ or set  leastsq_kws['maxfev']  to increase this maximum."""
         if method not in ('L-BFGS-B', 'TNC', 'SLSQP'):
             opts['maxfev'] = maxfev

-        fmin_kws = dict(method=method, tol=tol, hess=hess, options=opts)
+        fmin_kws = dict(method=method, hess=hess, options=opts)
         fmin_kws.update(self.kws)
         fmin_kws.update(kws)

+        if 'Dfun' in fmin_kws and fmin_kws['Dfun'] is not None and not isinstance(fmin_kws['Dfun'],bool):
+            # Provided an explicit derivative (jacobian) function
+            self.jacfcn = fmin_kws['Dfun']
+
+            # scipy.minimize uses 'jac' to name the jacobian parameter, not Dfun
+            del fmin_kws['Dfun'] 
+            fmin_kws['jac'] = self.__jacobian
+
+
+
         ret = scipy_minimize(self.penalty, self.vars, **fmin_kws)
         xout = ret.x
         self.message = ret.message

#3: Rationale: The 'CG' in the _scalar_methods dictionary contains an extraneous space, which prevents it from being used.

Patch:

@@ -528,7 +538,7 @@ def minimize(fcn, params, method='leastsq', args=None, kws=None,
                        iter_cb=iter_cb, scale_covar=scale_covar, **fit_kws)

     _scalar_methods = {'nelder': 'Nelder-Mead',     'powell': 'Powell',
-                       'cg': 'CG ',                 'bfgs': 'BFGS',
+                       'cg': 'CG',                  'bfgs': 'BFGS',
                        'newton': 'Newton-CG',       'anneal': 'Anneal',
                        'lbfgs': 'L-BFGS-B',         'l-bfgs': 'L-BFGS-B',
                        'tnc': 'TNC',                'cobyla': 'COBYLA',

add unittest

the tests in the test directory should be moved to using unittest

Error in Parameter() documentation?

In the documentation, the first argument of Parameter() is "value", while in the code it is "name". Thus, according to the documentation, this should work:

In [14]: u = Parameter(42, name='u')
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-14-c8568fd8c004> in <module>()
----> 1 u = Parameter(42, name='u')

TypeError: __init__() got multiple values for keyword argument 'name'

Is there a typo in the documentation?

include option for soft constraints

Instead of using min()/max() on the value between leastsq() and the residual function, add an element to the objective function for each finite bound. That element should be 0 as long as the bound is respected and grow quickly (quadratically? exponentially?) as the value exceeds the bound.

Obviously, scaling is the main issue. Having a per-parameter, per-bound
scale (maxscale, minscale, say) and per-parameter, per-bound tolerance
would be ideal, but these should both default to 1.

I think (for an upper bound maxval) something like the following would work:

penalty = 0                                                 for val <= maxval
penalty = chi2 * scale * exp((val - maxval)/toler) / 10.0   otherwise

where chi2 is the sum of squares of the fit residual of the real data.
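
A minimal sketch of that penalty (soft_penalty is hypothetical; scale and toler default to 1 as suggested above):

import numpy as np

def soft_penalty(val, maxval, chi2, scale=1.0, toler=1.0):
    # zero while the upper bound is respected, growing exponentially beyond it
    if val <= maxval:
        return 0.0
    return chi2 * scale * np.exp((val - maxval) / toler) / 10.0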

dimension check failed using user defined jacobian

Hi there,

Thanks for the wonderful code! I've been experiencing difficulty using a user-defined Jacobian. Suppose I have m data points and n state variables. I calculate the Jacobian myself and return a matrix of shape (m, n). It works well with leastsq but fails the type check in lmfit. Is there any way to fix this issue?

Cheng

dict comprehension and python 2.6

Python 2.6 doesn't support dict comprehensions such as

{name: p.value for name, p in self.params.items()}

but this can be achieved with

dict([(name, p.value) for name, p in self.params.items()])

I'm not 100% sure that Python 2.6 compatibility is a strong requirement, but it's an easy enough fix. I'll push a commit for this, but leave this issue open for discussion.

minimizer.py does not correctly pass parameters to the jacobian

In minimizer.py, in __jacobian():

for varname, val in zip(self.var_map, fvars):
    # self.params[varname].value = val
    self.params[varname].from_internal(val)

The commented line correctly sets the parameter; the uncommented line accomplishes nothing. I think this should be:

self.params[varname].value = self.params[varname].from_internal(val)

Include brute as method option

Is it possible to include scipy's brute method as an option? I know this may be rather specific -- I need to do a comparative analysis -- but I was hoping it could be done if it's not too difficult. I'm looking at the Minimizer function now, but I don't want to muck around in there if it's something that could easily be done by someone who knows the code better.

Report at every iteration?

I see there is a callback argument in the minimizer that is called at each iteration, but is there a way of outputting the parameter set and, for instance, chi-square at each iteration?
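
A short sketch using that callback, iter_cb (reusing myfunc, fit_params, x, and data from the overview above):

def per_iteration(params, iteration, resid, *args, **kws):
    # print the iteration number, chi-square, and current parameter values
    print(iteration, (resid**2).sum(),
          {name: par.value for name, par in params.items()})

result = lmfit.minimize(myfunc, fit_params, args=(x, data),
                        iter_cb=per_iteration)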

issues with minimize() modifying Parameters object

I'm running into some "gotchas" when working with lmfit that seem to be related to how it modifies the Parameters object passed to minimize(). Consider the following code that fits a model to two different datasets sequentially:

import numpy as np
from lmfit import Parameters, minimize, report_fit

x = np.arange(0, 1, 0.01)
y1 = 1.0*np.exp(1.0*x)
y2 = 1.0 + x + 1/2.*x**2 +1/3.*x**3

def residual(params, x, data):
    a = params['a'].value
    b = params['b'].value

    model = a*np.exp(b*x)
    return (data-model)

params = Parameters()
params.add('a', value = 2.0)
params.add('b', value = 2.0)

# fit to first data set
out1 = minimize(residual, params, args=(x, y1))
print "\n out1.params from fit to y1"
report_fit(out1.params)

# fit to second data set
out2 = minimize(residual, params, args=(x, y2))
print "\n out2.params from fit to y2"
report_fit(out2.params)

# now look at first fit results again
print "\n out1.params again"
report_fit(out1.params)

which outputs:

 out1.params from fit to y1
[[Variables]]
     a:     1 +/- 0 (0.00%) initial =  2
     b:     1 +/- 0 (0.00%) initial =  2
[[Correlations]] (unreported correlations are <  0.100)
    C(a, b)                      =  nan 

 out2.params from fit to y2
[[Variables]]
     a:     0.9892979 +/- 0.0007045643 (0.07%) initial =  1
     b:     1.049585 +/- 0.001006103 (0.10%) initial =  1
[[Correlations]] (unreported correlations are <  0.100)
    C(a, b)                      = -0.930 

 out1.params again
[[Variables]]
     a:     0.9892979 +/- 0.0007045643 (0.07%) initial =  1
     b:     1.049585 +/- 0.001006103 (0.10%) initial =  1
[[Correlations]] (unreported correlations are <  0.100)
    C(a, b)                      = -0.930 

Two things happen that are unexpected, at least to me: 1) the params object passed to the second minimize() call has been modified by the first minimize() call, while the user might expect it to remain as specified; 2) the output of the first fit is modified by the second fit, which seems to be because the Minimizer() object holds a reference to the params object rather than a copy.

My solution has been to wrap the output of the minimizer in another class that does a copy.deepcopy() of the Parameters object, but this doesn't address issue (1). It seems to me that ideally minimize() should make a copy of the Parameters object when called, use the copy to do the fit, and include the copy in the returned Minimizer object. I could be doing something wrong, though, in which case I'd appreciate any advice.
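
A minimal workaround sketch for (1): hand each minimize() call its own copy, so the original params stays as specified and each result keeps its own values.

import copy

out1 = minimize(residual, copy.deepcopy(params), args=(x, y1))
out2 = minimize(residual, copy.deepcopy(params), args=(x, y2))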

Thanks for your work on lmfit.

Vinny

removing absolute imports of lmfit from within lmfit

A few modules within lmfit import names using the lmfit namespace, which only works if lmfit is already installed on the system. Here's a fix that converts these absolute imports to relative imports:

--- lmfit/model.py  2014-01-06 22:52:07 +0000
+++ lmfit/model.py  2014-01-06 22:53:29 +0000
@@ -6,7 +6,7 @@
 import warnings
 import inspect
 import copy
-import lmfit
+from . import Parameter, Parameters, minimize
 import numpy as np

 try:
@@ -99,7 +99,7 @@
         >>> params['tau'].value = 2.0  # initial guess
         >>> params['tau'].min = 0  # (optional) lower bound
         """
-        params = lmfit.Parameters()
+        params = Parameters()
         [params.add(name) for name in self.param_names]
         return params

@@ -177,11 +177,11 @@
         param_kwargs = set(kwargs.keys()) & self.param_names
         for name in param_kwargs:
             p = kwargs[name]
-            if isinstance(p, lmfit.Parameter):
+            if isinstance(p, Parameter):
                 p.name = name  # allows N=Parameter(value=5) with implicit name
                 params[name] = copy.deepcopy(p)
             else:
-                params[name] = lmfit.Parameter(name=name, value=p)
+                params[name] = Parameter(name=name, value=p)
             del kwargs[name]

         # Keep a pristine copy of the initial params.
@@ -217,8 +217,8 @@
             if not np.isscalar(self.independent_vars):  # just in case
                 kwargs[var] = _align(kwargs[var], mask, data)

-        result = lmfit.minimize(self._residual, params,
-                                args=(data, weights), kws=kwargs)
+        result = minimize(self._residual, params,
+                          args=(data, weights), kws=kwargs)

         # Monkey-patch the Minimizer object with some extra information.
         result.model = self

--- lmfit/models1d.py   2014-01-06 22:52:07 +0000
+++ lmfit/models1d.py   2014-01-06 22:55:02 +0000
@@ -21,8 +21,8 @@
 import numpy as np
 from scipy.special import gamma, gammaln, beta, betaln, erf, erfc, wofz

-import lmfit
-from lmfit import Parameter, Parameters, Minimizer
+from . import Parameter, Parameters, Minimizer
+from . import fit_report as lmfit_fit_report

 VALID_BKGS = ('constant', 'linear', 'quadratic')

@@ -126,7 +126,7 @@
     def fit_report(self, params=None, **kws):
         if params is None:
             params = self.params
-        return lmfit.fit_report(params, **kws)
+        return lmfit_fit_report(params, **kws)

     def fit(self, y, x=None, dy=None, **kws):
         fcn_kws = {'y': y, 'x': x, 'dy': dy}

--- lmfit/specified_models.py   2014-01-06 22:52:07 +0000
+++ lmfit/specified_models.py   2014-01-06 22:53:56 +0000
@@ -1,9 +1,9 @@
 import numpy as np
 from scipy.special import gamma, gammaln, beta, betaln, erf, erfc, wofz
 from numpy import pi
-from lmfit import Model
-from lmfit.utilfuncs import (gaussian, normalized_gaussian, exponential,
-                             powerlaw, linear, parabolic)
+from . import Model
+from .utilfuncs import (gaussian, normalized_gaussian, exponential,
+                        powerlaw, linear, parabolic)


 class DimensionalError(Exception):

--- lmfit/wrap.py   2014-01-06 22:52:07 +0000
+++ lmfit/wrap.py   2014-01-06 22:52:54 +0000
@@ -1,7 +1,7 @@
 #!/usr/bin/env python

 from inspect import getargspec
-from lmfit.parameter import Parameters
+from .parameter import Parameters


 def make_paras_and_func(fcn, x0, used_kwargs=None):

AttributeError: 'Parameter' object has no attribute 'ast'

When using an expression for a parameter and then, after a fit, trying to run result.leastsq() again, it complains about the attribute 'ast':

AttributeError                            Traceback (most recent call last)
<ipython-input-3-feae373da82d> in <module>()
----> 1 result.leastsq()

/usr/lib/python2.7/site-packages/lmfit/minimizer.pyc in leastsq(self, **kws)
    385             lskws['Dfun'] = self.__jacobian
    386 
--> 387         lsout = scipy_leastsq(self.__residual, self.vars, **lskws)
    388         _best, _cov, infodict, errmsg, ier = lsout
    389 

/usr/lib/python2.7/site-packages/scipy/optimize/minpack.pyc in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag)
    366     if not isinstance(args, tuple):
    367         args = (args,)
--> 368     shape, dtype = _check_func('leastsq', 'func', func, x0, args, n)
    369     m = shape[0]
    370     if n > m:

/usr/lib/python2.7/site-packages/scipy/optimize/minpack.pyc in _check_func(checker, argname, thefunc, x0, args, numinputs, output_shape)
     17 def _check_func(checker, argname, thefunc, x0, args, numinputs,
     18                 output_shape=None):
---> 19     res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
     20     if (output_shape is not None) and (shape(res) != output_shape):
     21         if (output_shape[0] != 1):

/usr/lib/python2.7/site-packages/lmfit/minimizer.pyc in __residual(self, fvars)
    173         self.nfev = self.nfev + 1
    174 
--> 175         self.update_constraints()
    176         out = self.userfcn(self.params, *self.userargs, **self.userkws)
    177         if hasattr(self.iter_cb, '__call__'):

/usr/lib/python2.7/site-packages/lmfit/minimizer.pyc in update_constraints(self)
    156         self.updated = dict([(name, False) for name in self.params])
    157         for name in self.params:
--> 158             self.__update_paramval(name)
    159 
    160     def __residual(self, fvars):

/usr/lib/python2.7/site-packages/lmfit/minimizer.pyc in __update_paramval(self, name)
    144             for dep in par.deps:
    145                 self.__update_paramval(dep)
--> 146             par.value = self.asteval.run(par.ast)
    147             out = check_ast_errors(self.asteval.error)
    148             if out is not None:

AttributeError: 'Parameter' object has no attribute 'ast'

Indeed, there is no 'ast' attribute in the Parameter code.

add optional multiprocessing for leastsq()

When using a finite-difference Jacobian (i.e., most of the time), it would be useful to add an option (normally off) to use multiprocessing with leastsq(). That is, following the simple example at
http://stackoverflow.com/a/19679060

one could use multiple processes for the calls to the objective function in order to calculate the Jacobian -- basically, write an internal Dfun that farms out the work of fdjac2() to a multiprocessing Pool.

This could be exposed by adding a use_multicores=4 argument to leastsq() to indicate the size of the multiprocessing pool. The default value of None would not use multiprocessing at all.
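
A rough sketch of the idea (parallel_fdjac is hypothetical; a forward-difference Jacobian whose objective evaluations are farmed out to a Pool):

import numpy as np
from multiprocessing import Pool

def parallel_fdjac(func, x0, eps=1.0e-8, ncores=4):
    # func must be picklable (defined at module level)
    f0 = np.asarray(func(x0))
    steps = []
    for i in range(len(x0)):
        xi = np.array(x0, dtype=float)
        xi[i] += eps                       # perturb one variable at a time
        steps.append(xi)
    with Pool(ncores) as pool:
        evals = pool.map(func, steps)      # one objective call per process
    return np.column_stack([(np.asarray(fi) - f0) / eps for fi in evals])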

method selection does not work when calling minimize directly

Hi there,

Great work. This interface really improves the handling of scipy.optimize.
The issue:
When passing method="nelder" to minimize, a least-squares fit is performed instead.
This is due to the code fragment at line 498 in minimizer.py: the method argument is only parsed if it refers to a scalar minimization method (HAS_SCALAR_MIN is true).

I suggest replacing

    if not found:
        fitter.leastsq()

by

    if not found:
        if meth in _methods:
            func = 'fitter.'+_methods[meth]+'(**fit_kws)'
            found = True
            exec(func)
        else:
            fitter.leastsq(**fit_kws)

Best regards,

Alois

report_errors does division by zero

In printfuncs, line 32, a division by zero occurs if the actual parameter is fixed and equal to 0.0 (a float); in this case the stderr is 0.0, which is not equal to None.
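
One possible guard, as a sketch (par.stderr and par.value are real Parameter attributes, but the surrounding report code is paraphrased):

if par.stderr is not None and par.value != 0:
    percent = abs(par.stderr / par.value) * 100.0   # relative error in %
else:
    percent = None   # skip the percentage rather than divide by zero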

rename lib into lmfit

I think renaming lib to lmfit would be quite helpful for development. Right now I have to run "setup.py install" before testing changes; renaming lib would allow just adding the source directory to sys.path.

doc improvements

  • auto-include some examples from the test or example directory
  • "Table of Goodness-of-Fit Statistics" should be "Fit result table"
  • try to reduce bias on leastsq

NIST test failure

Hi,

I ran the NIST tests embedded in the project. I was surprised that some of them fail, for instance Gauss3 with start2 or Lanczos* with start1.

To investigate a bit what's happening, I checked whether it is an upstream issue or not. I reused some of the functions implemented here (for NIST) and applied them to the leastsq() provided by scipy.optimize, the same routine that is called by lmfit. It appears that all tests succeed (i.e., I get fairly good results) with start2, and more succeed than with lmfit for start1.

You can see the results with leastsq in these pastebins (optimized vs. certified parameters, and the highest relative error):
http://pastebin.com/NNrj82q6 (start1)
http://pastebin.com/7HmjB5Ju (start2)
The output is very basic but enough for the purpose, I guess.

For the moment, I have no idea what's going wrong in lmfit. I'm not familiar enough with the minimizer code.

Typos

I've just corrected some typos in the documentation. See my fork.

Boundary issue?

Hey guys, first off: love the program. But I'm running into a weird issue with a rather complex model. It seems that the program can have problems building the covariance matrix. This happened to me during the 30th iteration:

File "/usr/local/lib/python2.7/dist-packages/lmfit-0.7-py2.7.egg/lmfit/minimizer.py", line 498, in minimize fitter.leastsq() File "/usr/local/lib/python2.7/dist-packages/lmfit-0.7-py2.7.egg/lmfit/minimizer.py", line 408, in leastsq cov = inv(dot(transpose(rvec),rvec)) File "/usr/local/lib/python2.7/dist-packages/scipy-0.11.0-py2.7-linux-x86_64.egg/scipy/linalg/basic.py", line 308, in inv a1 = np.asarray_chkfinite(a) File "/usr/local/lib/python2.7/dist-packages/numpy-1.6.2-py2.7-linux-x86_64.egg/numpy/lib/function_base.py", line 590, in asarray_chkfinite "array must not contain infs or NaNs") ValueError: array must not contain infs or NaNs

The code is known to run well with the scipy.optimize.leastsq and openopt packages, and the error only occurs when I fix my boundary conditions. Is this a known problem? Can we check for Inf and NaN in the matrix?

Otherwise I'll have to publish the code here.

preserve covariance matrix

Save the covar matrix from scipy.optimize.leastsq. This means: Parameters should derive from OrderedDict.

Unit tests in lmfit/ are failing.

$ nosetests3 lmfit                                                                                                                    
EEE
======================================================================
ERROR: Failure: ImportError (No module named 'core')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3.3/site-packages/nose/failure.py", line 38, in runTest
    raise self.exc_val.with_traceback(self.tb)
  File "/usr/lib/python3.3/site-packages/nose/loader.py", line 413, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python3.3/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python3.3/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/usr/lib/python3.3/imp.py", line 190, in load_module
    return load_package(name, filename)
  File "/usr/lib/python3.3/imp.py", line 160, in load_package
    return _bootstrap.SourceFileLoader(name, path).load_module(name)
  File "<frozen importlib._bootstrap>", line 584, in _check_name_wrapper
  File "<frozen importlib._bootstrap>", line 1022, in load_module
  File "<frozen importlib._bootstrap>", line 1003, in load_module
  File "<frozen importlib._bootstrap>", line 560, in module_for_loader_wrapper
  File "<frozen importlib._bootstrap>", line 868, in _load_module
  File "<frozen importlib._bootstrap>", line 313, in _call_with_frames_removed
  File "/home/fr/github/lmfit-py/lmfit/uncertainties/unumpy/__init__.py", line 71, in <module>
    from core import *
ImportError: No module named 'core'

======================================================================
ERROR: Failure: ImportError (No module named 'uncertainties')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3.3/site-packages/nose/failure.py", line 38, in runTest
    raise self.exc_val.with_traceback(self.tb)
  File "/usr/lib/python3.3/site-packages/nose/loader.py", line 413, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python3.3/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python3.3/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/usr/lib/python3.3/imp.py", line 180, in load_module
    return load_source(name, filename, file)
  File "/usr/lib/python3.3/imp.py", line 119, in load_source
    _LoadSourceCompatibility(name, pathname, file).load_module(name)
  File "<frozen importlib._bootstrap>", line 584, in _check_name_wrapper
  File "<frozen importlib._bootstrap>", line 1022, in load_module
  File "<frozen importlib._bootstrap>", line 1003, in load_module
  File "<frozen importlib._bootstrap>", line 560, in module_for_loader_wrapper
  File "<frozen importlib._bootstrap>", line 868, in _load_module
  File "<frozen importlib._bootstrap>", line 313, in _call_with_frames_removed
  File "/home/fr/github/lmfit-py/lmfit/uncertainties/test_umath.py", line 16, in <module>
    import uncertainties
ImportError: No module named 'uncertainties'

======================================================================
ERROR: Failure: ImportError (No module named 'uncertainties')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3.3/site-packages/nose/failure.py", line 38, in runTest
    raise self.exc_val.with_traceback(self.tb)
  File "/usr/lib/python3.3/site-packages/nose/loader.py", line 413, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python3.3/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python3.3/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/usr/lib/python3.3/imp.py", line 180, in load_module
    return load_source(name, filename, file)
  File "/usr/lib/python3.3/imp.py", line 119, in load_source
    _LoadSourceCompatibility(name, pathname, file).load_module(name)
  File "<frozen importlib._bootstrap>", line 584, in _check_name_wrapper
  File "<frozen importlib._bootstrap>", line 1022, in load_module
  File "<frozen importlib._bootstrap>", line 1003, in load_module
  File "<frozen importlib._bootstrap>", line 560, in module_for_loader_wrapper
  File "<frozen importlib._bootstrap>", line 868, in _load_module
  File "<frozen importlib._bootstrap>", line 313, in _call_with_frames_removed
  File "/home/fr/github/lmfit-py/lmfit/uncertainties/test_uncertainties.py", line 25, in <module>
    import uncertainties
ImportError: No module named 'uncertainties'

----------------------------------------------------------------------
Ran 3 tests in 0.030s

FAILED (errors=3)

Origin-like models

Origin and other programs used to fit data have predefined models.
These models provide the following:

  • named parameters
  • the function itself
  • maybe the derivative of the function
  • starting values, possible calculated from the data.

The usage for a simple problem could then be:

# x, y are the data
from lmfit.models import ExpModel
model = ExpModel(x, y)
model.fit()

Advanced features would be:

  • linking some parameters between models
  • linking models themselves, e.g. a polynomial to model the background plus a Gaussian peak.

A first sketch of the basic functionality is already in the test directory. Is there any interest in continuing the idea?

usage of lower and upper in anneal is wrong

The lower and upper arguments of scipy.optimize.anneal are not bounds on the parameters; they are only bounds on the step size. (I also find scipy's documentation on this very misleading.)

ValueError from conf_intervals()

A user (S. C. Read) writes:

I'm a big fan of lmfit and am using it in my final year physics project of my degree. I found that you could use it to calculate confidence limits and promptly implemented it. However, occasionally I get a ValueError: f(a) and f(b) must have different signs. Everyone I've spoken to has been stumped. Could you perhaps suggest a solution?

I can verify that I also get this message, some of the time. Other times, the confidence intervals are calculated, and look reasonable.

Anyone have a good idea?

example script:

import lmfit
import numpy as np
def residual(p, x):
    a1, a2, t1, t2 = [i.value for i in p.values()]
    return a1*np.exp(-x/t1) + a2*np.exp(-x/t2) - y

if __name__ == '__main__':
    x = np.linspace(0.3,10,100)
    y = 3*np.exp(-x/2.)-5*np.exp(-x/10.)+0.2*np.random.randn(x.size)
    p = lmfit.Parameters()
    p.add_many(('a1', 5), ('a2', -5), ('t1', 2), ('t2', 5))
    mi = lmfit.minimize(residual, p, args=(x,))
    lmfit.printfuncs.report_fit(mi.params, show_correl=False)
    ci, trace = lmfit.conf_interval(mi, sigmas=[0.68,0.95],
                                             trace=True, verbose=False)
    lmfit.printfuncs.report_ci(ci)

__all__ list must contain strings

>>> from lmfit import *
...
TypeError: Item in ``from list'' not a string

This is with Python 2.7. It occurs because the entries in __all__ must be strings, not the actual objects themselves.
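
For example:

# correct: __all__ holds the exported names as strings
__all__ = ['minimize', 'Parameter', 'Parameters']
# wrong: holding the objects themselves triggers the TypeError above
__all__ = [minimize, Parameter, Parameters]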

Invalid syntax in printfuncs.py

In the file lib/printfuncs.py, line 29, there is invalid syntax. Diff output of the fixed line:

@@ -26,7 +26,7 @@

         try:
             sval = '% .6f' % par.value
-        except TypeError, ValueError:
+        except (TypeError, ValueError):
             sval = 'Non Numeric Value?'

         if par.stderr is not None:

[Deprecation] Only use .leastsq and .scalar_minimize.

I think we should use the version bump to drop support for the direct wrapper methods for all the other minimizers. We duplicate code already available in reasonably old scipy versions.

Methods to drop:

  • fmin
  • anneal
  • lbfgsb
