
lsqfit.jl's People

Contributors

andreasnoack, ararslan, bjarthur, blakejohnson, cmcbride, femtocleaner[bot], github-actions[bot], iainnz, iewaij, jebej, johnmyleswhite, johnomotani, jw3126, kronosthelate, magalame, marcusps, mlubin, mo8it, mzaffalon, nignatiadis, pkofod, rafaelarutjunjan, sanga, singularitti, stakaz, timholy, tkelman, tlienart, viktmar, yuyichao

lsqfit.jl's Issues

Can't Call curve_fit With p0 but not w

Hello,
I'm trying to fit a curve, but the default initial parameters give poor results. I want to supply p0 without a weight, since I don't care about weights, but because a Vector also matches w, the method curve_fit(model::Function, xpts, ydata, p0; kwargs...) doesn't seem to be usable. In addition, when I plug in [0.5, 0.5] for the weight as in the demo and then [a, b] for p0, an error is raised: DimensionMismatch("arrays could not be broadcast to a common size").
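
A minimal sketch of the no-weights call I would expect to work, assuming p0 is a Vector{Float64} (the data and model here are made up for illustration):

using LsqFit

model(x, p) = p[1] .* exp.(-p[2] .* x)
xdata = range(0, stop=10, length=20)
ydata = model(xdata, [1.0, 2.0]) .+ 0.01 .* randn(20)

p0 = [0.5, 0.5]                          # a Vector, so it fills the p0 slot, not w
fit = curve_fit(model, xdata, ydata, p0)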

dep warning in 0.5.0 - MultivariateOptimizationResults

The current master results in the following:

WARNING: MultivariateOptimizationResults(method,initial_x,minimizer,minimum,iterations,iteration_converged,x_converged,x_tol,f_converged,f_tol,g_converged,g_tol,trace,f_calls,g_calls) is deprecated, use MultivariateOptimizationResults(method,initial_x,minimizer,minimum,iterations,iteration_converged,x_converged,x_tol,f_converged,f_tol,g_converged,g_tol,trace,f_calls,g_calls,0) instead.
 in depwarn(::String, ::Symbol) at ./deprecated.jl:64
 in Optim.MultivariateOptimizationResults{T,N,M}(::String, ::Array{Float64,1}, ::Array{Float64,1}, ::Float64, ::Int64, ::Bool, ::Bool, ::Float64, ::Bool, ::Float64, ::Bool, ::Float64, ::Array{Optim.OptimizationState{LsqFit.LevenbergMarquardt},1}, ::Int64, ::Int64) at ./deprecated.jl:50
 in #levenberg_marquardt#1(::Float64, ::Float64, ::Int64, ::Float64, ::Bool, ::Array{Float64,1}, ::Array{Float64,1}, ::Function, ::LsqFit.#f#4{#model,LinSpace{Float64},Array{Float64,1}}, ::Calculus.#g#5{LsqFit.#f#4{#model,LinSpace{Float64},Array{Float64,1}},Symbol}, ::Array{Float64,1}) at /Users/IanB/.julia/v0.5/LsqFit/src/levenberg_marquardt.jl:169
 in #lmfit#2(::Array{Any,1}, ::Function, ::LsqFit.#f#4{#model,LinSpace{Float64},Array{Float64,1}}, ::Array{Float64,1}) at /Users/IanB/.julia/v0.5/LsqFit/src/curve_fit.jl:29
 in #curve_fit#3(::Array{Any,1}, ::Function, ::#model, ::LinSpace{Float64}, ::Array{Float64,1}, ::Array{Float64,1}) at /Users/IanB/.julia/v0.5/LsqFit/src/curve_fit.jl:43
 in curve_fit(::Function, ::LinSpace{Float64}, ::Array{Float64,1}, ::Array{Float64,1}) at /Users/IanB/.julia/v0.5/LsqFit/src/curve_fit.jl:42
 in include_string(::String, ::String) at ./loading.jl:441
 in execute_request(::ZMQ.Socket, ::IJulia.Msg) at /Users/IanB/.julia/v0.5/IJulia/src/execute_request.jl:175
 in eventloop(::ZMQ.Socket) at /Users/IanB/.julia/v0.5/IJulia/src/eventloop.jl:8
 in (::IJulia.##13#19)() at ./task.jl:360
while loading In[21], in expression starting on line 14

I can't easily figure out how the current use of Optim.MultivariateOptimizationResults is wrong.

How to cite this package ?

Hello, is there an "official" publication or documentation manual that can be used to cite this package in applied research work?

Execution in parallel

Hi,

curve_fit is currently the bottleneck of a data analysis code I wrote, so I was wondering whether there are plans to parallelize it, or perhaps an estimate of how difficult that would be?

Thanks a lot for your wonderful package!

PS: is support for lazy structures (like iterators or lazy arrays) planned?

estimate_errors: Type of error

Hi,

The type of error measure returned might be confusing.
In applications like QtiPlot/SciDaVis the returned error is just the square root of the covariance diagonal.
Can we make the scaling with quantile(TDist(N), alpha) an optional keyword?

Thanks,
Andreas
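
A minimal sketch of the two conventions, assuming fit is the result of a curve_fit call:

using LsqFit, Distributions, LinearAlgebra

covar = estimate_covar(fit)          # parameter covariance matrix
se = sqrt.(diag(covar))              # plain square root of the diagonal (QtiPlot/SciDaVis convention)

alpha = 0.05                         # optional extra scaling by a t quantile
scaled = se .* quantile(TDist(fit.dof), 1 - alpha/2)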

Better docs?

Before moving into JuliaOpt, would it be possible to spruce up the README a bit? The example is helpful, but there should also be a formal listing of the inputs to curve_fit, for example. (I know this wasn't originally in Optim.)

add LinearAlgebra

This package uses some functions that have been moved to the LinearAlgebra standard library in Julia 0.7, e.g. At_mul_B!. Could you please add LinearAlgebra to your imports?
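
For reference, a sketch of the replacement on Julia 0.7+, where At_mul_B!(C, A, B) becomes mul! with an explicit transpose:

using LinearAlgebra

A = rand(4, 3); B = rand(4, 2); C = zeros(3, 2)
mul!(C, transpose(A), B)             # equivalent of the old At_mul_B!(C, A, B)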

Fitting 2D data with LsqFit.jl

Hi,

I was wondering whether LsqFit.jl can be used to fit 2D data.
For example, one might want to fit 2D data with the model (x, y) -> 1/(1 + a*x^2 + b*y) in order to find the constants a and b.
If possible, could you provide me with some examples of this?

Thanks in advance.
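
A sketch of what I have in mind, assuming curve_fit forwards xdata to the model unchanged, so the two independent variables can be packed into the columns of a matrix:

using LsqFit

model(xy, p) = 1 ./ (1 .+ p[1] .* xy[:, 1].^2 .+ p[2] .* xy[:, 2])

x = rand(100); y = rand(100)
xy = hcat(x, y)                       # one row per observation, columns are x and y
zdata = model(xy, [2.0, 3.0]) .+ 0.01 .* randn(100)
fit = curve_fit(model, xy, zdata, [1.0, 1.0])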

[PkgEval] LsqFit may have a testing issue on Julia 0.3 (2014-07-15)

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.

On Julia 0.3

  • On 2014-07-14 the testing status was Tests pass.
  • On 2014-07-15 the testing status changed to Package doesn't load.

Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.

Package doesn't load. means that PackageEvaluator did not find tests for your package. Additionally, trying to load your package with using failed.

This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.

Test log:

INFO: Installing ArrayViews v0.4.6
INFO: Installing Calculus v0.1.4
INFO: Installing Distributions v0.5.2
INFO: Installing DualNumbers v0.1.0
INFO: Installing LsqFit v0.0.1
INFO: Installing Optim v0.3.0
INFO: Installing Options v0.2.2
INFO: Installing PDMats v0.2.1
INFO: Installing StatsBase v0.5.3
INFO: Package database updated
ERROR: NumericExtensions not found
 in require at loading.jl:47
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:54
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:54
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:54
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:51
 in include at ./boot.jl:245
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:285
 in _start at ./client.jl:354
while loading /home/idunning/pkgtest/.julia/v0.3/PDMats/src/PDMats.jl, in expression starting on line 9
while loading /home/idunning/pkgtest/.julia/v0.3/Distributions/src/Distributions.jl, in expression starting on line 4
while loading /home/idunning/pkgtest/.julia/v0.3/Optim/src/Optim.jl, in expression starting on line 5
while loading /home/idunning/pkgtest/.julia/v0.3/LsqFit/src/LsqFit.jl, in expression starting on line 7
while loading /home/idunning/pkgtest/.julia/v0.3/LsqFit/testusing.jl, in expression starting on line 1
INFO: Package database updated

Heuristics for initial choice of `lambda`

Madsen, Nielsen, Tingleff (2004) suggest choosing an initial lambda of:

lambda_0 = tau * maximum(J'*J)

where tau is supplied by the user, but they recommend values of 10^-6, 10^-3, or 1.

We need a better set of test cases to see whether this improves convergence.
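
A minimal sketch of the heuristic, assuming J is the Jacobian at the initial guess (note the reference takes the maximum over the diagonal of J'J):

using LinearAlgebra

function initial_lambda(J::AbstractMatrix; tau = 1e-3)
    A = J' * J
    return tau * maximum(diag(A))     # MNT 2004: lambda_0 = tau * max_i a_ii
end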

Using LsqFit with several x variables?

Hello

I'm trying to find a Julia package faster than GLM.jl.

How would you use it to fit a simple linear regression with several variables?

N=1000
x1 = repeat(1:N, outer=N)
x2 = repeat(1:N, inner=N)
x3 = sqrt.(1:N^2)
x1x2 = x1 .* x2
gg = repeat(1:5,inner=trunc(Int,N^2/5))
y = 1 .- 2*x1 + 3*x2 + 0.5*x1x2 + rand(N^2) + x3.*rand(N^2)
data = DataFrame(x1=x1, x2=x2, x3=x3, x1x2=x1x2, y=y,gg=gg)
categorical!(data, :gg)

This is the way GLM does it:
fit(LinearModel, @formula(y ~ x1+x2+x3+x1x2), data)

Or with MixedModels I can even use random effects (repeated measures)
fit(LinearMixedModel, @formula(y ~x1+x2+x3+x1x2 + (1|gg)), data)

How would you do it with LsqFit?

Basically I don't know the syntax to specify the model.
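
For what it's worth, a sketch of how a linear model could be expressed with LsqFit, reusing the vectors defined above (a design matrix with an intercept column, so the "model" is just X*p):

using LsqFit

X = hcat(ones(length(y)), x1, x2, x3, x1x2)
linmodel(X, p) = X * p
fit = curve_fit(linmodel, X, y, zeros(size(X, 2)))

That said, for a purely linear model GLM's closed-form solve should normally be faster than iterating Levenberg-Marquardt.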

BoundsError using curve_fit with Julia 0.5

When running the example code given in the README:

using LsqFit
model(x, p) = p[1]*exp(-x.*p[2])
xdata = linspace(0,10,20)
ydata = model(xdata, [1.0 2.0]) + 0.01*randn(length(xdata))
fit = curve_fit(model, xdata, ydata, [0.5, 0.5])

I get the following error:

ERROR: BoundsError
 in getindex at ./number.jl:21 [inlined]
 in model(::LinSpace{Float64}, ::Float64) at ./REPL[98]:1
 in f at /Users/julian/.julia/v0.5/LsqFit/src/curve_fit.jl:38 [inlined]
 in #lmfit#1(::Array{Any,1}, ::Function, ::LsqFit.#f#3{#model,LinSpace{Float64},Array{Float64,1}}, ::Array{Float64,1}) at /Users/julian/.julia/v0.5/LsqFit/src/curve_fit.jl:31
 in #curve_fit#2(::Array{Any,1}, ::Function, ::#model, ::LinSpace{Float64}, ::Array{Float64,1}, ::Array{Float64,1}) at /Users/julian/.julia/v0.5/LsqFit/src/curve_fit.jl:39
 in curve_fit(::Function, ::LinSpace{Float64}, ::Array{Float64,1}, ::Array{Float64,1}) at /Users/julian/.julia/v0.5/LsqFit/src/curve_fit.jl:38

Here's my versioninfo():

Julia Version 0.5.0
Commit 3c9d753 (2016-09-19 18:14 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin13.4.0)
  CPU: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.7.1 (ORCJIT, ivybridge)

weights matrix should be outside the square

Hello, in the current implementation it is not possible to use a weight matrix with negative entries, which can arise once you use the inverse of a covariance matrix as weights.

The weight matrix W should be multiplied after the square:

χ² = W * (f(x) - y)^2

instead of

χ² = (W * (f(x) - y))^2
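
A sketch of the proposed definition with hypothetical names (r the residual vector, w a weight vector, W an inverse covariance matrix):

r = model(x, p) .- y                  # residuals
chisq_vec = sum(w .* r.^2)            # vector weights applied after squaring
chisq_mat = r' * W * r                # general quadratic form with W = inv(Σ)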

Add warning for #65 changes

Before tagging a new version, the deprecation of the old meaning of functions changed in #65 should be handled.

I'm not sure we can handle it perfectly, but it should definitely be mentioned somewhere.

DomainError() with exponential model

Hi, I am trying the following:

using LsqFit
model(x, p) = p[1] +  p[2] .* (x + p[3]).^(4/3)
fit_d_1 = curve_fit(model, time, pl_norm, [2.0, 1.0, 0.1])

where "model" and pl_"norm" are float vectors. The second line gives an error related to the exponential:

DomainError()
in .^ at arraymath.jl:57
in model_comparison at filename...
in f at curve_fit.jl:39
in levenberg_marquardt at levenberg_marquardt.jl:124
in lmfit at curve_fit.jl:30
in curve_fit at curve_fit.jl:40

I am aware of https://github.com/JuliaLang/julia/issues/4475 and https://github.com/JuliaLang/julia/issues/3024, as I already tried the alternatives without success.

Is there any problem specifically related to curve_fit, or is this a Julia issue? I am using Julia v0.4.7 on Debian, which may also be part of the problem.

Any comment would be much appreciated.
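
One workaround sketch, assuming the DomainError comes from raising a negative base to the fractional power 4/3: cbrt is defined for negative reals, so x^(4/3) can be rewritten as cbrt(x)^4:

model(x, p) = p[1] .+ p[2] .* cbrt.(x .+ p[3]).^4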

ERROR: LoadError: UndefVarError: chol not defined

Hello, when trying to use LsqFit on Julia 1.0 with a weights matrix I get the following error:

ERROR: LoadError: UndefVarError: chol not defined

Is there some missing using Package entry?

Also, could you please clarify the situation with the weights being standard deviations in the vector case but the inverse of the covariance matrix in the matrix case? I cannot see any improvement in the several months since issue #69 was posted. Thanks.
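
For the chol error itself, a sketch of the Julia 1.0 replacement (chol moved to LinearAlgebra and was renamed):

using LinearAlgebra

W = [2.0 0.5; 0.5 1.0]
U = cholesky(W).U                     # upper factor, roughly what chol(W) used to return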

Setup attobot

I would love to be able to manage releases with attobot.

@mlubin (or someone else with appropriate permissions) could you add LsqFit.jl to attobot for me?

curve_fit not working anymore

I have used the curve_fit function for a while now, but after the last Pkg.update() my code crashes.
I then tried the basic example from the docs, which also fails with the following error message:

ERROR: DimensionMismatch("first dimension of A, 20, does not match length of y, 2")
 in gemv! at linalg/matmul.jl:225
 in A_mul_B! at linalg/matmul.jl:91
 in levenberg_marquardt at /home/abieler/.julia/v0.4/Optim/src/levenberg_marquardt.jl:88
 in lmfit at /home/abieler/.julia/v0.4/LsqFit/src/curve_fit.jl:29
 in curve_fit at /home/abieler/.julia/v0.4/LsqFit/src/curve_fit.jl:39

Cheers,
Andre

MethodError: no method matching Optim.MultivariateOptimizationResults

I just updated my installed packages. LsqFit was one of them. Now it doesn't work for me...

I am getting an absurdly long method mismatch error:

Got an exception of type MethodError outside of a @test
  MethodError: no method matching Optim.MultivariateOptimizationResults(::String, ::Array{Float64,1}, ::Array{Float64,1}, ::Float64, ::Int64, ::Bool, ::Bool, ::Float64, ::Bool, ::Float64, ::Bool, ::Float64, ::Bool, ::Array{Optim.OptimizationState{LsqFit.LevenbergMarquardt},1}, ::Int64, ::Int64, ::Int64)
  Closest candidates are:
    Optim.MultivariateOptimizationResults(!Matched::O<:Optim.Optimizer, !Matched::Bool, ::Array{T,N}, !Matched::Array{T,N}, !Matched::T, !Matched::Int64, ::Bool, !Matched::Bool, !Matched::T, ::T, ::Bool, ::T, !Matched::T, !Matched::Bool, !Matched::T, !Matched::T, !Matched::Bool, !Matched::Array{Optim.OptimizationState{M},1}, !Matched::Int64, !Matched::Int64, !Matched::Int64) where {O<:Optim.Optimizer, T, N, M} at C:\Users\datseris\.julia\v0.6\Optim\src\types.jl:65

I am using only curve_fit in an almost trivial manner:

function exponential_decay(c::AbstractVector)
    # Do curve fit from LsqFit
    model(x, p) = @. exp(-x/p[1])
    decay = curve_fit(model, 0:length(c)-1, abs.(c), [1.0]).param[1]
    return decay
end

function linear_region(x::AbstractVector, y::AbstractVector;
    dxi::Int = 1, tol::Real = 0.2)

    # Find biggest linear region:
    reg_ind = max_linear_region(linear_regions(x,y; dxi=dxi, tol=tol)...)
    # Prepare least squares fit:
    xfit = view(x, reg_ind[1]:reg_ind[2])
    yfit = view(y, reg_ind[1]:reg_ind[2])
    p0 = [1.0, 1.0]
    model(x, p) = p[1].*x .+ p[2]
    # Find fit of tangent:
    fit = curve_fit(model, xfit, yfit, p0)
    approx_tang = fit.param[1]
    return reg_ind, approx_tang
end

function slope(xfit, yfit)
    p0 = [(yfit[end] - yfit[1])/(xfit[end] - xfit[1]), yfit[1]]
    model(x, p) = p[1].*x .+ p[2]
    # Find fit of tangent:
    curve_fit(model, xfit, yfit, p0).param[1]
end

So I don't know why this error happened. I do not know the inner workings of the package and thus I cannot help more...

Standard errors too small when performing weighted fit

Hello,

I want to fit some data with this model :
@. model(z, p) = p[1] / cos(deg2rad(z)) + p[2].

When z = collect(0:5:50) and p = [A, B] (where A = 1.408363211310102e-6 and
B = 1.4034083334666746e-5) I get my theoretical curve.

The data I want to fit are y. I know also the standard deviation of y, let's call it delta_y.

When I fit my data with:
model_fit = LsqFit.curve_fit(model, z, y, p0) I obtain a reasonable standard error.
When instead I weight my fit with the inverse of the variance:
model_fit = LsqFit.curve_fit(model, z, y, 1 ./delta_y.^2, p0) I get an insane standard error that is about 13 orders of magnitude smaller than the previous case.

How is it possible?

I have:
y = [1.54378e-5, 1.54341e-5, 1.55381e-5, 1.54877e-5, 1.55381e-5, 1.56225e-5, 1.56568e-5, 1.56822e-5, 1.58282e-5, 1.60016e-5, 1.62545e-5] ;

delta_y = [1.0376e-7, 1.05538e-7, 1.82407e-7, 2.07246e-7, 1.65643e-7, 1.76996e-7, 1.75132e-7, 2.05185e-7, 1.28095e-7, 1.07386e-7, 1.9007e-7] ;

p0 = [0., 0.] ;

to compute the standard error I do:
LsqFit.standard_error(model_fit).

Thanks

edit from @pkofod

using LsqFit
@. model(z, p) = p[1] / cos(deg2rad(z)) + p[2]
z = collect(0:5:50)
A = 1.408363211310102e-6
B = 1.4034083334666746e-5
p = [A, B]

y = [1.54378e-5, 1.54341e-5, 1.55381e-5, 1.54877e-5, 1.55381e-5, 1.56225e-5, 1.56568e-5, 1.56822e-5, 1.58282e-5, 1.60016e-5, 1.62545e-5]
delta_y = [1.0376e-7, 1.05538e-7, 1.82407e-7, 2.07246e-7, 1.65643e-7, 1.76996e-7, 1.75132e-7, 2.05185e-7, 1.28095e-7, 1.07386e-7, 1.9007e-7] 
p0 = [0., 0.]

model_fit = LsqFit.curve_fit(model, z, y, p0)
model_fit_weighted = LsqFit.curve_fit(model, z, y, 1 ./delta_y.^2, p0) 

LsqFit.stderror(model_fit)
LsqFit.stderror(model_fit_weighted)

Covariance different from scipy `curve_fit` with the same data

gist

I'm trying to fit an exponential decay of an autocorrelation function with LsqFit.jl. The optimized value I obtain is correct and is the same as the one I get with scipy. But the reported covariance and associated standard error are unreasonably large with LsqFit.jl: scipy reports a value of ~1e-5 while LsqFit reports ~1. I tend to believe scipy more, because visually the data fit a single exponential decay with a very small error. See the gist for the code.

I'm using the latest release version of LsqFit (4ecb0ec). I'm not sure whether this is fixed on the current master branch, mostly because I'm new to Julia and do not know how best to test against a checkout of master.

too many function calls?

I am starting with Julia, so this might be something obvious I am missing. I rewrote a code I have from Python to Julia, and it involves an LM fit. The results are the same, but all things being equal, scipy.optimize.leastsq calls the fitted function 16 times, whereas curve_fit calls it >250 times (!). I need to reduce tolX and tolG a lot to converge faster, but then the result is no longer accurate. The code is rather complex so I cannot easily post it; I was just wondering whether there is a known way to minimize the number of function calls.

Edit: I find the same behavior for a simple 2nd-order polynomial fit: curve_fit makes 74 calls, whereas scipy.optimize.leastsq makes 9 calls, with the same accuracy:
testfit_jl_py.txt
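
One thing that should cut down the finite-difference calls: supplying an analytic Jacobian. A minimal sketch for a quadratic model, assuming xdata and ydata are defined:

using LsqFit

model(x, p) = p[1] .+ p[2] .* x .+ p[3] .* x.^2
jac(x, p) = hcat(ones(length(x)), x, x.^2)   # ∂model/∂p, one column per parameter
fit = curve_fit(model, jac, xdata, ydata, [1.0, 1.0, 1.0])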

boundserror after Pkg.update() and Pkg.test("LsqFit")

ERROR: LoadError: LoadError: BoundsError
in getindex at ./number.jl:21 [inlined]
in model(::LinSpace{Float64}, ::Float64) at /Users/raukhur/.julia/v0.5/LsqFit/test/curve_fit.jl:2
in f at /Users/raukhur/.julia/v0.5/LsqFit/src/curve_fit.jl:38 [inlined]
in #lmfit#1(::Array{Any,1}, ::Function, ::LsqFit.#f#3{#model,LinSpace{Float64},Array{Float64,1}}, ::Array{Float64,1}) at /Users/raukhur/.julia/v0.5/LsqFit/src/curve_fit.jl:31
in #curve_fit#2(::Array{Any,1}, ::Function, ::#model, ::LinSpace{Float64}, ::Array{Float64,1}, ::Array{Float64,1}) at /Users/raukhur/.julia/v0.5/LsqFit/src/curve_fit.jl:39
in curve_fit(::Function, ::LinSpace{Float64}, ::Array{Float64,1}, ::Array{Float64,1}) at /Users/raukhur/.julia/v0.5/LsqFit/src/curve_fit.jl:38
in include_from_node1(::String) at ./loading.jl:488
in include_from_node1(::String) at /Applications/Julia-0.5.app/Contents/Resources/julia/lib/julia/sys.dylib:?
in macro expansion; at /Users/raukhur/.julia/v0.5/LsqFit/test/runtests.jl:13 [inlined]
in anonymous at ./:?
in include_from_node1(::String) at ./loading.jl:488
in include_from_node1(::String) at /Applications/Julia-0.5.app/Contents/Resources/julia/lib/julia/sys.dylib:?
in process_options(::Base.JLOptions) at ./client.jl:262
in _start() at ./client.jl:318
in _start() at /Applications/Julia-0.5.app/Contents/Resources/julia/lib/julia/sys.dylib:?
while loading /Users/raukhur/.julia/v0.5/LsqFit/test/curve_fit.jl, in expression starting on line 9
while loading /Users/raukhur/.julia/v0.5/LsqFit/test/runtests.jl, in expression starting on line 11

Maybe something to do with the new Optim.jl version?

Testing against SciPy to assess relative performance

Shameless repost of JuliaNLSolvers/Optim.jl#298

@ChrisRackauckas wrote:
Are there any tests against the SciPy LM implementation to see if this implementation is as robust as possible (assuming the SciPy implementation is good)? I can't seem to find out how to properly use PyCall to use the SciPy curve_fit function, but I think it would be a necessary test since the current LM implementation is lacking a lot of robustness (and while that's a standard feature of the algorithm, it just seems even pickier than normal)
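
A rough sketch of such a side-by-side check via PyCall, assuming SciPy is installed in the linked Python:

using PyCall, LsqFit

so = pyimport("scipy.optimize")
model(x, p) = p[1] .* exp.(-p[2] .* x)

xdata = collect(range(0, stop=10, length=50))
ydata = model(xdata, [1.0, 0.5]) .+ 0.01 .* randn(50)

fit_jl = curve_fit(model, xdata, ydata, [0.8, 0.8])
popt, pcov = so.curve_fit((x, a, b) -> a .* exp.(-b .* x), xdata, ydata, p0=[0.8, 0.8])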

`MultivariateOptimizationResults` depwarn

I tried to fix this by adding a false to MultivariateOptimizationResults but it still depwarns

IanButterworth@b3391a1

WARNING: MultivariateOptimizationResults(method,initial_x,minimizer,minimum,iterations,iteration_converged,x_converged,x_tol,f_converged,f_tol,g_converged,g_tol,trace,f_calls,g_calls,h_calls) is deprecated, use MultivariateOptimizationResults(method,initial_x,minimizer,minimum,iterations,iteration_converged,x_converged,x_tol,f_converged,f_tol,g_converged,g_tol,false,trace,f_calls,g_calls,h_calls) instead.

warning | MultivariateOptimizationResults

I'm getting the following warning each time curve_fit is executed.

I'd appreciate any thoughts on it. Should I instead be using Optim.jl or GLM.jl for regression on Julia 0.5?

Thanks,

WARNING: MultivariateOptimizationResults(method,initial_x,minimizer,minimum,iterations,iteration_converged,x_converged,x_tol,f_converged,f_tol,g_converged,g_tol,trace,f_calls,g_calls) is deprecated, use MultivariateOptimizationResults(method,initial_x,minimizer,minimum,iterations,iteration_converged,x_converged,x_tol,f_converged,f_tol,g_converged,g_tol,trace,f_calls,g_calls,0) instead.
 in depwarn(::String, ::Symbol) at ./deprecated.jl:64
 in Optim.MultivariateOptimizationResults{T,N,M}(::String, ::Array{Float64,1}, ::Array{Float64,1}, ::Float64, ::Int64, ::Bool, ::Bool, ::Float64, ::Bool, ::Float64, ::Bool, ::Float64, ::Array{Optim.OptimizationState{LsqFit.LevenbergMarquardt},1}, ::Int64, ::Int64) at ./deprecated.jl:50
 in #levenberg_marquardt#1(::Float64, ::Float64, ::Int64, ::Float64, ::Bool, ::Array{Float64,1}, ::Array{Float64,1}, ::Function, ::LsqFit.#f#4{#model_expon,Array{Float64,1},Array{Float64,1}}, ::Calculus.#g#5{LsqFit.#f#4{#model_expon,Array{Float64,1},Array{Float64,1}},Symbol}, ::Array{Float64,1}) at /Users/brumc/.julia/v0.5/LsqFit/src/levenberg_marquardt.jl:169
 in (::LsqFit.#kw##levenberg_marquardt)(::Array{Any,1}, ::LsqFit.#levenberg_marquardt, ::Function, ::Function, ::Array{Float64,1}) at ./<missing>:0
 in #lmfit#2(::Array{Any,1}, ::Function, ::LsqFit.#f#4{#model_expon,Array{Float64,1},Array{Float64,1}}, ::Array{Float64,1}) at /Users/brumc/.julia/v0.5/LsqFit/src/curve_fit.jl:29
 in (::LsqFit.#kw##lmfit)(::Array{Any,1}, ::LsqFit.#lmfit, ::Function, ::Array{Float64,1}) at ./<missing>:0
 in #curve_fit#3(::Array{Any,1}, ::Function, ::#model_expon, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}) at /Users/brumc/.julia/v0.5/LsqFit/src/curve_fit.jl:43
 in (::LsqFit.#kw##curve_fit)(::Array{Any,1}, ::LsqFit.#curve_fit, ::Function, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}) at ./<missing>:0
 in include_string(::String, ::String) at ./loading.jl:441
 in include_string(::String, ::String, ::Int64) at /Users/brumc/.julia/v0.5/CodeTools/src/eval.jl:28
 in include_string(::Module, ::String, ::String, ::Int64, ::Vararg{Int64,N}) at /Users/brumc/.julia/v0.5/CodeTools/src/eval.jl:32
 in (::Atom.##55#58{String,Int64,String})() at /Users/brumc/.julia/v0.5/Atom/src/eval.jl:40
 in withpath(::Atom.##55#58{String,Int64,String}, ::String) at /Users/brumc/.julia/v0.5/CodeTools/src/utils.jl:30
 in withpath(::Function, ::String) at /Users/brumc/.julia/v0.5/Atom/src/eval.jl:46
 in macro expansion at /Users/brumc/.julia/v0.5/Atom/src/eval.jl:57 [inlined]
 in (::Atom.##54#57{String,Int64,String})() at ./task.jl:60
while loading /Users/brumc/Documents/code/soli/unit_test/dt_tunning_cfar.jl, in expression starting on line 153

Incorporate ForwardDiff.jl

Rather than use finite difference or input closed form for the Jacobian, use AD.

Note: Leaving this here as a reminder from a brief conversation with @ChrisRackauckas on Slack. I hope to take a stab at it if no one beats me to it.
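
A minimal sketch of the idea, building the Jacobian with ForwardDiff and passing it through the existing user-Jacobian method (newer versions may also accept an autodiff=:forwarddiff keyword, if I recall correctly); xdata, ydata and p0 assumed defined:

using LsqFit, ForwardDiff

model(x, p) = p[1] .* exp.(-p[2] .* x)
jac(x, p) = ForwardDiff.jacobian(q -> model(x, q), p)   # AD Jacobian w.r.t. the parameters
fit = curve_fit(model, jac, xdata, ydata, p0)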

UndefVarError: Optimizer not defined

Getting the following error on load in Julia 0.6.2

LoadError: LoadError: UndefVarError: Optimizer not defined
while loading C:\Users\Ian\.julia\v0.6\LsqFit\src\levenberg_marquardt.jl, in expression starting on line 1
while loading C:\Users\Ian\.julia\v0.6\LsqFit\src\LsqFit.jl, in expression starting on line 12

Stacktrace:
 [1] include_from_node1(::String) at .\loading.jl:576
 [2] include(::String) at .\sysimg.jl:14
 [3] include_from_node1(::String) at .\loading.jl:576
 [4] eval(::Module, ::Any) at .\boot.jl:235
 [5] _require(::Symbol) at .\loading.jl:490
 [6] require(::Symbol) at .\loading.jl:405
 [7] include_string(::String, ::String) at .\loading.jl:522

Welcome to JuliaNLSolvers

Just wanted to leave a note here. As part of the JuliaOpt restructuring, LsqFit has been moved out together with Optim and related packages. I hope you are fine with this @blakejohnson. If you want it to live somewhere else, I'm fine with that, but I think it makes sense for the package to stay near Optim.

Multivariate Curve Fitting Error

I am trying to fit a curve in multiple dimensions, but I get a MethodError. I just installed LsqFit yesterday. I am on Julia 0.5.

using LsqFit
hinge(X, param) = max(param[1], X*param[2:end])
n, p = 60, 3
x = rand(n, p)-.5
par = rand(p+1)-.5
truth  = hinge(x, par)
y = truth + randn(n, 1) * .1
fit = curve_fit(hinge, x, y, zeros(p+1))

gives

LoadError: MethodError: no method matching finite_difference_jacobian!(::LsqFit.#f#3{#hinge,Array{Float64,2},Array{Float64,2}}, ::Array{Float64,1}, ::Array{Float64,2}, ::Array{Float64,2}, ::Symbol)
Closest candidates are:
  finite_difference_jacobian!{R<:Number,S<:Number,T<:Number}(::Function, ::Array{R<:Number,1}, !Matched::Array{S<:Number,1}, ::Array{T<:Number,N}, ::Symbol) at /home/raka/.julia/v0.5/Calculus/src/finite_difference.jl:165
  finite_difference_jacobian!{R<:Number,S<:Number,T<:Number}(::Function, ::Array{R<:Number,1}, !Matched::Array{S<:Number,1}, ::Array{T<:Number,N}) at /home/raka/.julia/v0.5/Calculus/src/finite_difference.jl:165
while loading In[1], in expression starting on line 8

 in finite_difference_jacobian(::LsqFit.#f#3{#hinge,Array{Float64,2},Array{Float64,2}}, ::Array{Float64,1}, ::Symbol) at /home/raka/.julia/v0.5/Calculus/src/finite_difference.jl:204
 in #levenberg_marquardt#65(::Float64, ::Float64, ::Int64, ::Float64, ::Bool, ::Array{Float64,1}, ::Array{Float64,1}, ::Function, ::LsqFit.#f#3{#hinge,Array{Float64,2},Array{Float64,2}}, ::Calculus.#g#5{LsqFit.#f#3{#hinge,Array{Float64,2},Array{Float64,2}},Symbol}, ::Array{Float64,1}) at /home/raka/.julia/v0.5/Optim/src/levenberg_marquardt.jl:73
 in #lmfit#1(::Array{Any,1}, ::Function, ::LsqFit.#f#3{#hinge,Array{Float64,2},Array{Float64,2}}, ::Array{Float64,1}) at /home/raka/.julia/v0.5/LsqFit/src/curve_fit.jl:29
 in #curve_fit#2(::Array{Any,1}, ::Function, ::#hinge, ::Array{Float64,2}, ::Array{Float64,2}, ::Array{Float64,1}) at /home/raka/.julia/v0.5/LsqFit/src/curve_fit.jl:39
 in curve_fit(::Function, ::Array{Float64,2}, ::Array{Float64,2}, ::Array{Float64,1}) at /home/raka/.julia/v0.5/LsqFit/src/curve_fit.jl:38

How to get fit coefficients, i.e. parameters?

Well the README file claims:

fit = curve_fit(model, xdata, ydata, p0)
# fit is a composite type (LsqFitResult), with some interesting values:
#	dof(fit): degrees of freedom
#	coef(fit): best fit parameters

But that is not true.

In my code I do:

params = []
λs = []
for (B, tc, inc_c, jh) in zip(BS, TCS, INCS, JHS)
    ps = [mean(jh), mean(tc)]
    
    fit = curve_fit(model, tc, inc_c, [1, 1, 1.0])
    append!(ps, LsqFit.coef(fit))
    push!(params, ps)
    push!(λs, toymodel(ps))    
end

but this gives UndefVarError: coef not defined. Doing fit.coef instead also doesn't work; the field doesn't exist.

I have the latest version: [2fda8390] LsqFit v0.6.0
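
A sketch of the accessor situation as I understand it (fit.param is the underlying field and works on all recent versions; coef is a StatsBase-style accessor that only exists on versions that define and export it):

fit = curve_fit(model, tc, inc_c, [1.0, 1.0, 1.0])
fit.param      # raw parameter vector
# coef(fit)    # only on versions that export the accessor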

Precompile fails: ERROR: LoadError: LoadError: UndefVarError: Optimizer not defined

Hello, I cannot precompile the latest LsqFit version on Julia 0.6.2 (official release).

Here is the error:

julia> using LsqFit
ERROR: LoadError: LoadError: UndefVarError: Optimizer not defined
Stacktrace:
 [1] include_from_node1(::String) at ./loading.jl:576
 [2] include(::String) at ./sysimg.jl:14
 [3] include_from_node1(::String) at ./loading.jl:576
 [4] eval(::Module, ::Any) at ./boot.jl:235
 [5] _require(::Symbol) at ./loading.jl:490
 [6] require(::Symbol) at ./loading.jl:405
while loading /home/gluon/.julia/v0.6/LsqFit/src/levenberg_marquardt.jl, in expression starting on line 1
while loading /home/gluon/.julia/v0.6/LsqFit/src/LsqFit.jl, in expression starting on line 12
julia> versioninfo()
Julia Version 0.6.2
Commit d386e40c17 (2017-12-13 18:08 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Pentium(R) CPU  N3530  @ 2.16GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Atom)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.9.1 (ORCJIT, silvermont)
julia> Pkg.status()
28 required packages:
 - AmplNLWriter                  0.4.0
 - Atom                          0.6.6
 - BuildExecutable               0.1.2
 - CSV                           0.2.1
 - Clp                           0.3.1                                                                                                                                                                            
 - Colors                        0.8.2                                                                                                                                                                            
 - Combinatorics                 0.5.0                                                                                                                                                                            
 - DataArrays                    0.7.0                                                                                                                                                                            
 - DataFrames                    0.11.5                                                                                                                                                                           
 - DecFP                         0.4.2                                                                                                                                                                            
 - Formatting                    0.3.0              master                                                                                                                                                        
 - GSL                           0.3.6                                                                                                                                                                            
 - Interpolations                0.7.3                                                                                                                                                                            
 - Ipopt                         0.2.6                                                                                                                                                                            
 - IterTools                     0.2.1                                                                                                                                                                            
 - JLD                           0.8.3                                                                                                                                                                            
 - JuMP                          0.18.0                                                                                                                                                                           
 - LaTeXStrings                  0.3.0
 - Lint                          0.5.0+             master
 - LsqFit                        0.2.0
 - NLopt                         0.3.6
 - NLsolve                       0.14.1
 - Optim                         0.12.0
 - PGFPlots                      2.2.2
 - Plots                         0.15.0
 - PyPlot                        2.3.2
 - RDatasets                     0.3.0
 - SpecialFunctions              0.3.8
116 additional packages:
 - ASTInterpreter2               0.1.1
 - AbstractFFTs                  0.2.1
 - AutoHashEquals                0.2.0
 - AxisAlgorithms                0.2.0
 - AxisArrays                    0.2.0
 - BinDeps                       0.8.6
 - Blink                         0.6.0
 - Blosc                         0.3.0
 - Calculus                      0.2.2
 - CatIndices                    0.1.0
 - CategoricalArrays             0.3.3
 - Cbc                           0.3.2
 - CodeTools                     0.4.7
 - CodecZlib                     0.4.2
 - Codecs                        0.4.0
 - ColorBrewer                   0.3.1
 - ColorTypes                    0.6.6
 - ColorVectorSpace              0.5.2
 - CommonSubexpressions          0.0.1
 - Compat                        0.49.0
 - ComputationalResources        0.2.0
 - Conda                         0.7.1
 - Contour                       0.4.0
 - CoordinateTransformations     0.4.1
 - CustomUnitRanges              0.1.0
 - DataStreams                   0.3.4
 - DataStructures                0.7.4
 - DebuggerFramework             0.1.2
 - DiffBase                      0.3.2
 - DiffEqDiffTools               0.3.0
 - DiffResults                   0.0.3
 - DiffRules                     0.0.3
 - Discretizers                  2.1.0
 - Distances                     0.5.0
 - Distributions                 0.15.0
 - DocSeeker                     0.1.0
 - DualNumbers                   0.3.0
 - FFTViews                      0.1.0
 - FFTW                          0.0.4
 - FileIO                        0.6.1
 - FixedPointNumbers             0.4.3
 - ForwardDiff                   0.7.2
 - Graphics                      0.2.0
 - HDF5                          0.8.8+             master
 - Hiccup                        0.1.1
 - HttpCommon                    0.3.0
 - HttpParser                    0.3.0
 - HttpServer                    0.2.0
 - IdentityRanges                0.1.0
 - ImageAxes                     0.4.0
 - ImageCore                     0.5.0
 - ImageFiltering                0.2.3
 - ImageMagick                   0.5.0
 - ImageMetadata                 0.4.0
 - ImageMorphology               0.0.2
 - ImageTransformations          0.4.1
 - Images                        0.12.0
 - IndirectArrays                0.3.0
 - IntervalSets                  0.1.1
 - JSON                          0.16.4
 - Juno                          0.3.2
 - LNR                           0.0.2
 - Lazy                          0.12.0
 - LegacyStrings                 0.3.0
 - LineSearches                  3.2.5
 - MacroTools                    0.4.0
 - MappedArrays                  0.0.7
 - MathProgBase                  0.7.0
 - MbedTLS                       0.5.5
 - Measures                      0.1.0
 - Media                         0.3.0
 - Missings                      0.2.5
 - Mustache                      0.3.0
 - Mux                           0.2.3
 - NLSolversBase                 4.2.0
 - NaNMath                       0.3.0
 - NamedTuples                   4.0.0
 - Nullables                     0.0.3
 - OffsetArrays                  0.4.2
 - PDMats                        0.8.0
 - PaddedViews                   0.2.0
 - Parameters                    0.8.1
 - PlotThemes                    0.2.0
 - PlotUtils                     0.4.4
 - Polynomials                   0.2.1
 - PositiveFactorizations        0.1.0
 - PyCall                        1.15.0
 - QuadGK                        0.2.0
 - RData                         0.3.0
 - RangeArrays                   0.2.0
 - Ratios                        0.2.0
 - RecipesBase                   0.2.3
 - Reexport                      0.1.0
 - Requires                      0.4.3
 - ReverseDiffSparse             0.8.1
 - Rmath                         0.3.2
 - Rotations                     0.6.1
 - SHA                           0.5.6
 - SIUnits                       0.1.0
 - ShowItLikeYouBuildIt          0.2.0
 - Showoff                       0.1.1
 - SimpleTraits                  0.5.1
 - SortingAlgorithms             0.2.0
 - StaticArrays                  0.6.6
 - StatsBase                     0.19.5
 - StatsFuns                     0.5.0
 - StringDistances               0.2.0
 - TexExtensions                 0.1.0
 - TikzPictures                  1.2.0
 - TiledIteration                0.1.0
 - Tokenize                      0.4.2
 - TranscodingStreams            0.4.1
 - URIParser                     0.3.0
 - WeakRefStrings                0.4.1
 - WebSockets                    0.4.0
 - WoodburyMatrices              0.2.2

Can someone help me with this issue?

Merging OptimBase

I'm making a lot of changes to LsqFit.jl and OptimBase.jl and having some doubts about the usage of OptimBase.jl. Is it wise to depend on OptimBase.jl while both LsqFit.jl and OptimBase.jl are under active development? It might be easier to keep an "updated copy" of OptimBase.jl in the source code, which I would later send as a PR to OptimBase.jl. Otherwise, I need to send a PR to OptimBase.jl first and then update LsqFit.jl once the PR is merged.

improve tests

The test is a minimal single example. Weighted versions of curve_fit() are not tested, but should be.

curve_fit

If p0 is entered with Int64 instead of Float... you get the error:

no method matching eps(::Type{Int64})
Closest candidates are:...

The problem is hard to recognize if you are not that familiar with Julia.
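
A minimal sketch of the workaround, assuming the usual model/xdata/ydata setup:

p0 = [1, 2]                                        # Int parameters hit eps(::Type{Int64})
fit = curve_fit(model, xdata, ydata, float.(p0))   # promote to Float64 first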

standard_errors

After performing:
fit=curve_fit(...)
and executing:
sigma = standard_errors(fit)
as in the examples, I get the error message:
UndefVarError: standard_errors not defined
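
The accessor has been renamed across releases; a sketch of the candidates to try, depending on the installed version:

sigma = LsqFit.stderror(fit)          # current name
# LsqFit.standard_error(fit)          # name in some intermediate releases
# LsqFit.estimate_errors(fit)         # old name used in earlier examples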

Result comparison with CMPFit

I made some comparisons with the CMPFit package, a wrapper for a C library implementing the MINPACK minimization algorithm. The results of the CMPFit library are particularly important, at least in my field, since it is widely used and has become almost a standard. Of course I would prefer to use a pure Julia solution (like LsqFit.jl), hence I decided to run these tests. However, a few issues arose...

The whole code to reproduce the issues is in a single function named compare, reported at the end of this message. The purpose of the function is to compare the LsqFit and CMPFit results. The tasks performed are:

  1. it implements a model to be fitted, a simple Gaussian function plus an offset (4 parameters);
  2. it generates mock data of the model, by adding random noise;
  3. it fit the data against the model, using both LsqFit and CMPFit;
  4. it repeats steps 2 and 3 several times (keyword nloop, default: 10,000). The seed of the random number generator is fixed before the first iteration, hence the whole sequence is always the same;
  5. for the first iteration it prints the following informations:
  • the number of times the model function has been evaluated;
  • the reduced chi-squared;
  • the best fit parameter values and uncertainties;
  6. for all the iterations and for all the parameters the quantity N = (best_fit_value - true_value) / fit_uncertainty is stored;
  7. after nloop iterations it prints, for all parameters, the average of N (supposed to be ~0) and its standard deviation (supposed to be ~1), regardless of the initial parameter values.

I made two tests, changing the way uncertainties in the data are taken into account.

The results of the first test (Case A) are:

julia> compare()
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:        102     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512
 LsqFit |    Uncert.:  0.000252   0.000293   0.000443   0.000725

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.475844   2.170798  -0.100276  -2.239008   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N): 99.762665  98.581601 100.674642 100.450846   (should be ~1)

Comments:

  • The location of the minimum in parameter space is the same for both packages, since the chi-squared value and the parameter values are the same;
  • (Issue 1) the number of model evaluations for LsqFit is ~4 times higher than for CMPFit. This problem has already been noticed here; the code attached here provides a minimal working example to verify it;
  • (Issue 2) the parameter uncertainties do not match! Specifically, they are 100 times smaller than expected, and this is linked to the uncertainties in the input data (which are 0.1 for all samples). This problem has also already been noticed (#103); here I only provide the code to test it;
  • The mean(N) and std(N) values for CMPFit are ~0 and ~1 respectively, as expected, while those of LsqFit are definitely wrong, likely because of Issue 2;

In order to solve Issue 2 I tried to incorporate the data uncertainties into the model: instead of fitting the model against the data with given weights, I fit the normalized residuals = (data - model) / uncertainties against an array of zeros, each with weight 1 (Case B).

To run the Case B test simply add the resid=true keyword:

julia> compare(resid=true)
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:        102     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 
 LsqFit |    Uncert.:  0.026092   0.030382   0.045844   0.075090 

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.004726   0.022502  -0.002308  -0.022491   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N):  1.006822   0.995180   1.016948   1.015014   (should be ~1)

Comments:

  • Again, the location of the minimum in parameter space is the same for both packages;
  • The CMPFit results are exactly the same as Case A, as expected since the problem has just been reformulated but is actually the same as before;
  • The LsqFit parameter uncertainties are now of the correct order of magnitude, but they differ both from the CMPFit ones and from the scaled values of Case A (remember: all data have the same uncertainty!). Specifically, they are on average ~1% larger than the CMPFit ones.
  • (Issue 3) Why do the LsqFit uncertainties differ between Case A and Case B? Why is their ratio not a fixed number (100 in this case)?
  • (Issue 4) What are the correct uncertainties to associate to the parameter best fit values? The ones reported by CMPFit or the slightly larger ones reported by LsqFit?

In summary: four issues arose in the comparison. Issue 1 is surely solvable (CMPFit does it...) and fixing it would give a significant performance boost. Issues 2, 3 and 4 are possibly different aspects of the same problem, which may even come down to a misunderstanding of the term weights passed as the 3rd argument to curve_fit. However, the fact that the results change between Case A and Case B (Issue 3) seems odd to me.

To repeat the above tests simply install the CMPFit package and run the following code:

using CMPFit, LsqFit, Random, Printf, Statistics

function compare(;nloop=10000, resid=false)
    # Variable to count function evaluations
    count = 0
    ndata = 100
    x = Vector{Float64}(undef, ndata)
    y = Vector{Float64}(undef, ndata)
    e = Vector{Float64}(undef, ndata)
    
    function func(x, p)
        count += 1
        return p[1] .+ p[2] * exp.(-0.5 .* ((x .- p[3]) ./ p[4]).^2)
    end
    residuals(x, p) = ((func(x, p) .- y) ./ e)

    # "True" parameter values
    params = [2.0, 3.0, 4.0, 5.0]

    # Accumulate data on accuracies in parameter estimation
    N_cmpfit = Vector{Float64}()
    N_lsqfit = Vector{Float64}()

    rng = MersenneTwister(0) # Initialize random generator
    for ii in 1:nloop
        # Generate data
        x = range(-3., stop=3., length=ndata) .* params[4] .+ params[3]
        y = func(x, params)
        e = fill(0.1, length(x))
        y .+= e .* randn(rng, length(x))

        # Fit with CMPFit
        count = 0
        if resid
            res1 = cmpfit(x, fill(0., length(x)), fill(1., length(x)), residuals, params)
        else
            res1 = cmpfit(x, y, e, func, params)
        end
        count_CMPFit = count

        # Fit with LsqFit
        count = 0
        if resid
            res2 = curve_fit(residuals, x, fill(0., length(x)), params)
        else
            res2 = curve_fit(func, x, y, (1 ./ e).^2, params)
        end
        count_LsqFit = count
        ratio = count_LsqFit / count_CMPFit

        if ii == 1
            @printf "First iteration:\n"
            @assert res1.dof == res2.dof            
            @printf "%7s | %10s: %10d   %10s: %10.6f\n" "CMPFit" "Count" count_CMPFit "Red. χ^2" sum(abs2, (func(x, res1.param) - y) ./ e) / res1.dof
            @printf "%7s | %10s: %10d   %10s: %10.6f\n" "LsqFit" "Count" count_LsqFit "Red. χ^2" sum(abs2, (func(x, res2.param) - y) ./ e) / res2.dof
            @printf "%7s | %10s:" "CMPFit" "Best fit"
            for i in 1:length(params)
                @printf "%10.6f " res1.param[i]
            end
            println()
            @printf "%7s | %10s:" "LsqFit" "Best fit"
            for i in 1:length(params)
                @printf "%10.6f " res2.param[i]
            end
            println()
            @printf "%7s | %10s:" "CMPFit" "Uncert."
            for i in 1:length(params)
                @printf "%10.6f " res1.perror[i]
            end
            println()
            @printf "%7s | %10s:" "LsqFit" "Uncert."
            for i in 1:length(params)
                @printf "%10.6f " standard_error(res2)[i]
            end
            println()
        end
        append!(N_cmpfit, (res1.param .- params) ./ res1.perror)
        append!(N_lsqfit, (res2.param .- params) ./ standard_error(res2))
    end
    N_cmpfit = reshape(N_cmpfit, length(params), nloop)
    N_lsqfit = reshape(N_lsqfit, length(params), nloop)
    @printf "\nN = (best_fit_value - true_value) / fit_uncertainty\n"
    @printf "%7s | %10s:" "CMPFit" "mean(N)"
    for i in 1:length(params)
        @printf "%10.6f " mean(N_cmpfit[i,:])
    end
    println("  (should be ~0)")
    @printf "%7s | %10s:" "LsqFit" "mean(N)"
    for i in 1:length(params)
        @printf "%10.6f " mean(N_lsqfit[i,:])
    end
    println("  (should be ~0)")
    @printf "%7s | %10s:" "CMPFit" "std(N)"
    for i in 1:length(params)
        @printf "%10.6f " std(N_cmpfit[i,:])
    end
    println("  (should be ~1)")
    @printf "%7s | %10s:" "LsqFit" "std(N)"
    for i in 1:length(params)
        @printf "%10.6f " std(N_lsqfit[i,:])
    end
    println("  (should be ~1)")
end

Levenberg-Marquardt in LsqFit?

I've tried to start a discussion over at JuliaNLSolvers/Optim.jl#207 , but maybe it is just as relevant here.

Since all the nnls functionality was recently removed from Optim, I am wondering if Levenberg-Marquardt should be moved as well. It is really a rather special case that doesn't fit with the rest of the package (in my view). It accepts an f and a g, but they are quite different from our usual f and g! in Optim.

What do you think? Would it be a bad idea to move the functionality into LsqFit, and obviously still depend on the relevant parts of Optim? Maybe there is even a (need for?) LeastSquares.jl algorithm package that I'm not aware of?

estimate_covar doubles the use of the weights

Since the weights are already included in the function we want to minimize, the correct covariance should be inv(J' * J). I have tested it with the generalized least squares covariance estimators and the results agree. I am working on a pull request for this and some more issues...
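
A sketch of the proposed estimator, assuming the LsqFitResult stores the (already weighted) Jacobian in a field named jacobian:

using LinearAlgebra

J = fit.jacobian
covar = inv(J' * J)                   # weights already folded in, so no extra W factor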

weights in curve_fit

I should have opened a new issue for this, so here it comes:
The parameter w for weights in curve_fit is not very thoroughly documented. What I see is:
w: (optional) weight applied to the residual; can be a vector (of length(x) size or empty) or matrix (inverse covariance matrix)
From this I assumed that when providing w as a vector, the routine expects the inverse of the variance, as that would be in line with the concept of a covariance matrix when w is given as a matrix.

However, from my application, and after comparing my results with fits done in C and with gnuplot, it looks as if curve_fit uses the weight vector as inverse standard deviations rather than inverse variances. I had a quick non-expert look at the curve_fit source, and that is also what I think I see in the code. Could you confirm this and explain the logic?

Kind regards,
Jacques Bloch

MethodError: `isless` has no method matching isless(::Float64, ::LinSpace{Float64})

Hello, I have not managed to find a solution for this issue.
When I run this code:

using LsqFit

Ms=70/4000
H=4000
mu=(4/3)*pi*((4.0E-9)^3)
sigma=0.3*mu
Kb=1.38E-23
Ku=24000
M=800/4000

langiv(x)=(coth(x)-1/x)
function Mzfc(T,prms)
    count=10000
    M=prms[1]
    Ku=prms[2]
    Vc=(25)*Kb*T/Ku
    temp=Array(Float64,count)
    V=linspace(mu-3.2*sigma,mu+3.2*sigma,count)
    res=0
    for i in 1:count
        if V[i]<Vc
            temp[i]=Ms*langiv((M*V[i]*H)/(T*Kb))*exp(-(V[i]-mu)*(V[i]-mu))/sqrt(2*pi*sigma*sigma)
        else
            temp[i]=Ms*H*M./(3.*Ku)*exp(-(V[i]-mu)*(V[i]-mu))/sqrt(2*pi*sigma*sigma)
        end
    end
    for i in 2:count
        res=(temp[i-1]+temp[i])*(V[i]-V[i-1])/2+res
    end
    return res
end

T=linspace(5,405,401)
res=Array(Float64,length(T))

params=[M,Ku]
for i in 1:401
    res[i]=Mzfc(T[i],params)
end

 fit=curve_fit(Mzfc,T,res,[0.5,20000])

It results in error:

MethodError: `isless` has no method matching isless(::Float64, ::LinSpace{Float64})
Closest candidates are:
  isless(::Float64, ::Float64)
  isless(::AbstractFloat, ::AbstractFloat)
  isless(::Real, ::AbstractFloat)
...
in < at operators.jl:33
in Mzfc at In[7]:22
in f at /home/vlad/.julia/v0.4/LsqFit/src/curve_fit.jl:39
in levenberg_marquardt at /home/vlad/.julia/v0.4/LsqFit/src/levenberg_marquardt.jl:62
in lmfit at /home/vlad/.julia/v0.4/LsqFit/src/curve_fit.jl:30
in curve_fit at /home/vlad/.julia/v0.4/LsqFit/src/curve_fit.jl:40

It looks like I am doing something wrong, and it is somehow connected with the integration inside the function.
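
A sketch of the likely fix: curve_fit passes the whole T vector to the model, while Mzfc is written for a scalar T, so V[i] < Vc ends up comparing a Float64 against a LinSpace. Wrapping the scalar function should work:

model(T, prms) = [Mzfc(t, prms) for t in T]   # evaluate the scalar model per element
fit = curve_fit(model, T, res, [0.5, 20000.0])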

Allow non-vectorized model functions?

It seems it cannot. When I use a model that raises an expression involving the data and parameters to a power, a no-method error appears.

model(x,p)=(1/(sqrt(2*pi)*p[1])) * exp(-( (x-p[2])/(sqrt(2)*p[1])) ^2)
xdata=collect(cuac[1][2:end]) #my bins
ydata=cuac[2] ;    #my histograms

And I get this beautiful error

LoadError: MethodError: `*` has no method matching *(::Array{Float64,1}, ::Array{Float64,1})
while loading In[104], in expression starting on line 1
 in power_by_squaring at intfuncs.jl:78
 in model at In[103]:1
 in f at /home/karel/.julia/v0.4/LsqFit/src/curve_fit.jl:38
 in levenberg_marquardt at /home/karel/.julia/v0.4/Optim/src/levenberg_marquardt.jl:38
 in lmfit at /home/karel/.julia/v0.4/LsqFit/src/curve_fit.jl:29
 in curve_fit at /home/karel/.julia/v0.4/LsqFit/src/curve_fit.jl:39

I have already tried writing the function in different ways, but it seems that somewhere inside the fit the routine tries to multiply the parameters without knowing what they are.

My Julia is 0.4.3 and LsqFit is the latest one available.
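
The model does need to be vectorized over x. A sketch of the Gaussian written with broadcasting (the @. macro, available on Julia 0.6+, dots the whole body; xdata/ydata as above):

using LsqFit

@. model(x, p) = (1 / (sqrt(2pi) * p[1])) * exp(-((x - p[2]) / (sqrt(2) * p[1]))^2)
fit = curve_fit(model, xdata, ydata, [1.0, 0.0])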

Place Lower Acceptable Bound on Residual

LsqFit.jl does not currently allow any kwarg specifying an acceptable residual value. Could this be provided, so as to limit the time spent optimizing the parameter set?

I have tried to use the kwarg tolX to limit the accuracy of the fit. However, I have found that setting tolX to a higher value than the default (1e-8), like 1e-2, does not actually produce results that are within 1e-2 of the actual parameter set used to generate the data set that LsqFit.jl fits. It changes the size of the step used by the Levenberg-Marquardt algorithm but this does not have a one-to-one correspondence with the errors of the final parameter set.

It would be nice to set bounds on the residual and possibly, also, the errors associated with the parameters (maybe as a vector where each entry indicates the acceptable error for each parameter to be fit).
