
ChaosTools.jl's Introduction

JuliaDynamics

This repository serves the following purposes:

  • Contains the source code for the JuliaDynamics website in the src and build folders.
  • Hosts the website via GitHub-pages and Jekyll.
  • Contains tutorials for all packages of JuliaDynamics in the tutorials folder.
  • Contains video resources for all packages of JuliaDynamics in the videos folder.

The website was modeled after the website of QuantumOptics.jl, and most of the code that builds the site was re-used from the QuantumOptics.jl repository (with permission).


To build locally, follow the instructions here: https://jekyllrb.com/docs/

(install Jekyll and then run `bundle exec jekyll serve`, which serves by default at http://localhost:4000)

ChaosTools.jl's People

Contributors

anantha-rao12, apbraun, asinghvi17, awage, datseris, dependabot[bot], efosong, github-actions[bot], halleysfifthinc, ikottlarz, jonasisensee, jonaskoziorek, juliatagbot, kalelr, oneill, rusandris, spaette, tkf, yuxi-liu-wired


ChaosTools.jl's Issues

Separate lyapunovs into a setup and stepping part

(See the proposal here: #14 by @tkf )

In short, the idea is to separate the lyapunov function into a part that sets everything up and a part that does the orthonormalization steps. I think this is a good step that will have a lot of benefits: firstly, it will make it easy to solve #7; secondly, it will also make it easier to control #15. It can also have other benefits, like allowing different QR methods or different methods at different steps, but I view these as very minor and they do not concern me.

Takafumi has already set up a great code that does what is described in the issue: https://github.com/tkf/LyapunovExponents.jl

The problem is that, at the moment, the above is too complicated for the extra features it offers: for example, there is no reason to associate structs with the individual steps, and the "relaxing" part is done in a single line in our package:

st = Ttr != 0 ? evolve(ds, Ttr; diff_eq_kwargs = diff_eq_kwargs) : ds.prob.u0

(I find it unwise to change something that needs 1 line into something that needs 200+ lines.) Changing the transient integration can also be done with more arguments.

Maybe we can all learn from both repositories: the one here keeps things very simple, while the one linked in this post offers a lot of capabilities but is too complicated. We could then create something that is simple and still offers a lot of capabilities (I am very optimistic that this is possible).

Dimension of lorenz system is inaccurate

So the capacity dimension of the Lorenz system is very close to 2.0.

Yet, check this out:

using ChaosTools
using PyPlot

lor = Systems.lorenz()
tr_lor = trajectory(lor, 10000.0; dt = 0.05);

sizes = logspace(-1, 2, 10)   # or: sizes = estimate_boxsizes(tr_lor)
ds = genentropy.(0, sizes, tr_lor)

figure(figsize=(6,4))
plot_linear_regions(-log.(sizes), ds)
D = linear_region(-log.(sizes), ds)[2]
# output: D ≈ 1.8

Try whatever combination of parameters you want, different boxsizes, different initial conditions.

The result is just "wrong", assuming the literature is correct.

The problem is that the linear_regions function is behaving perfectly fine... :( And I cannot possibly believe that the entropy function is wrong, since it gives correct results in other cases....

I really can't understand where the problem is, pliz halp!
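One hedged way to narrow this down (a sketch that reuses the call signatures from the snippet above, which I am assuming still hold, and assumes Dataset can be built from a matrix whose rows are points) is to run the identical pipeline on a dataset of known capacity dimension, e.g. uniform random points in the unit square, which should give D ≈ 2. If that works, the entropy and linear-region machinery is fine and the discrepancy is specific to the Lorenz trajectory (sampling time, trajectory length, or box-size range):

using ChaosTools, DynamicalSystemsBase

# Uniform random points in the unit square: capacity dimension should be ≈ 2.
data = Dataset(rand(50_000, 2))
sizes = estimate_boxsizes(data)
Hs = [genentropy(0, ε, data) for ε in sizes]   # order-0 (box-counting) entropies
D = linear_region(-log.(sizes), Hs)[2]         # expect D ≈ 2.0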

Make the `lyapunovs` function return only the k-th larger exponents

From @Datseris on July 19, 2017 12:07

This is extremely simple:

The only changes necessary are:

  1. in the methods add an optional argument (NOT a keyword) e.g. k::Int = dimension(ds)
  2. initialize the matrices Q and K in the lyapunovs function to be eye(k) (see the sketch after this list)
  3. For continuous systems make the function tangentbundle_setup also depend on k and initialize properly and define the e.o.m. properly
  4. update the doc string
  5. Pull request and BOOM! You contributed!
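A minimal, self-contained sketch of steps 1 and 2 (plain Julia with a hard-coded Hénon map, not the actual ChaosTools internals): the only structural change is that the deviation matrix Q is D×k instead of D×D, so the QR step yields only the first k exponents.

using LinearAlgebra

henon(u, p) = [1 - p[1]*u[1]^2 + u[2], p[2]*u[1]]
henon_jacobian(u, p) = [-2*p[1]*u[1] 1.0; p[2] 0.0]

function lyapunovs_k(u0, p, N, k::Int = length(u0))
    D = length(u0)
    u = copy(u0)
    Q = Matrix{Float64}(I, D, k)      # k deviation vectors instead of D (the old eye(D))
    λ = zeros(k)
    for _ in 1:N
        F = qr(henon_jacobian(u, p) * Q)   # evolve deviations, then re-orthonormalize
        Q = Matrix(F.Q)[:, 1:k]
        λ .+= log.(abs.(diag(F.R)))
        u = henon(u, p)
    end
    return λ ./ N
end

lyapunovs_k([0.1, 0.1], (1.4, 0.3), 100_000)      # ≈ [0.42, -1.62]
lyapunovs_k([0.1, 0.1], (1.4, 0.3), 100_000, 1)   # only the largest exponent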

Copied from original issue: JuliaDynamics/DynamicalSystems.jl#12

HAVOK: Chaos as an intermittently forced linear system

HAVOK is a new method published recently in Nature Communications: https://www.nature.com/articles/s41467-017-00030-8

It describes a way in which any chaotic system can be turned into a (higher-dimensional) linear system with a single (nonlinear) forcing term.

  • The process is data-driven
  • Universal method; it is the same for any system (irrespective of how good the results are, of course)
  • Relies on delay embeddings, which we have great support for
  • Seems to be a great tool for predicting reversal events, something that (to the best of my knowledge) other prediction methods are not very good at.

This is a significantly large project and could also become a very big portion of a Bachelor's thesis; we already have ideas for how to make a full thesis out of it.

Contact me if you are interested!



Add checks for estimate_boxsizes

Some checks can be done to see if the behavior of estimate_boxsizes is okay.

For example:

  1. Test if all elements of the returned vector are different.
  2. Test if the maximum and minimum elements differ by at least 2 orders of magnitude (sketched in the test below).
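In a test these two checks could look something like this (a hedged sketch; the εs vector below is just a stand-in for the output of estimate_boxsizes, whose exact values I am not assuming):

using Test

# Stand-in for the output of estimate_boxsizes (a vector of box sizes ε):
εs = 10.0 .^ range(-2.0, 1.0; length = 12)

@test allunique(εs)                          # check 1: all elements are different
@test log10(maximum(εs)/minimum(εs)) ≥ 2     # check 2: span of at least 2 orders of magnitude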

gali is type unstable

Issue Description

Gali is type unstable

Expected Behavior

Gali should be type stable.

MWE

using DynamicalSystemsBase, ChaosTools, Test   # gali lives in ChaosTools; @inferred in Test
ds = Systems.henon()
@inferred gali(ds, 2, 1000)

return type Tuple{Array{Float64,1},Array{Int64,1}} does not match inferred return type Tuple{Array{_,1} where _,Array{Int64,1}}

Numerical computation of Kolmogorov Sinai entropy

EDIT: solution for this issue exists in the book by Schuster and Just, equation 6.107.

EDIT2: solution for this issue exists also in the book by Kantz and Schreiber, section 11.4.5

EDIT3: solution for this also exists in P. Grassberger and I. Procaccia, “Estimation of the Kolmogorov entropy from a chaotic signal,” Phys. Rev. A, vol. 28, no. 4, pp. 2591–2593, 1983. They show how to compute a quantity approximately equal to KS.


From @Datseris on October 2, 2017 16:36

Another commonly used entropy is the "KS-entropy".

I would be reluctant to point people to the original papers (Ya.G. Sinai, On the Notion of Entropy of a Dynamical System, Doklady of Russian Academy of Sciences, (1959), 124, 768-771. and A.N. Kolmogorov, Entropy per unit time as a metric invariant of automorphism, Doklady of Russian Academy of Sciences, (1959), 124, 754-755.) as they are veeery mathematical and were written before computers became a thing. I also do not know any paper that treats the subject (I haven't searched for one!).

I think @RainerEngelken has done this, maybe he can help us a bit here with some comments.

Copied from original issue: JuliaDynamics/DynamicalSystems.jl#28



Use `linreg` in `linear_regions` etc.

When I wrote the functions linear_regions and the like, I was not aware that Julia Base had the linreg function...

The self-made slope function should just become slope(x, y) = linreg(x, y)[2]

create a "bifurcation" function that plots the orbit diagram

The function will be named orbitdiagram !!!!!! Enough, it is time for change and rebellion! No more using the completely incorrect name of bifurcation diagram!!!!

So this will work by using setfield!(ds.eom, :parametername, param) in a param loop. Then the final initial condition is saved and returned for this param.

The user will then be able to either provide a vector of initial conditions (the same for each param, I guess) or a number (Int) of random initial conditions. Something like that. Seems pretty easy if you ask me.
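A minimal, self-contained sketch of the idea in plain Julia (this is not the proposed orbitdiagram API, and it does not use setfield! on the e.o.m. struct; the parameter is simply passed to the map):

# Orbit diagram of a 1D map: for each parameter, discard a transient and
# record the following `n_keep` states.
function orbitdiagram_sketch(f, x0, params; n_transient = 1000, n_keep = 100)
    output = Vector{Vector{Float64}}()
    for p in params
        x = x0
        for _ in 1:n_transient
            x = f(x, p)
        end
        states = Float64[]
        for _ in 1:n_keep
            x = f(x, p)
            push!(states, x)
        end
        push!(output, states)
    end
    return output
end

logistic(x, r) = r * x * (1 - x)
od = orbitdiagram_sketch(logistic, 0.4, range(2.5, 4.0; length = 500))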

Local lyapunov exponents map

From @Datseris on October 2, 2017 16:38

I am talking about a function that would partition the phase space, and for each box of the phase space would give a value for a lyapunov exponent.

The naive way to do this is to run the already existing function lyapunov for each box of the phase space, but maybe there are much better ways to do it. The above would be seriously slow, since the partitioning would require on the order of 10^3 to 10^6 initial conditions.

@RainerEngelken may have some comments for this.
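For a 1D map the naive version is already easy to write down; here is a self-contained toy (plain Julia, not a proposed API) that assigns a finite-time Lyapunov exponent to each bin of initial conditions of the logistic map:

logistic(x, r) = r * x * (1 - x)
dlogistic(x, r) = r * (1 - 2x)      # derivative, used for the local stretching rate

function local_lyapunov_map(r; nbins = 100, nsteps = 1000)
    λs = zeros(nbins)
    x0s = range(0.0, 1.0; length = nbins + 2)[2:end-1]   # one representative point per bin
    for (i, x0) in enumerate(x0s)
        x, λ = x0, 0.0
        for _ in 1:nsteps
            λ += log(abs(dlogistic(x, r)))
            x = logistic(x, r)
        end
        λs[i] = λ / nsteps
    end
    return λs
end

local_lyapunov_map(3.9)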

Copied from original issue: JuliaDynamics/DynamicalSystems.jl#29



Increase performance of `permentropy`

The current implementation of permentropy is inefficient, and also does things like
nonzero = [c for c in count if c != 0] instead of using iterators.

There are certainly many ways to increase its performance.

Furthermore, as mentioned in #20, it seems likely that one could use specialized data structures for the calculation:

"""
Regarding further (memory/CPU) performance improvement around perms representation, probably using some kind of tree structure would help. Maybe this?:
https://www.sciencedirect.com/science/article/pii/0898122194001782

But I'd say this implementation is OK for some moderate order.
"""



Add methods to estimate_delay

In particular, you can do:

  • First zero of correlation function
  • First minimum of correlation function
  • Exponential decay of correlation function (currently implemented)
  • minimum of the mutual information [1]

Also, maybe it is time to move the delay and dimension estimation functions to DynamicalSystemsBase.jl?

[1] : A.M. Fraser, H.L Swinney, Independent coordinates for strange attractors from mutual information, Phys. Rev. A 33 (1986) 1134--1140.
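A hedged, self-contained sketch of the first two options (plain Julia with a brute-force autocorrelation; the names first_zero and first_minimum are made up for illustration, not a proposed API):

using Statistics

# Normalized autocorrelation of `x` at integer lag τ (brute force, fine for a sketch).
function autocor_at(x, τ)
    mx = mean(x)
    num = sum((x[i] - mx) * (x[i + τ] - mx) for i in 1:length(x) - τ)
    return num / sum(abs2, x .- mx)
end

first_zero(x; τmax = 200) = findfirst(τ -> autocor_at(x, τ) ≤ 0, 1:τmax)

function first_minimum(x; τmax = 200)
    c = [autocor_at(x, τ) for τ in 0:τmax]
    for τ in 1:τmax-1
        c[τ + 1] < c[τ] && c[τ + 1] < c[τ + 2] && return τ   # c[τ+1] corresponds to lag τ
    end
    return τmax
end

x = sin.(0.05 .* (1:5000)) .+ 0.1 .* randn(5000)
first_zero(x), first_minimum(x)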

Benchmark the `neighborhood` function when using KDTree and BallTree

For this to happen, the first thing is to create a BallTree from a Dataset (trivial, since the dataset is already a vector of SVectors).

Then, compare and see if nearest neighbor searching is faster in a BallTree vs. a KDTree for:

  1. Random data
  2. Trajectory of lorenz
  3. Trajectory of towel
  4. Reconstruction of henon map with delay 2 and delay 10

(@JonasIsensee I think the BallTree from NearestNeighbors.jl is a data structure extremely similar to the one Ulrich gave us)
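A hedged benchmark sketch for case 1 (random data); the other cases would just swap in the points of the corresponding trajectory or Reconstruction. NearestNeighbors.jl accepts a vector of SVectors directly:

using NearestNeighbors, StaticArrays, BenchmarkTools

data = [rand(SVector{3, Float64}) for _ in 1:100_000]
query = rand(SVector{3, Float64})

kdtree = KDTree(data)
balltree = BallTree(data)

@btime knn($kdtree, $query, 5)    # 5 nearest neighbors with a KDTree
@btime knn($balltree, $query, 5)  # same query with a BallTree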

Bug: Missing warmup of orthonormal system in lyapunovs(..)

Issue Description

The orthonormal system should be iterated along the trajectory before starting the actual computation; otherwise there will be a bias from the transient, because initially Q is not aligned (e.g. the first column does not point in the direction of the first Lyapunov vector).
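A hedged illustration of the proposed warm-up in plain Julia (a fixed linear map stands in for the tangent dynamics; this is not ChaosTools internals): propagate and re-orthonormalize Q for a transient without accumulating exponents, so that its columns align with the leading directions before measurement starts.

using LinearAlgebra

# `tangent_map` stands in for one step of the tangent dynamics.
function warmup_frame(tangent_map::AbstractMatrix, k::Int, n_transient::Int)
    Q = Matrix{Float64}(I, size(tangent_map, 1), k)
    for _ in 1:n_transient
        Q = Matrix(qr(tangent_map * Q).Q)[:, 1:k]   # evolve and re-orthonormalize only
    end
    return Q    # columns now aligned with the leading directions
end

Q0 = warmup_frame([1.5 1.0; 0.3 0.0], 2, 100)
# Only after this warm-up would one start accumulating log.(abs.(diag(R))).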

Improve existing estimate_delay

From @Datseris on January 16, 2018 16:56

Alright, it's about time to fundamentally improve the function estimate_delay with the option exp_decay.

For the Gissinger system at default parameters it gives a delay of 500, while very good values are in the range 30-40 for a timestep of dt = 0.05.

Notice, however, that the delay suggested by the function, multiplied by dt, gives the perfect delay of around 30.

Hmmmmmm

Copied from original issue: JuliaDynamics/DynamicalSystemsBase.jl#14

Increase performance of PSOS by using ODEIntegrator instead of Callbacks

I thought of the following: currently PSOS are produced by using a specific callback structure from DiffEq.

However, continuous callbacks are in general expensive and also serve a much more general purpose, which means they have a lot of options for the nonlinear solve that are unrelated to PSOS but still create problems.

I propose to create a new internal function, like create_psos, that uses an ODEIntegrator instead and advances step by step, checking the condition at each step. By saving only the current and last state, one can use a loop like:

    for i in 1:length(x)-1           # (outer loop over states, omitted in the original snippet)
      if ceil(x[i+1]) > ceil(x[i])   # condition for PSOS
        if vx[i] > 0
          # Linear interpolation
          yp = y[i] + (y[i+1] - y[i])*(ceil(x[i]) - x[i])/(x[i+1] - x[i])
          vyp = vy[i] + (vy[i+1] - vy[i])*(ceil(x[i]) - x[i])/(x[i+1] - x[i])
          push!(tempy, mod(yp, 1))
          push!(tempvy, vyp)
        end
      end
    end

(The example is from my personal code library, where I only care to get the y and vy variables when x becomes zero modulo 1.) Here, instead of i and i+1, we can have the previous and current state and update these in place. In addition, instead of interpolating each variable separately, one may be able to interpolate just the time of the zero crossing using the crossed variable, and then use the polynomial interpolation already offered by DiffEq.
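A hedged sketch of the proposed approach with a plain ODEIntegrator (this is not ChaosTools code; the plane here is simply u[j] = offset, and the crossing is linearly interpolated in state space; using DiffEq's own interpolation, as suggested above, would be more accurate):

using OrdinaryDiffEq

function psos_sketch(prob, j, offset, tfinal; alg = Tsit5())
    integ = init(prob, alg; save_everystep = false)
    crossings = Vector{Vector{Float64}}()
    uprev = copy(integ.u)
    while integ.t < tfinal
        step!(integ)
        # upward crossing of the plane u[j] = offset between the previous and current step:
        if uprev[j] < offset && integ.u[j] ≥ offset
            θ = (offset - uprev[j]) / (integ.u[j] - uprev[j])
            push!(crossings, uprev .+ θ .* (integ.u .- uprev))   # linear interpolation
        end
        uprev = copy(integ.u)
    end
    return crossings
end

# Example: Lorenz-63, section at z = 27.
function lorenz!(du, u, p, t)
    du[1] = 10.0 * (u[2] - u[1])
    du[2] = u[1] * (28.0 - u[3]) - u[2]
    du[3] = u[1] * u[2] - (8/3) * u[3]
end
prob = ODEProblem(lorenz!, [0.0, 10.0, 0.0], (0.0, 1000.0))
section = psos_sketch(prob, 3, 27.0, 1000.0)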

Convergence timeseries for lyapunov functions

After Julia 0.7 with IPO we will be able to return convergence timeseries instead of final values without having to re-write code, while still ensuring type stability only by function arguments.

At the moment it is trivial to get the lyapunov convergence timeseries, by duplicating the main function lyapunovs and changing this: λ += log.(abs.(diag(R)))
to this: push!(λ, λ[end] + log.(abs.(diag(R))))
and adding 2 more lines:
λ = T[]; tvec = T[];
and, inside the loop, push!(tvec, t).

It is literally 3 lines of code, but it is not very elegant. With the new version it may be possible to do these changes without duplicating code, while still keeping things transparent and concise.
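A self-contained sketch of what the "convergence timeseries" variant looks like (hard-coded Hénon map in plain Julia, not the actual lyapunovs code): the only change relative to the usual loop is pushing cumulative sums instead of overwriting a single accumulator.

using LinearAlgebra

henon(u) = [1 - 1.4*u[1]^2 + u[2], 0.3*u[1]]
henon_jacobian(u) = [-2.8*u[1] 1.0; 0.3 0.0]

function lyapunovs_convergence(u0, N)
    u, Q = copy(u0), Matrix{Float64}(I, 2, 2)
    λ = [zeros(2)]                 # cumulative log-stretchings; λ[end] is the running total
    tvec = [0]
    for n in 1:N
        F = qr(henon_jacobian(u) * Q)
        Q = Matrix(F.Q)
        push!(λ, λ[end] + log.(abs.(diag(F.R))))
        push!(tvec, n)
        u = henon(u)
    end
    return tvec, λ                 # divide λ[i] by tvec[i] to get the running estimates
end

tvec, λ = lyapunovs_convergence([0.1, 0.1], 10_000)
λ[end] ./ tvec[end]                # ≈ [0.42, -1.62]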

Add flexibility in defining the PSOS plane

Currently the definition of the plane on which the PSOS should be computed (see the function poincaresos) is quite limiting, as it allows only planes along the axes of the variables.

Quite often (see e.g. the Gissinger system paper) you want a different plane, which could be e.g. diagonal in the variables.

automatic iterative determination of dt in tangent evolution

Currently, dt is a default value that can be changed manually. It would be great to have an initial short run of the system that iteratively tries different dt and computes the condition number of the resulting Q-matrix, until an acceptable value is reached. This could e.g. be done by iteratively multiplying dt. This avoids loss of numerical precision (in case dt is too large) and avoids unnecessary computation (in case dt is too small). This is also good against climate change in case you run lots of simulations, or alternatively it increases the number of simulations you can run using the same number of CPU hours ;-)



Multichannel SSA extension to Broomhead and King

From @finmod on January 30, 2018 12:59

First, congratulations on how good DynamicalSystems is becoming. This work in Julia is amazing.

In nlts.jl the extension of reconstruction to Broomhead and King is really convincing and fast.
Since the practitioner's starting point is always a dataset, it would be good to determine if the three time series from a given synthetic dataset (in your example Gissinger + noise) come from either the same model or from three different models. At present, you apply separate SSA to each time series and then determine that tau=30 in each case. Can you extend SSA to Multichannel SSA https://arxiv.org/pdf/1309.5050 to complete the demonstration and bring to Julia a diagnostic tool already available in R?

Copied from original issue: JuliaDynamics/DynamicalSystems.jl#60



change docstrings to use "approximate dt"

We should change the documentation strings of functions that use dt and integrators, to say that dt is the approximate time step for continuous systems (for discrete it is exact of course).

New method: definition of chaos by Ott [$100]

From @Datseris on October 20, 2017 15:22

Ott (et al.) came up with a definitive way to determine whether or not a system is chaotic. This was done for the 25th anniversary of the journal Chaos.

In the article Defining Chaos he and Brian R. Hunt define a quantity called "expansion entropy" which can characterize chaoticity.

It would be cool to have a function that computes that! The paper also includes a numerical algorithm to compute the quantity.

Copied from original issue: JuliaDynamics/DynamicalSystems.jl#39



Bug: avoid zero Lyapunov exponents for short run in lyapunov(..)

Problem:
Currently, if the Lyapunov exponent is small or the simulation time is small, dist ≥ threshold will never be true and therefore the returned Lyapunov exponent will be zero, although it is not actually zero.

Solution:
Add normalization at the end of the simulation.
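A self-contained toy for a 1D map illustrating the problem and the proposed fix (plain Julia, not the lyapunov implementation): in a short run the distance never reaches the rescaling threshold, so without the final normalization the returned exponent would be exactly zero.

logistic(x) = 3.7 * x * (1 - x)

function lyapunov_toy(x0, N; d0 = 1e-9, threshold = 1e-5)
    x, y, λ = x0, x0 + d0, 0.0
    for _ in 1:N
        x, y = logistic(x), logistic(y)
        dist = abs(y - x)
        if dist ≥ threshold                      # ordinary rescaling step
            λ += log(dist / d0)
            y = x + d0 * (y ≥ x ? 1.0 : -1.0)
        end
    end
    dist = abs(y - x)
    dist > 0 && (λ += log(dist / d0))            # the proposed end-of-run normalization
    return λ / N
end

lyapunov_toy(0.4, 10)    # nonzero even though no rescaling ever happened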

Bug: Missing warmup of perturbation vector in lyapunov(..)

The perturbation vector should first be iterated along the trajectory so it can align with the most unstable (or, in the case of stable systems, the least stable) direction. Otherwise the resulting Lyapunov exponent has an error related to the initial transient of the vector.

Proposal: More low-level APIs for Lyapunov exponent calculations

Reading how lyapunovs is implemented, I see that there is no easy way to interfere with the calculation once you call the function. However, in research, you may want/need to do stuff other than the algorithm implemented in ChaosTools.jl. For example, I can think of:

  1. Store the evolution of Lyapunov exponents (LEs) after orthonormalization step. This is an easy way to check that the LEs are actually converged.

    For example, as I do it here (convergence plot omitted).

  2. Store the whole phase- and tangent-space traces. Useful for debugging/visualization and also required for calculating Covariant Lyapunov Vectors.

  3. Calculate quantities related to LEs. For example, LE fluctuations and domination of the Oseledec splitting.

An idea of the low-level API

The idea is to separate the initialization and the loop body of lyapunovs by providing a problem/integrator-like struct as in DifferentialEquations.jl. In pseudo-Julia code, like this:

struct LyapunovExponentsSolver{DS}
    ds :: DS
    Q
    λ
    # ...and so on...
end

struct MaximumLyapunovExponentsSolver{DS}
    ds :: DS
    λ
    # ...and so on...
end

"""
Construct a `LyapunovExponentsSolver` after the preparation.
"""
function lyapunov_exponents_solver(ds; Ttr=100, dim_lyap=Inf)
   u = evolve(ds)
   prob = if dim_lyap == 1
      MaximumLyapunovExponentsSolver(ds, u)
   else
      LyapunovExponentsSolver(ds, dim_lyap, u)
   end
   return prob
end

"""
Evolve the DS `time` and do orthonormalization for each `interval`.

This would be the loop inside the current lyapunov/lyapunovs methods.
"""
solve!(::LyapunovExponentsSolver, time, interval)
# or maybe evolve!(::LyapunovExponentsSolver, ...)?

"""
Return the stored result of LEs.
"""
lyapunovs(::LyapunovExponentsSolver) :: Array
lyapunov(::LyapunovExponentsSolver) :: Real

"""
A high-level API re-defined in terms of the low-level one:
"""
function lyapunovs(ds, args...)
    lyapunovs(solve!(lyapunov_exponents_solver(ds), args...))
end

function lyapunov(ds, args...)
    lyapunov(solve!(lyapunov_exponents_solver(ds, dim_lyap=1), args...))
end

Usage

A user code for saving LEs after each orthonormalization would be:

function user_code(ds, repeat, time)
    solver = lyapunov_exponents_solver(ds, ...)
    le_hist = zeros(Float64, (dimension(ds), repeat))
    for i in 1:repeat
        solve!(solver, time, time)  # time == interval so that there would be only one orthonormalization
        le_hist[:, i] = lyapunovs(solver)  # last LEs
    end
    le_hist
end

Why a "factory" method lyapunov_exponents_solver?

You mentioned in the code that qr_sq is fast. If that's the case, lyapunov_exponents_solver can be extended so that it returns a different type of solver for which solve! is defined to use qr_sq:

struct FullLyapunovExponentsSolver{DS}
    ds :: DS
    λ
    # ...and so on...
end

function lyapunov_exponents_solver(ds; Ttr=100, dim_lyap=Inf)
   u = evolve(ds)
   prob = if dim_lyap == 1
      MaximumLyapunovExponentsSolver(ds, u)
   elseif dim_lyap == Inf || dim_lyap == dimension(ds)
      FullLyapunovExponentsSolver(ds, u)
   else
      LyapunovExponentsSolver(ds, dim_lyap, u)
   end
   return prob
end

So, I thought it would be nice to have a user-friendly solver factory which finds the best "solver" given the problem setting.

Thoughts

Naming is rather random and the details of the API should be polished. For example, maybe the call signature should be solve!(solver) instead of solve!(solver, time, interval), storing the duration (step) of one solve! call and the orthonormalization interval in the solver? In the future, you may want to have a non-uniform orthonormalization interval for efficiency. If that's the case, solve!(solver) would be the forward-compatible API.

But what do you think of the general picture? Do you think it is good to have this kind of low-level API?

Permutation entropy

Hi, I implemented the permutation entropy (Bandt & Pompe, 2002). Do you want to add it to ChaosTools.jl? Let me know if you want a PR. Here is a demo: https://gist.github.com/tkf/31cb31564dc4f61c085d013656f1a9e9

using Combinatorics

function permutation_entropy(
        time_series::Array{T, 1}, ord::O, sampling_interval::S = 1;
        base = ℯ) where {T, O <: Integer, S <: Integer}
    n = length(time_series)
    # All possible ordinal patterns of length `ord`, one per column:
    perms = hcat(Combinatorics.permutations(O(1):ord)...)
    nperm = size(perms)[2]
    ptype = collect(O(1):ord)
    count = zeros(UInt64, nperm)

    for t in 1:n - sampling_interval * (ord - 1)   # loop over all full windows
        sample = time_series[t:sampling_interval:t + sampling_interval * (ord - 1)]
        sortperm!(ptype, sample)
        # Linear search for the pattern of this window (see discussion below):
        for i in 1:nperm
            if perms[:, i] == ptype
                count[i] += 1
                break
            end
        end
    end

    nonzero = collect(c for c in count if c != 0)
    p = nonzero / sum(nonzero)
    return -sum(p .* log.(base, p))
end

I think this is not an optimal implementation, though. I just wrote it as a quick exercise a while ago. Besides a few Julia-related improvements (like using @views), I think the innermost loop that searches for the permutation can be improved, using the fact that the permutation matrix perms is sorted, and maybe also by using a better data structure.

Also, I'm not a data analysis guy so I don't know if this method is relevant these days.

Anyway, what do you think? Is it good to add it?

feature proposal: adding Lyapunov exponent(s) for random dynamical systems

Currently only ODEs are supported; it would be great to be able to calculate Lyapunov exponents of random dynamical systems (RDS) (https://www.springer.com/de/book/9783540637585).
RDS theory studies how an ensemble of initial conditions evolves in time under the influence of a frozen noise realization. This allows extending concepts from the ergodic theory of dynamical systems, like Lyapunov spectra, Kolmogorov-Sinai entropy and attractor dimensionality, to systems with a stochastic external input.

Consider a stochastic differential equation of the form:

$$dx_{t}=a(x_{t})\,dt+\sum_{i=1}^{N}b_{i}(x_{t})\circ dW_{t}^{i}$$

where $$dW_{t}^{i}$$ represent independent Brownian motions. An associated stochastic flow map is a solution of the dynamics, i.e. $$F_{t_{1},t_{2};\zeta}(x_{t_{1}})=x_{t_{2}}$$. Instead of studying the temporal evolution of some initial measure $$\mu$$, where each initial condition receives private noise, as is usually done in a Fokker-Planck ansatz, the theory of random dynamical systems studies the evolution of a sample measure $$\mu_{\zeta}^{t}$$, defined as $$\mu_{\zeta}^{t}=\lim_{s\rightarrow\infty}(F_{-s,t;\zeta})_{*}\mu$$, where the push-forward $$(F_{-s,t;\zeta})_{*}$$ transports the initial measure $$\mu$$ for some fixed white noise realization $$\zeta(t)$$, defined for all $$t\in(-\infty,\infty)$$, along the flow $$F_{-s,t;\zeta}$$. In other words, the sample measure $$\mu_{\zeta}^{t}$$ is the conditional measure at time $$t$$ given the infinite past history of $$\zeta(t)$$. Note that, in general, while $$\mu_{\zeta}^{t}$$ depends both on the time $$t$$ and the noise realization $$\zeta$$, it possesses invariant properties characterizing its structure. For example, the Lyapunov exponents $$\lambda_{1}\geqslant\lambda_{2}\geqslant\ldots\geqslant\lambda_{N}$$ are independent of the input realization $$\zeta$$.
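A hedged toy of the "frozen noise" idea in plain Julia (Euler-Maruyama on an additive-noise double-well SDE, which is an illustrative choice and not a proposed API): two nearby initial conditions are driven by the same noise increments, and their separation rate estimates the top Lyapunov exponent of the RDS.

using Random

function rds_lyapunov_toy(x0, T; dt = 1e-3, d0 = 1e-8, σ = 0.5, seed = 42)
    rng = MersenneTwister(seed)        # one frozen noise realization ζ(t)
    nsteps = round(Int, T / dt)
    x, y, λ = x0, x0 + d0, 0.0
    drift(u) = u - u^3                 # double-well drift, for illustration
    for _ in 1:nsteps
        dW = sqrt(dt) * randn(rng)     # the SAME increment drives both copies
        x += drift(x) * dt + σ * dW
        y += drift(y) * dt + σ * dW
        d = abs(y - x)
        λ += log(d / d0)               # local stretching over one step
        y = x + d0 * (y ≥ x ? 1.0 : -1.0)
    end
    return λ / (nsteps * dt)
end

rds_lyapunov_toy(0.1, 100.0)   # negative: common noise synchronizes nearby trajectories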



lyapunovs is type unstable

Issue Description

@inferred lyapunovs(ds, 10000)
return type Array{Float64,1} does not match inferred return type Any

Expected Behavior

Type stability

MWE

using DynamicalSystems, Traceur

ds = Systems.henon()

@trace lyapunovs(ds, 10000)

return type Array{Float64,1} does not match inferred return type Any

Increase performance of `non0hist` [$100]

The goal

Maybe we can make the non0hist function faster by using a specialized data structure instead of the standard Julia dictionary. Or maybe there is an even better approach that doesn't use dictionaries at all (see below for the algorithm). I am really bad at performance optimization, so when I wrote the function I did not really know whether there was a better plan. I think it is best to ask on Discourse.

The current approach

non0hist produces a histogram in multidimensional space. However, it does it in a special, very cheap way. From the docstring:

This method is efficient in both memory
and speed, because it uses a dictionary to collect the information of bins with
elements, while it completely disregards empty bins. This allows
computation of entropies of high-dimensional datasets and
with small box sizes ε without memory overflow.

For reference, here is the current implementation:

(please recall: Dataset is nothing more than a wrapper of Vector{SVector}!)

function non0hist(ε::Real, data::AbstractDataset{D, T}) where {D, T<:Real}
    # Initialize:
    mini = minima(data)
    L = length(data)
    # Perform "Histogram":
    # `d` is a dictionary that contains all the histogram information
    # the keys are the bin edges indices and the values are the amount of
    # datapoints in each bin
    d = Dict{SVector{D, Int}, Int}()
    for point in data
        # index of datapoint in the ranges space:
        # It is necessary to convert floor to Int (representation issues)
        ind::SVector{D, Int} = floor.(Int, (point - mini)/ε)

        # Add 1 to the bin that contains the datapoint:
        d[ind] = get(d, ind, 0) + 1
    end
    return collect(values(d))/L
end

The bounty

Any PR that generally increases the performance of non0hist will be accepted as winning the bounty. You don't have to come up with orders-of-magnitude gains: a 2x gain is already massive, since the function runs on the order of several ms.

EDIT: ALSO SEE THE CODE HERE: https://github.com/heliosdrm/RecurrenceAnalysis.jl/blob/master/src/delay.jl
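One hedged alternative to benchmark against the Dict (a plain sketch, not a claim about what the fastest solution is): map every point to its tuple of bin indices, sort the tuples, and count equal runs. Sorting avoids hashing and is often cache-friendlier.

using StaticArrays

function non0hist_sorted(ε::Real, data::Vector{SVector{D, T}}) where {D, T<:Real}
    mini = reduce((a, b) -> min.(a, b), data)
    # bin-index tuple of every point (tuples sort lexicographically):
    bins = [Tuple(floor.(Int, (p .- mini) ./ ε)) for p in data]
    sort!(bins)
    L = length(bins)
    probs = Float64[]
    run = 1
    for i in 2:L
        if bins[i] == bins[i-1]
            run += 1
        else
            push!(probs, run / L)
            run = 1
        end
    end
    push!(probs, run / L)
    return probs
end

data = [rand(SVector{3, Float64}) for _ in 1:100_000]
non0hist_sorted(0.05, data)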



document and export stochastic_indicator

Which is in the file estimate_reconstruction.jl.

(We kind of need to re-work it to have the same behavior as estimate_dimension, i.e. to return a vector of these E values.)

extension of estimate_delay to forward, backward and mixed delays as in Schumann-Bisschoff, Parlitz et al.

The state-of-the-art inspiration to estimate_delay is nicely condensed in the thesis of Jan Schumann-Bischoff which you can find here: https://d-nb.info/1116709740/34. Chapter 9 on the Shinriki oscillator has all the ingredients of a good benchmark for all of these delay, reconstruction and prediction methods. It also extends to state and parameter estimation for a system of three timeseries and 9 parameters.

In Julia, forward diff is considered efficient, whereas backward diff may be more problematic.



lyapunov is type unstable

Issue Description

The function lyapunov is type-unstable.

@inferred lyapunov(ds, 10000)
return type Float64 does not match inferred return type Any

Expected Behavior

The returned type should be Float64.

The funny thing is that the method is already quite fast:

@btime lyapunov($ds, 10000);
  307.089 μs (46 allocations: 3.06 KiB)

MWE

Do

using DynamicalSystems, Traceur
ds = Systems.henon()
@trace lyapunov(ds, 10000)

and you get a totally chaotic mess.
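Not the actual culprit in lyapunov (I have not traced it), but a generic illustration of the most common pattern behind this kind of instability and its fix: an accumulator initialized with the wrong type makes inference widen the return type.

using Test

# Accumulator starts as an Int and becomes a Float64 inside the loop, so the
# inferred return type is Union{Int64, Float64}:
function accumulate_unstable(n)
    λ = 0
    for i in 1:n
        λ += log(abs(1.0 + i))
    end
    return λ
end

# Starting with the element type actually produced keeps inference concrete:
function accumulate_stable(n)
    λ = 0.0
    for i in 1:n
        λ += log(abs(1.0 + i))
    end
    return λ
end

@inferred accumulate_stable(10)     # passes
# @inferred accumulate_unstable(10) # would throw: Float64 does not match Union{Float64, Int64}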

Helies -> Heiles

The correct last name of the second author is Heiles, not Helies.
This is correct in the reference, but the names of the functions are incorrect.

improve speed/accuracy: lyapunov(..): integrate perturbed and unperturbed trajectory simultaneously

Instead of separately iterating integ1 and integ2, they should be iterated together. Otherwise, they accumulate different noise along their trajectories in every single step during the integration of the ODE. If they are iterated together (which means each time point is exactly the same), they accumulate much less noise, and therefore a larger reltol and abstol can be used. I have not tested this in ChaosTools.jl, but I have seen a large difference in my own code (implemented, unfortunately, before ChaosTools came into existence).

Speedup: possibility to calculate not all Lyapunov exponents but only few

Just add a parameter for the desired number of Lyapunov exponents to be calculated. As the orthonormalization scales with order n*d^2 where n is the number of degrees of freedom and d is the number of Lyapunov exponents to be calculated, this can result in a speedup of order (n/d)^2, when only the first d Lyapunov exponents are calculated. So in a system of 10^4 equations, when you just need the first 10 Lyapunov exponents, this would be a speedup factor of 10^6.

Invariant set/manifold finding

Dellnitz, Froyland, Junge - The Algorithms behind GAIO - Set Oriented Numerical Methods for Dynamical Systems.pdf

The attached pdf file describes an algorithm (and its implementation) about finding invariant sets of a dynamical system.

This is definitely something important, and we would want some implementation of it.

They also have an algorithm for the "natural/invariant measure", which we could welcome here as well, but I see it as more of a "low priority".



Frequency map analysis

Hey,

George asked me to open an issue for a new feature for your toolkit:
Frequency map analysis (by Laskar) is a competitive tool for investigating dynamical systems, especially multidimensional Hamiltonian systems; see e.g.:
https://link.springer.com/chapter/10.1007/978-94-011-4673-9_13

I implemented the method of Hunter in Python; although I have not yet had the time to develop it further and document it, it may still be helpful:
https://github.com/mmlanger/saokit
Ref.:
https://link.springer.com/article/10.1023/A:1021360731798



Relax time strictness in `lyapunov` for ContinuousDS

Currently, the method creates two integrators
and then does:

    for τ in tvector
        # evolve until rescaling:
        push!(integ1.opts.tstops, τ)
        step!(integ1)
        push!(integ2.opts.tstops, τ)
        step!(integ2)
        dist = norm(integ1.u .- integ2.u)
        # Rescale:
        if dist ≥ threshold
            # add computed scale to accumulator (scale = local Lyapunov exponent):
            a = dist/d0
            λ += log(a)
            finalτ = τ
            # Rescale and reset everything:
            # Must rescale towards difference direction:
            @. integ2.u = integ1.u + (integ2.u - integ1.u)/a
            u_modified!(integ2, true)
            set_proposed_dt!(integ2, integ1)
            dist = d0
        end
    end

However, I think this can be relaxed greatly. Instead of using tstops, one just "steps" the integrators normally. If the distance exceeds the threshold, we take the logarithm by interpolating at the nearest previous dt.

This will massively speed up the computation, because using tstops severely slows down the ODE solving. In addition, it won't affect the accuracy of the algorithm, since we will still be interpolating to the same point in time.

There is a caveat, however: we must make sure that after the rescaling the integrators are also at the same time, so that the method also works for non-autonomous systems.
