
complexitymeasures.jl's Introduction

ComplexityMeasures.jl


A Julia package that provides:

  • A rigorous framework for extracting probabilities from data, based on the mathematical formulation of probability spaces.
  • Several (12+) outcome spaces, i.e., ways to discretize data into probabilities.
  • Several estimators for estimating probabilities given an outcome space, which correct theoretically known estimation biases.
  • Several definitions of information measures, such as various flavours of entropies (Shannon, Tsallis, Curado...), extropies, and probability-based complexity measures, that are used in the context of nonlinear dynamics, nonlinear timeseries analysis, and complex systems.
  • Several discrete and continuous (differential) estimators for entropies, which correct theoretically known estimation biases.
  • Estimators for other complexity measures that are not estimated based on probability functions.
  • An extendable interface and a well-thought-out API, accompanied by dedicated developer documentation pages. This makes it trivial to define new outcome spaces, or new estimators for probabilities, information measures, or complexity measures, and to integrate them with everything else in the software.

ComplexityMeasures.jl can be used as a standalone package, or as part of other projects in the JuliaDynamics organization, such as DynamicalSystems.jl or CausalityTools.jl.

To install it, run import Pkg; Pkg.add("ComplexityMeasures").

All further information is provided in the documentation, which you can either find online or build locally by running the docs/make.jl file.

Previously, this package was called Entropies.jl.


complexitymeasures.jl's Issues

Complexity measure API

As documented by @Datseris in the Readme.md and discussed in #71, this package does not only contain entropy estimators, but also complexity measures such as approximate entropy (#72) and sample entropy (#71), which are not strictly probability/entropy estimators.

I suggest we decouple these methods completely from probabilities and genentropy, by one of the following:

  1. Introducing a complexity interface, which dispatches on subtypes of ComplexityEstimator, of which ApproximateEntropy and SampleEntropy would be subtypes (see the sketch at the end of this issue).
  2. Not providing a common interface, but offering approx_entropy and sample_entropy functions that do their job without dispatching on anything.
  3. Both, i.e. providing approx_entropy and sample_entropy as convenience functions on top of the interface, since these methods are so common (just like permutation entropy).

We could list these under a separate header in the documentation, like so:

## Complexity measures

Some methods in the literature estimate quantities that are "entropy-like", in the sense that they don't
explicitly compute a probability distribution.

```@docs
approx_entropy
sample_entropy
```
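
For option 1, a minimal sketch of what such an interface could look like (the names, fields, and defaults below are placeholders, not a finalized API):

```julia
# Hypothetical interface sketch; names, fields, and defaults are placeholders.
abstract type ComplexityEstimator end

# Estimator for approximate entropy with embedding dimension `m` and tolerance `r`.
Base.@kwdef struct ApproximateEntropy <: ComplexityEstimator
    m::Int = 2
    r::Float64 = 0.2
end

# Estimator for sample entropy with embedding dimension `m` and tolerance `r`.
Base.@kwdef struct SampleEntropy <: ComplexityEstimator
    m::Int = 2
    r::Float64 = 0.2
end

# Single entry point that dispatches on the estimator type.
complexity(x::AbstractVector, est::ComplexityEstimator) =
    error("complexity not implemented for $(typeof(est))")

# Option 3 then reduces to thin convenience wrappers on top of the interface.
approx_entropy(x; m = 2, r = 0.2) = complexity(x, ApproximateEntropy(m, r))
sample_entropy(x; m = 2, r = 0.2) = complexity(x, SampleEntropy(m, r))
```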

More concise documentation

  • Documentation for genentropy methods can be combined to avoid duplication.
  • Documentation for probabilities methods can be combined to avoid duplication.

Implement disequilibrium

Rosso, O. A., Larrondo, H. A., Martin, M. T., Plastino, A., & Fuentes, M. A. (2007). Distinguishing noise from chaos. Physical review letters, 99(15), 154102.

This quantity is also used in combination with 2D permutation entropy (#74).

Some unnecessary exported names

julia> names(Entropies)
37-element Array{Symbol,1}:
 :CountOccurrences
 :Dataset
 :DirectDistance
 :Entropies
 :EntropyEstimator
 :KozachenkoLeonenko
 :Kraskov
 :NaiveKernel
 :PermutationProbabilityEstimator  
 :Probabilities
 :ProbabilitiesEstimator
 :RectangularBinning
 :SymbolicAmplitudeAwarePermutation
 :SymbolicPermutation
 :SymbolicProbabilityEstimator     
 :SymbolicWeightedPermutation      
 :TimeScaleMODWT
 :TreeDistance
 :VisitationFrequency
 :binhist
 :dimension
 :encode_as_bin
 :encode_motif
 :genentropy
 :genentropy!
 :get_edgelengths
 :get_maxima
 :get_minima
 :get_minima_and_edgelengths       
 :get_minmaxes
 :joint_visits
 :marginal_visits
 :minima_edgelengths
 :probabilities
 :probabilities!
 :symbolize
 :symbolize!

That's a lot of names, and some are internal functions. Some of them may still need to appear in the docs, but you can prefix the docstring expansion with Entropies.name and it will be expanded regardless of whether the name is exported. I think we can consider not exporting the following:

 :EntropyEstimator
 :ProbabilitiesEstimator
 :PermutationProbabilityEstimator  
 :encode_as_bin
 :encode_motif
 :get_edgelengths
 :get_maxima
 :get_minima
 :get_minima_and_edgelengths       
 :get_minmaxes
 :joint_visits
 :marginal_visits
 :minima_edgelengths

Of course, please correct me if some of these names are important for normal usage of CausalityTools and co.

Error in nearest-neighbor `Kraskov` estimator?

Hey @Datseris,

The Kraskov implementation for genentropy currently looks like this:

function genentropy(x::AbstractDataset{D, T}, est::Kraskov; base::Real = MathConstants.e) where {D, T}
    N = length(x)
    ρs = maximum_neighbor_distances(x, est)
    h = -digamma(est.k) + digamma(N) + log(base, ball_volume(D)) + D/N*sum(log.(base, ρs))
    return h
end

This should be equation 20 in Kraskov et al. (2004). From using maximum_neighbor_distances, the entry ρs[i] now contains the distance between point i and its k-th nearest neighbor.

For the last summand, we do D/N*sum(log.(base, ρs)). However, on the first line after equation 20 in Kraskov et al., it is stated that ϵ(i) is twice the distance to its k-th nearest neighbor. Shouldn't the last summand be D/N*sum(log.(base, ρs .* 2))? Or am I missing something here?

Note that mutualinfo in TransferEntropy.jl is not affected, because in the MI-specific formulas (equations 8 and 9 in Kraskov et al.) this sum does not appear.

Implement `alphabet_length` for histograms

If we compute the histogram with n::Int bins per dimension, then the alphabet length is known and equals n^D, with D the dimension of the dataset.
Otherwise it cannot be known, as the number of boxes depends on the values of x, and an error can be thrown.
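
A minimal sketch of how this could look (the function name `alphabet_length` and the `ϵ` field access are assumptions about the API, used only for illustration):

```julia
# Hypothetical sketch; `alphabet_length` and the `ϵ` field name are assumptions.
function alphabet_length(x::AbstractDataset{D, T}, binning::RectangularBinning) where {D, T}
    n = binning.ϵ
    n isa Int || error("alphabet length undefined: the number of boxes depends on the data")
    return n^D   # n bins per dimension ⇒ n^D boxes in total
end
```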

Excess entropy

Ruedi Stoop and co-workers have used excess entropy over several publications in the context of language. I believe a typical application discussing excess entropy is this letter:

https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.127.148101

Biological neuronal networks excel over artificial ones in many ways, but the origin of this is still unknown. Our symbolic dynamics-based tool of excess entropies suggests that neuronal cultures naturally implement data structures of a higher level than what we expect from artificial neural networks, or from close-to-biology neural networks. This points to a new pathway for improving artificial neural networks towards a level demonstrated by biology.

Upstream dependents before 2.0

Upstream packages that we need to make sure work with Entropies.jl before tagging 2.0:

  • TransferEntropy.jl. Its 1.10 release locks Entropies.jl to ~1.2.
  • CausalityTools.jl. Its 1.10 release locks Entropies.jl to ~1.2.
  • ChaosTools.jl

`DispersionEntropy` should be a `ProbabilitiesEstimator`

Currently, only the convenience method dispersion_entropy is provided. However, DispersionEntropy is a proper ProbabilitiesEstimator and should follow the main API.

This should be fixed asap so that the current API is obeyed, since dispersion entropy was released in v1.2 (tagged).

What does SymbolicPermutation do in case of equal values?

When numbers are equal to each other, how does the probability estimator decide between the < and > option? Research has shown that it is best to pick one at random, but I don't remember which paper to cite for this right now.

Replace α with q...

In the literature of nonlinear dynamics, the generalized entropy/dimension order is usually denoted by the letter q instead of α. Furthermore, in this context you also find an 'alpha' with a different meaning (the multifractal spectrum f(α)), see
https://en.wikipedia.org/wiki/Multifractal_system#%7F'%22%60UNIQ--postMath-0000004A-QINU%60%22'%7F_versus_%7F'%22%60UNIQ--postMath-0000004B-QINU%60%22'%7F
or the book by Argyris et al.

So q and α each have a particular (and different) meaning in the theory of fractals, and we should consider exchanging α with q, because otherwise we will confuse (student) readers more than necessary.

The good thing is that this is non-breaking and can still allow α: we simply accept both keywords q and α in genentropy, but only discuss q in the documentation.
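
A minimal sketch of how accepting both keywords could look (purely illustrative; the actual genentropy signature differs):

```julia
# Illustrative sketch only; not the actual genentropy signature.
function genentropy(p::AbstractVector{<:Real}; q = nothing, α = nothing, base = MathConstants.e)
    order = q !== nothing ? q : (α !== nothing ? α : 1.0)  # accept either keyword
    if order ≈ 1.0
        return -sum(x -> x > 0 ? x * log(base, x) : 0.0, p)   # Shannon entropy
    else
        return log(base, sum(x -> x^order, p)) / (1 - order)  # Rényi entropy of order q
    end
end
```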

Multiscale entropy in the source code

In the current source we have

# TODO: What's this file? It's incomplete and with rogue code...

struct MultiscaleEntropy end

"""
    multiscale_entropy(x::AbstractVector)

"""
function multiscale_entropy end

function coarse_grain(x::AbstractVector{T}, τ) where T
    N = length(x)
    ys = Vector{T}(undef, 0)

    for j = 1:floor(Int, N/τ)
        yⱼ = 0.0
        for i = (j-1)*τ+1:j*τ
            yⱼ += x[i]
        end
        yⱼ *= 1/τ
        push!(ys, yⱼ)
    end
    return ys
end

x = rand(3 * 10^4 )
y1 = coarse_grain(x, 1)
y2 = coarse_grain(x, 2)
y3 = coarse_grain(x, 3)
y4 = coarse_grain(x, 4)
y5 = coarse_grain(x, 5)
y6 = coarse_grain(x, 6)
y7 = coarse_grain(x, 20)

I'm guessing that in the future this can be used to estimate some multiscale entropy. But is it really an entropy estimator or a probabilities estimator?

Implement a SimpleWavelets package?

@Datseris For the wavelet stuff, we depend on Wavelets, which in turn depends on DSP (a relatively heavy dependency, because it depends on the FFTW library).

I would argue that, both for package compilation times and perhaps for instructive purposes, it would be nice to implement a package SimpleWavelets, analogous to SimpleDiffEq, that contains a minimal interface for the maximal overlap discrete wavelet transform (MODWT) that we use here, but with no unnecessary dependencies. I also need some internal pieces of the MODWT machinery for TransferEntropy/CausalityTools that the Wavelets package currently does not offer, so this is high-hanging fruit for me.

What do you think?

Travis CI no longer has free plans, move to GitHub Actions?

Travis CI has cancelled their free plans (https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing). Open source projects (accounts?) now have limited build time and have to apply for more credits on a case-by-case basis, meaning that credits probably have to be applied for again and again.

The following post on Julia discourse outlines how to move to GitHub Actions (still free, as of today): https://discourse.julialang.org/t/easy-workflow-file-for-setting-up-github-actions-ci-for-your-julia-package/49765

@Datseris, shall we make the move to GitHub Actions?

Define setindex! and broadcasting for Probabilities

For consistency, if the probability array is to be pre-allocated, it has to be provided as a Probabilities array, so that probabilities!(p::Probabilities, x, est) works. However, because setindex! and the rest of the broadcasting machinery aren't defined for the Probabilities type, things like p ./= sum(p) don't work, so we end up with variants that allocate more memory.

I haven't defined broadcasting for custom array-like types before, so I'll take a stab at it to learn - assigning myself.
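
A minimal sketch of what this could look like, assuming Probabilities wraps a plain vector in a field called p (the actual field name may differ):

```julia
# Hypothetical sketch, assuming Probabilities wraps a vector in a field `p`.
struct Probabilities{T <: Real} <: AbstractVector{T}
    p::Vector{T}
end

Base.size(x::Probabilities) = size(x.p)
Base.getindex(x::Probabilities, i::Int) = x.p[i]
Base.setindex!(x::Probabilities, v, i::Int) = (x.p[i] = v)
Base.IndexStyle(::Type{<:Probabilities}) = IndexLinear()

# With the AbstractVector interface in place, the default broadcasting machinery
# already supports in-place operations such as `p ./= sum(p)`.
```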

Decode symbols constructed using `OrdinalPattern`

Given the embedding dimension m and an already encoded integer symbol s, one should be able to reconstruct the original m-dimensional permutation pattern from s. Make a function that does this. See the discussion in #80.
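
For reference, a minimal sketch of decoding a Lehmer-coded integer back into a permutation (the actual encoding convention used by encode_motif may differ, so this is illustrative only):

```julia
# Illustrative sketch; assumes `s` is the 0-based Lehmer code of a permutation of 1:m.
function decode_pattern(s::Int, m::Int)
    fdigits = zeros(Int, m)
    for i in 1:m                 # extract factorial-base digits, least significant first
        fdigits[i] = s % i
        s ÷= i
    end
    remaining = collect(1:m)
    return [popat!(remaining, d + 1) for d in reverse(fdigits)]
end

decode_pattern(3, 3)  # returns [2, 3, 1], whose Lehmer code is 1*2! + 1*1! = 3
```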

Documentation strings of Symbolic... are unnecessarily long

Entropies.jl is now part of DynamicalSystems.jl (necessary since a large part of the functionality depends on genentropy), and as such I've also added it to the docs of DynamicalSystems.jl: https://juliadynamics.github.io/DynamicalSystems.jl/latest/entropies/estimators/

I see two "problems". The first: if you go through the estimators page, you see a lot of duplicated text. It feels like all the SymbolicSomething estimators are copy-pastes of the same thing with one or two additional sentences added. The second problem is information overload.

I think it is much better to only leave the 1-2 extra sentences in all documentation strings, and cross-reference a single docstring (probably SymbolicPermutation) which contains all shared information like default embedding dimension and lag, multivariate datasets, dynamic interpretation, etc.

The second problem I see is that the docstrings are full of admonitions. This makes them visually distracting and also misleads regarding the importance of the information. Are these docstrings so massively important, compared to the rest of the docstrings of the library, that they deserve to be full of admonition boxes and draw so much attention? I am not sure.

For example, the admonition box "Default embedding dimension and embedding lag" doesn't add much. Obviously a user should make an informed decision when choosing the only two parameters of the algorithm. I think the entire admonition block can be removed and τ, m should instead be defined directly below the function call signature. (P.S.: throughout DynamicalSystems.jl I'm using the symbol d to represent the embedding dimension, not m. For consistency we should do the same in Entropies.jl docstrings.)

Another example: the admonition "Speeding up repeated computations" is a section by itself. It already has a header; why does it have to be highlighted by also being inside an admonition?

Transfer operator estimators

It would be nice to provide transfer operator-based estimation of entropies too. These are already implemented in PerronFrobenius.jl, but can be greatly simplified and moved here.

  • Rectangular-binning-based estimator. Straightforward to include, no additional libraries needed. In progress.

  • Approximate simplex-based estimator. A bit more cumbersome. Will require dependency on Qhull.jl.

  • Exact simplex-based estimator. Depends on Simplices.jl (which depends on PyCall/scipy for access to qhull routines), which makes the package slow. Can be implemented once Simplices.jl is ported to Qhull.jl.

Signature for `genentropy`

Issue

Currently, there are two signatures of genentropy: one where α comes first, and one where it does not. I kept the two different methods until now, because that gives a simple way of distinguishing the following situations:

  1. the input data x is assumed to be a probability distribution (genentropy(α::Real, x::AbstractVector{<:Real}))
  2. the input data x is a time series (genentropy(x::AbstractVector{<:Real}, est::ProbabilitiesEstimator, α::Real = 1))
  3. the input data x is a multivariate dataset (genentropy(x::AbstractDataset, est::ProbabilitiesEstimator, α::Real = 1))

In cases 1 and 2, we feed a real-valued vector as the input, but depending on the context, these vectors need to be treated differently (for time series, probabilities must be estimated using some ProbabilitiesEstimator before entropy is computed, while for distributions, entropy is computed directly).

Potential solutions

Ways of handling this issue:

  1. Make a type Probabilities(x::AbstractVector{<:Real}) that upon initialization imposes the constraint that sum(x)≈1. Then, let genentropy(x::Probabilities, α::Real = 1) compute the generalized entropy from the probability distribution.
  2. Let it be understood that genentropy(x::AbstractVector{<:Real}, α::Real = 1) (without specifying an estimator) must accept a probability distribution (can check that sum(x)≈1 when calling the method).

The first one is explicit, which is good. The second one is implicit, but less complicated. Any thoughts or other ideas, @Datseris ?
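
A minimal sketch of option 1 (illustrative only; whether the constructor should normalize the input or merely check it is up for discussion):

```julia
# Illustrative sketch for option 1: a wrapper type guaranteeing a valid distribution.
struct Probabilities{T <: Real}
    p::Vector{T}
    function Probabilities(x::AbstractVector{T}) where {T <: Real}
        sum(x) ≈ 1 || throw(ArgumentError("probabilities must sum to 1"))
        new{T}(collect(x))
    end
end

# genentropy(x::Probabilities, α::Real = 1) can then dispatch on the wrapper,
# with no ambiguity against the time-series methods that take an estimator.
```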

Mutual information

I'm often in need of mutual information. For me this is the go-to tool to test whether two timeseries have significant correlation.

The Julia package InformationMeasures.jl is unreliable, has an extremely ugly interface, and has thousands upon thousands of lines of code for something extremely simple. We can do much better...

I've coded my own version of mutual information from scratch here: https://gist.github.com/Datseris/4c7404d458d27272badec2aefdfb2f9e I've also included an optimization for the mutual information of shuffled timeseries x,y, which is what is used in the test for significance of correlation.

Now, as you will see, I went with explicitly allocating an array. I realized that our binhist version, while super performant, isn't so useful here. We constantly have to find the probabilities of points p(x, y), most of which do not even exist in the original data. Thus, we would have to code a version that searches the bins to find an appropriate one for each point (which wouldn't be that bad, because bins are sorted and we can use searchsortedfirst), and return 0 as the probability if no bin containing the point is found.
However, I still think that my version with the Array-based implementation will always be faster.
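
For context, a minimal sketch of the array-based approach: bin both series on a fixed grid, build the joint histogram explicitly, and sum over the non-empty cells (the linked gist is more optimized; this only illustrates the idea):

```julia
# Sketch of an array-based mutual information estimate from a fixed binning.
function mutualinfo_hist(x::AbstractVector, y::AbstractVector, nbins::Int; base = MathConstants.e)
    @assert length(x) == length(y)
    binindex(v, lo, hi) = clamp(floor(Int, (v - lo) / (hi - lo) * nbins) + 1, 1, nbins)
    xlo, xhi = extrema(x)
    ylo, yhi = extrema(y)
    pxy = zeros(nbins, nbins)                        # joint histogram
    for (xi, yi) in zip(x, y)
        pxy[binindex(xi, xlo, xhi), binindex(yi, ylo, yhi)] += 1
    end
    pxy ./= length(x)
    px = vec(sum(pxy, dims = 2))                     # marginal of x
    py = vec(sum(pxy, dims = 1))                     # marginal of y
    mi = 0.0
    for i in 1:nbins, j in 1:nbins
        pxy[i, j] > 0 && (mi += pxy[i, j] * log(base, pxy[i, j] / (px[i] * py[j])))
    end
    return mi
end

x = randn(10_000); y = x .+ 0.5 .* randn(10_000)
mutualinfo_hist(x, y, 32)
```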

I need mutualinformation in a reliable package for my classes and the book. Is it okay if we put it here? If not, should we put it in TransferEntropy? I can then extend it to treat ε exactly like probabilities does: Float, Vector{Float}, Int, Vector{Int}.

@kahaaga

Permentropy gives the same result for different permutation lengths/delays

It doesn't matter what values we use for m and τ; the number is the same:

julia> using DynamicalSystems

julia> ds = Systems.lorenz(); x = trajectory(ds, 1000.0; dt = 0.1)[:, 1];

julia> permentropy(x, 2; τ = 3)
1.3542918655432845

julia> permentropy(x, 3; τ = 3)
1.3542918655432845

julia> permentropy(x, 3; τ = 8)
1.3542918655432845

@kahaaga you coded the new version of permutation entropy. The old version was working fine. What happened here?

Consistent markdown formatting

Some docstrings and markdown documents in the repo use a zero-spacing style of formatting. What is the reasoning for picking this style, @Datseris? Is it based on any established set of linting rules, or is it just a convention used in DynamicalSystems.jl and friends?

Whatever convention we stick with, we should at least be consistent when formatting documentation strings and markdown documentation pages. I use this set of linting rules when using VSCode, which I think looks great, and less messy than the no-space format.

Improve the tests! (aim for 90% coverage)

One of the things the Good Scientific Code Workshop teaches is writing good unit tests. I have to admit, Entropies.jl suffers from "bad tests". The main offenders in our tests are that (1) all tests are in one massive file, (2) we test non-API functions (i.e., internal functions that are not exported), and (3) we don't have enough analytic tests, where we make something like a trivial timeseries with a single 1 followed by all 0s and compute the entropy/probabilities for it, since we know what the result should be (see the sketch at the end of this issue). Little by little we should try to improve the test suite. I am pasting here the slide with "good advice on writing tests":

  • Actually unit: test atomic, self-contained functions. Each test must test only one thing, the unit. When a test fails, it should pinpoint the location of the problem. Testing entire processing pipelines (a.k.a. integration tests) should be done only after units are covered, and only if resources/time allow for it!
  • Known output / Deterministic: tests defined through minimal examples that their result is known analytically are the best tests you can have! If random number generation is necessary, either test valid output range, or use seed for RNG
  • Robust: Test that the expected outcome is met, not the implementation details. Test that the target functionality is met without utilizing knowledge about the internals. Also, never use internal functions in the test suite.
  • High coverage: the more functionality of the code is tested, the better
  • Clean slate: each test file should be runnable by itself, and not rely on previous test files
  • Fast: use the minimal amount of computations to test what is necessary
  • Regression: Whenever a bug is fixed, a test is added for this case
  • Input variety: attempt to cover a wide gamut of input types

One doesn't have to worry about re-writing all tests. In fact, a PR "correcting" a single test file is already very much welcomed, as is a PR putting testing of a specific method (like all the binning methods) in a specific test file.
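
For example, an analytic test along the lines of point (3) could look like this (a sketch; the exact calling convention and the ordering of the returned probabilities may differ):

```julia
using Entropies, Test

# Analytic test sketch: one distinct value among ten gives known probabilities
# [0.1, 0.9], so the Shannon entropy is known exactly.
x = [1.0; zeros(9)]
p = probabilities(x, CountOccurrences())
@test sort(collect(p)) ≈ [0.1, 0.9]
@test genentropy(x, CountOccurrences()) ≈ -(0.1 * log(0.1) + 0.9 * log(0.9))
```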


Do we need a `VisitationFrequency`?

I'm a bit unhappy that we need to write the complicated VisitationFrequency(RectangularBinning(n)). I understand that VisitationFrequency could take other things than RectangularBinning, but I don't understand where else RectangularBinning can be passed except to VisitationFrequency. Why don't we allow RectangularBinning itself to be used directly with probabilities? @kahaaga
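
If we go this route, a one-line forwarding method would presumably suffice (a sketch, not existing API):

```julia
# Sketch: accept the binning directly and forward to the existing estimator,
# so that probabilities(x, RectangularBinning(n)) just works.
probabilities(x, binning::RectangularBinning) = probabilities(x, VisitationFrequency(binning))
```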

reset `gh-pages` and enable clearing preview PR commits

Documentation from Entropies.jl was never "publicly announced". Everything there is weighing down the git size of this repo. We should reset the gh-pages branch just before tagging 2.0.

Furthermore, there is a single CI action that cleans preview PRs so that their content is also not saved. We do this in Agents.jl and we should do it here as well.

Measures that build on entropies

When planning the restructuring of CausalityTools.jl into multiple smaller sub-packages, I have the following unresolved question:

Where do measures that build on the various entropies belong?

There are a plethora of such entropy-based methods in the literature (cross-entropy, joint entropy, conditional entropy, relative entropy or KL divergence, etc.). Although these exist scattered around in other packages such as StatsBase.jl and InformationMeasures.jl, it would be nice if we included them as part of our ecosystem, so that it is possible to compute these quantities using any of our probability estimators.

Currently, mutualinfo lives in TransferEntropy.jl, but "hiding" mutualinfo in TransferEntropy.jl seems a bit weird. I think it should be part of a lower-level package. Entropies.jl is a natural candidate, but it is also conceivable that it belongs elsewhere.

I much prefer the idea of having a separate package dedicated to all the "intermediate" measures, in the sense that they build on various entropies. Something like EntropyMeasures.jl/EntropicMeasures.jl? Such a package would contain the "basic" entropy-based measures, while methods that are more complicated or established as their own methodology, like transfer entropy and friends, can exist in their own top-level packages.

This package hierarchy is similar to what you mentioned that you imagine for DynamicalSystems.jl for the multi-repo approach, @Datseris, when discussing moving AbstractDataset and related functionality to a separate lower-level package.

On the other hand, these measures also naturally belong together with entropy, so we could include them here. Any thoughts on the matter, @Datseris?

Permutation Entropy for 2D and 3D Objects

I would like the possibility to calculate a permutation entropy of 2D and 3D objects (possibly images or 1D spatiotemporal systems in the 2D case, and 2D spatiotemporal systems or volumes in the 3D case).

An example for the 2D version is given in Ribeiro et al. (2012), and a 3D version is described in Schlemmer et al. (2018). Essentially, you take a subarray out of the original array, flatten it, and calculate the permutation index of the resulting pattern.

But I would like to have this method in a much more generic way than it is presented in the papers above, where any "stencil shape" can be used, not just rectangles (2D) or the "tripod" in the 3D case. This can be realized by allowing the user to give a boolean array to the function. Example:

stencil = [0 1 0;
           1 1 1;
           0 1 0]

applied to matrix (image)

[ 1  2  3  4  5;
  6  7  8  9 10;
 11 12 13 14 15;
 16 17 18 19 20;
 21 22 23 24 25]

would result in (overlapping) patterns

[2 6 7 8 12]
[3 7 8 9 13]
...

that can then be translated into permutation indices, from which a distribution can be estimated, from which a permutation entropy can be calculated.

One could also (but imo does not necessarily need to) implement the option to pass a 2D (3D) lag and length instead of a stencil, to easily get rectangle-shaped stencils, similar to what is already described in the Ribeiro paper and implemented in the ordpy library.
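
A minimal sketch of how extracting the overlapping stencil patterns could work (the function name is just illustrative; note that Julia's logical indexing flattens each window column-major, so the ordering within a pattern differs from the row-major listing above, which is only a convention):

```julia
# Illustrative sketch: extract all overlapping patterns selected by a boolean stencil.
function stencil_patterns(x::AbstractMatrix, stencil::AbstractMatrix{Bool})
    sr, sc = size(stencil)
    nr, nc = size(x)
    patterns = Vector{Vector{eltype(x)}}()
    for j in 1:(nc - sc + 1), i in 1:(nr - sr + 1)
        window = @view x[i:i+sr-1, j:j+sc-1]
        push!(patterns, window[stencil])   # keep only the entries selected by the stencil
    end
    return patterns
end

stencil = Bool[0 1 0; 1 1 1; 0 1 0]
x = reshape(1:25, 5, 5)'                   # the 5×5 example matrix from above
stencil_patterns(x, stencil)
# Each pattern can then be mapped to a permutation index, and the resulting
# distribution of indices gives the permutation entropy.
```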

Improve `PowerSpectrum` estimator: add threshold

The power spectrum estimator will yield rather unconverged results for short timeseries of regular signals, due to the noise induced by the Fourier transform affecting the signal a lot. We should add a threshold and set to 0 all spectral power below it. This would make the results much more reasonable for regular signals, and better for the normalized version. Here is the example code I have:

using DynamicalSystems
N1, N2, a = 101, 100001, 10

for N in (N1, N2)
    t = LinRange(0, 2*a*π, N)
    x = sin.(t) # periodic
    y = sin.(t .+ cos.(t/0.5)) # periodic, complex spectrum
    z = sin.(rand(1:15, N) ./ rand(1:10, N)) # random
    w = trajectory(Systems.lorenz(), N÷10; Δt = 0.1, Ttr = 100)[:, 1] # chaotic

    for q in (x, y, z, w)
        h = entropy(q, PowerSpectrum())
        n = entropy_normalized(q, PowerSpectrum())
        println("entropy: $(h), normalized: $(n).")
    end
end
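
One way to realize the proposal, sketched on top of a plain FFT rather than the actual PowerSpectrum internals: compute the power spectrum, zero out everything below a relative threshold, renormalize, and take the Shannon entropy of the result.

```julia
using FFTW

# Sketch of the proposed thresholding, independent of the PowerSpectrum internals.
function thresholded_spectral_entropy(x::AbstractVector{<:Real}; threshold = 1e-3)
    power = abs2.(rfft(x))
    power[power .< threshold * maximum(power)] .= 0.0   # suppress the FFT noise floor
    p = power ./ sum(power)
    return -sum(pᵢ -> pᵢ > 0 ? pᵢ * log(pᵢ) : 0.0, p)
end
```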

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

Add examples reproducing scientific papers to all methods

The correctness of the implementations should be verified by reproducing examples from the original peer-reviewed papers which the implementations are based on. If no suitable examples are given, then examples that demonstrate the correctness of the methods should be constructed.

Status:

  • SymbolicPermutation. Example roughly reproduces original paper.
  • SymbolicAmplitudeAwarePermutation. Example is given, but not one reproducing the original paper.
  • SymbolicWeightedPermutation. Example is given, but not one reproducing the original paper.
  • VisitationFrequency.
  • TimeScaleMODWT. Example is given, but not one reproducing the original paper. This looks fine and behaves as expected, so ok!
  • NaiveKernel. Example of a kernel density estimate to a 2D gaussian is given.
  • Kraskov. Example of a 1D uniform distribution.
  • KozachenkoLeonenko. Example of a 1D uniform distribution.

Documentation of symbolize is unusual

The docstring of symbolize is unusual. It starts with headers and sections instead of starting the way a docstring should. It also spends considerable length on examples. It reads much more like a documentation page than a documentation string. This is unusual and unintuitive, because I haven't seen anything else follow this approach. Documentation strings should really be about the function/method they document; further details that turn them into documentation pages should be actual documentation pages, built with Documenter, not part of the documentation string. Since symbolize is now part of the main API, it should have its own header in the docs, just like genentropy, etc.

I don't remember what we said about this, but I do remember that some pull request associated with documentation should be made to DynamicalSystems.jl to see how things look.
