
GFlops.jl

When code performance is an issue, it is sometimes useful to get absolute performance measurements in order to objectively assess what is "slow" or "fast". GFlops.jl leverages the power of Cassette.jl to automatically count the number of floating-point operations in a piece of code. Combined with the accuracy of BenchmarkTools, this makes for easy and absolute performance measurements.

Installation

This package is registered and can therefore simply be installed with

pkg> add GFlops

Example use

This simple example shows how to track the number of operations in a vector summation:

julia> using GFlops

julia> x = rand(1000);

julia> @count_ops sum($x)
Flop Counter: 999 flop
┌─────┬─────────┐
│     │ Float64 │
├─────┼─────────┤
│ add │     999 │
└─────┴─────────┘

julia> @gflops sum($x);
  8.86 GFlops,  12.76% peak  (9.99e+02 flop, 1.13e-07 s, 0 alloc: 0 bytes)

GFlops.jl internally tracks several types of floating-point operations, for both 32-bit and 64-bit operands. Pretty-printing a Flop Counter only shows non-zero entries, but any individual counter can be accessed:

julia> function mixed_dot(x, y)
           acc = 0.0
           @inbounds @simd for i in eachindex(x, y)
               acc += x[i] * y[i]
           end
           acc
       end
mixed_dot (generic function with 1 method)

julia> x = rand(Float32, 1000); y = rand(Float32, 1000);

julia> cnt = @count_ops mixed_dot($x, $y)
Flop Counter: 2000 flop
┌─────┬─────────┬─────────┐
│     │ Float32 │ Float64 │
├─────┼─────────┼─────────┤
│ add │       0 │    1000 │
│ mul │    1000 │       0 │
└─────┴─────────┴─────────┘

julia> fieldnames(GFlops.Counter)
(:fma32, :fma64, :muladd32, :muladd64, :add32, :add64, :sub32, ...)

julia> cnt.add64
1000

julia> @gflops mixed_dot($x, $y);
  9.91 GFlops,  13.36% peak  (2.00e+03 flop, 2.02e-07 s, 0 alloc: 0 bytes)

Caveats

Fused Multiplication and Addition: FMA & MulAdd

On systems that support them, FMAs and MulAdds compute two operations (an addition and a multiplication) in one instruction. @count_ops counts each individual FMA/MulAdd as one operation, which makes counters easier to interpret. However, @gflops counts two floating-point operations for each FMA, in accordance with the way high-performance benchmarks usually behave:

julia> x = 0.5; coeffs = rand(10);

# 9 MulAdds but 18 flop
julia> cnt = @count_ops evalpoly($x, $coeffs)
Flop Counter: 18 flop
┌────────┬─────────┐
│        │ Float64 │
├────────┼─────────┤
│ muladd │       9 │
└────────┴─────────┘

julia> @gflops evalpoly($x, $coeffs);
  0.87 GFlops,  1.63% peak  (1.80e+01 flop, 2.06e-08 s, 0 alloc: 0 bytes)

Non-Julia code

GFlops.jl does not see what happens outside the realm of Julia code. In particular, it does not see operations performed in external libraries, such as BLAS calls:

julia> using LinearAlgebra

julia> @count_ops dot($x, $y)
Flop Counter: 0 flop

This is a known issue; we'll try to find a way to circumvent the problem.
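
As a partial workaround, one can count a pure-Julia equivalent of the library call (julia_dot below is an illustrative helper, not part of the package):

using GFlops, LinearAlgebra

x = rand(1000); y = rand(1000);

# A pure-Julia dot product that Cassette can instrument:
julia_dot(x, y) = sum(xi * yi for (xi, yi) in zip(x, y))

@count_ops dot($x, $y)        # 0 flop: the work happens inside BLAS
@count_ops julia_dot($x, $y)  # 1000 muls + 999 adds, counted normally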


GFlops.jl's Issues

Test fails on Julia master (1.4)

Some of the tests fail with Julia master (1.4):

@count_ops: Test Failed at /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:71
  Expression: cnt.mul64 == 100
   Evaluated: 0 == 100
Stacktrace:
 [1] top-level scope at /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:71
 [2] top-level scope at /Users/kristoffer/julia/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1116
 [3] top-level scope at /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:57
 [4] top-level scope at /Users/kristoffer/julia/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1116
 [5] top-level scope at /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:35
@count_ops: Test Failed at /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:72
  Expression: GFlops.flop(cnt) == 200
   Evaluated: 100 == 200
Stacktrace:
 [1] top-level scope at /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:72
 [2] top-level scope at /Users/kristoffer/julia/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1116
 [3] top-level scope at /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:57
 [4] top-level scope at /Users/kristoffer/julia/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1116
 [5] top-level scope at /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:35
  100.00 GFlops,  75.57% peak  (2.00e+02 flop, 2.00e-09 s)
  50.00 GFlops,  33.47% peak  (1.00e+02 flop, 2.00e-09 s)
@gflops: Test Failed at /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:104
  Expression: #= /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:104 =# @gflops(my_axpy!(π, $(rand(N)), y)) == N
   Evaluated: 50.0 == 100
Stacktrace:
 [1] top-level scope at /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:104
 [2] top-level scope at /Users/kristoffer/julia/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1116
 [3] top-level scope at /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:97
 [4] top-level scope at /Users/kristoffer/julia/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1116
 [5] top-level scope at /Users/kristoffer/JuliaPkgEval/GFlops.jl/test/runtests.jl:35
  10000.00 GFlops,  6610.24% peak  (2.00e+04 flop, 2.00e-09 s)
Test Summary: | Pass  Fail  Total

It would be good to figure out the root cause of this, to see whether it is a regression in Julia or something else is going on.

@gflops UndefVarError

julia> x = 0.5; coeffs = rand(10);

julia> cnt = @count_ops evalpoly($x, $coeffs)
Flop Counter: 18 flop
┌────────┬─────────┐
│        │ Float64 │
├────────┼─────────┤
│ muladd │       9 │
└────────┴─────────┘

julia> @gflops evalpoly($x, $coeffs);
ERROR: UndefVarError: x not defined
Stacktrace:
 [1] macro expansion
   @ ~/.julia/packages/BenchmarkTools/ms0Xc/src/execution.jl:440 [inlined]
 [2] top-level scope
   @ ~/.julia/packages/GFlops/16nkd/src/count_ops.jl:44

julia> x
0.5

Julia Version 1.6.0
[2ea8233c] GFlops v0.1.4

FMA for Julia 1.8+ on MacOS

It looks like FMA is software-emulated on macOS since Julia v1.8. Not sure what to do about it (probably nothing), but for now I'm going to deactivate the corresponding tests.
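
A sketch of what deactivating them could look like in the test suite (the condition and file name are illustrative, not the actual change):

# Skip the FMA tests where FMA is software-emulated (assumed condition):
fma_is_native = !(Sys.isapple() && VERSION >= v"1.8")
fma_is_native && include("test_fma.jl")   # hypothetical test file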

GFlops should use minimum time instead of mean

GFlops should be estimated based on the minimum time measurement reported by BenchmarkTools, instead of the mean time. This would be more consistent with current best practices, as implemented in @btime itself.
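
A minimal sketch of the difference, using BenchmarkTools directly (the variable names are illustrative, not GFlops.jl internals). BenchmarkTools reports times in nanoseconds, so flop/ns is numerically equal to GFlop/s:

using BenchmarkTools, Statistics

x = rand(1000);
trial = @benchmark sum($x)

flop = 999                                # 999 additions for a 1000-element sum
gflops_mean = flop / time(mean(trial))    # current behavior: mean time
gflops_min  = flop / time(minimum(trial)) # proposed: minimum time, like @btime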

World age errors and wrong results with LoopVectorization.jl

Thanks for this nice package! I get the following errors when trying to investigate a method using @turbo from LoopVectorization.jl.

julia> using GFlops, LoopVectorization

julia> function foo!(dest, src)
           @turbo for i in eachindex(dest, src)
               dest[i] = src[i]^2
           end
       end
foo! (generic function with 1 method)

julia> src = randn(10^3); dest = similar(src);

julia> foo!(dest, src)

julia> @count_ops foo!(dest, src)
ERROR: MethodError: no method matching namemap(::Type{LoopVectorization.OperationType})
The applicable method may be too new: running in world age 29623, while current world is 29660.
Closest candidates are:
  namemap(::Type{LoopVectorization.OperationType}) at Enums.jl:195 (method too new to be called from this world context.)
  namemap(::Type{Pkg.GitTools.GitMode}) at Enums.jl:195
  namemap(::Type{LibGit2.Consts.GIT_CREDTYPE}) at Enums.jl:195
  ...
Stacktrace:
  [1] Symbol(x::LoopVectorization.OperationType)
    @ Base.Enums ./Enums.jl:26
  [2] show(io::IOContext{IOBuffer}, x::LoopVectorization.OperationType)
    @ Base.Enums ./Enums.jl:31
  [3] _show_default(io::IOContext{IOBuffer}, x::Any)
    @ Base ./show.jl:412
  [4] show_default
    @ ./show.jl:395 [inlined]
  [5] show
    @ ./show.jl:390 [inlined]
  [6] show_delim_array(io::IOContext{IOBuffer}, itr::Tuple{Symbol, Symbol, LoopVectorization.OperationStruct, Symbol, Symbol, LoopVectorization.OperationStruct, Symbol, Symbol, LoopVectorization.OperationStruct}, op::Char, delim::Char, cl::Char, delim_one::Bool, i1::Int64, n::Int64)
    @ Base ./show.jl:1124
  [7] show_delim_array
    @ ./show.jl:1109 [inlined]
  [8] show(io::IOContext{IOBuffer}, t::Tuple{Symbol, Symbol, LoopVectorization.OperationStruct, Symbol, Symbol, LoopVectorization.OperationStruct, Symbol, Symbol, LoopVectorization.OperationStruct})
    @ Base ./show.jl:1142
  [9] show_datatype(io::IOContext{IOBuffer}, x::DataType)
    @ Base ./show.jl:928
 [10] show(io::IOContext{IOBuffer}, x::Type)
    @ Base ./show.jl:819
 [11] show_datatype(io::IOContext{IOBuffer}, x::DataType)
    @ Base ./show.jl:928
 [12] show(io::IOContext{IOBuffer}, x::Type)
    @ Base ./show.jl:819
 [13] show_delim_array(io::IOBuffer, itr::Tuple{DataType}, op::Char, delim::Char, cl::Char, delim_one::Bool, i1::Int64, n::Int64)
    @ Base ./show.jl:1124
 [14] show_delim_array
    @ ./show.jl:1109 [inlined]
 [15] show
    @ ./show.jl:1142 [inlined]
 [16] print(io::IOBuffer, x::Tuple{DataType})
    @ Base ./strings/io.jl:35
 [17] print_to_string(::String, ::Vararg{Any, N} where N)
    @ Base ./strings/io.jl:135
 [18] string
    @ ./strings/io.jl:174 [inlined]
 [19] __overdub_generator__(self::Type, context_type::Type, args::Tuple{DataType})
    @ Cassette ~/.julia/packages/Cassette/FwMN0/src/overdub.jl:605
 [20] (::Core.GeneratedFunctionStub)(::Any, ::Vararg{Any, N} where N)
    @ Core ./boot.jl:571
 [21] overdub
    @ ./REPL[2]:2 [inlined]
 [22] overdub(::Cassette.Context{nametype(CounterCtx), GFlops.Counter, Nothing, Cassette.var"##PassType#259", Nothing, Nothing}, ::typeof(foo!), ::Vector{Float64}, ::Vector{Float64})
    @ Cassette ~/.julia/packages/Cassette/FwMN0/src/overdub.jl:0
 [23] (::var"
    @ ~/.julia/packages/GFlops/jiDsP/src/count_ops.jl:27 [inlined]
 [24] overdub
    @ ~/.julia/packages/GFlops/jiDsP/src/count_ops.jl:27 [inlined]
 [25] overdub(overdub_context#257::Cassette.Context{nametype(CounterCtx), GFlops.Counter, Nothing, Cassette.var"##PassType#259", Nothing, Nothing}, overdub_arguments#258::var"#1#2")
    @ Cassette ~/.julia/packages/Cassette/FwMN0/src/overdub.jl:0
 [26] top-level scope
    @ ~/.julia/packages/GFlops/jiDsP/src/count_ops.jl:26

caused by: MethodError: no method matching namemap(::Type{LoopVectorization.OperationType})
The applicable method may be too new: running in world age 29623, while current world is 29660.
Closest candidates are:
  namemap(::Type{LoopVectorization.OperationType}) at Enums.jl:195 (method too new to be called from this world context.)
  namemap(::Type{Pkg.GitTools.GitMode}) at Enums.jl:195
  namemap(::Type{LibGit2.Consts.GIT_CREDTYPE}) at Enums.jl:195
  ...
Stacktrace:
  [1] Symbol(x::LoopVectorization.OperationType)
    @ Base.Enums ./Enums.jl:26
  [2] show(io::IOContext{IOBuffer}, x::LoopVectorization.OperationType)
    @ Base.Enums ./Enums.jl:31
  [3] _show_default(io::IOContext{IOBuffer}, x::Any)
    @ Base ./show.jl:412
  [4] show_default
    @ ./show.jl:395 [inlined]
  [5] show
    @ ./show.jl:390 [inlined]
  [6] show_delim_array(io::IOBuffer, itr::Tuple{Symbol, Symbol, LoopVectorization.OperationStruct, Symbol, Symbol, LoopVectorization.OperationStruct, Symbol, Symbol, LoopVectorization.OperationStruct}, op::Char, delim::Char, cl::Char, delim_one::Bool, i1::Int64, n::Int64)
    @ Base ./show.jl:1124
  [7] show_delim_array
    @ ./show.jl:1109 [inlined]
  [8] show(io::IOBuffer, t::Tuple{Symbol, Symbol, LoopVectorization.OperationStruct, Symbol, Symbol, LoopVectorization.OperationStruct, Symbol, Symbol, LoopVectorization.OperationStruct})
    @ Base ./show.jl:1142
  [9] show_datatype(io::IOBuffer, x::DataType)
    @ Base ./show.jl:928
 [10] show(io::IOBuffer, x::Type)
    @ Base ./show.jl:819
 [11] print(io::IOBuffer, x::Type)
    @ Base ./strings/io.jl:35
 [12] #print_within_stacktrace#429
    @ ./show.jl:2214 [inlined]
 [13] show_signature_function(io::IOBuffer, ft::Any, demangle::Bool, fargname::String, html::Bool, qualified::Bool)
    @ Base ./show.jl:2198
 [14] show_tuple_as_call(io::IOBuffer, name::Symbol, sig::Type, demangle::Bool, kwargs::Nothing, argnames::Nothing, qualified::Bool)
    @ Base ./show.jl:2232
 [15] show_tuple_as_call(io::IOBuffer, name::Symbol, sig::Type)
    @ Base ./show.jl:2220
 [16] sprint(::Function, ::Symbol, ::Vararg{Any, N} where N; context::Nothing, sizehint::Int64)
    @ Base ./strings/io.jl:105
 [17] sprint
    @ ./strings/io.jl:101 [inlined]
 [18] verbose_lineinfo!(ci::Core.CodeInfo, sig::Type{var"#s25"} where var"#s25"<:Tuple)
    @ Cassette ~/.julia/packages/Cassette/FwMN0/src/overdub.jl:53
 [19] reflect(sigtypes::Tuple, world::UInt64)
    @ Cassette ~/.julia/packages/Cassette/FwMN0/src/overdub.jl:119
 [20] reflect
    @ ~/.julia/packages/Cassette/FwMN0/src/overdub.jl:84 [inlined]
 [21] __overdub_generator__(self::Type, context_type::Type, args::Tuple{DataType})
    @ Cassette ~/.julia/packages/Cassette/FwMN0/src/overdub.jl:598
 [22] (::Core.GeneratedFunctionStub)(::Any, ::Vararg{Any, N} where N)
    @ Core ./boot.jl:571
 [23] overdub
    @ ./REPL[2]:2 [inlined]
 [24] overdub(::Cassette.Context{nametype(CounterCtx), GFlops.Counter, Nothing, Cassette.var"##PassType#259", Nothing, Nothing}, ::typeof(foo!), ::Vector{Float64}, ::Vector{Float64})
    @ Cassette ~/.julia/packages/Cassette/FwMN0/src/overdub.jl:0
 [25] (::var"
    @ ~/.julia/packages/GFlops/jiDsP/src/count_ops.jl:27 [inlined]
 [26] overdub
    @ ~/.julia/packages/GFlops/jiDsP/src/count_ops.jl:27 [inlined]
 [27] overdub(overdub_context#257::Cassette.Context{nametype(CounterCtx), GFlops.Counter, Nothing, Cassette.var"##PassType#259", Nothing, Nothing}, overdub_arguments#258::var"#1#2")
    @ Cassette ~/.julia/packages/Cassette/FwMN0/src/overdub.jl:0
 [28] top-level scope
    @ ~/.julia/packages/GFlops/jiDsP/src/count_ops.jl:26

Zero gflops with evalpoly

Hi, I get the following with an example from the README.md:

julia> x = 0.5; coeffs = rand(10);

julia> cnt = @count_ops evalpoly($x, $coeffs)
Flop Counter: 0 flop

julia> @gflops evalpoly($x, $coeffs);
  0.00 GFlops,  0.00% peak  (0.00e+00 flop, 1.88e-08 s, 0 alloc: 0 bytes)

I thought the implementation of evalpoly might have changed (maybe to C), but that's not the case:

julia> @edit evalpoly(x, coeffs)

leads me to the following implementation:

evalpoly(x, p::AbstractVector) = _evalpoly(x, p)

function _evalpoly(x, p)
    N = length(p)
    ex = p[end]
    for i in N-1:-1:1
        ex = muladd(x, ex, p[i])
    end
    ex
end

Is this expected? Some version info follows:

julia> versioninfo()
Julia Version 1.4.1
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Core(TM) i7-5500U CPU @ 2.40GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-8.0.1 (ORCJIT, broadwell)
Environment:
  JULIA_NUM_THREADS = 4
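
One way to narrow this down (a hypothetical check, assuming the cause is that this GFlops version does not track the muladd intrinsic, to which _evalpoly reduces):

using GFlops

# If this also reports 0 flop, the installed GFlops version simply does
# not count muladd_float, which would explain the evalpoly result:
@count_ops muladd(1.0, 2.0, 3.0)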

The rem test fails with 1.9 beta and up

From 1.9 beta and up, Julia has a native implementation of rem which is a bit faster than the openlibm version we currently use. That means the rem test now fails: the counter no longer sees a frem operation, but a couple of fsubs and an fabs instead.
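
An illustrative check (the counter fields follow GFlops.Counter naming; the exact counts on 1.9+ are assumptions based on the description above):

using GFlops

cnt = @count_ops rem(5.5, 2.0)
# Julia <= 1.8: cnt.rem64 == 1
# Julia >= 1.9: cnt.rem64 == 0, with the native implementation showing
#               up as sub64/abs64 counts instead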

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

@fastmath operations are not counted (simple fix given)

When using the @fastmath macro, operators such as Core.Intrinsics.add_float are replaced by Core.Intrinsics.add_float_fast. These should be counted for a correct flop count. The following patch works for me:

--- a/src/overdub.jl
+++ b/src/overdub.jl
@@ -8,6 +8,10 @@ const ternops = (
 )

 const binops = (
+    (:add_fast, Core.Intrinsics.add_float_fast, 1),
+    (:sub_fast, Core.Intrinsics.sub_float_fast, 1),
+    (:mul_fast, Core.Intrinsics.mul_float_fast, 1),
+    (:div_fast, Core.Intrinsics.div_float_fast, 1),
     (:add, Core.Intrinsics.add_float, 1),
     (:sub, Core.Intrinsics.sub_float, 1),
     (:mul, Core.Intrinsics.mul_float, 1),
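
For reference, a snippet that exercises one of these fast intrinsics; the resulting counter field name (add_fast64) follows from the patch and is an assumption here:

using GFlops

f(a, b) = @fastmath a + b   # lowers to Core.Intrinsics.add_float_fast

cnt = @count_ops f(1.0, 2.0)
# Unpatched GFlops reports 0 flop here; with the patch above, one
# Float64 add_fast is counted.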

adding unary ops (inv, sqrt)

I use them quite a bit within arithmetic expressions. If you add them (so I have something to copy from), and if you want them, I would PR other widely used unary math functions.

Add Float16 support?

Running the following on an A64FX with native Float16 support yields

julia> @count_ops run_model(Float32,Ndays=5)
Starting ShallowWaters on Wed, 14 Apr 2021 05:36:36 without output.
60% Integration done in 1min, 22s.
Flop Counter: 221063593 flop
┌────────┬──────────┬─────────┐
│        │  Float32 │ Float64 │
├────────┼──────────┼─────────┤
│ muladd │        0 │  333040 │
│    add │ 91734565 │   72271 │
│    sub │ 22795413 │  166263 │
│    mul │ 97619024 │  244707 │
│    div │  2580653 │   81624 │
│    rem │        0 │      48 │
│    abs │        0 │   59320 │
│    neg │  4990000 │   48775 │
│   sqrt │        0 │    4850 │
└────────┴──────────┴─────────┘

julia> @count_ops run_model(Float16,Ndays=5)
Starting ShallowWaters on Wed, 14 Apr 2021 05:50:17 without output.
60% Integration done in 1min, 23s.
Flop Counter: 6919066 flop
┌────────┬─────────┬─────────┐
│        │ Float32 │ Float64 │
├────────┼─────────┼─────────┤
│ muladd │       0 │  333040 │
│    add │       0 │   72271 │
│    sub │       0 │  166263 │
│    mul │ 5575128 │  244707 │
│    div │       0 │   81624 │
│    rem │       0 │      48 │
│    abs │       0 │   59320 │
│    neg │       0 │   48775 │
│   sqrt │       0 │    4850 │
└────────┴─────────┴─────────┘

The second run executes every Float32 operation in Float16, but these are not counted. I'd be happy to test this on A64FX if there's interest in including Float16 support.

Emulated Float16 is counted as Float16 but sqrt(::Float16) isn't

Operations with Float16 are counted as Float16 (although technically LLVM converts them to Float32 and back on most hardware)

julia> @count_ops Float16(1)+Float16(1)
Flop Counter: 1 flop
┌─────┬─────────┐
│     │ Float16 │
├─────┼─────────┤
│ add │       1 │
└─────┴─────────┘

I think this is how we'd want it, as most users of this package who want to count Float16 operations probably want exactly that, in order to check type stability. However, sqrt is counted as Float32, which is technically true, but probably confusing, or at least inconsistent.

julia> @count_ops sqrt(Float16(1))
Flop Counter: 1 flop
┌──────┬─────────┐
│      │ Float32 │
├──────┼─────────┤
│ sqrt │       1 │
└──────┴─────────┘

Presumably this is because sqrt happens at the LLVM level, which I assume from the last line of GFlops.jl/src/overdub.jl (lines 10 to 22 at a2ad017):

const binops = (
    (:add, Core.Intrinsics.add_float, 1),
    (:sub, Core.Intrinsics.sub_float, 1),
    (:mul, Core.Intrinsics.mul_float, 1),
    (:div, Core.Intrinsics.div_float, 1),
    (:rem, Core.Intrinsics.rem_float, 1),
)

const unops = (
    (:abs, Core.Intrinsics.abs_float, 1),
    (:neg, Core.Intrinsics.neg_float, 1),
    (:sqrt, Core.Intrinsics.sqrt_llvm, 1),
)
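
For context, Base implements most Float16 math functions by round-tripping through Float32, roughly as sketched below (see base/math.jl; the exact form varies across Julia versions), which would explain why the counted sqrt intrinsic is the Float32 one:

# Simplified sketch of the Base pattern: the sqrt intrinsic actually
# runs on a Float32 value, so that is what the counter sees.
sqrt(x::Float16) = Float16(sqrt(Float32(x)))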
