libgumath's Introduction

libgumath

C library supporting a general dispatch mechanism for xnd containers as well as a composable, generalized function concept.

gumath

Python wrapper for libgumath.

Authors

libgumath/gumath was created by Stefan Krah. The funding that made this project possible came from Anaconda Inc. and currently from Quansight LLC.

libgumath's Issues

[CI] [conda] [xnd/label/dev] Functions don't work with xnd.array

gumath functions don't work on xnd.array currently.

>>> import gumath.functions as fn
>>> import xnd
>>> a = xnd.array(5.0)
>>> a
array(5.0, type='float64')
>>> fn.add(a, a, cls=xnd.array)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: invalid dtype
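
For reference, the same call on a plain xnd container works; the output below is what the float64 add kernel is expected to produce, not a verbatim capture:

>>> from xnd import xnd
>>> b = xnd(5.0)
>>> fn.add(b, b)
xnd(10.0, type='float64')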

Some corner-case kernels that crash with segfault

Here follows a list of kernel signatures that cause segfaults:

void -> int64
int64 -> int64, int64

The segfaults occur before calling the kernel function (tested with Xnd kind), hence the issue is inside gumath.

If a signature is invalid, unrealistic, or does not make sense for any reason, gumath ought not to allow registering such a kernel in the first place, imho.

Questions: Function composition and type hints

Can the map function be overloaded with a kernel from a language like numba? It seems that function composition is a more general form of broadcasting and map.

Also, can kernels be selected based on other host type systems besides ndtypes? @sklam talked about translation, but it seems like PEP 484, protocols, etc. have more granularity than ndtypes, so you'd be losing data.

Otherwise we have yet another array type system in Python, while there is already some talk of standardizing on the mypy typing work: python/typing#516

Thoughts/discussion: Interop with glow compiler

The Glow compiler makes matrix math fast by caring about cache locality: https://gist.github.com/nadavrot/5b35d44e8ba3dd718e595e40184d03f0

We could use compiled Glow code as gumath kernels. Glow can compile ahead of time (AOT): https://github.com/pytorch/glow/blob/master/docs/AOT.md (this would be a lot like our story with Numba).

To create a Glow network you either have to write C++ or compile from an ONNX model: https://github.com/pytorch/glow/blob/master/docs/Example.md https://github.com/pytorch/glow/blob/master/docs/IR.md#the-lifetime-of-a-glow-instruction

Could we have high-level Python APIs that compile to the ONNX spec? For example, a lazy array/NumPy-like library that builds up an ONNX graph as you interact with Python objects, then compiles that graph with Glow and exposes a gumath kernel for the resulting operation? A rough sketch of the lazy-graph idea is below.
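
The sketch is a tiny pure-Python recorder (all names here are hypothetical; nothing below is an existing xnd, gumath, or ONNX API). It builds an operation graph instead of computing, which is roughly the structure an ONNX exporter would consume:

class LazyArray:
    # Records operations into a graph instead of executing them.
    def __init__(self, op, inputs=()):
        self.op = op          # operation name, e.g. "input", "mul", "add"
        self.inputs = inputs  # upstream LazyArray nodes

    def __mul__(self, other):
        return LazyArray("mul", (self, other))

    def __add__(self, other):
        return LazyArray("add", (self, other))

    def to_graph(self):
        # Post-order walk producing (node_id, op, input_ids) triples.
        nodes, ids = [], {}
        def visit(node):
            if id(node) not in ids:
                args = tuple(visit(i) for i in node.inputs)
                ids[id(node)] = len(nodes)
                nodes.append((len(nodes), node.op, args))
            return ids[id(node)]
        visit(self)
        return nodes

x, y = LazyArray("input"), LazyArray("input")
print((x * y + x).to_graph())
# [(0, 'input', ()), (1, 'input', ()), (2, 'mul', (0, 1)), (3, 'add', (2, 0))]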

If XND/gumath is the interop layer, then it could be used to combine TVM/Glow/Numba models. The underlying hypothesis is that the memory formats and computation can be expressed using xnd/gumath. I think the best way to answer this is to write code that attempts the interop and see where we get stuck.

Generate custom kernels with numba

There appears to be some support for generating custom kernels with Numba that use the C calling convention and can therefore be directly inserted into the gumath function table.

This is an example for a Strided kernel:

import gumath as gm
from xnd import xnd
from numba import cfunc, carray
from numba.types import CPointer, float64, int64, int32, void, intp, char
import sys
import numpy as np


@cfunc(int32(CPointer(CPointer(float64)), CPointer(intp), CPointer(intp), CPointer(void)), nopython=True)
def absolute__d_d(args, dimensions, steps, data):
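    # NumPy inner-loop convention, as expected by unsafe_add_numpy_kernel below:
    # args holds pointers to the input and output buffers, dimensions[0] is the
    # loop length, and steps gives the byte stride of each argument.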
    src = args[0]
    dest = args[1]
    N = dimensions[0]
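    # steps are given in bytes; divide by the float64 itemsize to get the element stride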
    step = steps[0] // 8
    i = 0
    for k in range(N):
        dest[k] = abs(src[i])
        i += step
    return 0

# Get function pointer and insert kernel into the lookup table.
ptr = absolute__d_d.address
gm.abs = gm.unsafe_add_numpy_kernel(name="abs", sig="... * float64 -> ... * float64", ptr=ptr)


x = xnd([-1.0, -2e122, 3.0])
y = gm.abs(x)
print(y)

x = xnd([-1.0, -2e122, 3.0])
y = gm.abs(x[::-1])
print(y)

However, this example works because all arguments are float64. Otherwise the first argument should be CPointer(CPointer(char)), and one would need to cast to e.g. s = CPointer(int8(src)) inside the function.

The latter does not appear to be supported -- casting to primitive NumPy types does work.

Custom types (RFC)

There have been various questions about custom types. This is the current state and an opportunity for concrete feedback.

Custom types are created as follows; the example used here is the graph custom type:

  1. You need to know at least the abstract typedef that describes the underlying memory. Here we define a weighted graph as an adjacency list. We use a typedef to declare a new custom type:

https://github.com/plures/gumath/blob/daa7bb94730efe163e6129de5998a28d016e538d/libgumath/extending/graph.c#L346

  2. To validate graph invariants, the constraint function graph_constraint is automatically called upon initialization.

  3. Functions that operate on the graph obviously still need to be written in C.

  4. We define a Python class that inherits from xnd, with type-checked initialization of values:

https://github.com/plures/gumath/blob/daa7bb94730efe163e6129de5998a28d016e538d/python/extending.py#L11

  5. We use this xnd subclass just like any other Python type:

https://github.com/plures/gumath/blob/daa7bb94730efe163e6129de5998a28d016e538d/python/test_gumath.py#L241

  6. If desired, type hints could look like this:

def f(g: Graph):
    pass

Zero dimensional kernels

As @teoliphant and I were talking today, we were thinking it would make sense to be able to implement a zero dimensional kernel. For example, for sin this could take one float as input and return one float as output. That way someone could write an (unoptimized) sin function with just one kernel, which could then be applied to strided or ragged arrays.

That way, if someone wants to create their own looping, like in numba or some other JIT compiler, they can use this kernel. Also, it would then be pretty easy to create this kind of kernel at runtime using llvmlite to wrap a numba-jitted function. A sketch is below. @skrah what do you think?
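
A minimal sketch of such a zero dimensional kernel as a numba cfunc (the scalar signature below is only illustrative, not the signature gumath itself would require):

import math
from numba import cfunc
from numba.types import float64

# One scalar in, one scalar out; the looping over strided or ragged data
# would be supplied by gumath (or by a JIT layer wrapping this function).
@cfunc(float64(float64), nopython=True)
def sin_scalar(x):
    return math.sin(x)

print(sin_scalar.ctypes(1.0))  # quick sanity check through the ctypes wrapper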

LINK error when trying to "pip install" on windows

The initial error seems to be "'ndtypes.h': No such file or directory".

Successfully installed pip-10.0.1

C:\WinP\bd36\bu\winpython-64bit-3.6.x.1old\scripts>pip install gumath
Collecting gumath
  Using cached https://files.pythonhosted.org/packages/17/ef/4225dbcf9cd315489908d08bc908bb8001d52f0796ce021cceab11b51308/gumath-0.2.0dev3.tar.gz
Collecting ndtypes==v0.2.0dev3 (from gumath)
  Using cached https://files.pythonhosted.org/packages/61/3b/68200305e38e74299f48f598d1f585bbf11588969099425b7777a2f7bdcc/ndtypes-0.2.0dev3.tar.gz
Collecting xnd==v0.2.0dev3 (from gumath)
  Using cached https://files.pythonhosted.org/packages/43/45/3eff0d02454f8199e7df8441d55abe0bd211130f6640c166c15e24cd5d31/xnd-0.2.0dev3.tar.gz
Installing collected packages: ndtypes, xnd, gumath
  Running setup.py install for ndtypes ... done
  Running setup.py install for xnd ... done
  Running setup.py install for gumath ... error
    Complete output from command c:\winp\bd36\bu\winpython-64bit-3.6.x.1old\python-3.6.3.amd64\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\famille\\AppData\\Local\\Temp\\pip-install-jwxi2f5n\\gumath\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\famille\AppData\Local\Temp\pip-record-arc9_e9z\install-record.txt --single-version-externally-managed --compile:
            1 fichier(s) copié(s).
        del /q /f *.exe *.obj *.lib *.dll *.exp *.manifest 2>NUL
        cd .objs && del /q /f *.obj 2>NUL
        if exist "..\build\" rd /q /s "..\build\"
        if exist "..\dist\" rd /q /s "..\dist\"
        if exist "..\MANIFEST" del "..\MANIFEST"
        if exist "..\record.txt" del "..\record.txt"
        cd ..\python\gumath && del *.lib *.dll *.pyd gumath.h 2>NUL
        cl "-I..\ndtypes\libndtypes" "-I..\xnd\libxnd" /nologo /W4 /wd4200 /wd4201 /wd4204 /MT /Ox /GS /EHsc -c apply.c
    apply.c
    apply.c(38): fatal error C1083: Cannot open include file: 'ndtypes.h': No such file or directory
    NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.EXE"' : return code '0x2'
    Stop.
    Le fichier spécifié est introuvable.
    Le fichier spécifié est introuvable.
    Le fichier spécifié est introuvable.
    Le fichier spécifié est introuvable.
            1 fichier(s) copié(s).
    running install
    running build
    running build_py
    creating build
    creating build\lib.win-amd64-3.6
    creating build\lib.win-amd64-3.6\gumath
    copying python\gumath\__init__.py -> build\lib.win-amd64-3.6\gumath
    copying python\gumath\pygumath.h -> build\lib.win-amd64-3.6\gumath
    running build_ext
    building 'gumath._gumath' extension
    creating build\temp.win-amd64-3.6
    creating build\temp.win-amd64-3.6\Release
    creating build\temp.win-amd64-3.6\Release\python
    creating build\temp.win-amd64-3.6\Release\python\gumath
    C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ilibgumath -Indtypes/python/ndtypes -Ixnd/python/xnd -Ic:\winp\bd36\bu\winpython-64bit-3.6.x.1old\python-3.6.3.amd64\Lib\site-packages/ndtypes -Ic:\winp\bd36\bu\winpython-64bit-3.6.x.1old\python-3.6.3.amd64\Lib\site-packages/xnd -Ic:\winp\bd36\bu\winpython-64bit-3.6.x.1old\python-3.6.3.amd64\include -Ic:\winp\bd36\bu\winpython-64bit-3.6.x.1old\python-3.6.3.amd64\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tcpython/gumath/_gumath.c /Fobuild\temp.win-amd64-3.6\Release\python/gumath/_gumath.obj /DNDT_IMPORT /DXND_IMPORT /DGM_IMPORT
    _gumath.c
    C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:libgumath /LIBPATH:ndtypes/libndtypes /LIBPATH:xnd/libxnd /LIBPATH:c:\winp\bd36\bu\winpython-64bit-3.6.x.1old\python-3.6.3.amd64\Lib\site-packages/ndtypes /LIBPATH:c:\winp\bd36\bu\winpython-64bit-3.6.x.1old\python-3.6.3.amd64\Lib\site-packages/xnd /LIBPATH:c:\winp\bd36\bu\winpython-64bit-3.6.x.1old\python-3.6.3.amd64\libs /LIBPATH:c:\winp\bd36\bu\winpython-64bit-3.6.x.1old\python-3.6.3.amd64\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" libndtypes-0.2.0dev3.dll.lib libxnd-0.2.0dev3.dll.lib libgumath-0.2.0dev3.dll.lib /EXPORT:PyInit__gumath build\temp.win-amd64-3.6\Release\python/gumath/_gumath.obj /OUT:build\lib.win-amd64-3.6\gumath\_gumath.cp36-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.6\Release\python/gumath\_gumath.cp36-win_amd64.lib
    LINK : fatal error LNK1181: cannot open input file 'libgumath-0.2.0dev3.dll.lib'
    error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1181

Flags for gumath kernels

I wanted to propose flags for gumath kernels, as discussed in the meeting today:

  • reorderable, which describes whether the kernel is symmetric in all its inputs.
  • elementwise, which describes whether the kernel is element-wise, i.e. has a signature of the form '(),' * n + '()->()' (this is NumPy gufunc signature format; the string is evaluated for n = 2 below).
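
For concreteness, this is what the proposed expression produces for n = 2:

n = 2
sig = '(),' * n + '()->()'
print(sig)  # (),(),()->()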

Gumath tests fail `ndt_copy_contiguous_dtype` due to new `linear_index`

Tests fail with the following output due to recent changes. This is most likely a simple error caused by the linear_index parameter recently added to ndtypes, and I would like to dive into it tomorrow. Any help understanding the linear index would be appreciated!

test_unify (__main__.TestUnify) ... ok
test_apply (__main__.TestApply) ... ok
test_apply_error (__main__.TestApply) ... ok
gcc -I. -I.. -I../ndtypes/libndtypes -Wall -Wextra -std=c11 -pedantic -O3 -c kernels/common.c
kernels/common.c: In function 'unary_typecheck':
kernels/common.c:192:33: warning: implicit declaration of function 'ndt_dim_at'; did you mean 'ndt_init'? [-Wimplicit-function-declaration]
         if (ndt_is_c_contiguous(ndt_dim_at(t, t->ndim-1))) {
                                 ^~~~~~~~~~
                                 ndt_init
kernels/common.c:192:33: warning: passing argument 1 of 'ndt_is_c_contiguous' makes pointer from integer without a cast [-Wint-conversion]
In file included from kernels/common.c:40:0:
/nix/store/i9ggxgz86w5hwjqcyny33z96srcdva6n-libndtypes/include/ndtypes.h:657:17: note: expected 'const ndt_t * {aka const struct _ndt *}' but argument is of type 'int'
 NDTYPES_API int ndt_is_c_contiguous(const ndt_t *t);
                 ^~~~~~~~~~~~~~~~~~~
kernels/common.c:211:49: warning: passing argument 3 of 'ndt_copy_contiguous_dtype' makes integer from pointer without a cast [-Wint-conversion]
     dtype = ndt_copy_contiguous_dtype(t, dtype, ctx);
                                                 ^~~
In file included from kernels/common.c:40:0:
/nix/store/i9ggxgz86w5hwjqcyny33z96srcdva6n-libndtypes/include/ndtypes.h:678:26: note: expected 'int64_t {aka long int}' but argument is of type 'ndt_context_t * {aka struct _ndt_context *}'
 NDTYPES_API const ndt_t *ndt_copy_contiguous_dtype(const ndt_t *t, const ndt_t *dtype, int64_t linear_index, ndt_context_t *ctx);
                          ^~~~~~~~~~~~~~~~~~~~~~~~~
kernels/common.c:211:13: error: too few arguments to function 'ndt_copy_contiguous_dtype'
     dtype = ndt_copy_contiguous_dtype(t, dtype, ctx);
             ^~~~~~~~~~~~~~~~~~~~~~~~~
In file included from kernels/common.c:40:0:
/nix/store/i9ggxgz86w5hwjqcyny33z96srcdva6n-libndtypes/include/ndtypes.h:678:26: note: declared here
 NDTYPES_API const ndt_t *ndt_copy_contiguous_dtype(const ndt_t *t, const ndt_t *dtype, int64_t linear_index, ndt_context_t *ctx);
                          ^~~~~~~~~~~~~~~~~~~~~~~~~
make[1]: *** [Makefile:114: common.o] Error 1
make[1]: Leaving directory '/build/source/libgumath'
make: *** [Makefile:25: default] Error 2

Interop with Dask

I am opening this issue to track how xnd/gumath could work with Dask based on talking with @mrocklin.

We can use dask.array.from_array on an xnd object to have Dask chunk up that array and execute operations in parallel. The requirements Dask has for an array-like object are listed here.

We were going back and forth on whether the numpy-ish API should be implemented on the xnd object directly or if we should create a wrapper class, like this, that adds numpy methods.

The most basic requirement is to have shape and dtype attributes on the object. We can forward those from the type attribute:

In [1]: from xnd import xnd

In [3]: x = xnd(list(range(10)))

In [4]: import dask.array as da

In [7]: x.shape = x.type.shape

In [15]: x.dtype = str(x.type.hidden_dtype)

In [17]: d = da.from_array(x, chunks=(5,))

In [19]: d.sum()
Out[19]: dask.array<sum-aggregate, shape=(), dtype=int64, chunksize=()>

In [21]: d.sum().compute()
Out[21]: 45

In [23]: np.exp(d).sum().compute()
Out[23]: 12818.308050524603

In [24]: d[::3]
Out[24]: dask.array<getitem, shape=(4,), dtype=int64, chunksize=(2,)>

In [25]: d[::3].compute()
Out[25]: array([0, 3, 6, 9])

We could also implement __array_ufunc__ so that when we call np.sin on an xnd array, we get back an xnd array by using the gumath.sin function. This is how CuPy implements it.
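
A rough sketch of that forwarding with a hypothetical wrapper class (XndArray and its behavior are assumptions for illustration, not an existing API):

import numpy as np
import gumath.functions as fn
from xnd import xnd

class XndArray:
    # Hypothetical wrapper that routes NumPy ufunc calls to gumath functions.
    def __init__(self, value):
        self.value = value if isinstance(value, xnd) else xnd(value)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        if method != "__call__" or kwargs:
            return NotImplemented
        gufunc = getattr(fn, ufunc.__name__, None)  # e.g. np.sin -> fn.sin
        if gufunc is None:
            return NotImplemented
        args = [i.value if isinstance(i, XndArray) else xnd(i) for i in inputs]
        return XndArray(gufunc(*args))

a = XndArray([1.0, 2.0, 3.0])
print(np.sin(a).value)  # an xnd container holding the elementwise sines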

We can also create a concat and register it with dask.array.core.concatenate_lookup. I don't think a concat gufunc exists yet.

Ability to register a non-static type resolution function and a multi-purpose loop

Let's suppose you want to define a gufunc from within Python. We need two parts: the type resolution function and the actual loop. For example:

def typeres(*input_types):
    # Perform some operation with input types.
    return output_types

def loop(*all_allocated_arrays):
    # Perform all operations, storing the results in the last array(s), which are the outputs.

For example, divmod:

def typeres(*input_types):
    if len(input_types) != 2 or not all(t.fits('... * int64') for t in input_types):
        raise TypeError('Type resolution error.')
    return 'int64', 'int64'

def loop(in1, in2, out1, out2):
    # Write into the pre-allocated output arrays.
    out1[...] = in1 // in2
    out2[...] = in1 % in2

divmod = define_gufunc(typeres, loop)

So, the type resolution function will be called first and all the output arrays allocated. Let's say we pass in two 5 * int64 arrays. Then loop gets passed all four 5 * int64 arrays, and there is no looping in Python, only in C.

Finally, out1 and out2 are returned from the gufunc.

Are all combinations of input types really necessary?

I was wondering whether matching all combinations of input types is really necessary. What NumPy does is cast the input arrays to the best supported type, instead of defining kernels for all the types.

This is only barely manageable with the current setup... Adding an out= or dtype= may make it exponentially worse, as we have to consider that it may not be the type we expected. Not to mention that mixed types may or may not prevent vectorization.

I can't help but wonder if there's a better way to do this: whether we can somehow define a set of rules and have the type signatures generated, whether supporting different inputs/outputs is really necessary, or whether we can cast to the output dtype as NumPy does (sketched below).
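
A rough sketch of that casting approach, illustrated with plain NumPy (SUPPORTED and dispatch are made-up names, not a gumath API):

import numpy as np

SUPPORTED = [np.float64]  # pretend the add kernel only exists for float64

def dispatch(a, b):
    # Cast both inputs to the best supported type instead of providing a
    # kernel for every combination of input types.
    target = np.result_type(a, b, *SUPPORTED)
    return np.add(a.astype(target), b.astype(target))

print(dispatch(np.array([1, 2], dtype=np.int32),
               np.array([3.0, 4.0], dtype=np.float32)))  # [4. 6.]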

Allow numba to call gumath functions

We should add support for numba to be able to call gumath kernels. The numba code should be able to call the C functions for the kernels directly.

from numba import jit
import gumath as gm
import numpy as np


@jit(nopython=True)
def test_sin(x):
    return gm.sin(gm.sin(x))

test_sin([float(i) for i in range(2000)])
test_sin(np.array([float(i) for i in range(2000)]))

Also, we should be able to have vectorize compile to a gumath kernel instead of a NumPy one, so that the function can be executed from C without a Python layer, and so it can be jitted without Python involved:

@vectorize(gumath=True)
def f(x, y):
    return x * y

@jit(nopython=True)
def test_vectorized(x):
    return f(x, x)

x = np.array([float(i) for i in range(2000)])

test_vectorized(x)

Add MKL kernel module

Add an Intel MKL kernel backend to call on XND arrays.

For example:

from gumath import mkl
from xnd import xnd

mkl.sin(xnd([[1.0, 2.0], [3.0, 4.0]], type='2 * 2 * float64'))

gumath.reduce doesn't always work

>>> import xnd
>>> import gumath.functions as fn
>>> import gumath as gu
>>> x = xnd.array([5])
>>> x
array([5], type='1 * int64')
>>> gu.reduce(fn.add, x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/hameerabbasi/anaconda/envs/uarray/lib/python3.7/site-packages/gumath/__init__.py", line 136, in reduce
    dtype = maxcast[x.dtype]
KeyError: '=q'
>>> y = xnd.array([5.0])
>>> gu.reduce(fn.add, y)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/hameerabbasi/anaconda/envs/uarray/lib/python3.7/site-packages/gumath/__init__.py", line 136, in reduce
    dtype = maxcast[x.dtype]
KeyError: '=d'
>>> gu.reduce(fn.add, x, dtype='int64')
xnd(5, type='int64')

Import issue

I followed the INSTALL.txt instructions, but I am not able to import gumath:

$ python                                                                                                                      (numbaenv)
Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:14:23)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import gumath
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/saul/projects/gumath/python/gumath/__init__.py", line 35, in <module>
    from ._gumath import *
ImportError: dlopen(/Users/saul/projects/gumath/python/gumath/_gumath.cpython-36m-darwin.so, 2): Symbol not found: _gm_apply
  Referenced from: /Users/saul/projects/gumath/python/gumath/_gumath.cpython-36m-darwin.so
  Expected in: flat namespace
 in /Users/saul/projects/gumath/python/gumath/_gumath.cpython-36m-darwin.so
>>>

Fusing kernels

I was listening to a talk about PyTorch, and one thing they mentioned as important for their performance is the ability to pipeline multiple operations, so that a chunk of the data is loaded from memory only once and then a series of operations is performed on it.

When I look at the definition of the gm_var_sin kernel, I am not sure how it could be pipelined. This is a silly example, but if you wanted to apply sin twice to some data, then I would think you would want that inner sin C function called twice in the inner loop, instead of calling it once on all the data and then again on all the data, which would require loading all the data from memory twice. A sketch of such a fused kernel is below.
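
A minimal sketch of a fused kernel, following the Strided numba kernel pattern from the issue above (the name and the manual fusion are only illustrative; this is not how gumath composes kernels today):

import math
from numba import cfunc
from numba.types import CPointer, float64, int32, intp, void

@cfunc(int32(CPointer(CPointer(float64)), CPointer(intp), CPointer(intp), CPointer(void)), nopython=True)
def sin_sin__d_d(args, dimensions, steps, data):
    src = args[0]
    dest = args[1]
    N = dimensions[0]
    step = steps[0] // 8
    i = 0
    for k in range(N):
        # Apply sin twice while the element is already loaded, instead of
        # making two full passes over the data.
        dest[k] = math.sin(math.sin(src[i]))
        i += step
    return 0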
