
Comments (23)

commented on August 10, 2024

A side note on a problem:
The libnvidia-container build fetches its dependencies (elfutils, libtirpc, etc.), builds them internally, and links against them. This is a problem for a source-based distribution: libelf, libtirpc and libseccomp can be installed by the package manager, and it would be really nice if the build system were aware of that case. It is perhaps a separate issue for the developers, but I am personally interested in Gentoo, hence a comment here.
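As a rough sketch of what such awareness could look like (library and pkg-config names are assumptions, not how the project actually builds), a pre-build check could prefer system copies whenever pkg-config knows about them:

# hypothetical pre-build check: prefer system libraries over bundled sources
for dep in libtirpc libelf libseccomp; do
    if pkg-config --exists "$dep"; then
        echo "$dep $(pkg-config --modversion "$dep"): system copy available"
    else
        echo "$dep: not found, would be fetched and built internally"
    fi
done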


vowstar commented on August 10, 2024

layman -a vowstar
emerge -av nvidia-container-toolkit
then restart the docker service
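On a default OpenRC Gentoo install that restart would be, for example:

rc-service docker restart    # or: systemctl restart docker on a systemd profile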


vowstar commented on August 10, 2024

Without any link usable without layman, this does not really help.
Maybe you could add a link to the underlying repo?

https://github.com/vowstar/vowstar-overlay/blob/master/app-emulation/nvidia-container-toolkit/nvidia-container-toolkit-1.1.1.ebuild

https://forums.gentoo.org/viewtopic-t-1114420-start-0.html


archenroot commented on August 10, 2024

OK, thanks to this:
Gentoo archive
I understand that on Gentoo, rpc.h lives under one more subdirectory:

venus /tmp/portage/dev-libs/nvidia-container-1.0.0/work/nvidia-container-1.0.0 # locate tirpc/rpc/rpc.h
/usr/include/tirpc/rpc/rpc.h
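A quick way to confirm that the extra include path (plus linking libtirpc) is all that is missing, using a throwaway test file:

printf '#include <rpc/rpc.h>\nint main(void){return 0;}\n' > /tmp/tirpc-check.c
cc -I/usr/include/tirpc /tmp/tirpc-check.c -ltirpc -o /tmp/tirpc-check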


archenroot commented on August 10, 2024

I extended the Makefile into the following form (last line changed):

CFLAGS   := -std=gnu11 -O2 -g -fdata-sections -ffunction-sections -fstack-protector -fno-strict-aliasing -fvisibility=hidden \
            -Wall -Wextra -Wcast-align -Wpointer-arith -Wmissing-prototypes -Wnonnull \
            -Wwrite-strings -Wlogical-op -Wformat=2 -Wmissing-format-attribute -Winit-self -Wshadow \
            -Wstrict-prototypes -Wunreachable-code -Wconversion -Wsign-conversion \
            -Wno-unknown-warning-option -Wno-format-extra-args -Wno-gnu-alignof-expression \
            -I/usr/include/tirpc -Wl,-ltirpc $(CFLAGS)

Now I can compile.

Patch (I also changed /usr/local to /usr, which I think follows the Gentoo standard):

diff --git a/Makefile b/Makefile
index 0af8123..fa9ad69 100644
--- a/Makefile
+++ b/Makefile
@@ -13,7 +13,7 @@ WITH_SECCOMP ?= yes
 
 ##### Global definitions #####
 
-export prefix      = /usr/local
+export prefix      = /usr
 export exec_prefix = $(prefix)
 export bindir      = $(exec_prefix)/bin
 export libdir      = $(exec_prefix)/lib
@@ -111,7 +111,8 @@ CFLAGS   := -std=gnu11 -O2 -g -fdata-sections -ffunction-sections -fstack-protec
             -Wall -Wextra -Wcast-align -Wpointer-arith -Wmissing-prototypes -Wnonnull \
             -Wwrite-strings -Wlogical-op -Wformat=2 -Wmissing-format-attribute -Winit-self -Wshadow \
             -Wstrict-prototypes -Wunreachable-code -Wconversion -Wsign-conversion \
-            -Wno-unknown-warning-option -Wno-format-extra-args -Wno-gnu-alignof-expression $(CFLAGS)
+            -Wno-unknown-warning-option -Wno-format-extra-args -Wno-gnu-alignof-expression \
+            -I/usr/include/tirpc -Wl,-ltirpc $(CFLAGS)
 LDFLAGS  := -Wl,-zrelro -Wl,-znow -Wl,-zdefs -Wl,--gc-sections $(LDFLAGS)
 LDLIBS   := $(LDLIBS)
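The same flags can also be derived from pkg-config instead of being hard-coded, a sketch assuming the libtirpc package installs its libtirpc.pc file; because this Makefile appends the $(CFLAGS) and $(LDLIBS) it inherits from the environment, they can simply be passed at build time:

pkg-config --cflags libtirpc     # typically prints: -I/usr/include/tirpc
pkg-config --libs libtirpc       # typically prints: -ltirpc
CFLAGS="$(pkg-config --cflags libtirpc)" LDLIBS="$(pkg-config --libs libtirpc)" make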


archenroot commented on August 10, 2024

OK, I also failed because of the default destination /usr/lib, which on Gentoo AMD64 is lib64. All 3 small patches, including the Gentoo ebuild, are available here:
https://github.com/archenroot/gentoo-overlay/tree/master/dev-libs/nvidia-container
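Since prefix, exec_prefix and libdir are ordinary variables in the Makefile's global definitions, another option (a sketch, untested) is to override them on the make command line instead of patching:

make prefix=/usr libdir=/usr/lib64
make prefix=/usr libdir=/usr/lib64 install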

But now I have a question: what do I do next to bring the full runtime to Gentoo?


archenroot commented on August 10, 2024

OK, the build seems broken; when I run the client info command I get:

venus /opt/cuda/extras/demo_suite # nvidia-container-cli info
nvidia-container-cli: initialization error: cuda error: no cuda-capable device is detected

But when I check my CUDA environment, it looks good:

venus /opt/cuda/extras/demo_suite # ./bandwidthTest 
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GTX 960M
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			5712.2

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			5705.8

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			51436.6

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

This can be related to the fact that I am on a laptop running Optimus, with the Intel CPU's built-in GPU plus the NVIDIA GPU (dual-GPU laptop setup).

Still, when I query my device, all looks good... what magic is this library doing, given that I can run any other CUDA application:

venus /etc # /opt/cuda/sdk/bin/x86_64/linux/release/deviceQuery
/opt/cuda/sdk/bin/x86_64/linux/release/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 960M"
  CUDA Driver Version / Runtime Version          9.2 / 9.2
  CUDA Capability Major/Minor version number:    5.0
  Total amount of global memory:                 2004 MBytes (2101870592 bytes)
  ( 5) Multiprocessors, (128) CUDA Cores/MP:     640 CUDA Cores
  GPU Max Clock rate:                            1098 MHz (1.10 GHz)
  Memory Clock rate:                             2505 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 2097152 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.2, CUDA Runtime Version = 9.2, NumDevs = 1
Result = PASS


archenroot commented on August 10, 2024

From strace I can see:

rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
close(4)                                = 0
close(5)                                = -1 EBADF (Bad file descriptor)
close(3)                                = 0
write(2, "nvidia-container-cli: ", 22nvidia-container-cli: )  = 22
write(2, "initialization error: cuda error"..., 68initialization error: cuda error: no cuda-capable device is detected) = 68
write(2, "\n", 1

If I understand this correctly, the error returned from the CUDA API is error code 68, which points me to:

The error explanation is:

cudaErrorSyncDepthExceeded = 68
This error indicates that a call to cudaDeviceSynchronize made from the device runtime failed because the call was made at grid depth greater than either the default (2 levels of grids) or the user-specified device limit cudaLimitDevRuntimeSyncDepth. To be able to synchronize on launched grids at a greater depth successfully, the maximum nested depth at which cudaDeviceSynchronize will be called must be specified with the cudaLimitDevRuntimeSyncDepth limit to the cudaDeviceSetLimit API before the host-side launch of a kernel using the device runtime. Keep in mind that additional levels of sync depth require the runtime to reserve large amounts of device memory that cannot be used for user allocations.

I suspect there is something wrong in general...


RenaudWasTaken commented on August 10, 2024

Hello!

Is the NVIDIA driver installed on the machine?


archenroot commented on August 10, 2024

@RenaudWasTaken - sure, I can run the CUDA deviceQuery and run glxspheres64, which displays the correct device....


kinzess commented on August 10, 2024

NVIDIA/nvidia-docker#657 (comment)

It works.


archenroot commented on August 10, 2024

@kinzess - thank you for the reference, let me check....


archenroot commented on August 10, 2024

In general I would like to have a Gentoo host on a multi-GPU node where I can run all the AI layers all the time, each within its own Docker image, so each can have a specific environment configuration.


kinzess commented on August 10, 2024

@archenroot Env config? No, I think you should change the user or groups.

# nvidia-container-cli info
nvidia-container-cli: initialization error: cuda error: no cuda-capable device is detected

# nvidia-container-cli --user=0:33 info 
NVRM version:   390.25
CUDA version:   9.1

Device Index:   0
Device Minor:   0
Model:          GeForce GTX 960M
Brand:          GeForce
GPU UUID:       GPU-a7ba524a-ed1f-4402-1ac8-d562f3cc572f
Bus Location:   00000000:02:00.0
Architecture:   5.0
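The 33 in --user=0:33 is the numeric GID of a group that can access the NVIDIA device nodes (presumably video, matching the config.toml fix further down); the GID and the device ownership can be checked with, for example:

getent group video      # e.g. video:x:33:...
ls -l /dev/nvidia*      # the device nodes are typically owned root:video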


archenroot commented on August 10, 2024

@kinzess - I run under my standard user, but thanks for the hints... I will look into this soon.


RenaudWasTaken commented on August 10, 2024

Sorry for the long delay in answering this.

Can you run sudo nvidia-container-cli -k -d /dev/tty list?
If possible, can you also uncomment the debug line in /etc/nvidia-container-runtime/config.toml and paste the output of the log file?

Thanks!


archenroot commented on August 10, 2024

@angryvincent @RenaudWasTaken - thanks to both of you for the response, I will look at that next week.


archenroot commented on August 10, 2024

Still busy with other stuff....


lexxxel commented on August 10, 2024

I0709 15:22:47.338887 32 nvc.c:281] initializing library context (version=1.0.2, build=773b1954446b73921ce16919248c764ff62d29ad)
I0709 15:22:47.338927 32 nvc.c:255] using root /
I0709 15:22:47.338932 32 nvc.c:256] using ldcache /etc/ld.so.cache
I0709 15:22:47.338934 32 nvc.c:257] using unprivileged user 65534:65534
I0709 15:22:47.340978 33 nvc.c:191] loading kernel module nvidia
I0709 15:22:47.341118 33 nvc.c:203] loading kernel module nvidia_uvm
I0709 15:22:47.341211 33 nvc.c:211] loading kernel module nvidia_modeset
I0709 15:22:47.341428 34 driver.c:133] starting driver service
I0709 15:22:47.343330 32 driver.c:233] driver service terminated with signal 15
nvidia-container-cli: initialization error: cuda error: no cuda-capable device is detected

I wrote my own ebuilds, which you can find here. I build everything from source but still didn't get it working ...

Any ideas?

If I run this as the root user I get:

nvidia-container-cli --user=0:0 -k -d /dev/tty list

-- WARNING, the following logs are for debugging purposes only --

I0709 15:39:54.095764 13458 nvc.c:281] initializing library context (version=1.0.2, build=773b1954446b73921ce16919248c764ff62d29ad)
I0709 15:39:54.095794 13458 nvc.c:255] using root /
I0709 15:39:54.095801 13458 nvc.c:256] using ldcache /etc/ld.so.cache
I0709 15:39:54.095806 13458 nvc.c:257] using unprivileged user 0:0
I0709 15:39:54.097804 13459 nvc.c:191] loading kernel module nvidia
I0709 15:39:54.097949 13459 nvc.c:203] loading kernel module nvidia_uvm
I0709 15:39:54.098033 13459 nvc.c:211] loading kernel module nvidia_modeset
I0709 15:39:54.098394 13460 driver.c:133] starting driver service
I0709 15:39:54.136065 13458 nvc_info.c:434] requesting driver information with ''
I0709 15:39:54.136261 13458 nvc_info.c:148] selecting /usr/lib64/libvdpau_nvidia.so.430.26
I0709 15:39:54.136427 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-tls.so.430.26
I0709 15:39:54.136463 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-ptxjitcompiler.so.430.26
I0709 15:39:54.136501 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-opencl.so.430.26
I0709 15:39:54.136536 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-ml.so.430.26
I0709 15:39:54.136570 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-ifr.so.430.26
I0709 15:39:54.136604 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-glsi.so.430.26
I0709 15:39:54.136638 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-glcore.so.430.26
I0709 15:39:54.136674 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-fbc.so.430.26
I0709 15:39:54.136708 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-fatbinaryloader.so.430.26
I0709 15:39:54.136740 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-encode.so.430.26
I0709 15:39:54.136773 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-eglcore.so.430.26
I0709 15:39:54.136806 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-compiler.so.430.26
I0709 15:39:54.136842 13458 nvc_info.c:148] selecting /usr/lib64/libnvidia-cfg.so.430.26
I0709 15:39:54.136880 13458 nvc_info.c:148] selecting /usr/lib64/libnvcuvid.so.430.26
I0709 15:39:54.137154 13458 nvc_info.c:148] selecting /usr/lib64/libcuda.so.430.26
I0709 15:39:54.137387 13458 nvc_info.c:148] selecting /usr/lib64/opengl/nvidia/lib/libGLX_nvidia.so.430.26
I0709 15:39:54.137432 13458 nvc_info.c:148] selecting /usr/lib64/opengl/nvidia/lib/libGLESv2_nvidia.so.430.26
I0709 15:39:54.137476 13458 nvc_info.c:148] selecting /usr/lib64/opengl/nvidia/lib/libGLESv1_CM_nvidia.so.430.26
I0709 15:39:54.137520 13458 nvc_info.c:148] selecting /usr/lib64/opengl/nvidia/lib/libEGL_nvidia.so.430.26
I0709 15:39:54.137577 13458 nvc_info.c:148] selecting /usr/lib/libvdpau_nvidia.so.430.26
I0709 15:39:54.137640 13458 nvc_info.c:148] selecting /usr/lib/libnvidia-tls.so.430.26
I0709 15:39:54.137677 13458 nvc_info.c:148] selecting /usr/lib/libnvidia-ptxjitcompiler.so.430.26
I0709 15:39:54.137715 13458 nvc_info.c:148] selecting /usr/lib/libnvidia-opencl.so.430.26
I0709 15:39:54.137751 13458 nvc_info.c:148] selecting /usr/lib/libnvidia-ml.so.430.26
I0709 15:39:54.137786 13458 nvc_info.c:148] selecting /usr/lib/libnvidia-ifr.so.430.26
I0709 15:39:54.137818 13458 nvc_info.c:148] selecting /usr/lib/libnvidia-glsi.so.430.26
I0709 15:39:54.137851 13458 nvc_info.c:148] selecting /usr/lib/libnvidia-glcore.so.430.26
I0709 15:39:54.137887 13458 nvc_info.c:148] selecting /usr/lib/libnvidia-fbc.so.430.26
I0709 15:39:54.137926 13458 nvc_info.c:148] selecting /usr/lib/libnvidia-fatbinaryloader.so.430.26
I0709 15:39:54.137963 13458 nvc_info.c:148] selecting /usr/lib/libnvidia-encode.so.430.26
I0709 15:39:54.137997 13458 nvc_info.c:148] selecting /usr/lib/libnvidia-eglcore.so.430.26
I0709 15:39:54.138034 13458 nvc_info.c:148] selecting /usr/lib/libnvidia-compiler.so.430.26
I0709 15:39:54.138073 13458 nvc_info.c:148] selecting /usr/lib/libnvcuvid.so.430.26
I0709 15:39:54.138148 13458 nvc_info.c:148] selecting /usr/lib/libcuda.so.430.26
I0709 15:39:54.138266 13458 nvc_info.c:148] selecting /usr/lib/opengl/nvidia/lib/libGLX_nvidia.so.430.26
I0709 15:39:54.138314 13458 nvc_info.c:148] selecting /usr/lib/opengl/nvidia/lib/libGLESv2_nvidia.so.430.26
I0709 15:39:54.138360 13458 nvc_info.c:148] selecting /usr/lib/opengl/nvidia/lib/libGLESv1_CM_nvidia.so.430.26
I0709 15:39:54.138409 13458 nvc_info.c:148] selecting /usr/lib/opengl/nvidia/lib/libEGL_nvidia.so.430.26
W0709 15:39:54.138441 13458 nvc_info.c:299] missing library libnvidia-opticalflow.so
W0709 15:39:54.138447 13458 nvc_info.c:303] missing compat32 library libnvidia-cfg.so
W0709 15:39:54.138453 13458 nvc_info.c:303] missing compat32 library libnvidia-opticalflow.so
W0709 15:39:54.138592 13458 nvc_info.c:325] missing binary nvidia-smi
W0709 15:39:54.138597 13458 nvc_info.c:325] missing binary nvidia-debugdump
W0709 15:39:54.138603 13458 nvc_info.c:325] missing binary nvidia-persistenced
W0709 15:39:54.138610 13458 nvc_info.c:325] missing binary nvidia-cuda-mps-control
W0709 15:39:54.138617 13458 nvc_info.c:325] missing binary nvidia-cuda-mps-server
I0709 15:39:54.138639 13458 nvc_info.c:366] listing device /dev/nvidiactl
I0709 15:39:54.138645 13458 nvc_info.c:366] listing device /dev/nvidia-uvm
I0709 15:39:54.138651 13458 nvc_info.c:366] listing device /dev/nvidia-uvm-tools
I0709 15:39:54.138658 13458 nvc_info.c:366] listing device /dev/nvidia-modeset
W0709 15:39:54.138680 13458 nvc_info.c:274] missing ipc /var/run/nvidia-persistenced/socket
W0709 15:39:54.138693 13458 nvc_info.c:274] missing ipc /tmp/nvidia-mps
I0709 15:39:54.138700 13458 nvc_info.c:490] requesting device information with ''
I0709 15:39:54.144233 13458 nvc_info.c:520] listing device /dev/nvidia0 (GPU-501c3826-eb08-245d-c054-5a5d63b97971 at 00000000:21:00.0)
/dev/nvidiactl
/dev/nvidia-uvm
/dev/nvidia-uvm-tools
/dev/nvidia-modeset
/dev/nvidia0
/usr/lib64/libnvidia-ml.so.430.26
/usr/lib64/libnvidia-cfg.so.430.26
/usr/lib64/libcuda.so.430.26
/usr/lib64/libnvidia-opencl.so.430.26
/usr/lib64/libnvidia-ptxjitcompiler.so.430.26
/usr/lib64/libnvidia-fatbinaryloader.so.430.26
/usr/lib64/libnvidia-compiler.so.430.26
/usr/lib64/libvdpau_nvidia.so.430.26
/usr/lib64/libnvidia-encode.so.430.26
/usr/lib64/libnvcuvid.so.430.26
/usr/lib64/libnvidia-eglcore.so.430.26
/usr/lib64/libnvidia-glcore.so.430.26
/usr/lib64/libnvidia-tls.so.430.26
/usr/lib64/libnvidia-glsi.so.430.26
/usr/lib64/libnvidia-fbc.so.430.26
/usr/lib64/libnvidia-ifr.so.430.26
/usr/lib64/opengl/nvidia/lib/libGLX_nvidia.so.430.26
/usr/lib64/opengl/nvidia/lib/libEGL_nvidia.so.430.26
/usr/lib64/opengl/nvidia/lib/libGLESv2_nvidia.so.430.26
/usr/lib64/opengl/nvidia/lib/libGLESv1_CM_nvidia.so.430.26
/usr/lib/libnvidia-ml.so.430.26
/usr/lib/libcuda.so.430.26
/usr/lib/libnvidia-opencl.so.430.26
/usr/lib/libnvidia-ptxjitcompiler.so.430.26
/usr/lib/libnvidia-fatbinaryloader.so.430.26
/usr/lib/libnvidia-compiler.so.430.26
/usr/lib/libvdpau_nvidia.so.430.26
/usr/lib/libnvidia-encode.so.430.26
/usr/lib/libnvcuvid.so.430.26
/usr/lib/libnvidia-eglcore.so.430.26
/usr/lib/libnvidia-glcore.so.430.26
/usr/lib/libnvidia-tls.so.430.26
/usr/lib/libnvidia-glsi.so.430.26
/usr/lib/libnvidia-fbc.so.430.26
/usr/lib/libnvidia-ifr.so.430.26
/usr/lib/opengl/nvidia/lib/libGLX_nvidia.so.430.26
/usr/lib/opengl/nvidia/lib/libEGL_nvidia.so.430.26
/usr/lib/opengl/nvidia/lib/libGLESv2_nvidia.so.430.26
/usr/lib/opengl/nvidia/lib/libGLESv1_CM_nvidia.so.430.26
I0709 15:39:54.144345 13458 nvc.c:318] shutting down library context
I0709 15:39:54.144706 13460 driver.c:192] terminating driver service
I0709 15:39:54.172749 13458 driver.c:233] driver service terminated successfully


lexxxel commented on August 10, 2024

.... fixed it after asking ... xD
The config.toml was configured incorrectly for Gentoo:

disable-require = false
#swarm-resource = "DOCKER_RESOURCE_GPU"

[nvidia-container-cli]
#root = "/run/nvidia/driver"
#path = "/usr/bin/nvidia-container-cli"
environment = []
debug = "/var/log/nvidia-container-runtime-hook.log"
#ldcache = "/etc/ld.so.cache"
load-kmods = true
#no-cgroups = false
user = "root:video"
#ldconfig = "@/sbin/ldconfig.real"
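As a rough end-to-end check after that change (the image tag is only an example):

docker info | grep -i runtime                    # the nvidia runtime should be listed
docker run --rm --runtime=nvidia nvidia/cuda:10.1-base sh -c 'ls -l /dev/nvidia*'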


RenaudWasTaken commented on August 10, 2024

Thanks for helping with this, @lexxxel!


gronastech commented on August 10, 2024

https://github.com/gronastech/nvidia-docker-overlay


lexxxel commented on August 10, 2024

Without any link usable without layman, this does not really help.
Maybe you could add a link to the underlying repo?

