
impl-pruning-tf's Introduction

TensorFlow implementation of "Iterative Pruning"

CAUTION: Out-of-date notice.

TensorFlow (>1.3) now supports sparse_matmul, which seems to be a more appropriate way to implement iterative pruning. This work was done naively with a quite old version (0.8.0), so I do not recommend using this code for serious purposes. There will be no further updates or maintenance.


This work is based on "Learning both Weights and Connections for Efficient Neural Networks," Han et al. @ NIPS '15. Note that this work only aims to quantify the latency impact of pruning (within TensorFlow); it is not tuned for the best possible result, so some details are simplified (e.g. number of iterations, adjusted dropout ratio, etc.).

I applied iterative pruning to a small MNIST CNN model (originally 13 MB), taken from the TensorFlow tutorials. After pruning off various percentages of the weights, I simply retrained each case for two epochs and obtained compressed models (down to 2.6 MB with 90% pruned) with a minor loss of accuracy (99.17% -> 98.99% with 90% pruned and retrained). Again, this is not optimal.
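
For reference, here is a minimal sketch of one magnitude-pruning step against a TF 1.x-style API (the helper name and mask handling are my own, not the repository's exact code):

import numpy as np
import tensorflow as tf

def prune_by_threshold(sess, weight_var, threshold):
    """Zero out entries of weight_var whose magnitude is below threshold.

    Returns the 0/1 mask so the pruned entries can be kept at zero
    (e.g. by masking their gradients) during retraining.
    """
    w = sess.run(weight_var)
    mask = (np.abs(w) >= threshold).astype(w.dtype)
    weight_var.load(w * mask, sess)  # write the pruned values back in place
    return mask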

Issues

Due to the limited support for SparseTensor and its operations in TensorFlow 0.8.0, this implementation has several limitations. It uses embedding_lookup_sparse to compute the sparse matrix-vector multiplication (see the sketch after the list below). That op was not designed solely for sparse matrix-vector multiplication, so its performance may be sub-optimal (I'm not sure). Also, TensorFlow represents a sparse matrix as <index, value> pairs rather than in the typical CSR format, which is more compact and performant. In short, the limitations are:

  1. embedding_lookup_sparse doesn't support broadcasting, which prevents running the test with a normal batched test dataset.
  2. Performance may be somewhat sub-optimal.
  3. Because "Sparse Variable" is not supported, manual dense to sparse and sparse to dense transformation is required.
  4. 4D Convolution Tensor may also be applicable, but bit tricky.
  5. Current embedding_lookup_sparse forces additional matrix transpose, dimension squeeze and dimension reshape.
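
Below is a minimal sketch (TF 1.x API, my own variable names, not the repository's exact code) of how embedding_lookup_sparse can compute the sparse matrix-vector product y = W.x when W is stored as <index, value> pairs, illustrating points 1 and 5:

import numpy as np
import tensorflow as tf

n_rows, n_cols = 4, 6
# Nonzero entries of W in COO form: (row, col) -> value
coo = {(0, 1): 0.5, (0, 4): -1.0, (2, 3): 2.0, (3, 0): 0.7}
indices = np.array(sorted(coo), dtype=np.int64)
values = np.array([coo[k] for k in sorted(coo)], dtype=np.float32)

# sp_ids holds each nonzero's column index; sp_weights holds its value.
sp_ids = tf.SparseTensor(indices, indices[:, 1], dense_shape=[n_rows, n_cols])
sp_weights = tf.SparseTensor(indices, values, dense_shape=[n_rows, n_cols])

# Treat x (reshaped to a column) as the "embedding table".
x = tf.constant(np.arange(n_cols, dtype=np.float32).reshape(n_cols, 1))

# Row i of the result is sum_j W[i, j] * x[j], i.e. a sparse matvec.
y = tf.nn.embedding_lookup_sparse(x, sp_ids, sp_weights, combiner="sum")
y = tf.squeeze(y)  # the extra reshape/squeeze overhead mentioned in point 5

with tf.Session() as sess:
    print(sess.run(y))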

File descriptions and usage

model_ckpt_dense: original model
model_ckpt_dense_pruned: 90% pruned-only model
model_ckpt_sparse_retrained: 90% pruned and retrained model

Python package requirements

sudo apt-get install python-scipy python-numpy python-matplotlib

To regenerate these sparse models, first edit config.py with your threshold configuration, and then run training with the second-round (pruning and retraining) and third-round (generating the sparse form of the weight data) options.

./train.py -2 -3

To run inference on a single image (seven.png) and measure its latency:

./deploy_test.py -d -m model_ckpt_dense
./deploy_test_sparse.py -d -m model_ckpt_sparse_retrained

To test the dense models:

./deploy_test.py -t -m model_ckpt_dense
./deploy_test.py -t -m model_ckpt_dense_pruned
./deploy_test.py -t -m model_ckpt_dense_retrained

To draw a histogram of the weight distribution:

# After running train.py (it generates .dat files)
./draw_histogram.py
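
For reference, a minimal sketch of such a histogram script, assuming the .dat files contain flattened weight values (the file name below is hypothetical):

import numpy as np
import matplotlib.pyplot as plt

weights = np.loadtxt("w_fc1.dat")   # hypothetical file name
weights = weights[weights != 0.0]   # skip pruned (zeroed) entries
plt.hist(weights, bins=100)
plt.xlabel("weight value")
plt.ylabel("count")
plt.savefig("weight_histogram.png")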

Performance

Results are currently somewhat mediocre, or even degraded, because of the indirection and the additional storage overhead of the sparse matrix format. It may also be because the model is too small (12.49 MB). A rough estimate of the storage overhead per nonzero follows the numbers below.

Storage overhead

Baseline: 12.49 MB
10 % pruned: 21.86 MB
20 % pruned: 19.45 MB
30 % pruned: 17.05 MB
40 % pruned: 14.64 MB
50 % pruned: 12.23 MB
60 % pruned: 9.83 MB
70 % pruned: 7.42 MB
80 % pruned: 5.02 MB
90 % pruned: 2.61 MB
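
A rough back-of-the-envelope explanation for these numbers (my own estimate, not taken from the code): in the <index, value> form each retained weight costs roughly 8 bytes (about 4 bytes of index plus a 4-byte float), versus 4 bytes per weight in the dense form, so the sparse model only beats the baseline once more than about 50% of the weights are pruned.

# Rough estimate, assuming ~3.2M fully-connected weights (the tutorial
# model's large fc1 layer) and ~8 bytes per stored nonzero.
dense_params = 3.2e6
for pruned in (0.1, 0.5, 0.9):
    sparse_mb = (1 - pruned) * dense_params * 8 / 2 ** 20
    print("%d%% pruned: ~%.1f MB of sparse weight data" % (pruned * 100, sparse_mb))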

CPU performance (averaged over 5 runs)

CPU: Intel Core i5-2500 @ 3.3 GHz, LLC size: 6 MB

http://younghwanoh.github.io/images/cpu-desktop.png

Baseline: 0.01118040085 s
10 % pruned: 1.919299984 s
20 % pruned: 0.2325239658 s
30 % pruned: 0.2111079693 s
40 % pruned: 0.1982570648 s
50 % pruned: 0.1691776752 s
60 % pruned: 0.1305227757 s
70 % pruned: 0.116039753 s
80 % pruned: 0.103564167 s
90 % pruned: 0.1058168888 s

GPU performance (averaged over 5 runs)

GPU: Nvidia Geforce GTX650 @ 1.058 GHz, LLC size: 256 KB

http://younghwanoh.github.io/images/gpu-desktop.png

Baseline: 0.1475181845 s
10 % pruned: 0.2954540253 s
20 % pruned: 0.2665398121 s
30 % pruned: 0.2585638046 s
40 % pruned: 0.2090051651 s
50 % pruned: 0.1995279789 s
60 % pruned: 0.1815193653 s
70 % pruned: 0.1436806202 s
80 % pruned: 0.135668993 s
90 % pruned: 0.1218701839 s


impl-pruning-tf's Issues

Is it possible to prune convolution layers in TF now?

Hi garion9013, your work is a nice attempt at implementing network pruning in TensorFlow.
However, looking through the code, I find that you only use SparseVariable to prune the fully connected layers.
I wonder whether it is possible to prune convolution layers in TF now (with or without modifying the source code)?

deploy_test_pruned.py

Very nice work! A few tiny issues:

  1. In the README, "./deploy_test_pruned.py -d -m model_ckpt_sparse_retrained"
  2. There is an execfile('/home/yhlinux/.pythonrc') call in deploy_test_pruned.py. Is anything important in that pythonrc?
    Thanks!

assign operations cause OOM for vgg_16

@garion9013
Hi-
I modified your code to compress a vgg_16 model. The assign operations in apply_prune(weights) cause OOM.
My GPU memory is 8 GB. It doesn't work even if I choose a very small batch size.

Is there any way to make it work for a large model like vgg_16? PyTorch is preferable in this respect :(

The dense_w problem

I see that dense_w is a dict in your source code. Does it get updated during the pruning process, especially in the second round? After all, we need to prune the trained weights, not the original ones.

“could not set cudnn filter descriptor: CUDNN_STATUS_BAD_PARAM”

Excuse me, I downloaded your code and changed the path of the MNIST dataset, like this:
"mnist = input_data.read_data_sets('/tmp/data/', one_hot=True) " ==> "mnist = input_data.read_data_sets('/home/amax/zhyiwei/tensorflow/data_set/mnist/', one_hot=True)"

Then I ran "python train.py -2 -3", but something seems to be wrong (Python 2.7.6 & TensorFlow 0.8.0):

I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
Extracting /home/amax/zhyiwei/tensorflow/data_set/mnist/train-images-idx3-ubyte.gz
Extracting /home/amax/zhyiwei/tensorflow/data_set/mnist/train-labels-idx1-ubyte.gz
Extracting /home/amax/zhyiwei/tensorflow/data_set/mnist/t10k-images-idx3-ubyte.gz
Extracting /home/amax/zhyiwei/tensorflow/data_set/mnist/t10k-labels-idx1-ubyte.gz
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: Tesla M40 24GB
major: 5 minor: 2 memoryClockRate (GHz) 1.112
pciBusID 0000:02:00.0
Total memory: 22.43GiB
Free memory: 22.32GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:755] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla M40 24GB, pci bus id: 0000:02:00.0)
w_fc1 threshold: 0.148615
Non-zero count (w_fc1): 320939
w_fc2 threshold: 0.15566
Non-zero count (w_fc2): 1014
F tensorflow/stream_executor/cuda/cuda_dnn.cc:427] could not set cudnn filter descriptor: CUDNN_STATUS_BAD_PARAM
Aborted (core dumped)

train.py error

Hi, thanks for your code. When I run "python train.py -2 -3", I encounter the error:
TypeError: The numpy boolean negative, the - operator, is not supported, use the ~ operator or the logical_not function instead.
and I don't know how to solve it. Thanks for your help.

InvalidArgumentError

Testing with the 'model_ckpt_sparse_retrained' file works, but using the './sparse_model_extreme/model_ckpt_sparse_retrained' model file instead produces an error: 'InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1014,2] rhs shape= [1657,2]'.

Pruning convolutional layers

It seems that the current version prunes only the fully connected layers. Is it possible to prune convolutional layers, as well?

memory leaking

Really appreciate your work! I ran into an issue while using your code. During each iteration of retraining, the weights are assigned in apply_prune_on_grads, which generates new assign ops every time, so the graph keeps growing during training. I'm wondering if there is a way to deal with this issue? Thanks!
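
A common workaround for this pattern (a sketch against the TF 1.x API, not code from this repository) is to build one placeholder-fed assign op per weight before the training loop, so the graph stays fixed:

import tensorflow as tf

def make_prune_assign_ops(weight_vars):
    """Create, once, a (placeholder, assign_op) pair for each weight variable."""
    ops = {}
    for w in weight_vars:
        ph = tf.placeholder(w.dtype.base_dtype, shape=w.get_shape())
        ops[w.name] = (ph, tf.assign(w, ph))
    return ops

# Inside the training loop no new ops are created; only values are fed:
#   ph, assign_op = ops[w.name]
#   sess.run(assign_op, feed_dict={ph: pruned_values})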

Pruning and retraining not working

Q1) Is this the correct behavior?

While running the pruning code, it does not progress further and stops at the second-round final test accuracy. It does not print the accuracy at each step as shown in the log file.

python train.py -2 -m ./model_ckpt_dense_retrained
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting /tmp/data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: Tesla K80
major: 3 minor: 7 memoryClockRate (GHz) 0.8235
pciBusID dcdb:00:00.0
Total memory: 11.17GiB
Free memory: 11.11GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x3ef0f60
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 1 with properties:
name: Tesla K80
major: 3 minor: 7 memoryClockRate (GHz) 0.8235
pciBusID f865:00:00.0
Total memory: 11.17GiB
Free memory: 11.11GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:777] Peer access not supported between device ordinals 0 and 1
I tensorflow/core/common_runtime/gpu/gpu_device.cc:777] Peer access not supported between device ordinals 1 and 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y N
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 1:   N Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K80, pci bus id: dcdb:00:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:1) -> (device: 1, name: Tesla K80, pci bus id: f865:00:00.0)
w_fc1 threshold:	0.148615
Non-zero count (w_fc1): 319823
w_fc2 threshold:	0.15566
Non-zero count (w_fc2): 989
Second-round prune-only test accuracy 0.9746

WARNING:tensorflow:From train.py:212: all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Please use tf.global_variables instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
WARNING:tensorflow:From train.py:214: initialize_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.variables_initializer` instead.
step 0, training accuracy 0.98
Second-round final test accuracy 0.9753

Q2) While running python deploy_test.py -t -m ./model_ckpt_dense_retrained I get the following error, which I assume is because the model is not trained properly, as a result of Q1.

Traceback (most recent call last):
  File "deploy_test.py", line 45, in <module>
    accuracy = tf.get_collection("accuracy")[0]
IndexError: list index out of range

Could you please suggest how to fix these? Thank you very much!

The sparse model is much larger than the pruned model. Why?

I did this with my own model and dataset; the results are:

The first threshold:

wd3 threshold: 0.1
Non-zero count (wd3): 9447
wd2 threshold: 0.1
Non-zero count (wd2): 965184
wd1 threshold: 0.1
Non-zero count (wd1): 3861108
Second-round prune-only test accuracy 0.7858

step 0, training accuracy 0.68
step 100, training accuracy 0.8
step 200, training accuracy 0.82
Second-round final test accuracy 0.7366

Non-zero count (Sparse wd3): 10193
Non-zero count (Sparse wd2): 1044382
Non-zero count (Sparse wd1): 4177566

The second threshold:
wd3 threshold: 0.5
Non-zero count (wd3): 6259
wd2 threshold: 0.5
Non-zero count (wd2): 646845
wd1 threshold: 0.5
Non-zero count (wd1): 2589204
Second-round prune-only test accuracy 0.7858

step 0, training accuracy 0.84
step 100, training accuracy 0.82
step 200, training accuracy 0.8
Second-round final test accuracy 0.7858

Non-zero count (Sparse wd3): 10193
Non-zero count (Sparse wd2): 1044389
Non-zero count (Sparse wd1): 4177563

The questions are:
With different thresholds, why are the non-zero count and the size of the model the same?
And why is the sparse model much larger than the pruned model?

How were the values in thspace.py obtained?

How were all the values calculated? For example, were the thresholds for a given level of pruning obtained by experimenting to find which value results in k% pruning, and then hardcoding that value?

Forward pass timing

When reporting the timing, the method measures the total evaluation time of the program, including the time to load the weights into memory. The actual forward pass (test pass) is much shorter; is it possible to measure it directly and thereby measure the speed-up due to pruning?

Applying pruning on gradients

As far as I can tell, the gradients are manipulated before retraining. How does the function apply_prune_on_grads do this, and what is the result and the effect of applying pruning to the gradients during the retraining phase?
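
The usual idea behind masking gradients during retraining (a sketch, not the repository's exact apply_prune_on_grads) is to multiply each weight's gradient by its binary pruning mask so that pruned entries stay at zero:

import tensorflow as tf

def masked_train_op(optimizer, loss, masks):
    """masks maps a variable name to a 0/1 tensor of the same shape."""
    grads_and_vars = optimizer.compute_gradients(loss)
    masked = []
    for grad, var in grads_and_vars:
        if grad is not None and var.name in masks:
            grad = grad * masks[var.name]  # zero the gradients of pruned weights
        masked.append((grad, var))
    return optimizer.apply_gradients(masked)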

How to save graph without pruned weights?

I'm following along with your sample code, and it works really well. Except that now I am unsure how to freeze my graph for inference. I run the following code to save:

    # Save model objects to serialized format
    final_saver = tf.train.Saver(sparse_w)
    final_saver.save(sess, "model_ckpt_sparse_retrained")

    # Save graph
    tf.train.write_graph(sess.graph_def, '.', "my_graph.pb", as_text=False)

We need the graph file for the freeze code:

from tensorflow.python.tools import freeze_graph, optimize_for_inference_lib
import tensorflow as tf

freeze_graph.freeze_graph(input_graph="my_graph.pb",
                              input_saver="",
                              input_binary=True,
                              input_checkpoint="model_ckpt_sparse_retrained",
                              output_node_names="y_",
                              restore_op_name="save/restore_all",
                              filename_tensor_name="save/Const:0",
                              output_graph="frozen_graph.pb",
                              clear_devices=True,
                              initializer_nodes="")

It works on the dense model, but for the sparse model I get an error: Attempting to use uninitialized value Variable_2. I suppose this is expected and comes from weights that aren't set in the imported model. Yet they still exist in the graph.

So I suppose we need a way to construct a new graph (since we can delete nodes?). Do you know how to do this? Could we use graph_util.extract_sub_graph? Thanks!
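
One possible direction (a sketch, not a verified fix) is to trim the GraphDef down to the nodes that the output actually depends on with graph_util.extract_sub_graph; whether this drops the offending uninitialized variables depends on how the sparse weights are wired into the graph:

import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.gfile.GFile("my_graph.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Keep only the subgraph that the output node "y_" depends on.
sub_graph_def = graph_util.extract_sub_graph(graph_def, ["y_"])

with tf.gfile.GFile("my_graph_trimmed.pb", "wb") as f:
    f.write(sub_graph_def.SerializeToString())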

Accuracy problem with the code

Hi, I just downloaded your code and ran "python train.py -2". The following is the log that was printed:
w_fc1 threshold: 0.148615
Non-zero count (w_fc1): 321130
w_fc2 threshold: 0.15566
Non-zero count (w_fc2): 1024
Second-round prune-only test accuracy 0

step 0, training accuracy 0.06
step 100, training accuracy 0.08
step 200, training accuracy 0.06
step 300, training accuracy 0.1
step 400, training accuracy 0.12
step 500, training accuracy 0.1
step 600, training accuracy 0.1
step 700, training accuracy 0.06
step 800, training accuracy 0.08
step 900, training accuracy 0.1
step 1000, training accuracy 0.02
step 1100, training accuracy 0.24
step 1200, training accuracy 0.06
step 1300, training accuracy 0.08
step 1400, training accuracy 0.08
step 1500, training accuracy 0.1
step 1600, training accuracy 0.02
step 1700, training accuracy 0.08
step 1800, training accuracy 0.12
step 1900, training accuracy 0.16
step 2000, training accuracy 0.04
step 2100, training accuracy 0.08
step 2200, training accuracy 0.06
step 2300, training accuracy 0.06
step 2400, training accuracy 0.04
step 2500, training accuracy 0.12
step 2600, training accuracy 0.14
step 2700, training accuracy 0.1
step 2800, training accuracy 0.06
step 2900, training accuracy 0.1
step 3000, training accuracy 0.12
step 3100, training accuracy 0.1
step 3200, training accuracy 0.12
step 3300, training accuracy 0.12
step 3400, training accuracy 0.18
step 3500, training accuracy 0.16
step 3600, training accuracy 0.08
step 3700, training accuracy 0.1
step 3800, training accuracy 0
step 3900, training accuracy 0.16
step 4000, training accuracy 0.12
step 4100, training accuracy 0.08
step 4200, training accuracy 0.14
step 4300, training accuracy 0.16
Second-round final test accuracy 0.0978

We can see that the training accuracy is very low when using pruning and retraining on MNIST.
Are there any ideas?
