
ssgan-tensorflow's Introduction

Semi-supervised Learning GAN in TensorFlow

As part of the implementation series of Joseph Lim's group at USC, our motivation is to accelerate (or sometimes delay) research in the AI community by promoting open-source projects. To this end, we implement state-of-the-art research papers and publicly share them with concise reports. Please visit our group's GitHub site for other projects.

This project was implemented by Shao-Hua Sun, and the code was reviewed by Jiayuan Mao before being published.

Descriptions

This project is a TensorFlow implementation of the semi-supervised learning GAN proposed in the paper Improved Techniques for Training GANs. The idea is to exploit samples generated by the GAN generator to boost the performance of image classification by improving generalization.

In short, the main idea is to train a single network that plays both the role of a classifier performing the image classification task and the role of a discriminator trained to distinguish samples produced by the generator from real data. More specifically, the discriminator/classifier takes an image as input and classifies it into n+1 classes, where n is the number of classes of the classification task. Real samples are classified into the first n classes and generated samples are classified into the (n+1)-th class, as shown in the figure below.
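To make this concrete, below is a minimal sketch of such a discriminator/classifier head (this is not the model defined in this repository; the function and variable names are placeholders). It outputs n+1 logits, where the first n entries score the real classes and the last entry scores the "generated" class:

import tensorflow as tf

def discriminator_logits(features, n):
    # features: flattened feature vector extracted from the input image
    # returns n+1 logits: n real classes plus one "generated" class
    logits = tf.layers.dense(features, n + 1, name='d_out')
    real_class_logits = logits[:, :n]   # scores for the n real classes
    fake_class_logit = logits[:, n]     # score for the (n+1)-th "generated" class
    return logits, real_class_logits, fake_class_logit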

The loss of this multi-task learning framework can be decomposed into the supervised loss

\mathcal{L}_{supervised} = -\mathbb{E}_{(x,y) \sim p_{data}} \left[ \log p_{model}(y \mid x,\, y < n+1) \right],

and the GAN loss of the discriminator

\mathcal{L}_{GAN} = -\mathbb{E}_{x \sim p_{data}} \left[ \log \left(1 - p_{model}(y = n+1 \mid x)\right) \right] - \mathbb{E}_{x \sim G} \left[ \log p_{model}(y = n+1 \mid x) \right].

During the training phase, we jointly minimize the total loss obtained by simply combining the two losses.
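As a rough illustration only (a sketch, not necessarily the exact formulation used in this code; it assumes d_real_logits and d_fake_logits are the (n+1)-way outputs for a real and a generated batch, and labels are one-hot over the first n classes):

# supervised loss: labeled real images should fall into their true class
# (labels are padded with a zero for the extra "generated" class)
labels_padded = tf.concat([labels, tf.zeros_like(labels[:, :1])], axis=1)
s_loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels_padded,
                                            logits=d_real_logits))

# GAN loss of the discriminator: real images should not be assigned to the
# (n+1)-th class, while generated images should
p_fake_real = tf.nn.softmax(d_real_logits)[:, -1]   # P(y = n+1 | real image)
p_fake_fake = tf.nn.softmax(d_fake_logits)[:, -1]   # P(y = n+1 | generated image)
d_loss_real = -tf.reduce_mean(tf.log(1.0 - p_fake_real + 1e-8))
d_loss_fake = -tf.reduce_mean(tf.log(p_fake_fake + 1e-8))

# total loss minimized jointly during training
total_loss = s_loss + d_loss_real + d_loss_fake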

The implemented model is trained and tested on three publicly available datasets: MNIST, SVHN, and CIFAR-10.

Note that this implementation only follows the main idea of the original paper and differs considerably in implementation details such as the model architecture, hyperparameters, and the optimizer used. Also, some useful training tricks applied in this implementation are listed at the end of this README.

*This code is still being developed and subject to change.

Prerequisites

Usage

Download datasets with:

$ python download.py --dataset MNIST SVHN CIFAR10

Train models with downloaded datasets:

$ python trainer.py --dataset MNIST
$ python trainer.py --dataset SVHN
$ python trainer.py --dataset CIFAR10

Test models with saved checkpoints:

$ python evaler.py --dataset MNIST --checkpoint ckpt_dir
$ python evaler.py --dataset SVHN --checkpoint ckpt_dir
$ python evaler.py --dataset CIFAR10 --checkpoint ckpt_dir

The ckpt_dir should look like: train_dir/default-MNIST_lr_0.0001_update_G5_D1-20170101-194957/model-1001

Train and test your own datasets:

  • Create a directory:
$ mkdir datasets/YOUR_DATASET
  • Store your data as an h5py file datasets/YOUR_DATASET/data.hy, where each data point contains
    • 'image': an array of shape [h, w, c], where c is the number of channels (1 for grayscale images, 3 for color images)
    • 'label': a one-hot vector
  • Maintain a list datasets/YOUR_DATASET/id.txt containing the IDs of all data points (a minimal example of writing these files is sketched after this list)
  • Modify trainer.py accordingly, including args, data_info, etc.
  • Finally, train and test models:
$ python trainer.py --dataset YOUR_DATASET
$ python evaler.py --dataset YOUR_DATASET
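A minimal sketch of writing such a dataset with h5py (the image shape, number of classes, and random contents below are placeholders; only the keys 'image' and 'label' and the files data.hy and id.txt come from the description above):

import h5py
import numpy as np

images = np.random.rand(100, 28, 28, 1).astype(np.float32)                  # [N, h, w, c]
labels = np.eye(10)[np.random.randint(0, 10, size=100)].astype(np.float32)  # one-hot, [N, n]

with h5py.File('datasets/YOUR_DATASET/data.hy', 'w') as f:
    for i in range(len(images)):
        grp = f.create_group(str(i))   # one group per data point ID
        grp['image'] = images[i]       # shape [h, w, c]
        grp['label'] = labels[i]       # one-hot vector

# id.txt lists the IDs of all data points, one per line
with open('datasets/YOUR_DATASET/id.txt', 'w') as f:
    for i in range(len(images)):
        f.write('%d\n' % i)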

Results

MNIST

  • Generated samples (100th epoch)

  • First 40 epochs

SVHN

  • Generated samples (100th epoch)

  • First 160 epochs

CIFAR-10

  • Generated samples (1000th epoch)

  • First 200 epochs

Training details

MNIST

  • The supervised loss

  • The loss of Discriminator

D_loss_real

D_loss_fake

D_loss (total loss)

  • The loss of Generator

G_loss

  • Classification accuracy

SVHN

  • The supervised loss

  • The loss of Discriminator

D_loss_real

D_loss_fake

D_loss (total loss)

  • The loss of Generator

G_loss

  • Classification accuracy

CIFAR-10

  • The supervised loss

  • The loss of Discriminator

D_loss_real

D_loss_fake

D_loss (total loss)

  • The loss of Generator

G_loss

  • Classification accuracy

Training tricks

  • To avoid the discriminator network converging too quickly:
    • The generator network is updated more frequently.
    • A higher learning rate is applied to the training of the generator.
  • One-sided label smoothing is applied to the positive (real) labels.
  • Gradient clipping is applied to stabilize training.
  • A reconstruction loss with an annealed weight is applied as an auxiliary loss to help the generator escape the initial local minimum.
  • The Adam optimizer is used with higher momentum.
  • Please refer to the code for more details (a rough sketch of some of these tricks follows this list).
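As an illustration only (a sketch under simplifying assumptions, not the code in trainer.py or model.py; it uses a plain two-way real/fake discriminator output d_real_logit / d_fake_logit, and the smoothing factor, clipping norm, and Adam settings are placeholders), one-sided label smoothing, gradient clipping, and Adam might be combined like this:

# one-sided label smoothing: real targets are 0.9 instead of 1.0, fake targets stay 0.0
real_targets = tf.fill(tf.shape(d_real_logit), 0.9)
fake_targets = tf.zeros_like(d_fake_logit)
d_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=real_targets, logits=d_real_logit) +
    tf.nn.sigmoid_cross_entropy_with_logits(labels=fake_targets, logits=d_fake_logit))

# Adam with an adjusted momentum term, plus gradient clipping to stabilize training
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4, beta1=0.9)
grads_and_vars = optimizer.compute_gradients(d_loss)
clipped = [(tf.clip_by_norm(g, 10.0), v) for g, v in grads_and_vars if g is not None]
d_train_op = optimizer.apply_gradients(clipped)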

Related works

Acknowledgement

Part of the code is from an unpublished project with Jongwook Choi.


ssgan-tensorflow's Issues

How do you test the model accuracy? + Checkpoints are not stored correctly

I see the model outputs the following: supervised loss, D loss, G loss, accuracy, and test loss.
Is this the training accuracy or the test accuracy? If it is the training accuracy, how can we get the test accuracy?
Is it done with evaler.py? I ran the model on a dummy dataset and got an accuracy of ~80%, but when I ran the evaluator I got an accuracy of ~1.5%, which does not seem correct.

Also, when I try to run evaler.py from a checkpoint, none of the files saved during training can be used when referencing them. I am attaching a photo of the files generated during training (train_dir).
Thanks in advance.

Why are the _groundtruths in evaler.py different from the original test labels?

Dear Sun,
Thanks for your code.
For the SVHN dataset, I printed the ground truth fed into evaler.py to a .txt file; why is it different from the original test labels in the .mat file? I also ran evaler.py several times, printed the fed ground truth each time, and the results differ from each other.

ImportError: No module named model_conv

When I run the command [python evaler.py --dataset MNIST --checkpoint ckpt_dir], it fails with the message "ImportError: No module named model_conv".
There are two questions:

  1. Did I miss some .py script named model_conv?
  2. Can you help me change from GPU to CPU?
    Note: GPU configuration
    session_config = tf.ConfigProto(
        allow_soft_placement=True,
        gpu_options=tf.GPUOptions(allow_growth=True),
        device_count={'GPU': 1},
    )
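One possible way to run on the CPU instead (a sketch; whether it drops into this codebase unchanged is an assumption) is to hide all GPUs in the session configuration:

# run on CPU only: report zero visible GPUs to the session
session_config = tf.ConfigProto(
    allow_soft_placement=True,
    device_count={'GPU': 0},
)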

detail:
hcq@hcq-To-be-filled-by-O-E-M:~/deep_learning/SSGAN-Tensorflow$ python evaler.py --dataset MNIST --checkpoint ckpt_dir
[2017-06-11 11:14:43,555] Reading ./datasets/mnist/data.hy ...
[2017-06-11 11:14:43,556] Reading Done: ./datasets/mnist/data.hy
[2017-06-11 11:14:43,556] Reading ./datasets/mnist/data.hy ...
[2017-06-11 11:14:43,556] Reading Done: ./datasets/mnist/data.hy
[2017-06-11 11:14:43,557] self.train_dir = None
[2017-06-11 11:14:43,557] input_ops [inputs]: Using 10000 IDs from dataset
Traceback (most recent call last):
File "evaler.py", line 205, in
main()
File "evaler.py", line 199, in main
evaler = Evaler(config, dataset_test)
File "evaler.py", line 81, in init
Model = self.get_model_class(config.model)
File "evaler.py", line 56, in get_model_class
from model_conv import Model
ImportError: No module named model_conv

I would be very happy to receive your response, thank you.

TypeError: argument of type 'NoneType' is not iterable

I get the following error when I run download.py:
print(type(args.datasets))
<class 'NoneType'>
Why does this error occur, and how should I modify the code? Thank you for correcting!
ERROR:
File "", line 1, in
runfile('D:/python/SSGAN-Tensorflow/download.py', wdir='D:/python/SSGAN-Tensorflow')

File "D:\cpsoftware\Ananconda\envs\opencv\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)

File "D:\cpsoftware\Ananconda\envs\opencv\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "D:/python/SSGAN-Tensorflow/download.py", line 184, in
if 'MNIST' in args.datasets:

TypeError: argument of type 'NoneType' is not iterable

Why does the g_loss become negative?

While training, the g_loss sometimes becomes negative and then the accuracy suddenly drops. Does that mean my training has failed?

errors when I try two separate data.hy files

Hi,

When I try to change the script and add an option to read the train and test files from two different filenames, like:

def create_dataset(dataset_1, dataset_2, id_1, id_2, is_train=True):
    ids_train = all_ids_train(id_1)
    ids_test = all_ids_train(id_2)

    dataset_train = Dataset(ids_train, dataset_1, name='train', is_train=False)
    dataset_test = Dataset(ids_test, dataset_2, name='test', is_train=False)
    return dataset_train, dataset_test

I got the error:
tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_0_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 16, current size 4)

I tested the single-file approach by splitting data.hy, and it works; my problem is that a single data.hy file is too big to generate.

Any ideas?
Thanks,
Chunlei

An error in the code using tf.train.exponential_decay

I think the first parameter of the function tf.train.exponential_decay should be a constant (the starting learning rate) instead of self.learning_rate, which is a variable; otherwise it leads to an unexpectedly fast decrease, because after running the function you assign the decayed learning rate back to self.learning_rate and treat it as the initial learning rate again.
Although this part of the code is not used in your model, it is better to point out the hidden danger.
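A minimal sketch of the suggested fix (the decay settings below are placeholders, and self.global_step is assumed to be the training step counter); the key point is that the first argument stays a constant while global_step drives the decay:

initial_learning_rate = 1e-4  # a constant starting rate, not a previously decayed variable
self.learning_rate = tf.train.exponential_decay(
    initial_learning_rate,
    global_step=self.global_step,
    decay_steps=10000,
    decay_rate=0.5,
    staircase=True,
)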


visualize_training.py is not working

python visualize_training.py --output_file visualized_sample --train_dir train_dir/default-MNIST_lr_0.0001_update_G5_D1-20180707-195643 --h 28 --w 28 --c 1
train_dir/default-MNIST_lr_0.0001_update_G5_D1-20180707-195643/g_img_0.hy
train_dir/default-MNIST_lr_0.0001_update_G5_D1-20180707-195643/g_img_1000.hy
(2, 224, 224, 1)
Traceback (most recent call last):
File "visualize_training.py", line 31, in
imageio.mimsave(args.output_file, II, fps=5)
File "/usr/local/lib/python2.7/dist-packages/imageio/core/functions.py", line 320, in mimwrite
writer = get_writer(uri, format, 'I', **kwargs)
File "/usr/local/lib/python2.7/dist-packages/imageio/core/functions.py", line 170, in get_writer
'in mode %r' % mode)
ValueError: Could not find a format to write the specified file in mode 'I'

About Generator loss

Should the generator loss be

        g_loss = tf.reduce_mean(tf.log(d_fake[:, -1]))

or

        g_loss = tf.reduce_mean(-tf.log(d_fake[:, -1]))?

The discriminator should be trained more times ...

Hi @shaohua0116, thanks for your nice work!
However, I think one thing should be noted: the discriminator should be trained more times than the generator.
After I changed your code in trainer.py as follows (screenshot of the change attached),
the prediction accuracy improves quite a lot!
Hope you can take notice.
Thank you!

Number of labelled samples per class

Hi, I might be wrong, but I didn't find the part of the code where you select only a subset of labelled samples for supervision and treat the rest as unlabelled. I think this is the semi-supervised part of the paper. In case I have missed it, can you point out where this is implemented in your code? Thanks.

Are unlabeled images used in SSGAN?

In the paper "Improved Techniques for Training GANs", models are trained and compared with different numbers of labeled images. However, in your code, there is no setting for the number of labeled or unlabeled images. So did you consider using various numbers of labeled images in training? @shaohua0116

ValueError: num_outputs should be int or long, got 11.

Need some help:

Win10
Python 3.6
tensorflow-gpu 1.4.0
tensorflow-tensorboard 0.4.0rc3

The error below occurs when running 'python trainer.py --dataset MNIST':


(C:\Anaconda3) F:\PythonWorkSpace\SSGAN-Tensorflow>python trainer.py --dataset MNIST
[2017-11-28 22:25:31,294] Reading ./datasets/mnist\data.hy ...
[2017-11-28 22:25:31,298] Reading Done: ./datasets/mnist\data.hy
[2017-11-28 22:25:31,300] Reading ./datasets/mnist\data.hy ...
[2017-11-28 22:25:31,312] Reading Done: ./datasets/mnist\data.hy
[2017-11-28 22:25:31,329] Train Dir: ./train_dir/default-MNIST_lr_0.0001_update_G5_D1-20171128-222531
[2017-11-28 22:25:31,330] input_ops [inputs]: Using 60000 IDs from dataset
[2017-11-28 22:25:31,628] input_ops [inputs]: Using 10000 IDs from dataset
[2017-11-28 22:25:31,756] Generator
[2017-11-28 22:25:31,814] Generator Tensor("Generator/g_1_deconv/BatchNorm/Identity:0", shape=(64, 2, 2, 100), dtype=float32)
[2017-11-28 22:25:31,870] Generator Tensor("Generator/g_2_deconv/BatchNorm/Identity:0", shape=(64, 5, 5, 25), dtype=float32)
[2017-11-28 22:25:31,927] Generator Tensor("Generator/g_3_deconv/BatchNorm/Identity:0", shape=(64, 12, 12, 6), dtype=float32)
[2017-11-28 22:25:31,962] Generator Tensor("Generator/g_4_deconv/Tanh:0", shape=(64, 28, 28, 1), dtype=float32)
[2017-11-28 22:25:31,963] Discriminator
[2017-11-28 22:25:32,012] Discriminator Tensor("d_1_conv/dropout/mul:0", shape=(64, 14, 14, 32), dtype=float32)
[2017-11-28 22:25:32,061] Discriminator Tensor("d_2_conv/dropout/mul:0", shape=(64, 7, 7, 64), dtype=float32)
[2017-11-28 22:25:32,167] Discriminator Tensor("d_3_conv/dropout/mul:0", shape=(64, 4, 4, 128), dtype=float32)
Traceback (most recent call last):
File "trainer.py", line 253, in
main()
File "trainer.py", line 247, in main
dataset_train, dataset_test)
File "trainer.py", line 50, in init
self.model = Model(config)
File "F:\PythonWorkSpace\SSGAN-Tensorflow\model.py", line 44, in init
self.build(is_train=is_train)
File "F:\PythonWorkSpace\SSGAN-Tensorflow\model.py", line 138, in build
d_real, d_real_logits = D(self.image, scope='Discriminator', reuse=False)
File "F:\PythonWorkSpace\SSGAN-Tensorflow\model.py", line 123, in D
tf.reshape(d_3, [self.batch_size, -1]), n+1, scope='d_4_fc', activation_fn=None)
File "C:\Anaconda3\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "C:\Anaconda3\lib\site-packages\tensorflow\contrib\layers\python\layers\layers.py", line 1616, in fully_connected
'num_outputs should be int or long, got %s.' % (num_outputs,))
ValueError: num_outputs should be int or long, got 11.

Who can help me? Thank you so much.

ValueError: num_outputs should be int or long, got 400.

Traceback (most recent call last):
File "D:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
runfile('G:/Users/lwx/PycharmProjects/untitl2/NeuralDialog-CVAE-master/NeuralDialog-CVAE-master/kgcvae_swda.py', wdir='G:/Users/lwx/PycharmProjects/untitl2/NeuralDialog-CVAE-master/NeuralDialog-CVAE-master')
File "G:\Program Files\JetBrains\PyCharm 2020.1\plugins\python\helpers\pydev_pydev_bundle\pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "G:\Program Files\JetBrains\PyCharm 2020.1\plugins\python\helpers\pydev_pydev_imps_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "G:/Users/lwx/PycharmProjects/untitl2/NeuralDialog-CVAE-master/NeuralDialog-CVAE-master/kgcvae_swda.py", line 169, in
main()
File "G:/Users/lwx/PycharmProjects/untitl2/NeuralDialog-CVAE-master/NeuralDialog-CVAE-master/kgcvae_swda.py", line 72, in main
model = KgRnnCVAE(sess, config, api, log_dir=None if FLAGS.forward_only else log_dir, forward=False, scope=scope)
File "G:\Users\lwx\PycharmProjects\untitl2\NeuralDialog-CVAE-master\NeuralDialog-CVAE-master\models\cvae.py", line 257, in init
activation_fn=tf.tanh, scope="fc1")
File "D:\ProgramData\Anaconda3\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "D:\ProgramData\Anaconda3\lib\site-packages\tensorflow\contrib\layers\python\layers\layers.py", line 1638, in fully_connected
'num_outputs should be int or long, got %s.' % (num_outputs,))
ValueError: num_outputs should be int or long, got 400.

Predicted probabilities do not sum to 1

Hi!
Please, can you look at this @shaohua0116 @gitlim @Shaofanl @carpedm20
I tried this
predictions = np.argmax(self._predictions[-1], axis=0)
or this
predictions = np.argmax(prediction_pred, axis=0)
and I get
[39 38 59 45 49 1 8 30 25 44 53]
These probabilities do not sum to 1. Am I doing something wrong?
prediction_pred comes from the softmax layer of the discriminator.
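For reference, a minimal NumPy sketch (assuming probs is the discriminator's softmax output with shape [batch_size, n+1]): np.argmax returns class indices rather than probabilities, and it is each row of the softmax output that sums to 1:

import numpy as np

# probs: softmax output of the discriminator, shape [batch_size, n+1]
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
row_sums = probs.sum(axis=1)            # each row sums to 1.0
predictions = np.argmax(probs, axis=1)  # class index per sample, e.g. [0, 1]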

Can't create the generator

I use TensorFlow 1.0.0, but after many hours I could not fix this error; could you please help me?
I tried both Python 2.7 and 3.5.
I also tried both the CPU and GPU versions of TensorFlow.

Connected to pydev debugger (build 171.4249.47)
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
[2017-09-28 16:24:44,037] Reading ./datasets/mnist/data.hy ...
[2017-09-28 16:24:44,038] Reading Done: ./datasets/mnist/data.hy
[2017-09-28 16:24:44,039] Reading ./datasets/mnist/data.hy ...
[2017-09-28 16:24:44,039] Reading Done: ./datasets/mnist/data.hy
[2017-09-28 16:24:44,040] Train Dir: ./train_dir/default-MNIST_lr_0.0001_update_G5_D1-20170928-162444
[2017-09-28 16:24:44,040] input_ops [inputs]: Using 60000 IDs from dataset
Backend TkAgg is interactive backend. Turning interactive mode on.
[2017-09-28 16:26:25,310] input_ops [inputs]: Using 10000 IDs from dataset
Generator
Traceback (most recent call last):
File "/home/ahp/pycharm-community-2017.1.2/helpers/pydev/pydevd.py", line 1585, in
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/ahp/pycharm-community-2017.1.2/helpers/pydev/pydevd.py", line 1015, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/ahp/PycharmProjects/tensorflow/SSGAN-Tensorflow-master/trainer.py", line 253, in
main()
File "/home/ahp/PycharmProjects/tensorflow/SSGAN-Tensorflow-master/trainer.py", line 247, in main
dataset_train, dataset_test)
File "/home/ahp/PycharmProjects/tensorflow/SSGAN-Tensorflow-master/trainer.py", line 50, in init
self.model = Model(config)
File "/home/ahp/PycharmProjects/tensorflow/SSGAN-Tensorflow-master/model.py", line 46, in init
self.build(is_train=is_train)
File "/home/ahp/PycharmProjects/tensorflow/SSGAN-Tensorflow-master/model.py", line 137, in build
fake_image = G(z)
File "/home/ahp/PycharmProjects/tensorflow/SSGAN-Tensorflow-master/model.py", line 102, in G
g_1 = deconv2d(z, deconv_info[0], is_train, name='g_1_deconv')
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/layers/convolutional.py", line 1192, in conv2d_transpose
return layer.apply(inputs)
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 303, in apply
return self.call(inputs, **kwargs)
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 269, in call
self.build(input_shapes[0])
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/layers/convolutional.py", line 1043, in build
dtype=self.dtype)
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 988, in get_variable
custom_getter=custom_getter)
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 890, in get_variable
custom_getter=custom_getter)
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 341, in get_variable
validate_shape=validate_shape)
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 258, in variable_getter
variable_getter=functools.partial(getter, **kwargs))
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 208, in _add_variable
trainable=trainable and self.trainable)
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 333, in _true_getter
caching_device=caching_device, validate_shape=validate_shape)
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 628, in _get_single_variable
shape = tensor_shape.as_shape(shape)
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 821, in as_shape
return TensorShape(shape)
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 457, in init
self._dims = [as_dimension(d) for d in dims_iter]
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 378, in as_dimension
return Dimension(value)
File "/home/ahp/.local/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 33, in init
self._value = int(value)
TypeError: only length-1 arrays can be converted to Python scalars
