
open_flamingo's Introduction

🦩 OpenFlamingo


Paper | Blog posts: 1, 2 | Demo

Welcome to our open source implementation of DeepMind's Flamingo!

In this repository, we provide a PyTorch implementation for training and evaluating OpenFlamingo models. If you have any questions, please feel free to open an issue. We also welcome contributions!


Installation

To install the package in an existing environment, run

pip install open-flamingo

or to create a conda environment for running OpenFlamingo, run

conda env create -f environment.yml

To install training or eval dependencies, run one of the first two commands. To install everything, run the third command.

pip install open-flamingo[training]
pip install open-flamingo[eval]
pip install open-flamingo[all]

There are three requirements.txt files:

  • requirements.txt
  • requirements-training.txt
  • requirements-eval.txt

Depending on your use case, you can install any of these with pip install -r <requirements-file.txt>. The base file contains only the dependencies needed for running the model.

Development

We use pre-commit hooks to align formatting with the checks in the repository.

  1. To install pre-commit, run
    pip install pre-commit
    
    or use Homebrew on macOS
    brew install pre-commit
    
  2. Check the version installed with
    pre-commit --version
    
  3. Then at the root of this repository, run
    pre-commit install
    

Then, every time you run git commit, the checks run automatically. If the hooks reformat any files, run git add on the changed files and run git commit again.

Approach

OpenFlamingo is a multimodal language model that can be used for a variety of tasks. It is trained on a large multimodal dataset (e.g. Multimodal C4) and can be used to generate text conditioned on interleaved images/text. For example, OpenFlamingo can be used to generate a caption for an image, or to generate a question given an image and a text passage. The benefit of this approach is that we are able to rapidly adapt to new tasks using in-context learning.

Model architecture

OpenFlamingo combines a pretrained vision encoder and a language model using cross-attention layers. The model architecture is shown below.

OpenFlamingo architecture (credit: Flamingo)
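To make the cross-attention idea concrete, below is a minimal, self-contained sketch of a gated cross-attention layer in the spirit of Flamingo. This is illustrative only and is not the implementation used in this repository; all names and hyperparameters are placeholders.

# Illustrative sketch only -- not the OpenFlamingo implementation.
import torch
import torch.nn as nn

class GatedCrossAttention(nn.Module):
    """Text hidden states attend to visual tokens; a tanh gate initialized to
    zero lets the layer start as an identity function, as in Flamingo."""

    def __init__(self, dim: int, vis_dim: int, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(
            dim, n_heads, kdim=vis_dim, vdim=vis_dim, batch_first=True
        )
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, text_hidden, vision_tokens):
        # text_hidden: (batch, text_len, dim); vision_tokens: (batch, vis_len, vis_dim)
        attn_out, _ = self.cross_attn(text_hidden, vision_tokens, vision_tokens)
        return text_hidden + torch.tanh(self.gate) * attn_out

In the full model, a layer like this is inserted before every n-th decoder block, which is what the cross_attn_every_n_layers argument controls.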

Usage

Initializing an OpenFlamingo model

We support pretrained vision encoders from the OpenCLIP package, which includes OpenAI's pretrained models. We also support pretrained language models from the transformers package, such as MPT, RedPajama, LLaMA, OPT, GPT-Neo, GPT-J, and Pythia models.

from open_flamingo import create_model_and_transforms

model, image_processor, tokenizer = create_model_and_transforms(
    clip_vision_encoder_path="ViT-L-14",
    clip_vision_encoder_pretrained="openai",
    lang_encoder_path="anas-awadalla/mpt-1b-redpajama-200b",
    tokenizer_path="anas-awadalla/mpt-1b-redpajama-200b",
    cross_attn_every_n_layers=1,
    cache_dir="PATH/TO/CACHE/DIR"  # Defaults to ~/.cache
)

Released OpenFlamingo models

We have trained the following OpenFlamingo models so far.

| # params | Language model | Vision encoder | Xattn interval* | COCO 4-shot CIDEr | VQAv2 4-shot Accuracy | Weights |
|---|---|---|---|---|---|---|
| 3B | anas-awadalla/mpt-1b-redpajama-200b | openai CLIP ViT-L/14 | 1 | 77.3 | 45.8 | Link |
| 3B | anas-awadalla/mpt-1b-redpajama-200b-dolly | openai CLIP ViT-L/14 | 1 | 82.7 | 45.7 | Link |
| 4B | togethercomputer/RedPajama-INCITE-Base-3B-v1 | openai CLIP ViT-L/14 | 2 | 81.8 | 49.0 | Link |
| 4B | togethercomputer/RedPajama-INCITE-Instruct-3B-v1 | openai CLIP ViT-L/14 | 2 | 85.8 | 49.0 | Link |
| 9B | anas-awadalla/mpt-7b | openai CLIP ViT-L/14 | 4 | 89.0 | 54.8 | Link |

* Xattn interval refers to the --cross_attn_every_n_layers argument.

Note: as part of our v2 release, we have deprecated a previous LLaMA-based checkpoint. However, you can continue to use our older checkpoint using the new codebase.

Downloading pretrained weights

To instantiate an OpenFlamingo model with one of our released weights, initialize the model as above and use the following code.

# grab model checkpoint from huggingface hub
from huggingface_hub import hf_hub_download
import torch

checkpoint_path = hf_hub_download("openflamingo/OpenFlamingo-3B-vitl-mpt1b", "checkpoint.pt")
model.load_state_dict(torch.load(checkpoint_path), strict=False)
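Because strict=False silently ignores mismatched keys, you may want to confirm that the checkpoint actually matched the model you built. As an optional sanity check (relying only on standard torch.nn.Module.load_state_dict behavior, nothing specific to OpenFlamingo), capture the return value of the call above instead:

result = model.load_state_dict(torch.load(checkpoint_path), strict=False)
# Both lists should be small; if most keys are unexpected, the checkpoint
# does not match the model configuration you instantiated.
print("missing keys:", len(result.missing_keys))
print("unexpected keys:", len(result.unexpected_keys))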

Generating text

Below is an example of generating text conditioned on interleaved images/text. In particular, let's try few-shot image captioning.

from PIL import Image
import requests
import torch

"""
Step 1: Load images
"""
demo_image_one = Image.open(
    requests.get(
        "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
    ).raw
)

demo_image_two = Image.open(
    requests.get(
        "http://images.cocodataset.org/test-stuff2017/000000028137.jpg",
        stream=True
    ).raw
)

query_image = Image.open(
    requests.get(
        "http://images.cocodataset.org/test-stuff2017/000000028352.jpg", 
        stream=True
    ).raw
)


"""
Step 2: Preprocessing images
Details: For OpenFlamingo, we expect the image to be a torch tensor of shape 
 batch_size x num_media x num_frames x channels x height x width. 
 In this case batch_size = 1, num_media = 3, num_frames = 1,
 channels = 3, height = 224, width = 224.
"""
vision_x = [
    image_processor(demo_image_one).unsqueeze(0),
    image_processor(demo_image_two).unsqueeze(0),
    image_processor(query_image).unsqueeze(0),
]
vision_x = torch.cat(vision_x, dim=0)
vision_x = vision_x.unsqueeze(1).unsqueeze(0)
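
# Optional sanity check (not part of the original example): the batch should
# now have the shape described above.
assert vision_x.shape == (1, 3, 1, 3, 224, 224)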

"""
Step 3: Preprocessing text
Details: In the text we expect an <image> special token to indicate where an image is.
 We also expect an <|endofchunk|> special token to indicate the end of the text 
 portion associated with an image.
"""
tokenizer.padding_side = "left" # For generation padding tokens should be on the left
lang_x = tokenizer(
    ["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
    return_tensors="pt",
)
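
# Optional sanity check (assumes the factory registered <image> and
# <|endofchunk|> as added special tokens, so each maps to a single token id).
assert len(tokenizer.encode("<image>", add_special_tokens=False)) == 1
assert len(tokenizer.encode("<|endofchunk|>", add_special_tokens=False)) == 1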


"""
Step 4: Generate text
"""
generated_text = model.generate(
    vision_x=vision_x,
    lang_x=lang_x["input_ids"],
    attention_mask=lang_x["attention_mask"],
    max_new_tokens=20,
    num_beams=3,
)

print("Generated text: ", tokenizer.decode(generated_text[0]))

Training

We provide training scripts in open_flamingo/train and an example Slurm script in open_flamingo/scripts/run_train.py, as well as the following example command:

torchrun --nnodes=1 --nproc_per_node=4 open_flamingo/train/train.py \
  --lm_path anas-awadalla/mpt-1b-redpajama-200b \
  --tokenizer_path anas-awadalla/mpt-1b-redpajama-200b \
  --cross_attn_every_n_layers 1 \
  --dataset_resampled \
  --batch_size_mmc4 32 \
  --batch_size_laion 64 \
  --train_num_samples_mmc4 125000 \
  --train_num_samples_laion 250000 \
  --loss_multiplier_laion 0.2 \
  --workers=4 \
  --run_name OpenFlamingo-3B-vitl-mpt1b \
  --num_epochs 480 \
  --warmup_steps  1875 \
  --mmc4_textsim_threshold 0.24 \
  --laion_shards "/path/to/shards/shard-{0000..0999}.tar" \
  --mmc4_shards "/path/to/shards/shard-{0000..0999}.tar" \
  --report_to_wandb

Note: The MPT-1B base and instruct modeling code does not accept the labels kwarg or compute cross-entropy loss directly within forward(), as expected by our codebase. We suggest using a modified version of the MPT-1B models found here and here.
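For context, the sketch below shows what "accepting a labels kwarg" amounts to for a HuggingFace-style causal LM: shift the labels by one position and compute a cross-entropy loss over the logits. This is an illustrative sketch, not the modified MPT-1B code linked above; lm is any causal LM whose output exposes .logits.

import torch
import torch.nn.functional as F

def causal_lm_loss(lm, input_ids, attention_mask=None, labels=None):
    outputs = lm(input_ids=input_ids, attention_mask=attention_mask)
    if labels is None:
        return outputs
    # Predict token t+1 from tokens <= t; positions labeled -100 are ignored.
    shift_logits = outputs.logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,
    )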

For more details, see our training README.

Evaluation

An example evaluation script is at open_flamingo/scripts/run_eval.sh. Please see our evaluation README for more details.

To run evaluations on OKVQA, you will first need to download the WordNet data by running the following in Python:

import nltk
nltk.download('wordnet')

Future plans

  • Add support for video input

Team

OpenFlamingo is developed by:

Anas Awadalla*, Irena Gao*, Joshua Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt.

The team is primarily from the University of Washington, Stanford, AI2, UCSB, and Google.

Acknowledgments

This code is based on Lucidrains' flamingo implementation and David Hansmair's flamingo-mini repo. Thank you for making your code public! We also thank the OpenCLIP team as we use their data loading code and take inspiration from their library design.

We would also like to thank Jean-Baptiste Alayrac and Antoine Miech for their advice, Rohan Taori, Nicholas Schiefer, Deep Ganguli, Thomas Liao, Tatsunori Hashimoto, and Nicholas Carlini for their help with assessing the safety risks of our release, and to Stability AI for providing us with compute resources to train these models.

Citing

If you found this repository useful, please consider citing:

@article{awadalla2023openflamingo,
  title={OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models},
  author={Anas Awadalla and Irena Gao and Josh Gardner and Jack Hessel and Yusuf Hanafy and Wanrong Zhu and Kalyani Marathe and Yonatan Bitton and Samir Gadre and Shiori Sagawa and Jenia Jitsev and Simon Kornblith and Pang Wei Koh and Gabriel Ilharco and Mitchell Wortsman and Ludwig Schmidt},
  journal={arXiv preprint arXiv:2308.01390},
  year={2023}
}
@software{anas_awadalla_2023_7733589,
  author = {Awadalla, Anas and Gao, Irena and Gardner, Joshua and Hessel, Jack and Hanafy, Yusuf and Zhu, Wanrong and Marathe, Kalyani and Bitton, Yonatan and Gadre, Samir and Jitsev, Jenia and Kornblith, Simon and Koh, Pang Wei and Ilharco, Gabriel and Wortsman, Mitchell and Schmidt, Ludwig},
  title = {OpenFlamingo},
  month        = mar,
  year         = 2023,
  publisher    = {Zenodo},
  version      = {v0.1.1},
  doi          = {10.5281/zenodo.7733589},
  url          = {https://doi.org/10.5281/zenodo.7733589}
}
@article{Alayrac2022FlamingoAV,
  title={Flamingo: a Visual Language Model for Few-Shot Learning},
  author={Jean-Baptiste Alayrac and Jeff Donahue and Pauline Luc and Antoine Miech and Iain Barr and Yana Hasson and Karel Lenc and Arthur Mensch and Katie Millican and Malcolm Reynolds and Roman Ring and Eliza Rutherford and Serkan Cabi and Tengda Han and Zhitao Gong and Sina Samangooei and Marianne Monteiro and Jacob Menick and Sebastian Borgeaud and Andy Brock and Aida Nematzadeh and Sahand Sharifzadeh and Mikolaj Binkowski and Ricardo Barreira and Oriol Vinyals and Andrew Zisserman and Karen Simonyan},
  journal={ArXiv},
  year={2022},
  volume={abs/2204.14198}
}

open_flamingo's Issues

Tweak classification evals to not use a "separator" token between input and prompt

Currently, classification evals add a prompt of the form "A photo of a <eos_token> {class name}", and then use the position of the EOS token to find the start of the class name at evaluation time. However, this splitting strategy introduces (1) a new token and (2) a new syntax that the model hasn't seen before.

Instead of this, we should remove the <eos_token> part of the prompt entirely, explicitly compute the {class_name} tokens, and compute the likelihood/probabilities over those tokens.
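
As a rough illustration of the proposed scoring (a sketch only, written for a plain causal LM; the real evaluation would additionally pass vision_x through the Flamingo model, and model/tokenizer here are placeholders), the log-likelihood of a class name given a prompt could be computed like this:

import torch
import torch.nn.functional as F

@torch.no_grad()
def class_log_likelihood(model, tokenizer, prompt, class_name):
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    class_ids = tokenizer(class_name, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, class_ids], dim=1)
    logits = model(input_ids).logits
    # Log-probability of each token, conditioned on all tokens before it.
    log_probs = F.log_softmax(logits[:, :-1], dim=-1)
    token_ll = log_probs.gather(-1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    # Sum only over the class-name tokens.
    return token_ll[:, -class_ids.size(1):].sum().item()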

logging to console

I personally like the logging in open_clip, which splits out data loading time / batch time and prints the loss -- thoughts on switching to this when logging to the console?

Add support for GPT-JT model + make it easier to plug in other LMs

GPT-JT seems like a great choice for the LM side as it outperforms many other open-source models and is considerably smaller. Currently, we only support OPT models, but in general, it would be great to add more options.

The current design of the Flamingo class makes it difficult to plug in any huggingface model, as you would need to write a class specifically for each model family. We should explore more modular ways to build this.

Things to still add to readme

  • Fix citation for flamingo paper and add proper citation for our codebase
  • Remove extra "models" at end of model list
  • Update blog post/twitter thread links.
  • Upload model to huggingface and add link to repo
    -> Add instructions for loading the checkpoint from HF

Add offline mode

Add a flag to the training script to enable offline mode. This should only use cached huggingface models and run wandb in offline mode.

Build a web demo

Build a web interface (maybe using tools like Gradio or Streamlit) to demo the capabilities of Flamingo checkpoints. Ideally this demo should help us explore the VQA and captioning abilities of the model.

Add validation/test modes for evaluation

Currently, we only support validation evaluations, which means we use ~5k samples from the training/validation sets to evaluate model performance. In the future, we should enable validation only for COCO, OKVQA, and VQAv2, as those are the datasets used to validate performance in the Flamingo paper. Moreover, we should add test evaluations for COCO, OKVQA, and VQAv2 and ensure that evaluations for all other datasets are in test mode.

Fix Imagenet evaluation

There is some rough code for this, but it is not well tested and currently streams samples from huggingface, which is very slow. I think all the code should be scrapped and rewritten from scratch.

Related to #30.

Source code license? Separate from Llama license?

I understand that the pre-trained weights are licensed under the Llama license, but what is the source code licensed as? It sounded like you can choose to train a non-Llama backbone. Does the source code under that use case still fall under the Llama license?

Thanks in advance!

gradient checkpointing

Good to support so we can start running OPT-1.3B on A40s --- check out open_clip for reference.

Training compute requirements

Hi, great work and really happy to see an open-sourced Flamingo! I was wondering what the compute requirements are, specifically:

  • What setup did you train your 9B model on and how long did it take with that setup?
  • What ballpark would you estimate training bigger models to need? (Any timeline for bigger ones in the works, too? 👀)

Thanks!

ImageNet eval

Evaluation on ImageNet matching Flamingo setup (log likelihood-based eval; RICE + random image selection).

Why does the following code, written as the README says, not work?

Hi, I tried to use a LLaMA model as your README says; the following is the code:

from open_flamingo import create_model_and_transforms
##llm_path = "../llama-7b-hf"
llm_path = "decapoda-research/llama-7b-hf"

'''
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.\
create_model_and_transforms('ViT-L-14', pretrained='openai')
'''

from transformers import LlamaForCausalLM, LlamaTokenizer
from open_flamingo.src.factory import *

def create_model_and_transforms_llama(
    clip_vision_encoder_path: str,
    clip_vision_encoder_pretrained: str,
    lang_encoder_path: str,
    tokenizer_path: str,
    cross_attn_every_n_layers: int = 1,
    use_local_files: bool = False,
    decoder_layers_attr_name: str = None,
    **flamingo_kwargs,
):
    """
    Initialize a Flamingo model from a pretrained vision encoder and language encoder.
    Appends special tokens to the tokenizer and freezes backbones.

    Args:
        clip_vision_encoder_path (str): path to pretrained clip model (e.g. "ViT-B-32")
        clip_vision_encoder_pretrained (str): name of pretraining dataset for clip model (e.g. "laion2b_s32b_b79k")
        lang_encoder_path (str): path to pretrained language encoder
        tokenizer_path (str): path to pretrained tokenizer
        cross_attn_every_n_layers (int, optional): determines how often to add a cross-attention layer. Defaults to 1.
        use_local_files (bool, optional): whether to use local files. Defaults to False.
        decoder_layers_attr_name (str, optional): name of the decoder layers attribute. Defaults to None.
    Returns:
        Flamingo: Flamingo model from pretrained vision and language encoders
        Image processor: Pipeline to preprocess input images
        Tokenizer: A tokenizer for the language model
    """
    vision_encoder, _, image_processor = open_clip.create_model_and_transforms(
        clip_vision_encoder_path, pretrained=clip_vision_encoder_pretrained
    )
    # set the vision encoder to output the visual features
    vision_encoder.visual.output_tokens = True

    text_tokenizer = LlamaTokenizer.from_pretrained(
        tokenizer_path, local_files_only=use_local_files
    )
    '''
    text_tokenizer = AutoTokenizer.from_pretrained(
        tokenizer_path, local_files_only=use_local_files
    )
    '''
    # add Flamingo special tokens to the tokenizer
    text_tokenizer.add_special_tokens(
        {"additional_special_tokens": ["<|endofchunk|>", "<image>"]}
    )
    if text_tokenizer.pad_token is None:
        # Issue: GPT models don't have a pad token, which we use to
        # modify labels for the loss.
        text_tokenizer.add_special_tokens({"pad_token": "<PAD>"})

    '''
    lang_encoder = AutoModelForCausalLM.from_pretrained(
        lang_encoder_path, local_files_only=use_local_files
    )
    '''
    lang_encoder = LlamaForCausalLM.from_pretrained(
        lang_encoder_path, local_files_only=use_local_files
    )
    extend_instance(lang_encoder, FlamingoLMMixin)

    if decoder_layers_attr_name is None:
        decoder_layers_attr_name = _infer_decoder_layers_attr_name(lang_encoder)
    lang_encoder.set_decoder_layers_attr_name(decoder_layers_attr_name)
    lang_encoder.resize_token_embeddings(len(text_tokenizer))

    model = Flamingo(
        vision_encoder,
        lang_encoder,
        text_tokenizer.encode("<|endofchunk|>")[-1],
        text_tokenizer.encode("<image>")[-1],
        vis_dim=open_clip.get_model_config(clip_vision_encoder_path)["vision_cfg"][
            "width"
        ],
        cross_attn_every_n_layers=cross_attn_every_n_layers,
        **flamingo_kwargs,
    )

    # Freeze all parameters
    model.requires_grad_(False)
    assert sum(p.numel() for p in model.parameters() if p.requires_grad) == 0

    # Unfreeze perceiver, gated_cross_attn_layers, and LM input embeddings
    model.perceiver.requires_grad_(True)
    model.lang_encoder.gated_cross_attn_layers.requires_grad_(True)
    model.lang_encoder.get_input_embeddings().requires_grad_(True)

    print(
        f"Flamingo model initialized with {sum(p.numel() for p in model.parameters() if p.requires_grad)} trainable parameters"
    )

    return model, image_processor, text_tokenizer

def _infer_decoder_layers_attr_name(model):
    for k in __KNOWN_DECODER_LAYERS_ATTR_NAMES:
        if k.lower() in model.__class__.__name__.lower():
            return __KNOWN_DECODER_LAYERS_ATTR_NAMES[k]

    raise ValueError(
        f"We require the attribute name for the nn.ModuleList in the decoder storing the transformer block layers. Please supply this string manually."
    )

__KNOWN_DECODER_LAYERS_ATTR_NAMES = {
    "opt": "model.decoder.layers",
    "gptneo": "transformer.h",
    "gptj": "transformer.h",
    "gpt-j": "transformer.h",
    "pythia": "gpt_neox.layers",
    "llama": "model.layers",
}

model, image_processor, tokenizer = create_model_and_transforms_llama(
    clip_vision_encoder_path="ViT-L-14",
    clip_vision_encoder_pretrained="openai",
    lang_encoder_path = llm_path,
    tokenizer_path =llm_path,
    cross_attn_every_n_layers=4
)

# grab model checkpoint from huggingface hub
#### login
from huggingface_hub import hf_hub_download
import torch

checkpoint_path = hf_hub_download("openflamingo/OpenFlamingo-9B", "checkpoint.pt")
model.load_state_dict(torch.load(checkpoint_path), strict=False)

model = model.half().cuda()

from PIL import Image
import requests

"""
Step 1: Load images
"""
demo_image_one = Image.open(
    requests.get(
        "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
    ).raw
)

demo_image_two = Image.open(
    requests.get(
        "http://images.cocodataset.org/test-stuff2017/000000028137.jpg",
        stream=True
    ).raw
)

query_image = Image.open(
    requests.get(
        "http://images.cocodataset.org/test-stuff2017/000000028352.jpg",
        stream=True
    ).raw
)


"""
Step 2: Preprocessing images
Details: For OpenFlamingo, we expect the image to be a torch tensor of shape
 batch_size x num_media x num_frames x channels x height x width.
 In this case batch_size = 1, num_media = 3, num_frames = 1
 (this will always be one, except for video, which we don't support yet),
 channels = 3, height = 224, width = 224.
"""
vision_x = [image_processor(demo_image_one).unsqueeze(0), image_processor(demo_image_two).unsqueeze(0), image_processor(query_image).unsqueeze(0)]
vision_x = torch.cat(vision_x, dim=0)
vision_x = vision_x.unsqueeze(1).unsqueeze(0)

"""
Step 3: Preprocessing text
Details: In the text we expect an <image> special token to indicate where an image is.
 We also expect an <|endofchunk|> special token to indicate the end of the text
 portion associated with an image.
"""
tokenizer.padding_side = "left" # For generation padding tokens should be on the left
lang_x = tokenizer(
    ["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
    return_tensors="pt",
)


"""
Step 4: Generate text
"""
generated_text = model.generate(
    vision_x=vision_x.half().cuda(),
    lang_x=lang_x["input_ids"].cuda(),
    attention_mask=lang_x["attention_mask"].cuda(),
    max_new_tokens=20,
    num_beams=3,
)

print("Generated text: ", tokenizer.decode(generated_text[0]))
'''
Generated text:    <image>  An image of two cats. <|endofchunk|> <image>  An image of a bathroom sink. <|endofchunk|>
<image>  An image of a Thanksgiving table. An image of a Christmas tree. An image of a Christmas tree.
'''

The third image is a picture of dishes, so why is the prediction "An image of a Christmas tree."? Can you help me?

Expand the evaluation suite

Add more test sets to the evaluation suite. Currently, we only support COCO, OKVQA, and VQAv2. We should add the rest of the test sets from the Flamingo paper.
