
gfocal's Introduction

Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection

See more comments in 大白话 Generalized Focal Loss (a plain-language walkthrough of Generalized Focal Loss, on Zhihu).

[2020.11] GFocal has been adopted in NanoDet, a super-efficient object detector for mobile devices, achieving the same performance while running 2x faster than YOLOv4-Tiny! More details are in the Chinese article "YOLO之外的另一选择,手机端97FPS的Anchor-Free目标检测模型NanoDet现已开源" ("Another choice besides YOLO: NanoDet, an anchor-free object detection model running at 97 FPS on mobile, is now open source").

[2020.10] Good news! GFocal has been accepted at NeurIPS 2020, and GFocal V2 is on the way.

[2020.9] The DeepBlueAI team, winner (1st place) of the GigaVision challenge (object detection and tracking) at the ECCV 2020 workshop, adopted GFocal in their solution.

[2020.7] GFocal is officially included in MMDetection V2; many thanks to @ZwwWayne and @hellock for helping migrate the code.

Introduction

One-stage detectors basically formulate object detection as dense classification and localization (i.e., bounding box regression). The classification is usually optimized by Focal Loss, and the box location is commonly learned under a Dirac delta distribution. A recent trend for one-stage detectors is to introduce an individual prediction branch to estimate the quality of localization, where the predicted quality facilitates the classification to improve detection performance. This paper delves into the representations of the above three fundamental elements: quality estimation, classification, and localization. Two problems are discovered in existing practices: (1) the inconsistent usage of the quality estimation and classification between training and inference (i.e., separately trained but compositely used at test time), and (2) the inflexible Dirac delta distribution for localization when there is ambiguity and uncertainty, which is often the case in complex scenes. To address these problems, we design new representations for these elements. Specifically, we merge the quality estimation into the class prediction vector to form a joint representation of localization quality and classification, and use a vector to represent an arbitrary distribution of box locations. The improved representations eliminate the inconsistency risk and accurately depict the flexible distribution in real data, but contain continuous labels, which are beyond the scope of Focal Loss. We then propose Generalized Focal Loss (GFL), which generalizes Focal Loss from its discrete form to the continuous version for successful optimization. On COCO test-dev, GFL achieves 45.0% AP using a ResNet-101 backbone, surpassing state-of-the-art SAPD (43.5%) and ATSS (43.6%) with higher or comparable inference speed, under the same backbone and training settings. Notably, our best model achieves a single-model single-scale AP of 48.2%, at 10 FPS on a single 2080Ti GPU.
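To make the continuous generalization concrete, here is a minimal sketch of the Quality Focal Loss (QFL) component, following the formula in the paper, QFL(sigma) = -|y - sigma|^beta * ((1 - y) * log(1 - sigma) + y * log(sigma)). The function name and signature below are illustrative, not this repository's actual implementation:

```python
import torch
import torch.nn.functional as F

def quality_focal_loss(pred_logits, target_score, beta=2.0):
    """Sketch of QFL.

    pred_logits:  raw classification logits, shape (N,)
    target_score: continuous IoU-aware targets in [0, 1], shape (N,)
                  (0 for negatives, predicted-box/GT IoU for positives)
    """
    sigma = pred_logits.sigmoid()
    # Binary cross-entropy with a *soft* (continuous) label y, which is
    # what generalizes Focal Loss beyond discrete {0, 1} labels.
    bce = F.binary_cross_entropy_with_logits(
        pred_logits, target_score, reduction='none')
    # Modulating factor |y - sigma|^beta down-weights easy examples.
    modulating = (target_score - sigma).abs().pow(beta)
    return (modulating * bce).sum()

logits = torch.randn(8)
targets = torch.rand(8)  # continuous IoU-style labels in [0, 1]
print(quality_focal_loss(logits, targets))
```

Note that when y is restricted to {0, 1}, the modulating factor reduces to the familiar (1 - sigma)^beta / sigma^beta of standard Focal Loss (without alpha), which is the sense in which GFL generalizes it.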

For details, see the GFocal paper. The speed-accuracy trade-off is summarized in the Models table below.

Installation

Please refer to INSTALL.md for installation and dataset preparation.

Get Started

Please see GETTING_STARTED.md for the basic usage of MMDetection.

Train

# Assume you are in the root directory of this project,
# have activated your virtual environment if needed,
# and have the COCO dataset in 'data/coco/'.

./tools/dist_train.sh configs/gfl_r50_1x.py 8 --validate

Inference

./tools/dist_test.sh configs/gfl_r50_1x.py work_dirs/gfl_r50_1x/epoch_12.pth 8 --eval bbox

Speed Test (FPS)

CUDA_VISIBLE_DEVICES=0 python3 ./tools/benchmark.py configs/gfl_r50_1x.py work_dirs/gfl_r50_1x/epoch_12.pth

Models

For your convenience, we provide the following trained models. All models are trained with 16 images in a mini-batch with 8 GPUs.

Model                        | Multi-scale training | AP (minival) | AP (test-dev) | FPS  | Link
GFL_R_50_FPN_1x              | No                   | 40.2         | 40.3          | 19.4 | Google
GFL_R_50_FPN_2x              | Yes                  | 42.8         | 43.1          | 19.4 | Google
GFL_R_101_FPN_2x             | Yes                  | 44.9         | 45.0          | 14.6 | Google
GFL_dcnv2_R_101_FPN_2x       | Yes                  | 47.2         | 47.3          | 12.7 | Google
GFL_X_101_32x4d_FPN_2x       | Yes                  | 45.7         | 46.0          | 12.2 | Google
GFL_dcnv2_X_101_32x4d_FPN_2x | Yes                  | 48.3         | 48.2          | 10.0 | Google

[1] 1x and 2x mean the model is trained for 90K and 180K iterations, respectively.
[2] All results are obtained with a single model and without any test-time data augmentation such as multi-scale testing or flipping.
[3] dcnv2 denotes Deformable ConvNets v2. Note that for ResNe(X)t-based models, we apply deformable convolutions from stage c3 to c5 in the backbone.
[4] Refer to the config files in configs/ for more details.
[5] FPS is tested on a single GeForce RTX 2080Ti GPU with a batch size of 1.

Acknowledgement

Thanks to the MMDetection team for the wonderful open-source project!

Citation

If you find GFL useful in your research, please consider citing this project.

@inproceedings{li2020generalized,
  title={Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection},
  author={Li, Xiang and Wang, Wenhai and Wu, Lijun and Chen, Shuo and Hu, Xiaolin and Li, Jun and Tang, Jinhui and Yang, Jian},
  booktitle={NeurIPS},
  year={2020}
}
@article{li2020generalizedv2,
  title={Generalized Focal Loss V2: Learning Reliable Localization Quality Estimation for Dense Object Detection},
  author={Li, Xiang and Wang, Wenhai and Hu, Xiaolin and Li, Jun and Tang, Jinhui and Yang, Jian},
  journal={arXiv preprint},
  year={2020}
}

gfocal's People

Contributors

implus


gfocal's Issues

How can I use the model from C++?

If I have a C++ codebase and want to use GFL (mmdetection), how can I call it from C++?
Via PyTorch, or some other way?

How to quickly use GFL

I am currently using focal loss. What changes do I need to make so that I can directly use your proposed GFL in my experiments?
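Switching from plain focal loss to GFL generally means adopting the GFL head together with its paired losses (QFL for classification, DFL plus an IoU-based loss for regression), not just swapping the loss term. Below is a hedged sketch of what the bbox_head fragment typically looks like in an MMDetection-style config; the key names follow MMDetection's GFL implementation, but check configs/gfl_r50_1x.py in this repo for the authoritative values:

```python
# Hedged sketch of the relevant config fragment (not verbatim from this repo).
bbox_head = dict(
    type='GFLHead',
    num_classes=80,
    in_channels=256,
    # QFL replaces the usual FocalLoss on the classification branch
    # and expects IoU-aware soft targets instead of one-hot labels:
    loss_cls=dict(
        type='QualityFocalLoss',
        use_sigmoid=True,
        beta=2.0,
        loss_weight=1.0),
    # DFL supervises the discretized distribution over box offsets:
    loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25),
    loss_bbox=dict(type='GIoULoss', loss_weight=2.0),
    reg_max=16)
```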

IndexError: list index out of range at the end of the first epoch

Dear team,
At the end of the first epoch I get:

2020-07-06 18:17:56,734 - mmdet - INFO - Epoch [1][300/359]     lr: 0.00732, eta: 1:58:24, time: 0.539, data_time: 0.006, memory: 4519, loss_qfl: 0.1841, loss_bbox: 0.4076, loss_dfl: 0.2519, loss: 0.8436
2020-07-06 18:18:23,969 - mmdet - INFO - Epoch [1][350/359]     lr: 0.00799, eta: 1:57:23, time: 0.545, data_time: 0.006, memory: 4519, loss_qfl: 0.1798, loss_bbox: 0.4058, loss_dfl: 0.2517, loss: 0.8373
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 

508/506, 29.7 task/s, elapsed: 17s, ETA:     0s

Traceback (most recent call last):
  File "./tools/train.py", line 151, in <module>
    main()
  File "./tools/train.py", line 147, in main
    meta=meta)
  File "/root/sharedfolder/production/GFocal/mmdet/apis/train.py", line 165, in train_detector
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 371, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 285, in train
    self.call_hook('after_train_epoch')
  File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 238, in call_hook
    getattr(hook, fn_name)(self)
  File "/root/sharedfolder/production/GFocal/mmdet/core/evaluation/eval_hooks.py", line 76, in after_train_epoch
    self.evaluate(runner, results)
  File "/root/sharedfolder/production/GFocal/mmdet/core/evaluation/eval_hooks.py", line 33, in evaluate
    results, logger=runner.logger, **self.eval_kwargs)
  File "/root/sharedfolder/production/GFocal/mmdet/datasets/coco.py", line 326, in evaluate
    result_files, tmp_dir = self.format_results(results, jsonfile_prefix)
  File "/root/sharedfolder/production/GFocal/mmdet/datasets/coco.py", line 287, in format_results
    result_files = self.results2json(results, jsonfile_prefix)
  File "/root/sharedfolder/production/GFocal/mmdet/datasets/coco.py", line 217, in results2json
    json_results = self._det2json(results)
  File "/root/sharedfolder/production/GFocal/mmdet/datasets/coco.py", line 155, in _det2json
    data['category_id'] = self.cat_ids[label]
IndexError: list index out of range
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 263, in <module>
    main()
  File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 259, in main

This happens only when I pass the --validate option

./tools/dist_train.sh configs/gfl_x101_ms2x.py 4 --validate

Any advice about what may be wrong here?

Thanks,

Discussion: how does GFL compare with FL in terms of false positive rate?

As the title says: GFL may bring higher AP, but what about the false positive rate?
I trained NanoDet on my own dataset, and the AP is indeed much higher than YOLOv3's, but the false positive rate is also high. I suspect this is related to how GFL represents confidence. Could you clarify? Thanks.

Welcome update to OpenMMLab 2.0

I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab 2.0 repo branches:

Repo             | OpenMMLab 1.0 branch | OpenMMLab 2.0 branch
MMEngine         | -                    | 0.x
MMCV             | 1.x                  | 2.x
MMDetection      | 0.x, 1.x, 2.x        | 3.x
MMAction2        | 0.x                  | 1.x
MMClassification | 0.x                  | 1.x
MMSegmentation   | 0.x                  | 1.x
MMDetection3D    | 0.x                  | 1.x
MMEditing        | 0.x                  | 1.x
MMPose           | 0.x                  | 1.x
MMDeploy         | 0.x                  | 1.x
MMTracking       | 0.x                  | 1.x
MMOCR            | 0.x                  | 1.x
MMRazor          | 0.x                  | 1.x
MMSelfSup        | 0.x                  | 1.x
MMRotate         | 1.x                  | 1.x
MMYOLO           | -                    | 0.x

Attention: please create a new virtual environment for OpenMMLab 2.0.

Support for Incremental Object Detection

Hello, how are you?
Thanks for contributing this project. I have a question: does this project support class-incremental training, to save training time while avoiding catastrophic forgetting?
Suppose I've trained a model on a dataset with K classes for 7 days. If a new class is added to this dataset, do we have to train a model on the expanded dataset (K+1 classes) from scratch?
If so, that is very expensive, especially for object detection in retail stores, where new classes of goods are added frequently.
Can we instead train a new model in a short time, starting from the original weights, on the expanded dataset? I think this is a very important feature.
Here are some recommended papers for this project:
https://arxiv.org/pdf/1708.06977.pdf
https://arxiv.org/pdf/2003.04668.pdf
https://arxiv.org/pdf/2003.06957.pdf
https://arxiv.org/pdf/2003.07304.pdf
Please let me know if you are willing to implement this.
Thanks

FP16 Training

When I try to train with fp16 precision, I get the following error:

Traceback (most recent call last):
  File "tools/train.py", line 153, in <module>
    main()
  File "tools/train.py", line 149, in main
    meta=meta)
  File "/home/mmdetection/mmdet/apis/train.py", line 128, in train_detector
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/home/schlmi/miniconda2/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 122, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/schlmi/miniconda2/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 32, in train
    **kwargs)
  File "/home/schlmi/miniconda2/envs/open-mmlab/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 31, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "/home/schlmi/mmdetection/mmdet/models/detectors/base.py", line 236, in train_step
    losses = self(**data)
  File "/home/schlmi/miniconda2/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/schlmi/mmdetection/mmdet/core/fp16/decorators.py", line 77, in new_func
    output = old_func(*new_args, **new_kwargs)
  File "/home/schlmi/mmdetection/mmdet/models/detectors/base.py", line 171, in forward
    return self.forward_train(img, img_metas, **kwargs)
  File "/home/schlmi/research/dudoof/mmdetection/mmdet/models/detectors/single_stage.py", line 93, in forward_train
    gt_labels, gt_bboxes_ignore)
  File "/home/schlmi/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 54, in forward_train
    losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
  File "/home/schlmi/mmdetection/mmdet/core/fp16/decorators.py", line 156, in new_func
    output = old_func(*new_args, **new_kwargs)
  File "/home/schlmi/mmdetection/mmdet/models/dense_heads/gfl_head.py", line 369, in loss
    num_total_samples=num_total_samples)
  File "/home/schlmi/mmdetection/mmdet/core/utils/misc.py", line 54, in multi_apply
    return tuple(map(list, zip(*map_results)))
  File "/home/schlmi/mmdetection/mmdet/models/dense_heads/gfl_head.py", line 266, in loss_single
    pos_bbox_pred_corners = self.integral(pos_bbox_pred)
  File "/home/schlmi/miniconda2/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/schlmi/mmdetection/mmdet/models/dense_heads/gfl_head.py", line 55, in forward
    x = F.linear(x, self.project).reshape(-1, 4)
  File "/home/schlmi/miniconda2/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/functional.py", line 1372, in linear
    output = input.matmul(weight.t())
RuntimeError: Expected object of scalar type Float but got scalar type Half for argument #2 'vec' in call to _th_mv

Is there a way to fix this?
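The error comes from F.linear multiplying a half-precision input by a float32 `project` buffer. One plausible workaround (a sketch, assuming the Integral module in gfl_head.py matches the traceback above) is to cast the buffer to the input's dtype before the matmul:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Integral(nn.Module):
    """Sketch of the Integral module with an fp16-safe projection."""

    def __init__(self, reg_max=16):
        super().__init__()
        self.reg_max = reg_max
        # Fixed projection vector [0, 1, ..., reg_max] used to take the
        # expectation of the discretized offset distribution.
        self.register_buffer('project',
                             torch.linspace(0, reg_max, reg_max + 1))

    def forward(self, x):
        x = F.softmax(x.reshape(-1, self.reg_max + 1), dim=1)
        # type_as casts the float32 buffer to x's dtype, so the matmul
        # also works when x is half precision under fp16 training.
        return F.linear(x, self.project.type_as(x)).reshape(-1, 4)
```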

A question about computing iou_target

norm_anchor_center = self.anchor_center(pos_anchors) / stride

pos_bbox_pred_distance = self.distribution_project(pos_bbox_pred)

pos_decode_bbox_pred = distance2bbox(norm_anchor_center,
                                     pos_bbox_pred_distance)
pos_decode_bbox_targets = pos_bbox_targets / stride

target_ltrb = bbox2distance(norm_anchor_center,
                            pos_decode_bbox_targets,
                            self.reg_max).reshape(-1)
score[pos_inds] = self.iou_target(pos_decode_bbox_pred.detach(),
                                  pos_decode_bbox_targets)

When computing the IoU, why are norm_anchor_center and pos_bbox_targets divided by stride? Isn't IoU usually computed on the original image?
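As a side note, IoU is invariant to scaling both boxes by the same factor, so computing it in stride-normalized coordinates gives the same value as computing it on the original image; the division mainly keeps the regression targets within the [0, reg_max] range expected by the discretized box representation. A quick standalone check with hypothetical box coordinates:

```python
def iou(b1, b2):
    # Boxes as (x1, y1, x2, y2).
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(b1) + area(b2) - inter)

pred, gt, stride = (10, 10, 50, 50), (20, 20, 60, 60), 8
scaled = lambda b: tuple(v / stride for v in b)
print(iou(pred, gt))                  # 0.3913...
print(iou(scaled(pred), scaled(gt)))  # identical value
```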

Unstable training


I set the number of GPUs to 1, and then the predicted IoU quality values all become 0, so training cannot proceed normally. Can this be fixed without using more GPUs?
Thanks!

Question about QFL in FCOS

Hi, I am really interested in the GFL work. I would like to know your implementation details for GFL in FCOS. I have tested QFL in FCOS, but I couldn't reproduce the results in the paper. FCOS_R_50_1x with its center sampling, GIoU loss, and normalization improvements gets 39.2 mAP in Detectron2, the same as ATSS with centerness. But when I remove the centerness branch and multiply the centerness ground truth with the class score as a soft label, using QFL, the result is 38.59 mAP. Could you help me find the issue?


Thanks!

About the alpha in focal loss.

Thanks for your wonderful work. I noticed that you do not use alpha (which is 0.25/0.75 for positive/negative samples in focal loss); have you tried it in QFL?

Difference from the APs of FCOS and ATSS from mmdetection

Hi,

I'm very impressed by your excellent work.

When I read and compared your experiments on FCOS and ATSS, I was confused by the values you report in Table 2 (a):

                    | FCOS | ATSS
mmdet               | 38.6 | 39.4
yours (Dirac delta) | 38.5 | 39.2

Could you explain what I missed?

Thanks

Runtime Error

I am training a model on custom data using the config file gfl_dcn_r101_ms2x.py. After 4 epochs, I get the following error:

File "tools/train.py", line 151, in <module> main() File "tools/train.py", line 147, in main meta=meta) File "/usr/local/lib/python3.6/dist-packages/mmdet-1.1.0+32863f2-py3.6-linux-x86_64.egg/mmdet/apis/train.py", line 165, in train_detector runner.run(data_loaders, cfg.workflow, cfg.total_epochs) File "/usr/local/lib/python3.6/dist-packages/mmcv/runner/runner.py", line 359, in run epoch_runner(data_loaders[i], **kwargs) File "/usr/local/lib/python3.6/dist-packages/mmcv/runner/runner.py", line 263, in train self.model, data_batch, train_mode=True, **kwargs) File "/usr/local/lib/python3.6/dist-packages/mmdet-1.1.0+32863f2-py3.6-linux-x86_64.egg/mmdet/apis/train.py", line 75, in batch_processor losses = model(**data) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 153, in forward return self.module(*inputs[0], **kwargs[0]) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/mmdet-1.1.0+32863f2-py3.6-linux-x86_64.egg/mmdet/core/fp16/decorators.py", line 49, in new_func return old_func(*args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/mmdet-1.1.0+32863f2-py3.6-linux-x86_64.egg/mmdet/models/detectors/base.py", line 147, in forward return self.forward_train(img, img_metas, **kwargs) File "/usr/local/lib/python3.6/dist-packages/mmdet-1.1.0+32863f2-py3.6-linux-x86_64.egg/mmdet/models/detectors/single_stage.py", line 71, in forward_train *loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) File "/usr/local/lib/python3.6/dist-packages/mmdet-1.1.0+32863f2-py3.6-linux-x86_64.egg/mmdet/core/fp16/decorators.py", line 127, in new_func return old_func(*args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/mmdet-1.1.0+32863f2-py3.6-linux-x86_64.egg/mmdet/models/anchor_heads/gfl_head.py", line 254, in loss cfg=cfg) File "/usr/local/lib/python3.6/dist-packages/mmdet-1.1.0+32863f2-py3.6-linux-x86_64.egg/mmdet/core/utils/misc.py", line 24, in multi_apply return tuple(map(list, zip(*map_results))) File "/usr/local/lib/python3.6/dist-packages/mmdet-1.1.0+32863f2-py3.6-linux-x86_64.egg/mmdet/models/anchor_heads/gfl_head.py", line 186, in loss_single avg_factor=1.0) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/mmdet-1.1.0+32863f2-py3.6-linux-x86_64.egg/mmdet/models/losses/iou_loss.py", line 200, in forward return (pred * weight).sum() # 0 RuntimeError: The size of tensor a (4) must match the size of tensor b (529) at non-singleton dimension 1
How to resolve this?

Where is the QFL code?

Hello, author:
QFL is great research. Could you point out exactly where QFL is implemented in the code? Thanks!

How to draw the distribution, like Fig.3

I want to check the improved representation of a model trained with GFL, so I want to draw the distribution of the regression output, like Fig. 3.

What should I do?

By the way, I think the plot is just a visualization to help readers understand better. Am I right?
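One way to do this is to grab the raw regression logits for a location of interest, softmax each edge's bins, and plot them. The sketch below assumes you can extract such logits (shape 4 x (reg_max + 1)) from the GFL head's forward pass; the random tensor is only a stand-in for a real output:

```python
import torch
import matplotlib.pyplot as plt

reg_max = 16
# Placeholder: replace with the regression output of a real forward pass
# for one spatial location (four edges, each a discrete distribution).
reg_logits = torch.randn(4 * (reg_max + 1))

dists = torch.softmax(reg_logits.reshape(4, reg_max + 1), dim=1)
for name, dist in zip(('left', 'top', 'right', 'bottom'), dists):
    plt.plot(range(reg_max + 1), dist.numpy(), label=name)
plt.xlabel('discretized offset (in units of stride)')
plt.ylabel('probability')
plt.legend()
plt.show()
```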

How to get the ground truth of the classification-IoU score?

It's easy to get the classification-IoU score at inference time, but how can I get its ground truth during training? When generating the ground truth, I don't have a model to produce predicted bboxes; I only have the ground-truth boxes, so I cannot compute the IoU to use as the ground truth of the classification-IoU score.
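For context: the classification-IoU target is not a precomputed dataset label. It is recomputed at every training iteration as the IoU between the current predicted box and its assigned ground-truth box, so no model is needed at data-preparation time. A minimal sketch (elementwise_iou and qfl_targets are illustrative helpers, not this repo's API):

```python
import torch

def elementwise_iou(a, b):
    # a, b: (N, 4) boxes as (x1, y1, x2, y2); IoU of matched pairs.
    lt = torch.max(a[:, :2], b[:, :2])
    rb = torch.min(a[:, 2:], b[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area = lambda x: (x[:, 2] - x[:, 0]) * (x[:, 3] - x[:, 1])
    return inter / (area(a) + area(b) - inter)

def qfl_targets(pred_boxes, gt_boxes, labels, num_classes):
    # Soft label for each positive = IoU(current prediction, assigned GT),
    # recomputed every iteration as the predictions improve.
    score = torch.zeros(len(labels), num_classes)
    score[torch.arange(len(labels)), labels] = elementwise_iou(
        pred_boxes.detach(), gt_boxes)
    return score
```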

Questions about QFL

Hi, thanks a lot for your great work, and I am sorry to bother you, but there is a question that nags me a lot.
As mentioned in your paper, supervision of the localization quality estimation is currently assigned to positive samples only, which is unreliable, since negatives may get chances to produce uncontrollably high quality predictions.
Merging the quality prediction into the classification score is fantastic. But here is the thing: have you tried keeping the quality branch and training it with the inclusion of negative samples under the proposed QFL?
I've tried this in FCOS, and it turned out that the centerness predictions for positive samples on minival were pretty bad. I also took hyperparameters such as the initial bias prior of the centerness branch into consideration, but the results were always bad.
Hope to hear your insights.
Thanks again.
@implus

test.py

When I set the task to "test", I do not get the test results. Why?

Some questions about the implementation

Hi,
Thank you for sharing the great work.
I have a few questions, listed below.

  1. In Table 1(a) of the paper, you replaced the centerness branch with an IoU branch in FCOS and got some quality gain. I'm curious about the details of this part. I did some experiments myself and found that training does not work: in the original FCOS, the regression loss is weighted by the centerness target, and if I simply replace the centerness target with the IoU target, the regression loss and IoU loss both become zero, so training fails. It would be very nice if you could share some code or details about this experiment.
  2. In gfl_head.py, loss_dfl and loss_bbox are weighted by the predicted score, which differs from FCOS and ATSS. Is there a reason for this?

Thank you in advance.

Cannot converge in CenterNet

Hi! I have done an experiment on CenterNet, changing the original penalty-reduced focal loss to QFL, but training keeps oscillating and does not converge. I have checked my code and cannot find any obvious bugs. Do you have any suggestions? Thanks a lot.

About \Delta in Distribution Focal Loss

The paper says \Delta is the interval between y_i and y_{i+1}.

In gfocal_loss.py:

dis_left = label.long()
dis_right = dis_left + 1

I take the implicit 1 here to be the interval, because F.cross_entropy needs an integer index as the target label.
I noticed that \Delta equals 0.5 in part of your ablation study (Table 2 (c)).
So what is the setting of your CE loss when \Delta equals 0.5?
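For reference, here is a sketch of DFL under the unit-interval discretization the snippet above implies, following the paper's formula DFL(S_i, S_{i+1}) = -((y_{i+1} - y) log S_i + (y - y_i) log S_{i+1}). With a general \Delta, one would presumably first divide the target by \Delta so the neighboring-bin weights stay in [0, 1]. The function name and shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def distribution_focal_loss(pred_logits, label):
    """Sketch of DFL for the unit-interval (Delta = 1) case.

    pred_logits: (N, reg_max + 1) per-edge distribution logits
    label:       (N,) continuous targets, assumed in [0, reg_max)
    """
    dis_left = label.long()                    # y_i     = floor(y)
    dis_right = dis_left + 1                   # y_{i+1} = y_i + 1 (Delta = 1)
    weight_left = dis_right.float() - label    # (y_{i+1} - y) / Delta
    weight_right = label - dis_left.float()    # (y - y_i) / Delta
    # Cross-entropy against the two neighboring bins, linearly weighted
    # so the distribution's expectation is pushed toward the target y.
    return (F.cross_entropy(pred_logits, dis_left, reduction='none') * weight_left
            + F.cross_entropy(pred_logits, dis_right, reduction='none') * weight_right)
```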
