
mmrazor's Introduction

 

📘Documentation | 🛠️Installation | 👀Model Zoo | 🤔Reporting Issues

⭐ MMRazor for Large Models is Available Now! Please refer to MMRazorLarge

Introduction

MMRazor is a model compression toolkit for model slimming and AutoML. It covers four mainstream technologies:

  • Neural Architecture Search (NAS)
  • Pruning
  • Knowledge Distillation (KD)
  • Quantization

It is a part of the OpenMMLab project.

Major features:

  • Compatibility

    MMRazor can be easily applied to various projects in OpenMMLab, thanks to OpenMMLab's shared architecture design and the decoupling of slimming algorithms from vision tasks.

  • Flexibility

    Different algorithms, e.g., NAS, pruning and KD, can be combined in a plug-and-play manner to build a more powerful system.

  • Convenience

    With a modular design, developers can implement new model compression algorithms with only a few lines of code, or even by simply modifying config files (see the sketch below).
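
For instance, a distillation run is described entirely by a config dict. The sketch below is illustrative only (module paths and loss settings are placeholders), following the same 0.x-era API that appears in the issues later on this page:

algorithm = dict(
    type='GeneralDistill',
    architecture=dict(type='MMClsArchitecture', model=student),
    distiller=dict(
        type='SingleTeacherDistiller',
        teacher=teacher,
        teacher_trainable=False,
        components=[
            dict(
                # 'head.fc' is a placeholder module name for a classifier.
                student_module='head.fc',
                teacher_module='head.fc',
                losses=[
                    dict(
                        type='KLDivergence',
                        name='loss_kd',
                        tau=1,
                        loss_weight=1)
                ])
        ]))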

For details about MMRazor's design and implementation, please refer to the tutorials.

Latest Updates

The default branch is now main, and the code on it has been upgraded to v1.0.0. The old master branch code now lives on the 0.x branch.

MMRazor v1.0.0 was released on 2023-04-24. Major updates since v1.0.0rc2 include:

  1. Released MMRazor quantization.
  2. Added a new pruning algorithm named GroupFisher.
  3. Supported distilling RTMDet with MMRazor.

For more details about the updates in MMRazor 1.0, please refer to the Changelog!

Benchmark and model zoo

Results and models are available in the model zoo.

Supported algorithms:

  • Neural Architecture Search
  • Pruning
  • Knowledge Distillation
  • Quantization

Installation

MMRazor depends on PyTorch, MMCV and MMEngine.

Please refer to installation.md for detailed instructions.
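
A typical install flow looks like the following; the exact versions and steps may differ, so treat installation.md as authoritative:

pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.0"
git clone https://github.com/open-mmlab/mmrazor.git
cd mmrazor
pip install -v -e .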

Getting Started

Please refer to the user guides for the basic usage of MMRazor, as well as the advanced guides.

Contributing

We appreciate all contributions to improve MMRazor. Please refer to CONTRIBUTING.md for the contributing guidelines.

Acknowledgement

MMRazor is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit for reimplementing existing methods and developing new model compression methods.

Citation

If you find this project useful in your research, please consider citing:

@misc{2021mmrazor,
    title={OpenMMLab Model Compression Toolbox and Benchmark},
    author={MMRazor Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmrazor}},
    year={2021}
}

License

This project is released under the Apache 2.0 license.

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM installs OpenMMLab packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
  • MMYOLO: OpenMMLab YOLO series toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
  • MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
  • MMRazor: OpenMMLab model compression toolbox and benchmark.
  • MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMDeploy: OpenMMLab model deployment framework.

mmrazor's People

Contributors

allentdan, aptsunny, cape-zck, dreamer121121, freakiehuang, gaoyang07, hhaandroid, hit-cwh, hiwyl, humu789, hunto, kitecats, lkjacky, lxtccc, nickyangmin, pppppm, pprp, spynccat, sunnyxiaohu, syswyl, tinytigerpan, tkhe, twmht, vansin, weiyun1025, wilxy, wutongshenqiu, xinxinxinxu, yivona08


mmrazor's Issues

[Bug] Can't use RepVGG with AutoSlim

It's very similar to #79, a bug in the parser, except that here the exception is a maximum-recursion error when calling trace_bn_conv_links (https://github.com/open-mmlab/mmrazor/blob/master/mmrazor/models/pruners/structure_pruning.py#L124).

To reproduce, simply replace the model (https://github.com/open-mmlab/mmrazor/blob/master/configs/pruning/autoslim/autoslim_mbv2_supernet_8xb256_in1k.py) with RepVGG.

I am refactoring the parser to make it more general and easier to debug. Stay tuned.

mmsegmentation dataset

My dataset uses the same format as VOC12, but it contains only three object categories. When I tested, the 19 VOC12 class names still showed up. Where can I modify the category names of the VOC12 dataset? Thanks!
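
In MMSegmentation 0.x this can usually be done from the config by overriding the class names on each dataset split. A sketch, with illustrative names:

# Override the class names reported by the VOC-format dataset.
classes = ('class_a', 'class_b', 'class_c')
data = dict(
    train=dict(classes=classes),
    val=dict(classes=classes),
    test=dict(classes=classes))
# Also set num_classes in the model's decode_head to match.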

Question about MMRazor QAT function

Describe the question

Is there a roadmap for supporting torch QAT? Which mode will mmrazor use: FX graph mode or eager mode quantization?
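
For reference, FX graph mode QAT in plain PyTorch looks roughly like the sketch below (assuming the torch.ao.quantization APIs of recent PyTorch releases; the tiny model is only for illustration):

import torch
from torch.ao.quantization import get_default_qat_qconfig_mapping
from torch.ao.quantization.quantize_fx import convert_fx, prepare_qat_fx

# A toy float model; any symbolically traceable module works.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.BatchNorm2d(8),
    torch.nn.ReLU())
model.train()

qconfig_mapping = get_default_qat_qconfig_mapping('fbgemm')
example_inputs = (torch.randn(1, 3, 32, 32),)
prepared = prepare_qat_fx(model, qconfig_mapping, example_inputs)
# ... run the usual training loop on `prepared` ...
prepared.eval()
quantized = convert_fx(prepared)  # int8 model for inference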

When will MMStereo be released? (binocular stereo matching)

(Originally in Chinese.) We recommend using the English "Feature request" template so that your question can help more people.

Describe the feature

[Fill in here]

Motivation

Please briefly explain why this new feature is needed, e.g.:
Example 1. It is currently inconvenient to do xxx.
Example 2. A recent paper proposed a very helpful xx.

[Fill in here]

Related resources

Are there official or third-party implementations? They would be very useful references.

[Fill in here]

Additional context

Any other information or screenshots related to this feature can go here.
Also, if you would like to help implement this feature and submit a PR, please say so here; that would be very welcome.

[Fill in here]

use autoslim for yolox

I made a config for YOLOX to use AutoSlim, but got an error:

error:

Traceback (most recent call last):
  File "tools/mmdet/train_mmdet.py", line 199, in <module>
    main()
  File "tools/mmdet/train_mmdet.py", line 175, in main
    datasets = [build_dataset(cfg.data.train)]
  File "/home/yangmin/share/openmmlab/mmdetection/mmdet/datasets/builder.py", line 77, in build_dataset
    dataset = MultiImageMixDataset(**cp_cfg)
TypeError: __init__() got an unexpected keyword argument 'ann_file'

config

###########################################
_base_ = [
    '../../_base_/datasets/mmdet/coco_detection.py',
    '../../_base_/schedules/mmdet/schedule_1x.py',
    '../../_base_/mmdet_runtime.py'
]

img_scale = (640, 640)

model = dict(
    type='mmdet.YOLOX',
    input_size=img_scale,
    random_size_range=(15, 25),
    random_size_interval=10,
    backbone=dict(type='CSPDarknet', deepen_factor=0.33, widen_factor=0.5),
    neck=dict(
        type='YOLOXPAFPN',
        in_channels=[128, 256, 512],
        out_channels=128,
        num_csp_blocks=1),
    bbox_head=dict(
        type='YOLOXHead', num_classes=80, in_channels=128, feat_channels=128),
    train_cfg=dict(assigner=dict(type='SimOTAAssigner', center_radius=2.5)),
    # In order to align the source code, the threshold of the val phase is
    # 0.01, and the threshold of the test phase is 0.001.
    test_cfg=dict(score_thr=0.01, nms=dict(type='nms', iou_threshold=0.65)))

data_root = 'data/coco/'
dataset_type = 'CocoDataset'

train_pipeline = [
    dict(type='Mosaic', img_scale=img_scale, pad_val=114.0),
    dict(
        type='RandomAffine',
        scaling_ratio_range=(0.1, 2),
        border=(-img_scale[0] // 2, -img_scale[1] // 2)),
    dict(
        type='MixUp',
        img_scale=img_scale,
        ratio_range=(0.8, 1.6),
        pad_val=114.0),
    dict(type='YOLOXHSVRandomAug'),
    dict(type='RandomFlip', flip_ratio=0.5),
    # According to the official implementation, multi-scale
    # training is not considered here but in the
    # 'mmdet/models/detectors/yolox.py'.
    dict(type='Resize', img_scale=img_scale, keep_ratio=True),
    dict(
        type='Pad',
        pad_to_square=True,
        # If the image is three-channel, the pad value needs
        # to be set separately for each channel.
        pad_val=dict(img=(114.0, 114.0, 114.0))),
    dict(type='FilterAnnotations', min_gt_bbox_wh=(1, 1), keep_empty=False),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]

train_dataset = dict(
    type='MultiImageMixDataset',
    dataset=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_train2017.json',
        img_prefix=data_root + 'train2017/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations', with_bbox=True)
        ],
        filter_empty_gt=False,
    ),
    pipeline=train_pipeline)

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=img_scale,
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Pad',
                pad_to_square=True,
                pad_val=dict(img=(114.0, 114.0, 114.0))),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img'])
        ])
]

data = dict(
    samples_per_gpu=8,
    workers_per_gpu=4,
    persistent_workers=True,
    train=train_dataset,
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline))

optimizer = dict(
    type='SGD',
    lr=0.01,
    momentum=0.9,
    weight_decay=5e-4,
    nesterov=True,
    paramwise_cfg=dict(norm_decay_mult=0., bias_decay_mult=0.))
optimizer_config = dict(grad_clip=None)

max_epochs = 300
num_last_epochs = 15
resume_from = None
interval = 10

lr_config = dict(
    _delete_=True,
    policy='YOLOX',
    warmup='exp',
    by_epoch=False,
    warmup_by_epoch=True,
    warmup_ratio=1,
    warmup_iters=5,  # 5 epoch
    num_last_epochs=num_last_epochs,
    min_lr_ratio=0.05)

runner = dict(type='EpochBasedRunner', max_epochs=max_epochs)

custom_hooks = [
    dict(
        type='YOLOXModeSwitchHook',
        num_last_epochs=num_last_epochs,
        priority=48),
    dict(
        type='SyncNormHook',
        num_last_epochs=num_last_epochs,
        interval=interval,
        priority=48),
    dict(
        type='ExpMomentumEMAHook',
        resume_from=resume_from,
        momentum=0.0001,
        priority=49)
]
checkpoint_config = dict(interval=interval)
evaluation = dict(
    save_best='auto',
    # The evaluation interval is 'interval' when running epoch is
    # less than 'max_epochs - num_last_epochs'.
    # The evaluation interval is 1 when running epoch is greater than
    # or equal to 'max_epochs - num_last_epochs'.
    interval=interval,
    dynamic_intervals=[(max_epochs - num_last_epochs, 1)],
    metric='bbox')
log_config = dict(interval=50)

algorithm = dict(
    type='AutoSlim',
    architecture=dict(type='MMDetArchitecture', model=model),
    # distiller=dict(
    #     type='SelfDistiller',
    #     components=[
    #         dict(
    #             student_module='bbox_head.cls_score',
    #             teacher_module='bbox_head.cls_score',
    #             losses=[
    #                 dict(
    #                     type='KLDivergence',
    #                     name='loss_kd',
    #                     tau=1,
    #                     loss_weight=1,
    #                 )
    #             ]),
    #     ]),
    pruner=dict(
        type='RatioPruner',
        ratios=(2 / 12, 3 / 12, 4 / 12, 5 / 12, 6 / 12, 7 / 12, 8 / 12, 9 / 12,
                10 / 12, 11 / 12, 1.0)),
    retraining=False,
    bn_training_mode=True,
    input_shape=None)

runner = dict(type='EpochBasedRunner', max_epochs=50)

use_ddp_wrapper = True
###############################

The magic of DDPWrapper

Hi,

I found that when accumulating gradients (https://github.com/open-mmlab/mmrazor/blob/master/mmrazor/models/algorithms/autoslim.py#L177), DDPWrapper (https://github.com/open-mmlab/mmrazor/blob/master/mmrazor/core/distributed_wrapper.py) must be used.

Otherwise it would throw

https://github.com/pytorch/pytorch/blob/ccab142197d4647e397fc492ba4e7d857fa07a6d/torch/csrc/distributed/c10d/reducer.cpp#L421

 "initialize_buckets must NOT be called during autograd execution."

DDPWrapper wraps the top module as DDP. What is the magic behind that? Why does it not throw the exception?

thank you.
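
My understanding, as a hedged sketch rather than the actual mmrazor source: instead of wrapping the whole algorithm in one DDP module, the wrapper builds a separate DDP instance for each trainable child, so every sub-forward runs through a reducer that sees a complete forward/backward pair and never rebuilds its buckets in the middle of autograd execution. Roughly:

import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

class PerChildDDP(nn.Module):
    """Sketch: give each trainable child its own DDP instance."""

    def __init__(self, module, device_ids):
        super().__init__()
        for name, child in module.named_children():
            if any(p.requires_grad for p in child.parameters()):
                # Each child owns its own reducer, so several
                # forward/backward passes of the top module per step
                # (gradient accumulation) don't trip the assertion.
                module._modules[name] = DDP(
                    child,
                    device_ids=device_ids,
                    find_unused_parameters=True)
        self.module = module

    def forward(self, *args, **kwargs):
        return self.module(*args, **kwargs)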

miou=0 when running the code on voc2012 dataset

I downloaded the VOC2012 dataset and the weights (pspnet_r101-d8_512x512_20k_voc12aug_20200617_102003-4aef3c9a.pth) from mmsegmentation, then ran the following command.

python tools/mmseg/train_mmseg.py configs/distill/cwd/cwd_cls_head_pspnet_r101_d8_pspnet_r18_d8_512x1024_voc2012_80k.py --cfg-options algorithm.distiller.teacher.init_cfg.type=Pretrained algorithm.distiller.teacher.init_cfg.checkpoint=pretrained/pspnet_r101-d8_512x512_20k_voc12aug_20200617_102003-4aef3c9a.pth

the validation results are as follows:
+-------------+-------+-------+
| Class       | IoU   | Acc   |
+-------------+-------+-------+
| background  | 73.31 | 99.89 |
| aeroplane   | 0.0   | 0.0   |
| bicycle     | 0.0   | 0.0   |
| bird        | 0.0   | 0.0   |
| boat        | 0.0   | 0.0   |
| bottle      | 0.0   | 0.0   |
| bus         | 0.0   | 0.0   |
| car         | 0.0   | 0.0   |
| cat         | 0.0   | 0.0   |
| chair       | 0.0   | 0.0   |
| cow         | 0.0   | 0.0   |
| diningtable | 0.0   | 0.0   |
| dog         | 0.0   | 0.0   |
| horse       | 0.0   | 0.0   |
| motorbike   | 0.0   | 0.0   |
| person      | 0.32  | 0.33  |
| pottedplant | 0.0   | 0.0   |
| sheep       | 0.0   | 0.0   |
| sofa        | 0.0   | 0.0   |
| train       | 0.0   | 0.0   |
| tvmonitor   | 0.0   | 0.0   |
+-------------+-------+-------+
2021-12-24 11:09:10,633 - mmseg - INFO - Summary:
2021-12-24 11:09:10,633 - mmseg - INFO -
+-------+------+------+
| aAcc  | mIoU | mAcc |
+-------+------+------+
| 73.25 | 3.51 | 4.77 |
+-------+------+------+

Can you help me solve this? Thanks!

Error in visualization of CWD test results.

I get an error when I execute the following command:

python tools/mmseg/test_mmseg.py /home/sunshiding/mmrazor/work_dirs/cwd_cls_head_pspnet_r50_d8_pspnet_r18_d8_512x512_voc12/cwd_cls_head_pspnet_r50_d8_pspnet_r18_d8_512x512_voc12.py /home/sunshiding/mmrazor/work_dirs/cwd_cls_head_pspnet_r50_d8_pspnet_r18_d8_512x512_voc12/iter_4000.pth --show-dir ./results/cwd_cls_head_pspnet_r50_d8_pspnet_r18_d8_512x512_voc12

The error is as follows:

load checkpoint from local path: /home/sunshiding/mmrazor/work_dirs/cwd_cls_head_pspnet_r50_d8_pspnet_r18_d8_512x512_voc12/iter_4000.pth
[                                                  ] 0/115, elapsed: 0s, ETA:
Traceback (most recent call last):
  File "tools/mmseg/test_mmseg.py", line 248, in <module>
    main()
  File "tools/mmseg/test_mmseg.py", line 210, in main
    format_args=eval_kwargs)
  File "/home/sunshiding/mmsegmentation-master/mmseg/apis/test.py", line 140, in single_gpu_test
    opacity=opacity)
  File "/home/sunshiding/.conda/envs/razor/lib/python3.7/site-packages/mmrazor/models/algorithms/base.py", line 170, in show_result
    return self.architecture.show_result(img, result, **kwargs)
  File "/home/sunshiding/.conda/envs/razor/lib/python3.7/site-packages/mmrazor/models/architectures/base.py", line 41, in show_result
    return self.model.show_result(img, result, **kwargs)
  File "/home/sunshiding/mmsegmentation-master/mmseg/models/segmentors/base.py", line 242, in show_result
    assert palette.shape[0] == len(self.CLASSES)
  File "/home/sunshiding/.conda/envs/razor/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
    type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'EncoderDecoder' object has no attribute 'CLASSES'
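
A possible workaround, as an untested sketch: the class metadata lives on the dataset class, so it can be attached to the inner segmentor before show_result is called, e.g.

from mmseg.datasets import PascalVOCDataset

# `algorithm` is the built mmrazor model; after loading the checkpoint,
# the wrapped segmentor under algorithm.architecture.model has no
# CLASSES/PALETTE attributes, so set them explicitly.
algorithm.architecture.model.CLASSES = PascalVOCDataset.CLASSES
algorithm.architecture.model.PALETTE = PascalVOCDataset.PALETTE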

[Bug] Config error of wsld_cls_head_resnet34_resnet18_8xb32_in1k

Describe the bug


In configs/distill/wsld/wsld_cls_head_resnet34_resnet18_8xb32_in1k.py,

algorithm = dict(
    type='GeneralDistill',
    architecture=dict(
        type='MMDetArchitecture',
        model=student,
    ),

The architecture should be MMClsArchitecture, not MMDetArchitecture.

Why not recalibrate BN stats with autoslim?

Hi,

I found that the original implementation recalibrates BN stats after training.

https://github.com/JiahuiYu/slimmable_networks/blob/5dc14d0357ccfc596d706281acdc8a5b0b66c6d6/models/slimmable_ops.py#L204
https://github.com/JiahuiYu/slimmable_networks/blob/master/train.py#L758

But in mmrazor you don't do that and still get a good result.

The docs say "We do not recalibrate BN statistics after training",

which is not consistent with the paper.

Any idea? Is recalibration not necessary?
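
If you want to reproduce the paper's recalibration step yourself, a minimal sketch (assuming an mmcls-style data loader that yields dicts with an 'img' tensor) could look like this:

import torch
from torch.nn.modules.batchnorm import _BatchNorm

def recalibrate_bn(model, data_loader, num_batches=100):
    # Reset running stats and switch BN to cumulative averaging.
    for m in model.modules():
        if isinstance(m, _BatchNorm):
            m.reset_running_stats()
            m.momentum = None
    model.train()
    with torch.no_grad():
        for i, data in enumerate(data_loader):
            if i >= num_batches:
                break
            model(data['img'])  # forward only; BN running stats update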

[Feature] The design of the loss name field is controversial

Describe the feature

losses=[
     dict(
             type='ChannelWiseDivergence',
             name='loss_cwd_logits', # ---------------- this---------------
             tau=1,
             loss_weight=5,
          )
  ]

Judging from the OpenMMLab development conventions, there should be no extra loss name field, because it brings the following problems:

  1. The registry pattern generally does not include this kind of field; it does not conform to the way classes are registered.
  2. The user has to set this parameter every time. Is that necessary? It increases the burden on users.

I am not sure about your specific needs and hope to discuss this. Looking forward to your feedback.

[Bug] AttributeError: AutoSlim: 'SliceBackward' object has no attribute 'variable'

Describe the bug


When running the classification AutoSlim code at step 1 (training a classification supernet on my own dataset), an AttributeError appeared.

To Reproduce

The command you executed.

Traceback (most recent call last):
  File "tools/mmcls/train_mmcls.py", line 170, in <module>
    main()
  File "tools/mmcls/train_mmcls.py", line 140, in main
    algorithm = build_algorithm(cfg.algorithm)
  File "/home/ubuntu/project/hzw/mmrazor-master/mmrazor/models/builder.py", line 20, in build_algorithm
    return ALGORITHMS.build(cfg)
  File "/home/ubuntu/miniconda/envs/mmseg/lib/python3.8/site-packages/mmcv/utils/registry.py", line 212, in build
    return self.build_func(*args, **kwargs, registry=self)
  File "/home/ubuntu/miniconda/envs/mmseg/lib/python3.8/site-packages/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/home/ubuntu/miniconda/envs/mmseg/lib/python3.8/site-packages/mmcv/utils/registry.py", line 55, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
AttributeError: AutoSlim: 'SliceBackward' object has no attribute 'variable'

[Bug] RuntimeError: The size of tensor a (576) must match the size of tensor b (432) at non-singleton dimension 1

Dear Sir,

When I try to follow the guide to prune an existing model, the error below occurs.

cmd:
python ./tools/model_converters/split_checkpoint.py
configs/pruning/autoslim/autoslim_mbv2_subnet_8xb256_in1k.py
/my_local_path/mobilenet_v2_batch256_imagenet_20200708-3b2dc3af.pth
--channel-cfgs configs/pruning/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml

Note: mobilenet_v2_batch256_imagenet_20200708-3b2dc3af.pth was downloaded from https://download.openmmlab.com/mmclassification/v0/mobilenet_v2/mobilenet_v2_batch256_imagenet_20200708-3b2dc3af.pth

After running the above command, the following error is printed:

Traceback (most recent call last):
  File "/my_local_path/workspace/mmrazor/mmrazor/models/algorithms/autoslim.py", line 76, in _init_pruner
    pseudo_architecture.forward_dummy(pseudo_img)
  File "/my_local_path/workspace/mmrazor/mmrazor/models/architectures/mmcls.py", line 20, in forward_dummy
    output = child(output)
  File "/my_local_path/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in
    return forward_call(*input, **kwargs)
  File "/my_local_path/anaconda3/envs/openmmlab/lib/python3.7/site-packages/mmcls/models/backbones/mobilenet_v2.py", li
    x = layer(x)
  File "/my_local_path/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in
    return forward_call(*input, **kwargs)
  File "/my_local_path/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/container.py", line 141, i
    input = module(input)
  File "/my_local_path/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in
    return forward_call(*input, **kwargs)
  File "/my_local_path/anaconda3/envs/openmmlab/lib/python3.7/site-packages/mmcls/models/backbones/mobilenet_v2.py", li
    out = _inner_forward(x)
  File "/my_local_path/anaconda3/envs/openmmlab/lib/python3.7/site-packages/mmcls/models/backbones/mobilenet_v2.py", li
    return x + self.conv(x)
  File "/my_local_path/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in
    return forward_call(*input, **kwargs)
  File "/my_local_path/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/container.py", line 141, i
    input = module(input)
  File "/my_local_path/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in
    return forward_call(*input, **kwargs)
  File "/my_local_path/anaconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/cnn/bricks/conv_module.py", line 201,
    x = self.conv(x)
  File "/my_local_path/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in
    return forward_call(*input, **kwargs)
  File "/my_local_path/workspace/mmrazor/mmrazor/models/pruners/structure_pruning.py", line 371, in modified_forward
    feature = feature * self.in_mask
RuntimeError: The size of tensor a (576) must match the size of tensor b (432) at non-singleton dimension 1

Could you please help with this? Thanks!

Please provide use cases or unit tests for align_methods in BaseDistiller

When I was reading the code, I found that the description of align_methods is too sparse, which makes the code hard to understand. Could you provide use cases, configurations, or unit tests for align_methods?

class BaseDistiller(BaseModule, metaclass=ABCMeta):
    """Base Distiller.

    In the distillation algorithm, some intermediate results of the teacher
    need to be obtained and passed to the student.

    For nn.Module's outputs, obtained by pytorch forward hook.
    For python function's outputs, obtained by a specific context manager.

    Args:
        align_functions (dict): The details of the functions which outputs need
        to be obtained.
    """

    def __init__(self, align_methods=None, **kwargs):
        super(BaseDistiller, self).__init__(**kwargs)

Also, the align_methods parameter name and the docstring (which says align_functions) do not match.

Is mmrazor based on MMDetection?

Hi, is this codebase based on MMDetection or something else? In other words, does this codebase support distillation of detectors (e.g., detection models)?

[Feature] Support a SOTA channel pruning method, ResRep.

Describe the feature

A SOTA channel pruning method, ResRep.

Motivation

This method is the SOTA static channel pruning method. It prunes a standard Res50 with over 50% FLOPs reduction and zero accuracy drop.

MMRazor is an awesome framework, but I am too busy these days, so I would really appreciate it if you could implement this method in MMRazor.

Related resources

https://openaccess.thecvf.com/content/ICCV2021/html/Ding_ResRep_Lossless_CNN_Pruning_via_Decoupling_Remembering_and_Forgetting_ICCV_2021_paper.html

https://github.com/DingXiaoH/ResRep

Additional context

Autoslim on object detection?

Hi,

I have tried to prune an object detection model with AutoSlim.

But the accuracy is always low, even though the training loss looks normal.

I checked that the original repo (https://github.com/JiahuiYu/slimmable_networks/tree/detection) has done detection, but there are some details they don't mention.

For example, do we need to pretrain the slimmable model on ImageNet and then transfer it to object detection?

There is also related work (https://arxiv.org/abs/2103.13258) that uses a slimmable model pretrained on ImageNet, but I am not sure whether this is a requirement.

Any idea?

use autoslim for yolof

I made a config for YOLOF to use AutoSlim, but got an error:

error:

File "/xxx/mmrazor/mmrazor/models/pruners/structure_pruning.py", line 625, in trace_bn_conv_links
visited)
[Previous line repeated 1 more time]
File "/xxx/mmrazor/mmrazor/models/pruners/structure_pruning.py", line 603, in trace_bn_conv_links
bn_var = grad_fn.next_functions[1][0].variable
AttributeError: 'NoneType' object has no attribute 'variable'

My config:

#######################################
_base_ = [
    '../../_base_/datasets/mmdet/coco_detection.py',
    '../../_base_/schedules/mmdet/schedule_1x.py',
    '../../_base_/mmdet_runtime.py'
]

img_scale = (640, 640)

model = dict(
    type='mmdet.YOLOF',
    pretrained='open-mmlab://resnet18_v1c',
    backbone=dict(
        type='ResNet',
        depth=18,
        num_stages=4,
        out_indices=(3, ),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=False),
        norm_eval=True,
        style='pytorch'),
    neck=dict(
        type='DilatedEncoder',
        in_channels=512,
        out_channels=512,
        block_mid_channels=128,
        num_residual_blocks=4),
    bbox_head=dict(
        type='YOLOFHead',
        num_classes=20,
        in_channels=512,
        reg_decoded_bbox=True,
        anchor_generator=dict(
            type='AnchorGenerator',
            ratios=[1.0],
            scales=[1, 2, 4, 8, 16],
            strides=[32]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1., 1., 1., 1.],
            add_ctr_clamp=True,
            ctr_clamp=32),
        loss_cls=dict(
            type='FocalLoss',
            use_sigmoid=True,
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0),
        loss_bbox=dict(type='GIoULoss', loss_weight=1.0)),
    # init_cfg=dict(
    #     type='Pretrained',
    #     checkpoint='./work_dirs/yolof_res18_8x8_300e_voc/yolof_res18_voc0712_58.6map.pth',
    # ),
    # training and testing settings
    train_cfg=dict(
        assigner=dict(
            type='UniformAssigner', pos_ignore_thr=0.15, neg_ignore_thr=0.7),
        allowed_border=-1,
        pos_weight=-1,
        debug=False),
    test_cfg=dict(
        nms_pre=1000,
        min_bbox_size=0,
        score_thr=0.05,
        nms=dict(type='nms', iou_threshold=0.6),
        max_per_img=100))

inp_size = (416, 416)
dataset_type = 'VOCDataset'
data_root = 'data/VOCdevkit/'
img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True)

train_pipeline = [
    dict(type='LoadImageFromFile', to_float32=True),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='PhotoMetricDistortion',
        brightness_delta=32,
        contrast_range=(0.5, 1.5),
        saturation_range=(0.5, 1.5),
        hue_delta=18),
    dict(
        type='Expand',
        mean=img_norm_cfg['mean'],
        to_rgb=img_norm_cfg['to_rgb'],
        ratio_range=(1, 4)),
    dict(
        type='MinIoURandomCrop',
        min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),
        min_crop_size=0.3),
    dict(type='Resize', img_scale=inp_size, keep_ratio=False),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=inp_size,
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=False),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=16,
    workers_per_gpu=4,
    train=dict(
        type='RepeatDataset',
        times=3,
        dataset=dict(
            type=dataset_type,
            ann_file=[
                data_root + 'VOC2007/ImageSets/Main/trainval.txt',
                data_root + 'VOC2012/ImageSets/Main/trainval.txt'
            ],
            img_prefix=[data_root + 'VOC2007/', data_root + 'VOC2012/'],
            pipeline=train_pipeline)),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'VOC2007/ImageSets/Main/test.txt',
        img_prefix=data_root + 'VOC2007/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'VOC2007/ImageSets/Main/test.txt',
        img_prefix=data_root + 'VOC2007/',
        pipeline=test_pipeline))

optimizer = dict(
    type='SGD',
    lr=0.025,
    momentum=0.9,
    weight_decay=0.0001,
    paramwise_cfg=dict(
        norm_decay_mult=0., custom_keys={'backbone': dict(lr_mult=1. / 3)}))
lr_config = dict(warmup_iters=1500, warmup_ratio=0.00066667)

algorithm = dict(
    type='AutoSlim',
    architecture=dict(type='MMDetArchitecture', model=model),
    # distiller=dict(
    #     type='SelfDistiller',
    #     components=[
    #         dict(
    #             student_module='bbox_head.cls_score',
    #             teacher_module='bbox_head.cls_score',
    #             losses=[
    #                 dict(
    #                     type='KLDivergence',
    #                     name='loss_kd',
    #                     tau=1,
    #                     loss_weight=1,
    #                 )
    #             ]),
    #     ]),
    pruner=dict(
        type='RatioPruner',
        ratios=(2 / 12, 3 / 12, 4 / 12, 5 / 12, 6 / 12, 7 / 12, 8 / 12, 9 / 12,
                10 / 12, 11 / 12, 1.0)),
    retraining=False,
    bn_training_mode=True,
    input_shape=None)

runner = dict(type='EpochBasedRunner', max_epochs=50)

use_ddp_wrapper = True
########################

[Feature] More details in README

Describe the feature

Could you provide more details (method names, references, etc.) about the algorithms for pruning, KD, and NAS?

AttributeError: 'NoneType' object has no attribute 'parent' when implementing the KD framework for mmfewshot

Hi! I was trying to apply the KD framework in mmrazor to few-shot algorithms.

Some basic suggestions were provided in the WeChat group several days ago. Following MMDetArchitecture, I created MMFewShotArchitecture in ./mmrazor/models/architectures/mmfew.py. The code is below.

from ..builder import ARCHITECTURES
from .base import BaseArchitecture


@ARCHITECTURES.register_module()
class MMFewShotArchitecture(BaseArchitecture):
    """Architecture based on MMFewShot."""

    def __init__(self, **kwargs):
        super(MMFewShotArchitecture, self).__init__(**kwargs)

But I got stuck when building models in the architecture. The code, bug report, and config file are listed below.

Code:

from mmcv import Config

from mmrazor.models import build_algorithm
from mmrazor.models.architectures import MMFewShotArchitecture

config_path = "./Model/Model_Distill/distillers/TFA_distiller/tfa_distiller.py"
cfg = Config.fromfile(config_path)
arch = cfg.algorithm.architecture
del arch["type"]
a = MMFewShotArchitecture(**arch)

Bug report:

File "/home/user/sun_chen/Projects/FSOD/FsMMdet/test/test_build.py", line 17, in <module>
 a = MMFewShotArchitecture(**arch)
File "/home/user/sun_chen/Packages/OpenMMLab/mmrazor/mmrazor/models/architectures/mmfew.py", line 12, in __init__
 super(MMFewShotArchitecture, self).__init__(**kwargs)
File "/home/user/sun_chen/Packages/OpenMMLab/mmrazor/mmrazor/models/architectures/base.py", line 15, in __init__
 self.model = MODELS.build(model)
File "/home/user/anaconda3/envs/sc_mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 212, in build
 return self.build_func(*args, **kwargs, registry=self)
File "/home/user/anaconda3/envs/sc_mmlab/lib/python3.7/site-packages/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
 return build_from_cfg(cfg, registry, default_args)
File "/home/user/anaconda3/envs/sc_mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 42, in build_from_cfg
 obj_cls = registry.get(obj_type)
File "/home/user/anaconda3/envs/sc_mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 207, in get
 while parent.parent is not None:
AttributeError: 'NoneType' object has no attribute 'parent'

Model config:

model = dict(
 type='mmfewshot.TFA',
 pretrained='open-mmlab://detectron2/resnet50_caffe',
 backbone=dict(
     type='ResNet',
     depth=50,
     num_stages=4,
     out_indices=(0, 1, 2, 3),
     frozen_stages=1,
     norm_cfg=dict(type='BN', requires_grad=False),
     norm_eval=True,
     style='caffe'),
 neck=dict(
     type='FPN',
     in_channels=[256, 512, 1024, 2048],
     out_channels=256,
     num_outs=5,
     init_cfg=[
         dict(
             type='Caffe2Xavier',
             override=dict(type='Caffe2Xavier', name='lateral_convs')),
         dict(
             type='Caffe2Xavier',
             override=dict(type='Caffe2Xavier', name='fpn_convs'))
     ]),
 rpn_head=dict(
     type='RPNHead',
     in_channels=256,
     feat_channels=256,
     anchor_generator=dict(
         type='AnchorGenerator',
         scales=[8],
         ratios=[0.5, 1.0, 2.0],
         strides=[4, 8, 16, 32, 64]),
     bbox_coder=dict(
         type='DeltaXYWHBBoxCoder',
         target_means=[.0, .0, .0, .0],
         target_stds=[1.0, 1.0, 1.0, 1.0]),
     loss_cls=dict(
         type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
     loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
 roi_head=dict(
     type='StandardRoIHead',
     bbox_roi_extractor=dict(
         type='SingleRoIExtractor',
         roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
         out_channels=256,
         featmap_strides=[4, 8, 16, 32]),
     bbox_head=dict(
         type='Shared2FCBBoxHead',
         in_channels=256,
         fc_out_channels=1024,
         roi_feat_size=7,
         num_classes=80,
         bbox_coder=dict(
             type='DeltaXYWHBBoxCoder',
             target_means=[0., 0., 0., 0.],
             target_stds=[0.1, 0.1, 0.2, 0.2]),
         reg_class_agnostic=False,
         loss_cls=dict(
             type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
         loss_bbox=dict(type='L1Loss', loss_weight=1.0),
         init_cfg=[
             dict(
                 type='Caffe2Xavier',
                 override=dict(type='Caffe2Xavier', name='shared_fcs')),
             dict(
                 type='Normal',
                 override=dict(type='Normal', name='fc_cls', std=0.01)),
             dict(
                 type='Normal',
                 override=dict(type='Normal', name='fc_reg', std=0.001))
         ])),

 train_cfg=dict(
     rpn=dict(
         assigner=dict(
             type='MaxIoUAssigner',
             pos_iou_thr=0.7,
             neg_iou_thr=0.3,
             min_pos_iou=0.3,
             match_low_quality=True,
             ignore_iof_thr=-1),
         sampler=dict(
             type='RandomSampler',
             num=256,
             pos_fraction=0.5,
             neg_pos_ub=-1,
             add_gt_as_proposals=False),
         allowed_border=-1,
         pos_weight=-1,
         debug=False),
     rpn_proposal=dict(
         nms_pre=2000,
         max_per_img=1000,
         nms=dict(type='nms', iou_threshold=0.7),
         min_bbox_size=0),
     rcnn=dict(
         assigner=dict(
             type='MaxIoUAssigner',
             pos_iou_thr=0.5,
             neg_iou_thr=0.5,
             min_pos_iou=0.5,
             match_low_quality=False,
             ignore_iof_thr=-1),
         sampler=dict(
             type='RandomSampler',
             num=512,
             pos_fraction=0.25,
             neg_pos_ub=-1,
             add_gt_as_proposals=True),
         pos_weight=-1,
         debug=False)),
 test_cfg=dict(
     rpn=dict(
         nms_pre=1000,
         max_per_img=1000,
         nms=dict(type='nms', iou_threshold=0.7),
         min_bbox_size=0),
     rcnn=dict(
         score_thr=0.05,
         nms=dict(type='soft_nms', iou_threshold=0.5, min_score=0.05),
         max_per_img=100)))

student = dict(model)

teacher = dict(model)

distiller = dict(
     type='SingleTeacherDistiller',
     teacher=teacher,
     teacher_trainable=False,
     components=[
         dict(
             student_module='bbox_head.gfl_cls',
             teacher_module='bbox_head.gfl_cls',
             losses=[
                 dict(
                     type='ChannelWiseDivergence',
                     name='loss_cwd_cls_head',
                     tau=1,
                     loss_weight=5,
                 )]
             )]
     )
algorithm = dict(
 type='GeneralDistill',
 architecture=dict(
     type='MMFewShotArchitecture',
     model=student),
 distiller=distiller
)

I'm not familiar with the Registry and build mechanism in MMCV, so any suggestions?

[Bug] AutoSlim: testing the subnet with the split checkpoint gives wrong accuracy

Describe the bug

I followed the 5-step instructions to use AutoSlim with the default config file.
In step 3, I retrained the subnet at 199M FLOPs, whose top-1 accuracy is 70.13%.
Then I split the checkpoint as in step 4.

In step 5, if I use the split checkpoint, the accuracy is completely wrong. But if I use the checkpoint from step 3, the accuracy is right.
The logs are as follows:

The logs for the split checkpoint:

/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use_env is set by default in torchrun. If your script expects --local_rank argument to be set, please change it to read from os.environ['LOCAL_RANK'] instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions
  warnings.warn(
WARNING:torch.distributed.run:


Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.


load checkpoint from local path: /root/work/mmrazor/work_dirs/split_mbv2_subnet_199mflops/checkpoint_1.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/split_mbv2_subnet_199mflops/checkpoint_1.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/split_mbv2_subnet_199mflops/checkpoint_1.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/split_mbv2_subnet_199mflops/checkpoint_1.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/split_mbv2_subnet_199mflops/checkpoint_1.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/split_mbv2_subnet_199mflops/checkpoint_1.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/split_mbv2_subnet_199mflops/checkpoint_1.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/split_mbv2_subnet_199mflops/checkpoint_1.pth
./tools/mmcls/test_mmcls.py:134: UserWarning: Class names are not saved in the checkpoint's meta data, use imagenet by default.
warnings.warn('Class names are not saved in the checkpoint's '
./tools/mmcls/test_mmcls.py:134: UserWarning: Class names are not saved in the checkpoint's meta data, use imagenet by default.
warnings.warn('Class names are not saved in the checkpoint's '
./tools/mmcls/test_mmcls.py:134: UserWarning: Class names are not saved in the checkpoint's meta data, use imagenet by default.
warnings.warn('Class names are not saved in the checkpoint's '
./tools/mmcls/test_mmcls.py:134: UserWarning: Class names are not saved in the checkpoint's meta data, use imagenet by default.
warnings.warn('Class names are not saved in the checkpoint's '
./tools/mmcls/test_mmcls.py:134: UserWarning: Class names are not saved in the checkpoint's meta data, use imagenet by default.
warnings.warn('Class names are not saved in the checkpoint's '
./tools/mmcls/test_mmcls.py:134: UserWarning: Class names are not saved in the checkpoint's meta data, use imagenet by default.
warnings.warn('Class names are not saved in the checkpoint's '
./tools/mmcls/test_mmcls.py:134: UserWarning: Class names are not saved in the checkpoint's meta data, use imagenet by default.
warnings.warn('Class names are not saved in the checkpoint's '
./tools/mmcls/test_mmcls.py:134: UserWarning: Class names are not saved in the checkpoint's meta data, use imagenet by default.
warnings.warn('Class names are not saved in the checkpoint's '
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 50000/50000, 1094.2 task/s, elapsed: 46s, ETA: 0s
accuracy_top-1 : 0.10

accuracy_top-5 : 0.50

The logs for the checkpoint trained in step 3:

/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use_env is set by default in torchrun. If your script expects --local_rank argument to be set, please change it to read from os.environ['LOCAL_RANK'] instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions
  warnings.warn(
WARNING:torch.distributed.run:


Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.


load checkpoint from local path: /root/work/mmrazor/work_dirs/autoslim_mbv2_subnet_8xb256_in1k/epoch_300.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/autoslim_mbv2_subnet_8xb256_in1k/epoch_300.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/autoslim_mbv2_subnet_8xb256_in1k/epoch_300.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/autoslim_mbv2_subnet_8xb256_in1k/epoch_300.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/autoslim_mbv2_subnet_8xb256_in1k/epoch_300.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/autoslim_mbv2_subnet_8xb256_in1k/epoch_300.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/autoslim_mbv2_subnet_8xb256_in1k/epoch_300.pth
load checkpoint from local path: /root/work/mmrazor/work_dirs/autoslim_mbv2_subnet_8xb256_in1k/epoch_300.pth
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 50000/50000, 1085.0 task/s, elapsed: 46s, ETA: 0s
accuracy_top-1 : 70.13

accuracy_top-5 : 89.47

Default process group has not been initialized, please make sure to call init_process_group

Describe the bug


When I run DetNAS step 3 (using evolution search to train a detection supernet), a distributed training problem appears.

To Reproduce

The command you executed.

Traceback (most recent call last):
  File "./tools/mmdet/search_mmdet.py", line 197, in <module>
    main()
  File "./tools/mmdet/search_mmdet.py", line 193, in main
    searcher.search()
  File "/home/chenzhixuan/project/mmrazor/mmrazor/core/searcher/evolution_search.py", line 145, in search
    broadcast_candidate_pool)
  File "/home/chenzhixuan/project/mmrazor/mmrazor/core/utils/broadcast.py", line 38, in broadcast_object_list
    dist.broadcast(dir_tensor, src)
  File "/home/chenzhixuan/anaconda3/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1075, in broadcast
    default_pg = _get_default_group()
  File "/home/chenzhixuan/anaconda3/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 358, in _get_default_group
    raise RuntimeError("Default process group has not been initialized, "
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
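
For what it's worth, the searcher calls torch.distributed collectives, so it has to be launched as a distributed job even on a single machine; the usual OpenMMLab launch pattern should apply (the arguments below are illustrative placeholders):

python -m torch.distributed.launch --nproc_per_node=8 \
    ./tools/mmdet/search_mmdet.py ${CONFIG} ${CHECKPOINT} --launcher pytorch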

'missing keys in source state_dict: architecture.model...'

When I load a pretrained model from mmdetection, it says "missing keys in source state_dict: architecture.model.backbone.conv1.weight....".

When I load a pretrained model from mmdetection and change the parameter names with OrderedDict({'architecture.' + k: v for k, v in checkpoint.items()}), it says "unexpected keys in source state_dict: architecture.model.backbone.conv1.weight....".

Can anyone help me?
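
The prefix the algorithm expects appears to be architecture.model., not just architecture., so remapping along these lines should make the keys match (an untested sketch; the file names are placeholders):

from collections import OrderedDict

import torch

ckpt = torch.load('mmdet_checkpoint.pth', map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)
# Prepend the full wrapper prefix used by mmrazor algorithms.
new_state_dict = OrderedDict(
    ('architecture.model.' + k, v) for k, v in state_dict.items())
torch.save({'state_dict': new_state_dict}, 'remapped.pth')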

KD

Can the knowledge distillation algorithms be used for classification, detection, and segmentation tasks by modifying the distiller components and the teacher/student models? For example, can the WSLD algorithm be used for object detection and image segmentation?
Thanks!

The gap between your implementation and the paper for AutoSlim

In the paper, AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than the default MobileNet-v2 (301M FLOPs), but your implementation reaches only 72.73% at 320M FLOPs.

What do you mean by "Note that we ran the official code and the Top-1 Acc of the models with official channel cfg are 73.8%, 72.5% and 71.1%"? It should be 74.2%. I tested it using the paper's official code and official channel cfg (https://github.com/JiahuiYu/slimmable_networks), and the result is 74.2%, not 72.5%.
