
T-PAMI-2023: Performance-aware Approximation of Global Channel Pruning for Multitask CNNs

Introduction

This is the official implementation of PAGCP for YOLOv5 compression, from the paper Performance-aware Approximation of Global Channel Pruning for Multitask CNNs. PAGCP is a novel pruning paradigm that combines a sequentially greedy channel pruning algorithm with a performance-aware oracle criterion to approximately solve the objective problem of GCP. The pruning strategy dynamically computes filter saliency in a greedy fashion, based on the structure pruned at the previous step, and controls each layer's pruning ratio through the constraint of the performance-aware oracle criterion.
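
As a rough, framework-agnostic illustration of the two ingredients above (greedy per-layer channel selection, stopped by a performance-aware budget), here is a toy sketch; the saliency values and the simulated performance drop are synthetic stand-ins, not the paper's actual criteria:

```python
# Toy sketch of sequentially greedy channel pruning with a performance-aware
# stopping criterion. Saliency scores and the simulated performance drop are
# synthetic stand-ins for illustration only.

def greedy_prune(layers, max_drop_per_layer):
    """layers: dict mapping layer name -> {channel name: saliency score}.
    Greedily removes the lowest-saliency channels of each layer until the
    (simulated) performance drop would exceed the per-layer budget."""
    pruned = {}
    for name, channels in layers.items():
        order = sorted(channels, key=channels.get)  # least salient first
        removed, drop = [], 0.0
        for ch in order:
            step_drop = channels[ch]  # stand-in oracle: drop ~ removed saliency
            if drop + step_drop > max_drop_per_layer:
                break  # performance-aware constraint: stop pruning this layer
            drop += step_drop
            removed.append(ch)
        pruned[name] = removed
        # The "sequential" aspect: a real implementation would recompute the
        # remaining layers' saliencies here, conditioned on what was pruned.
    return pruned

layers = {
    "conv1": {"c0": 0.05, "c1": 0.40, "c2": 0.02},
    "conv2": {"c0": 0.30, "c1": 0.01, "c2": 0.04, "c3": 0.50},
}
print(greedy_prune(layers, max_drop_per_layer=0.1))
# -> {'conv1': ['c2', 'c0'], 'conv2': ['c1', 'c2']}
```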

Abstract

Global channel pruning (GCP) aims to remove a subset of channels (filters) across different layers of a deep model without hurting performance. Previous works focus either on single-task model pruning or on simply adapting it to multitask scenarios, and still face the following problems when handling multitask pruning: 1) Due to task mismatch, a backbone well pruned for a classification task preserves filters that extract category-sensitive information, so filters that may be useful for other tasks are pruned during the backbone pruning stage; 2) For multitask predictions, different filters within or between layers are more closely related and interact more than those for single-task prediction, making multitask pruning more difficult. Therefore, aiming at multitask model compression, we propose a Performance-Aware Global Channel Pruning (PAGCP) framework. We first theoretically present the objective for achieving superior GCP by considering the joint saliency of filters from intra- and inter-layers. Then a sequentially greedy pruning strategy is proposed to optimize the objective, where a performance-aware oracle criterion is developed to evaluate the sensitivity of filters to each task and preserve the globally most task-related filters. Experiments on several multitask datasets show that the proposed PAGCP can reduce FLOPs and parameters by over 60% with a minor performance drop, and achieves 1.2x~3.3x acceleration on both cloud and mobile platforms.

Main Results on COCO2017

Model            size (pixels)   mAP^val 0.5:0.95   mAP^val 0.5   params (M)   FLOPs @640 (B)
YOLOv5m          640             43.6               62.7          21.4         51.3
YOLOv5m_pruned   640             41.5               60.7          7.7          23.5
YOLOv5l          640             47.0               66.0          46.7         115.4
YOLOv5l_pruned   640             45.5               64.5          16.1         49.1
YOLOv5x          640             48.8               67.7          87.4         218.8
YOLOv5x_pruned   640             47.2               66.1          29.3         81.0
Table Notes
  • AP values are for single-model, single-scale. Reproduce the mAP with python val.py --data coco.yaml --img 640 --weights /path/to/model/checkpoints
  • All pre-trained and pruned models are trained with hyp.scratch.yaml to align the training settings.
Install

Python>=3.6.0 is required, with all dependencies in requirements.txt installed, including PyTorch>=1.7:

$ git clone https://github.com/HankYe/PAGCP
$ cd PAGCP
$ conda create -n pagcp python=3.8  # any Python >= 3.6 works
$ conda activate pagcp
$ pip install -r requirements.txt
Compression

Run the command below to prune models on the COCO dataset; it can be run repeatedly, and the hyper-parameters can be tuned for better compression performance.

$ python compress.py --model $model name$ --dataset COCO --data coco.yaml --batch 64 --weights /path/to/to-prune/model --initial_rate 0.06 --initial_thres 6. --topk 0.8 --exp --device 0
Export

We have verified the ONNX-format conversion. The command is as follows:

$ python export.py --weights $weight_path$ --include onnx --dynamic

Citation

If you find this work helpful in your research, please cite:

@article{ye23pagcp,
  title={Performance-aware Approximation of Global Channel Pruning for Multitask CNNs},
  author={Hancheng Ye and Bo Zhang and Tao Chen and Jiayuan Fan and Bin Wang},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2023}
}

Acknowledgement

We gratefully acknowledge the authors of YOLOv5 and Torch_pruning for their open-source code.

pagcp's People

Contributors

bobrown, hankye

pagcp's Issues

How to increase the pruning ratio

I used the source code to prune on my own dataset, but the pruning ratio is only 24%. Which parameters should I modify to increase the model's pruning ratio?

Why is there no pruning test for YOLOv5s?

Hi, I read the paper, and it seems no pruning test was done for YOLOv5s. Is that because the scaled-down s model has less redundancy than the larger models, so pruning would bring little improvement?

Γ?

Hello. In the framework diagram there is "FLOPs < Γ?", where Γ is the retained ratio of FLOPs or parameters. What is this parameter set to in the paper, and which variable does it correspond to in the code?

compress.py fails to run

Hello! On Colab I ran the command !python compress.py --model yolov5s.yaml --dataset COCO --data coco.yaml --batch 64 --weights /content/PAGCP-main/yolov5s.pt --initial_rate 0.06 --initial_thres 6. --topk 0.8 --exp --device 0, and after printing
pruning 0/51: group3, base_loss:539.222351, base_b:215.046875, base_o:243.176361, base_c:80.999107, ratio:0.05, thres:0.06
10% 4/40 [00:02<00:22, 1.61it/s]
it hangs at this point. What could be the reason?

Unexpected error

Hello, I have a question. Your code ran fine on my machine and completed a full run, but when I resumed the next day it started throwing the error below, even though I hadn't changed anything. What could be the cause?

Traceback (most recent call last):
  File "C:/YOLO/PAGCP-main/compress.py", line 612, in <module>
    main(opt)
  File "C:/YOLO/PAGCP-main/compress.py", line 576, in main
    opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok))
  File "C:\YOLO\PAGCP-main\utils\general.py", line 804, in increment_path
    matches = [re.search(rf"%s{sep}(\d+)" % path, d) for d in dirs]
  File "C:\YOLO\PAGCP-main\utils\general.py", line 804, in <listcomp>
    matches = [re.search(rf"%s{sep}(\d+)" % path, d) for d in dirs]
  File "D:\ProgramData\Anaconda3\envs\GPU1\lib\re.py", line 201, in search
    return _compile(pattern, flags).search(string)
  File "D:\ProgramData\Anaconda3\envs\GPU1\lib\re.py", line 304, in _compile
    p = sre_compile.compile(pattern, flags)
  File "D:\ProgramData\Anaconda3\envs\GPU1\lib\sre_compile.py", line 764, in compile
    p = sre_parse.parse(p, flags)
  File "D:\ProgramData\Anaconda3\envs\GPU1\lib\sre_parse.py", line 948, in parse
    p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
  File "D:\ProgramData\Anaconda3\envs\GPU1\lib\sre_parse.py", line 443, in _parse_sub
    itemsappend(_parse(source, state, verbose, nested + 1,
  File "D:\ProgramData\Anaconda3\envs\GPU1\lib\sre_parse.py", line 525, in _parse
    code = _escape(source, this, state)
  File "D:\ProgramData\Anaconda3\envs\GPU1\lib\sre_parse.py", line 426, in _escape
    raise source.error("bad escape %s" % escape, len(escape))
re.error: bad escape \e at position 10
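
For context (an observation from the traceback, not an official fix): utils/general.py interpolates the save path into a regular expression without escaping it, so on Windows a backslash followed by a letter in the path (such as the \e of an exp... directory) is parsed as an invalid regex escape. A minimal reproduction with a made-up path, plus the usual re.escape workaround:

```python
import re

path = r"C:\YOLO\PAGCP-main\runs\exp"  # hypothetical Windows-style path
sep = ""                               # hypothetical separator, as in increment_path

# Unescaped interpolation: backslash sequences in the path, e.g. \Y and \e,
# are treated as (invalid) regex escapes, raising re.error.
try:
    re.search(rf"%s{sep}(\d+)" % path, "exp2")
except re.error as err:
    print("re.error:", err)

# Workaround: escape the literal path before building the pattern.
match = re.search(rf"{re.escape(path)}{sep}(\d+)", path + "2")
print(match.group(1))  # -> 2
```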

Compression result of Faster R-CNN on COCO

Thanks for your great work! I am curious about the compression result of Faster R-CNN on a larger dataset like COCO, which isn't in your paper. Could you share some information on this?

Selecting the channels

Is it possible to use this algorithm to identify the channels least suited to our task without deleting them? (That is, to select the less suitable channels without removing them.) Thank you for your guidance in this regard.

Inference speed after pruning

Hello, I pruned with your framework; the parameter count and FLOPs were both cut roughly in half, but the pruned model's inference time barely changed. Is this normal?

opt.part?

There is a piece of code I don't understand and would like to ask about:

if __name__ == "__main__":
    opt = parse_opt()
    if opt.compression.lower() == 'backbone':
        opt.part = [f'model.{i}.' for i in range(10)]
    else:
        opt.part = [f'model.{i}.' for i in range(24)]
    main(opt)

In compress.py, what does opt.part mean? Would modifying this value have any effect?
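
For what it's worth, the comprehension in the snippet only builds a list of module-name prefixes; judging from the branch on the compression option, indices 0-9 correspond to the YOLOv5 backbone and 0-23 to the whole model, and compress.py presumably matches these prefixes against layer names to decide which modules are pruning candidates (that matching logic is an assumption here, not shown). The prefixes themselves:

```python
# Reproduce the two possible opt.part values from compress.py. They are plain
# string prefixes such as "model.0." naming YOLOv5 submodules; how they are
# matched against layer names is up to the rest of compress.py.
backbone_part = [f'model.{i}.' for i in range(10)]   # backbone compression
full_part = [f'model.{i}.' for i in range(24)]       # otherwise

print(backbone_part[:3])  # -> ['model.0.', 'model.1.', 'model.2.']
print(len(full_part))     # -> 24
```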

Question about set_group

Hello, thanks for open-sourcing the code. There is a piece of code I don't understand:

def set_group(self, model):
    bottleneck_index = [2, 4, 6]
    self.groups = [[f'model[{i}].m[{n}].cv2.conv' for n in
                    range(len(model.module[i].m if hasattr(model, 'module') else model[i].m))] + [
                       f'model[{i}].cv1.conv'] for i in bottleneck_index]

In sensitivity.py, what do the groups in this function mean? Is bottleneck_index the indices, within the model, of the layers to be pruned? Are only these layers pruned?
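
As an illustration only (the bottleneck counts below are made up): the comprehension groups, for each block index in bottleneck_index, the cv2 convs of that block's m bottlenecks together with the block's cv1 conv, i.e. layers that apparently must have their channels pruned jointly:

```python
# Emulate the set_group comprehension with plain data instead of a model:
# for each bottleneck block index, couple the cv2 convs of its bottlenecks
# with the block's cv1 conv. num_bottlenecks is invented for illustration.
def build_groups(bottleneck_index, num_bottlenecks):
    return [
        [f'model[{i}].m[{n}].cv2.conv' for n in range(num_bottlenecks[i])]
        + [f'model[{i}].cv1.conv']
        for i in bottleneck_index
    ]

groups = build_groups([2, 4, 6], {2: 2, 4: 3, 6: 3})
print(groups[0])
# -> ['model[2].m[0].cv2.conv', 'model[2].m[1].cv2.conv', 'model[2].cv1.conv']
```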

Error while replicating the flow suggested in README.md

I cloned the repository, followed the flow suggested in the README.md, and ran compress.py, but I get an error as the "Start Pruning" step begins.
Error: "RuntimeError: result type Float can't be cast to the desired output type long int"
Command: python compress.py --model test1 --dataset COCO --data coco.yaml --batch 64 --weights yolov5s.pt --initial_rate 0.06 --initial_thres 6. --topk 0.8 --exp --device 0

YOLOv8 pruning

Hi, will you consider releasing YOLOv8 pruning as a follow-up?

Code for NYUv2 Dataset

Hello, I see from your paper that you used PAGCP on the NYUv2 dataset, but the code here only has implementations for VOC and COCO. Is it possible to get the implementation for NYUv2? If not, any tips would be appreciated. Thanks!

How to prune repeatedly

I want to prune an already-pruned model a second time to further compress its parameters. Currently I first load the pruned model and then load its weights, but I run into the error below. Is this approach correct?

RuntimeError: Given groups=1, expected weight to be at least 1 at dimension 0, but got weight of size [0, 2, 1, 1] instead

Cannot successfully run compress.py

Hi, thanks for your work. However, I failed to run compress.py following the commands in the README. I hit quite a few errors in the import stage, for example:

ImportError: cannot import name 'color_list' from 'utils.plots' (/PRBNet_PyTorch/prb_PAGCP/utils/plots.py)
ImportError: cannot import name 'fitness' from 'utils.metrics' (/PRBNet_PyTorch/prb_PAGCP/utils/metrics.py)
compress.py: error: unrecognized arguments: --sequential

Could you give me some advice on these? Thanks for your time and patience.
