
Comments (24)

talebolano commented on August 17, 2024

This hasn't been tested on the COCO dataset yet. On the dataset of the project I'm currently working on, pruning YOLOv2 at 50% cut FLOPs by two thirds, actual runs were 2~3x faster, precision stayed level with the unpruned model, and recall went up by 18%.

lucheng07082221 commented on August 17, 2024

@talebolano That's impressive. Combine it with the TensorRT optimization engine and inference on one image should take only about 1 ms. By the way, when will your YOLOv2 pruning be open-sourced?

talebolano commented on August 17, 2024

The current version can already prune YOLOv2.

hewumars commented on August 17, 2024

@talebolano With sparsity training, recall looks normal, but precision drops badly and there are far too many false-positive boxes. What's going on? Does it become normal again after pruning? Recall and precision are shown below.
[image: recall/precision training log screenshot]

hewumars commented on August 17, 2024

Batch size 32, trained for 70 epochs now.

talebolano commented on August 17, 2024

@hewumars What about after pruning? I think sparsity training should run until the loss is flat and the γ coefficients have stopped changing.
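(For context: the "sparsity training" discussed throughout this thread is Network Slimming's L1 penalty on the BN scale factors γ. After the ordinary backward pass, the subgradient s·sign(γ) is added to each γ's gradient. A minimal PyTorch sketch; the penalty weight s and the helper name are illustrative, not this repo's exact code:)

```python
import torch
import torch.nn as nn

def add_bn_l1_grad(model: nn.Module, s: float = 1e-4) -> None:
    """Add the L1 subgradient s * sign(gamma) to every BN scale factor's grad.

    Call this after loss.backward() and before optimizer.step(); it is the
    extra term that drives BN gammas toward zero during sparsity training.
    """
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and m.weight.grad is not None:
            m.weight.grad.add_(s * torch.sign(m.weight.data))
```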

hewumars commented on August 17, 2024

I didn't watch γ closely. The values below the 60th percentile are stable and all very small; the 80~100% bucket is around 1.5. The loss has flattened out.
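(The percentile figures quoted in this thread read as the mean of the sorted BN γ magnitudes per bucket: once the lower buckets sit near zero and only the top bucket stays large, the γ weights are well sparsified. A sketch of how such statistics might be computed; the function name is hypothetical:)

```python
import torch
import torch.nn as nn

def gamma_bucket_means(model: nn.Module, n_buckets: int = 5) -> dict:
    """Mean |gamma| per percentile bucket over all BN layers, smallest first.

    Mirrors the 0~20% ... 80~100% figures printed during sparsity training.
    """
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    sorted_g, _ = torch.sort(gammas)
    return {f"{i * 100 // n_buckets}~{(i + 1) * 100 // n_buckets}%": b.mean().item()
            for i, b in enumerate(sorted_g.chunk(n_buckets))}
```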

hewumars commented on August 17, 2024

Accuracy improved slightly after pruning.
I saw you mention Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers; it doesn't seem very different from Network Slimming, mainly a change in the penalty term. Does it actually work better?

hewumars commented on August 17, 2024

Also, is there any way to prune the shortcut connections? A large share of the computation sits in the conv layers before and after the shortcuts.

talebolano commented on August 17, 2024

@hewumars Besides swapping the penalty term, Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers also finds a way to preserve the bias of each pruned BN channel, so the accuracy loss after pruning is very small and fewer fine-tuning epochs are needed. As for shortcuts, I don't have a good idea yet... so I simply skip them and leave them unpruned.
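(The bias-preserving step works like this: a pruned BN channel with γ ≈ 0 outputs roughly the constant β, so that constant, pushed through the activation and the next convolution's kernels, is a fixed offset that can be folded into the next layer's bias. A rough PyTorch sketch, assuming a darknet-style leaky ReLU after BN and ignoring zero-padding border effects; names are illustrative:)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def absorb_pruned_bias(bn: nn.BatchNorm2d, next_conv: nn.Conv2d,
                       pruned_mask: torch.Tensor) -> None:
    """Fold the constant output of pruned BN channels into next_conv's bias.

    A channel whose gamma is pruned to 0 outputs (roughly) the constant beta;
    after the leaky ReLU, that constant times the next conv's kernel sums is
    a fixed per-output-channel offset. Adding it to next_conv.bias keeps the
    pruned net numerically close to the original (exact only away from the
    zero-padded borders).
    """
    act = F.leaky_relu(bn.bias.data[pruned_mask], negative_slope=0.1)
    # sum each kernel over its spatial extent for the pruned input channels
    w_sum = next_conv.weight.data[:, pruned_mask, :, :].sum(dim=(2, 3))
    offset = w_sum @ act  # shape: [out_channels]
    if next_conv.bias is None:
        next_conv.bias = nn.Parameter(torch.zeros(next_conv.out_channels))
    next_conv.bias.data += offset
```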

hewumars commented on August 17, 2024

OK, thanks. I'll dig into it some more. The two papers really are quite different; I hadn't read them carefully before.

Ariel-JUAN commented on August 17, 2024

Hi, may I ask what environment you ran YOLO in? I tested after pruning on a Titan X and the time didn't improve at all... Maybe the gains from pruning just aren't obvious on a GPU?

talebolano commented on August 17, 2024

@Ariel-JUAN A GTX 1060. Did the total BFLOPs actually drop after pruning? And were you timing while displaying the video at the same time?

Ariel-JUAN commented on August 17, 2024

@talebolano Yes, it dropped. I wasn't timing during playback; I just ran images from the test set. I'll prune further and see how it goes!
One more question: I saw you shared code for RETHINKING THE SMALLER-NORM-LESS-INFORMATIVE. Have you written a blog post on the paper? I found it heavy going and there are no good references for it... do you have a write-up anywhere?

talebolano commented on August 17, 2024

@Ariel-JUAN No, I haven't... But compared with Network Slimming, I think the paper mainly proposes that when a BN channel's γ is pruned, its bias is removed along with it, and the bias of the removed BN parameters is then added as a constant into the next conv layer's bias or the next BN layer's running mean; this reduces the error introduced by pruning relative to earlier algorithms. It also swaps the penalty term; its penalty feels similar to gradient clipping to me (in practice it suffers from γ diverging). On top of that, it accounts for each layer's computational cost, applying a different penalty to each BN layer during gradient descent, which amounts to weighting layers by importance. There is also a small trick: before sparsity training, shrink the BN γ weights by a factor of α while scaling the conv weights of the same layer up by α, so the output feature maps are unchanged, which makes it quicker to sparsify the γ weights (though I don't think this trick is actually very useful).
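(The γ-shrinking trick can be sketched too. One detail: batch normalization cancels any rescaling of its own input, so scaling the *same* layer's conv weights cannot offset a shrunken γ; the exactly output-preserving form shrinks γ and β by α and scales the *following* conv's weights up by α, relying on the (leaky) ReLU in between being positively homogeneous. A sketch under that reading:)

```python
import torch
import torch.nn as nn

@torch.no_grad()
def rescale_for_sparsity(bn: nn.BatchNorm2d, next_conv: nn.Conv2d,
                         alpha: float) -> None:
    """Shrink BN's affine parameters by alpha and scale the following conv's
    weights by alpha (alpha > 1 pushes gamma toward zero).

    With a (leaky) ReLU between bn and next_conv, which is positively
    homogeneous, the network output is unchanged, but gamma starts closer
    to zero, so the L1 sparsity penalty bites faster.
    """
    bn.weight.data.div_(alpha)   # gamma /= alpha
    bn.bias.data.div_(alpha)     # beta  /= alpha
    next_conv.weight.data.mul_(alpha)
```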

Ariel-JUAN commented on August 17, 2024

@talebolano Hi, thanks for the reply. I trained once more: before pruning, 48.4M parameters and 22.59 GFLOPs; after pruning, 34M parameters and 16.10 GFLOPs. The model shrank, but inference time still didn't change. Is that because the FLOPs reduction is too small? One more question: when the extra penalty is applied to the BN layers, doesn't it hurt the original task?
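(For sanity-checking numbers like these, per-layer parameters and FLOPs of a convolution are easy to estimate; one common convention counts a multiply-accumulate as 2 FLOPs. A sketch with an illustrative helper. Note also that a ~30% FLOPs cut often yields little wall-clock gain on a large GPU, where many layers are memory- or launch-bound rather than compute-bound:)

```python
import torch.nn as nn

def conv2d_params_flops(conv: nn.Conv2d, out_h: int, out_w: int):
    """Estimate parameter count and FLOPs of one Conv2d at a given output size.

    params: out_c * (in_c / groups) * kh * kw, plus out_c if there is a bias.
    flops:  2 * weight_params * out_h * out_w (each MAC counted as 2 FLOPs).
    """
    kh, kw = conv.kernel_size
    weight_params = conv.out_channels * (conv.in_channels // conv.groups) * kh * kw
    params = weight_params + (conv.out_channels if conv.bias is not None else 0)
    flops = 2 * weight_params * out_h * out_w
    return params, flops
```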

nationalflag commented on August 17, 2024

Precision unchanged after pruning, yet recall went up that much? Could it be that YOLOv3 itself was overfitting your project's data?

EtheneXiang commented on August 17, 2024

> Batch size 32, trained for 70 epochs now.

What GPU are you using? With 8 GB of VRAM I can only fit a batch size of 16 with subbatchsize=1; width and height in the cfg are set to 416, with random=1 enabled.

EtheneXiang commented on August 17, 2024

> Accuracy improved slightly after pruning.
> I saw you mention Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers; it doesn't seem very different from Network Slimming, mainly a change in the penalty term. Does it actually work better?

What do these training messages mean? I'm training on WIDER FACE with only the face class.
0~20%: 0.754621, 20~40%: 0.993606, 40~60%: 0.999061, 60~80%: 1.000931, 80~100%: 2.887091
[Epoch 1/2000, Batch 82/776] [Losses: x 0.107047, y 0.065480, w 0.076614, h 0.095350, conf 0.150828, cls 0.000000, total 0.495318, recall: 0.89583, precision: 0.21140]

EtheneXiang commented on August 17, 2024

> @talebolano That's impressive. Combine it with the TensorRT optimization engine and inference on one image should take only about 1 ms. By the way, when will your YOLOv2 pruning be open-sourced?

How do I resume training from a checkpoint? In step 1 of...

zhouying12 commented on August 17, 2024

When pruning yolov3-tiny, recall and precision are both extremely low. What could be going wrong? Which step is the problem?

[Epoch 50/2000, Batch 53/72] [Losses: x 0.148869, y 0.140013, w 0.234697, h 0.129271, conf 1.023471, cls 0.046769, total 1.723090, recall: 0.01673, precision: 0.00075]

[Epoch 50/2000, Batch 54/72] [Losses: x 0.154306, y 0.131487, w 0.207371, h 0.152046, conf 1.002264, cls 0.046794, total 1.694268, recall: 0.01505, precision: 0.00068]

[Epoch 50/2000, Batch 55/72] [Losses: x 0.166411, y 0.153774, w 0.255958, h 0.165886, conf 1.022801, cls 0.046864, total 1.811693, recall: 0.02252, precision: 0.00064]

[Epoch 50/2000, Batch 56/72] [Losses: x 0.169226, y 0.151126, w 0.234970, h 0.181641, conf 1.006080, cls 0.046636, total 1.789679, recall: 0.02556, precision: 0.00120]

AntoineGerardeaux commented on August 17, 2024

> When pruning yolov3-tiny, recall and precision are both extremely low. What could be going wrong? Which step is the problem? [zhouying12's log quoted in full above]

Hello,

I think it is because the tiny model is already a heavily slimmed-down architecture, so when you prune it further you end up removing important connections. Does anyone agree?

Best regards,
Antoine

MrWangg1992 commented on August 17, 2024

On the oxfordhandcoco data, everything turns to NaN after training reaches a certain point. Has anyone hit the same problem?

[Epoch 1/2, Batch 4310/4807] [Losses: x 0.559568, y 0.449969, w nan, h nan, conf 82.893064, cls 13.146080, total nan, recall: 0.00000, precision: 1.00000]

[Epoch 1/2, Batch 4311/4807] [Losses: x 1.351404, y 0.073808, w nan, h nan, conf 82.893064, cls 13.146080, total nan, recall: 0.00000, precision: 1.00000]

[Epoch 1/2, Batch 4312/4807] [Losses: x 0.565127, y 0.776471, w nan, h nan, conf 82.893064, cls 13.146080, total nan, recall: 0.00000, precision: 1.00000]

[... same pattern continues through Batch 4334: w, h, and total stay nan; conf and cls are frozen at 82.893064 / 13.146080 ...]

yyx1107 commented on August 17, 2024

> On the oxfordhandcoco data, everything turns to NaN after training reaches a certain point. Has anyone hit the same problem? [MrWangg1992's NaN log quoted in full above]

How did your run turn out? Did you get any speedup?
