Comments (4)
Hi, thanks for reaching out. The MobileNetV2 we are using is from PytorchCV, which has a baseline top-1 accuracy of 73.03%. Please see the link below:
https://pypi.org/project/pytorchcv/
We specifically wanted to test against this higher baseline accuracy to see whether there is any accuracy degradation when starting from a strong baseline.
from zeroq.
Thank you for the quick response.
If you chose a higher-accuracy baseline than the one DFQ used, are the comparison results against DFQ and the other methods still valid? I am also curious whether DFQ would reach the accuracy claimed in their work when starting from this baseline.
Sorry to ask again. I would like to experiment further with quantization, starting from your approach, so I am putting forward all my queries and collecting your valuable suggestions.
Thank you.
The reason we decided to test a strong baseline for all of the tasks is simple. If we use a lower baseline accuracy, then it is not possible to isolate quantization's impact. For example, it is actually possible to get higher accuracy after quantization if you use a weak baseline. This can be misleading, because the reason for the accuracy improvement is not quantization, but the fact that the FP32 baseline was weak. To avoid such confusion, we decided to test with the strongest available accuracy for each of the models. Recovering the accuracy of the stronger baseline is harder, and that is why we targeted it.
Now you can use three methods to compare ZeroQ with other methods in the literature (including DFQ, which I believe is interesting):
1/ See how much the accuracy drops from each method's respective baseline. This gives you some estimate, but I have to caution you that an accuracy drop of, say, 0.1 with a weak baseline is not the same as with a strong baseline (accuracy gains become exponentially harder as accuracy increases).
2/ Our code is available online, so you can simply load a MobileNetV2 with any pretrained accuracy, test our performance, and directly compare with other results in the literature.
3/ If the other paper has its code available online, repeat step 2 with the stronger baseline.
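To make option 2 concrete, here is a hedged sketch of per-tensor symmetric uniform ("fake") weight quantization; the bit-width and rounding scheme are illustrative assumptions, not the exact ZeroQ pipeline:

```python
import torch

def quantize_tensor_symmetric(t, n_bits=8):
    # Symmetric per-tensor uniform quantization: pick the scale from the
    # largest |value|, round onto the integer grid, then dequantize to float.
    qmax = 2 ** (n_bits - 1) - 1
    scale = t.abs().max() / qmax
    if scale == 0:
        return t.clone()
    q = torch.clamp(torch.round(t / scale), min=-qmax - 1, max=qmax)
    return q * scale

def quantize_model_weights(model, n_bits=8):
    # In-place fake quantization of every weight tensor (sketch only).
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name.endswith("weight"):
                p.copy_(quantize_tensor_symmetric(p, n_bits))
    return model
```

After quantizing, evaluate top-1 on the same ImageNet validation pipeline used for the FP32 run, and report the drop from whichever baseline you loaded.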
Hope this helps
Yes that makes it very clear. Thank you so much.
Related Issues (20)
- Reproduction and Auto-Mixed Quantization? HOT 2
- How much calibration data is needed? HOT 3
- Where could I find low bit quantization code.
- increased inference latency for quantized model HOT 4
- The accuracy of pytorch official model is 0.3% lower than full precision HOT 1
- bitwidth of each layer (discussion of MP) HOT 2
- Export quantized model into pth file
- How to Train A Quantized SSD Detector?
- When generating data, is there a bug in the locations where network activations are extracted? HOT 1
- Backpropagation function for quantized model
- is the initialized data from a uniform distribution instead of a gaussian distribution?
- Could you share the slides of the oral report? Thanks
- The result of using original ImageNet dataset to calibrate 4bit model is worse than ZeroQ, why is that?
- Why do the weights need to be dequantized after one quantization? HOT 1
- Runtime error when running uniform_test.py
- Welcome update to OpenMMLab 2.0
- Model remains float32 type after quantization HOT 1
- How is the Mixed Precision bit setting is automated? HOT 1
- Is the proposed method a Offline quantization or Run-time quantization? HOT 2