This project is the official implementation of our NeurIPS 2023 (spotlight) paper "QuantSR: Accurate Low-bit Quantization for Efficient Image Super-Resolution".
Congratulations on the acceptance of your paper! I've read the initial draft on OpenReview, and I have three questions:
1. Have you applied other SR quantization methods, such as PAMS, to the Transformer backbone and compared them with your QuantSR-T? Such a comparison does not appear in the paper.
2. In Table 2, the 2-bit DoReFa results on the x4 SR network show higher accuracy than the 4-bit results. Is there an issue with these numbers?
3. The description of DAQ in the paper is not detailed enough; in particular, it is unclear how it decides which layers to skip. The corresponding code for this part has also not been released.
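Regarding question 2, my expectation comes from the standard behavior of uniform quantization: fewer bits means a coarser grid and larger quantization error, so 2-bit results beating 4-bit results is surprising. A minimal sketch (my own, not the paper's code) of a DoReFa-style k-bit quantizer illustrates this:

```python
import numpy as np

def dorefa_quantize(x, k):
    """DoReFa-style k-bit uniform quantization of values in [0, 1]:
    quantize_k(x) = round((2^k - 1) * x) / (2^k - 1)."""
    n = 2 ** k - 1
    return np.round(np.clip(x, 0.0, 1.0) * n) / n

# Mean absolute quantization error on uniform samples: for a step size
# of 1/n, the expected error is roughly 1/(4n), so 2-bit error should
# be about 5x the 4-bit error.
x = np.random.default_rng(0).uniform(0.0, 1.0, 10000)
err2 = np.abs(dorefa_quantize(x, 2) - x).mean()
err4 = np.abs(dorefa_quantize(x, 4) - x).mean()
print(f"mean abs error: 2-bit {err2:.4f}, 4-bit {err4:.4f}")
assert err2 > err4  # fewer bits -> coarser grid -> larger error
```

Given this, I would expect the 2-bit network to be at best on par with the 4-bit one, which is why the Table 2 numbers caught my attention.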