levipereira / yolov9-qat
Implementation of YOLOv9 QAT optimized for deployment on TensorRT platforms.
License: Apache License 2.0
We're currently working on improving our YOLOv9 model by implementing quantization in the ADown downsampling class. We've observed that the current implementation generates reformat operations and increases the latency of the model.
To address this issue, we plan to implement quantization in two steps:
1. Creating a quantized version of ADown: we will create a new class called QuantADown. This class will contain a method named adown_quant_forward(self, x) to handle the quantized forward pass.
2. Integration into the model: we will integrate the QuantADown class into our model by modifying the replace_custom_module_forward function in our quantization script. This function is responsible for replacing custom modules with their quantized counterparts during the quantization process.
We believe that implementing quantization in the ADown class will help optimize the model's performance and reduce latency. We welcome any feedback or suggestions from the community regarding this approach.
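A minimal sketch of both steps, assuming the names proposed above (QuantADown, adown_quant_forward, replace_custom_module_forward) and ADown's standard forward pass from the YOLOv9 codebase. The FakeQuant helper is a simplified stand-in for pytorch_quantization's TensorQuantizer, which the real implementation would use; the exact quantizer placement is an assumption, not final code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FakeQuant(nn.Module):
    """Stand-in for pytorch_quantization's TensorQuantizer: per-tensor
    symmetric int8 fake-quantization with a fixed, illustrative scale."""
    def __init__(self, scale=0.05):
        super().__init__()
        self.scale = scale

    def forward(self, x):
        return torch.fake_quantize_per_tensor_affine(x, self.scale, 0, -128, 127)


class QuantADown(nn.Module):
    """Wraps an existing ADown module, reusing its trained cv1/cv2
    convolutions and fake-quantizing the pooling inputs so TensorRT can
    keep the whole block in INT8 and avoid reformat nodes (sketch)."""
    def __init__(self, adown):
        super().__init__()
        self.cv1 = adown.cv1          # 3x3 stride-2 conv branch
        self.cv2 = adown.cv2          # 1x1 conv branch after max-pool
        self.quant_avgpool = FakeQuant()
        self.quant_maxpool = FakeQuant()

    def adown_quant_forward(self, x):
        # Same dataflow as ADown.forward, with quantizers on pool inputs
        x = F.avg_pool2d(self.quant_avgpool(x), 2, 1, 0, False, True)
        x1, x2 = x.chunk(2, 1)
        x1 = self.cv1(x1)
        x2 = F.max_pool2d(self.quant_maxpool(x2), 3, 2, 1)
        x2 = self.cv2(x2)
        return torch.cat((x1, x2), 1)

    forward = adown_quant_forward


def replace_custom_module_forward(model):
    """Step 2 (assumed logic): swap every ADown for QuantADown in place."""
    for _, module in model.named_modules():
        for child_name, child in module.named_children():
            if child.__class__.__name__ == "ADown":
                setattr(module, child_name, QuantADown(child))
```

Quantizing both pooling inputs is what lets TensorRT fuse the branches in INT8; whether the concat also needs matched quantizer scales is something to verify against the generated engine.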
Thank you for your support and collaboration!
Hi, thanks for your great work.
I have used your code to quantize these YOLOv9 models, but I can't evaluate the QAT model or export an end2end ONNX.
The error logs and information of my device are below.
However, I can use export_qat.py to export a normal ONNX and export.py to export an end2end ONNX, which seems odd.
=================================
root@Ryan-Windows-PC:/yolov9# python3 qat.py eval --weights runs/qat/yolov9-c-converted-qat/weights/qat_best_yolov9-c-converted.pt --name eval_qat_yolov9
Namespace(batch_size=10, cmd='eval', conf_thres=0.001, data='data/coco.yaml', device='cuda:0', exist_ok=False, imgsz=640, iou_thres=0.7, name='eval_qat_yolov9', project=PosixPath('runs/qat_eval'), save_dir='runs/qat_eval/eval_qat_yolov9', weights='runs/qat/yolov9-c-converted-qat/weights/qat_best_yolov9-c-converted.pt')
Traceback (most recent call last):
File "qat.py", line 542, in
run_eval(opt.weights, opt.device, opt.data,
TypeError: run_eval() missing 1 required positional argument: 'eval_pycocotools'
=================================
root@Ryan-Windows-PC:/yolov9# python3 export_qat.py --weights runs/qat/yolov9-c-converted-qat/weights/qat_best_yolov9-c-converted.pt --include onnx_end2end
export_qat: data=data/coco.yaml, weights=['runs/qat/yolov9-c-converted-qat/weights/qat_best_yolov9-c-converted.pt'], imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=True, keras=False, optimize=False, int8=False, dynamic=True, simplify=True, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['onnx_end2end']
YOLO 🚀 v0.1-85-gbad4f4b Python-3.8.10 torch-1.14.0a0+44dac51 CPU
Fusing layers...
gelan-c summary: 676 layers, 25288768 parameters, 25288768 gradients, 0.4 GFLOPs
PyTorch: starting from runs/qat/yolov9-c-converted-qat/weights/qat_best_yolov9-c-converted.pt with output shape (1, 84, 8400) (99.8 MB)
ONNX END2END: starting export with onnx 1.16.0...
ONNX END2END: export failure ❌ 0.0s: 'DetectionModel' object is not subscriptable
=== Device Information ===
Available Devices:
Device 0: "NVIDIA GeForce RTX 4080 SUPER" UUID: GPU-91544520-3ee2-efb6-e8ca-c6e6536f93f7
Device 1: "NVIDIA GeForce GTX 1080 Ti" UUID: GPU-c20a7a00-79b8-e596-d431-59989dc5c026
Selected Device: NVIDIA GeForce RTX 4080 SUPER
Selected Device ID: 0
Selected Device UUID: GPU-91544520-3ee2-efb6-e8ca-c6e6536f93f7
Compute Capability: 8.9
SMs: 80
Device Global Memory: 16375 MiB
Shared Memory per SM: 100 KiB
Memory Bus Width: 256 bits (ECC disabled)
Application Compute Clock Rate: 2.55 GHz
Application Memory Clock Rate: 11.501 GHz
TensorRT version: 10.0.0
I'm following the steps exactly as detailed in the README.md. When reaching the quantization step, the line
python3 qat.py quantize --weights yolov9-c-converted.pt --name yolov9_qat --exist-ok
fails with the error "Dataset not found".
It seems that the default coco.yaml comes with path: ../datasets/coco # dataset root dir, while the dataset is actually in /yolov9/coco. Replacing it with path: /yolov9/coco # dataset root dir in coco.yaml fixes this.
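For reference, a minimal excerpt of the corrected coco.yaml (the train/val entries follow the standard YOLOv9 dataset config; adjust the root to your own layout):

```yaml
# data/coco.yaml (excerpt)
path: /yolov9/coco   # dataset root dir (was ../datasets/coco)
train: train2017.txt
val: val2017.txt
```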
We're currently working on improving our YOLOv9 model by implementing quantization in the SPPELAN class (class SPPELAN(nn.Module)). We've observed that the current implementation generates reformat operations.
Any contributions, suggestions, or shared experiences will be valued.
I have deployed the yolov9-qat model using C++ TensorRT on an RTX 3090, but I find the inference time is the same as FP16.
The modification I made to the FP16 code was simply to add this:
config->setFlag(nvinfer1::BuilderFlag::kINT8);
i.e., I set both FP16 and INT8:
config->setFlag(nvinfer1::BuilderFlag::kINT8);
config->setFlag(nvinfer1::BuilderFlag::kFP16);
And I tested infer-yolov9-c-qat-end2end.onnx; there is still no reduction in inference time for this model.
Hi, levipereira:
Thank you for your contribution, I want to deploy yolov9-qat model using C++ TensorRT.
Can you provide some reference materials?