lutingwang / oadp
Object-Aware Distillation Pyramid for Open-Vocabulary Object Detection
License: Apache License 2.0
When I run `python -m oadp.build_annotations`, I get `AttributeError: module 'todd' has no attribute 'StoreMeta'`. Why does this happen? Hoping for an answer, thank you.
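A quick way to diagnose errors like this is to check whether the installed dependency actually exposes the missing attribute; if it does not, the installed version likely differs from the one the repository pins. A generic sketch (the version-mismatch cause is an assumption, and `check_attr` is a hypothetical helper, not part of OADP):

```python
import importlib

def check_attr(module_name: str, attr: str) -> bool:
    """Report whether an installed module exposes a given attribute.

    An AttributeError like the one above usually means the installed
    version of a dependency differs from the version the repository
    pins (compare against requirements.txt and reinstall if needed).
    """
    mod = importlib.import_module(module_name)
    return hasattr(mod, attr)

# Demonstrated with a stdlib module; for the issue above one would
# run check_attr('todd', 'StoreMeta') after installing dependencies.
print(check_attr('json', 'loads'))  # True
```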
What is the difference between ml_coco.pth and vild.pth, and how can I generate the ml_coco.pth file? Looking forward to your reply.
Very inspiring work, but I have a few points of confusion.
I would like to ask about COCO training and test performance in DRY_RUN mode. I used DRY_RUN with each command you mentioned, including extracting global, object, and block features. When I run:
DRY_RUN=True TRAIN_WITH_VAL_DATASET=True torchrun --nproc_per_node=4 -m oadp.dp.train oadp_ov_coco configs/dp/oadp_ov_coco.py --override .validator.dataloader.dataset.ann_file::data/coco/annotations/instances_val2017.48.json
the DP training for COCO, the result looks like:
2023-11-11 23:43:33,261 - mmdet - INFO - Iter(val) [1] COCO_48_17_bbox_mAP_: 0.8614, COCO_48_17_bbox_mAP_50: 0.8614, COCO_48_17_bbox_mAP_75: 0.8614, COCO_48_17_bbox_mAP_s: 0.7921, COCO_48_17_bbox_mAP_m: 1.0000, COCO_48_17_bbox_mAP_l: 1.0000, COCO_48_17_bbox_mAP_copypaste: 0.8614 0.8614 0.8614 0.7921 1.0000 1.0000, COCO_48_bbox_mAP_: 0.8614, COCO_48_bbox_mAP_50: 0.8614, COCO_48_bbox_mAP_75: 0.8614, COCO_48_bbox_mAP_s: 0.7921, COCO_48_bbox_mAP_m: 1.0000, COCO_48_bbox_mAP_l: 1.0000, COCO_48_bbox_mAP_copypaste: 0.8614 0.8614 0.8614 0.7921 1.0000 1.0000, COCO_17_bbox_mAP_: -1.0000, COCO_17_bbox_mAP_50: -1.0000, COCO_17_bbox_mAP_75: -1.0000, COCO_17_bbox_mAP_s: -1.0000, COCO_17_bbox_mAP_m: -1.0000, COCO_17_bbox_mAP_l: -1.0000, COCO_17_bbox_mAP_copypaste: -1.0000 -1.0000 -1.0000 -1.0000 -1.0000 -1.0000
2023-11-11 23:43:33,606 - mmdet - INFO - Saving checkpoint at 40000 iterations
2023-11-11 23:43:35,052 - mmdet - INFO - Iter [40000/40000.0] lr: 2.000e-03, eta: 0:00:00, time: 4.103, data_time: 2.348, memory: 2148, loss_rpn_cls: 0.0000, loss_rpn_bbox: 0.0011, loss_cls: 0.0021, acc: 99.9512, loss_bbox: 0.0073, loss_global: 0.0002, recall_global: 69.3125, loss_block: 0.0016, recall_block: 11.1172, loss_clip_objects: 0.6160, loss_clip_global: 0.2377, loss_clip_blocks: 0.5239, loss_clip_block_relations: 0.0503, loss: 1.4403
I got a ridiculous result of 0.8614 mAP. There must be something wrong, but I have checked my process, data structure, and commands, and they all follow your steps. So is it DRY_RUN that produces this unrealistic result, and should I rerun without DRY_RUN? (By the way, the global, object, and block features extracted with DRY_RUN seem to be smaller than those extracted without it, so should I download them from the Baidu disk instead?)
Thanks for your attention and impressive work!
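For context on why a dry run inflates metrics: dry-run flags in training pipelines typically truncate the dataset and iteration count so the full code path can be smoke-tested quickly, which makes any resulting mAP meaningless. A minimal sketch of that pattern (the exact DRY_RUN mechanism in OADP is an assumption here, and `maybe_truncate` is a hypothetical helper):

```python
import os

def maybe_truncate(samples: list, limit: int = 4) -> list:
    """Keep only a handful of samples when DRY_RUN is set.

    A dry run exercises the full train/eval code path on a tiny
    subset, so any metrics it produces are not meaningful.
    """
    if os.environ.get('DRY_RUN', '').lower() in ('1', 'true'):
        return samples[:limit]
    return samples

# With DRY_RUN=True, a 5000-image val set shrinks to a few images,
# so a near-perfect "mAP" like 0.8614 is an artifact of the tiny
# subset, not real detection quality.
os.environ['DRY_RUN'] = 'True'
print(len(maybe_truncate(list(range(5000)))))  # 4
```

This also explains why feature files extracted under DRY_RUN are smaller than the full ones.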
Thank you for sharing the great work.
I tried to reproduce the results (20.6 bbox APr) on OV-LVIS. The command I used is torchrun --nproc_per_node=8 -m oadp.dp.train oadp_ov_lvis configs/dp/oadp_ov_lvis.py --override .trainer.evaluation.interval:24. The results are as follows:
| OD | APr | APc | APf | AP |
|---|---|---|---|---|
| checkpoint | 20.7 | 28.2 | 32.3 | 28.5 |
| reproduce | 18.2 | 27.3 | 32.1 | 27.6 |
However, there was a difference in performance, especially in APr. ('checkpoint' denotes the test result with the checkpoint you provided.) I checked the LVIS training log on Baidu to see what the problem might be, and I could not find any configs related to Global KD. So I tried reproducing again without Global KD, matching the log. The results are as follows:
| OD | APr | APc | APf | AP |
|---|---|---|---|---|
| checkpoint | 20.7 | 28.2 | 32.3 | 28.5 |
| reproduce without Global KD | 19.1 | 27.9 | 32.4 | 28.1 |
The APr is slightly improved, but still lower than 20.7.
From this, I have two questions.
Q1. Is it correct that the final OV-LVIS model does not use Global KD?
Q2. Did I do something wrong in my reproduction?
These are my experiment settings:
Thanks for your excellent work and for sharing the code. Even though it is not the main task of this paper, the paper also reports performance on the instance segmentation task. I am interested in OV-IS too; could you provide the config and corresponding code of OADP for training and evaluation on the OV-IS task?
Looking forward to your reply, thank you!
Thank you for the outstanding work. I ran into some problems when trying to reproduce the COCO training. First, I used your checkpoint and successfully got the same result, 31.3 mAP, which proves that the dataset and Python environment are set up correctly.
Then I used this command to train ViLD first: torchrun --nproc_per_node=2 -m oadp.dp.train vild_ov_coco configs/dp/vild_ov_coco.py, and then formally trained COCO: torchrun --nproc_per_node=2 -m oadp.dp.train oadp_ov_coco configs/dp/oadp_ov_coco.py. But I do not get the correct result when I use the resulting training checkpoint. Here is my full result:
{'COCO_17_bbox_mAP_': '0.1495',
'COCO_17_bbox_mAP_50': '0.2830',
'COCO_17_bbox_mAP_75': '0.1398',
'COCO_17_bbox_mAP_copypaste': '0.1495 0.2830 0.1398 0.1060 0.1788 0.1816',
'COCO_17_bbox_mAP_l': '0.1816',
'COCO_17_bbox_mAP_m': '0.1788',
'COCO_17_bbox_mAP_s': '0.1060',
'COCO_48_17_bbox_mAP_': '0.2673',
'COCO_48_17_bbox_mAP_50': '0.4436',
'COCO_48_17_bbox_mAP_75': '0.2798',
'COCO_48_17_bbox_mAP_copypaste': '0.2673 0.4436 0.2798 0.1750 0.2916 0.3488',
'COCO_48_17_bbox_mAP_l': '0.3488',
'COCO_48_17_bbox_mAP_m': '0.2916',
'COCO_48_17_bbox_mAP_s': '0.1750',
'COCO_48_bbox_mAP_': '0.3090',
'COCO_48_bbox_mAP_50': '0.5005',
'COCO_48_bbox_mAP_75': '0.3293',
'COCO_48_bbox_mAP_copypaste': '0.3090 0.5005 0.3293 0.1994 0.3316 0.4080',
'COCO_48_bbox_mAP_l': '0.4080',
'COCO_48_bbox_mAP_m': '0.3316',
'COCO_48_bbox_mAP_s': '0.1994'}
By the way, I noticed some abnormal output during training: the COCO_17_bbox mAP is -1! Here is a randomly chosen excerpt of the training output, from around iteration 26000/40000:
2023-11-29 19:26:42,471 - mmdet - INFO - Iter(val) [2500] COCO_48_17_bbox_mAP_: 0.1982, COCO_48_17_bbox_mAP_50: 0.3539, COCO_48_17_bbox_mAP_75: 0.1999, COCO_48_17_bbox_mAP_s: 0.1101, COCO_48_17_bbox_mAP_m: 0.2075, COCO_48_17_bbox_mAP_l: 0.2655, COCO_48_17_bbox_mAP_copypaste: 0.1982 0.3539 0.1999 0.1101 0.2075 0.2655, COCO_48_bbox_mAP_: 0.1982, COCO_48_bbox_mAP_50: 0.3539, COCO_48_bbox_mAP_75: 0.1999, COCO_48_bbox_mAP_s: 0.1101, COCO_48_bbox_mAP_m: 0.2075, COCO_48_bbox_mAP_l: 0.2655, COCO_48_bbox_mAP_copypaste: 0.1982 0.3539 0.1999 0.1101 0.2075 0.2655, COCO_17_bbox_mAP_: -1.0000, COCO_17_bbox_mAP_50: -1.0000, COCO_17_bbox_mAP_75: -1.0000, COCO_17_bbox_mAP_s: -1.0000, COCO_17_bbox_mAP_m: -1.0000, COCO_17_bbox_mAP_l: -1.0000, COCO_17_bbox_mAP_copypaste: -1.0000 -1.0000 -1.0000 -1.0000 -1.0000 -1.0000
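For what it's worth, a -1 metric in mmdet-style evaluation conventionally means the corresponding class split had no ground-truth annotations to evaluate against (for example, if the mid-training validation annotation file contains only the 48 base categories, the 17 novel classes have nothing to score). A toy illustration of the convention, under the assumption that this is what is happening here (this is not mmdet's actual evaluation code):

```python
def summarize_map(per_class_aps: list) -> float:
    """Toy version of the COCO/mmdet reporting convention: when a
    class split has no ground-truth instances in the annotation file,
    there is nothing to average and the metric is reported as -1.
    """
    if not per_class_aps:
        return -1.0
    return sum(per_class_aps) / len(per_class_aps)

# A split with no ground truth (e.g. novel classes absent from a
# 48-category ann_file) reports -1; a populated split averages.
print(summarize_map([]))          # -1.0
print(summarize_map([0.2, 0.4]))  # average of present classes
```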
And when I add --override to the command, like: torchrun --nproc_per_node=2 -m oadp.dp.train vild_ov_coco configs/dp/vild_ov_coco.py --override .validator.dataloader.dataset.ann_file::data/coco/annotations/instances_val2017.48.json, the checkpoint becomes unusable:
Why does this happen?
It seems some part of my experiment is wrong; how can I fix it? And can you tell me how to use the training commands correctly? Much appreciated!
Hello! Could you share objects.tar.gz on Baidu Cloud? Thank you!
I don't have the hardware for it; running this extraction step is really, really time-consuming.