
PromptDet: Towards Open-vocabulary Detection using Uncurated Images (ECCV 2022)

Paper     Website

Introduction

The goal of this work is to establish a scalable pipeline for expanding an object detector towards novel/unseen categories, using zero manual annotations. To achieve this, we make the following four contributions: (i) in pursuit of generalisation, we propose a two-stage open-vocabulary object detector, where class-agnostic object proposals are classified with the text encoder of a pre-trained visual-language model; (ii) to pair the visual latent space (of RPN box proposals) with that of the pre-trained text encoder, we propose regional prompt learning, which aligns the textual embedding space with regional visual object features; (iii) to scale the learning procedure towards detecting a wider spectrum of objects, we exploit available online resources via a novel self-training framework, which allows the proposed detector to be trained on a large corpus of noisy, uncurated web images; and (iv) to evaluate our proposed detector, termed PromptDet, we conduct extensive experiments on the challenging LVIS and MS-COCO datasets. PromptDet shows superior performance over existing approaches, with fewer additional training images and zero manual annotations.
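As a concrete illustration of (i), the following minimal sketch (illustrative only, not the actual PromptDet code; the function name and temperature value are assumptions) shows how class-agnostic proposals can be scored against category embeddings produced by a text encoder:

import torch
import torch.nn.functional as F

def classify_proposals(region_feats, category_embeddings, temperature=0.01):
    # region_feats:        (num_proposals, dim) visual features of RPN box proposals
    # category_embeddings: (num_categories, dim) text-encoder outputs, one per class
    region_feats = F.normalize(region_feats, dim=-1)
    category_embeddings = F.normalize(category_embeddings, dim=-1)
    # temperature-scaled cosine similarity serves as the classification logits
    logits = region_feats @ category_embeddings.t() / temperature
    return logits.softmax(dim=-1)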

Training framework

(Figure: method overview)

Updates

  • July 20, 2022: added the code for LAION-novel and self-training
  • March 28, 2022: initial release

Prerequisites

  • MMDetection version 2.16.0.

  • Please see get_started.md for installation and the basic usage of MMDetection.

Regional Prompt Learning (RPL)

We learn the prompt vectors offline using RPL; a sketch of the idea is shown below. For your convenience, we also provide the learned prompt vectors and the category embeddings.
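As a rough picture of what RPL learns, this CoOp-style module wraps one learnable prefix and one learnable suffix vector around the class-name token embedding (shapes, names and initialisation are assumptions, not the actual RPL code):

import torch
import torch.nn as nn

class RegionalPrompt(nn.Module):
    # Learnable context vectors placed around a class-name token embedding.
    # The prompt vectors are the only trainable parameters; the encoders of
    # the pre-trained visual-language model stay frozen.
    def __init__(self, embed_dim=512, n_prefix=1, n_suffix=1):
        super().__init__()
        self.prefix = nn.Parameter(torch.randn(n_prefix, embed_dim) * 0.02)
        self.suffix = nn.Parameter(torch.randn(n_suffix, embed_dim) * 0.02)

    def forward(self, class_token_embeds):
        # class_token_embeds: (n_tokens, embed_dim) embedding of one class name
        return torch.cat([self.prefix, class_token_embeds, self.suffix], dim=0)

The assembled token sequence is passed through the frozen text encoder to obtain a category embedding, which is optimised to align with the regional visual features of that category.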

LAION-novel dataset

The LAION-novel dataset can be generated from the learned category embeddings using the PromptDet tools as follows:

# stage I: install the dependencies, download the laion400m 64GB image.index and metadata.hdf5 (https://the-eye.eu/public/AI/cah/), and then retrieve the LAION image URLs
pip install faiss-cpu==1.7.2 img2dataset==1.12.0 fire==0.4.0 h5py==3.6.0
python tools/promptdet/retrieval_laion_image.py --indice-folder [laion400m-64GB-index] --metadata [metadata.hdf5] --text-features promptdet_resources/lvis_category_embeddings.pt --output-folder data/laion_lvis/images --num-images 500

# stage II: download the LAION images
python tools/promptdet/download_laion_image.py --output-folder data/laion_lvis/images --num-thread 10

# stage III: convert the LAION images to mmdetection format
python tools/promptdet/laion_dataset_converter.py --data-path data/laion_lvis/images --out-file data/laion_lvis/laion_train.json --topK 300
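
Conceptually, stage I is a nearest-neighbour search of the category embeddings over the pre-built LAION-400M image index. A minimal sketch with faiss (the paths and tensor layout are assumptions; see tools/promptdet/retrieval_laion_image.py for the actual logic):

import faiss
import torch

# assumed: the file stores a (num_categories, dim) float tensor of category embeddings
category_embeddings = torch.load('promptdet_resources/lvis_category_embeddings.pt')
queries = category_embeddings.float().numpy()

# assumed path to the downloaded LAION-400M image index
index = faiss.read_index('laion400m-64GB-index/image.index')
scores, image_ids = index.search(queries, 500)  # top-500 image ids per category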

For your convenience, we also provide the image URLs of our LAION-novel dataset.

Inference

# assume that you are under the root directory of this project,
# and you have activated your virtual environment if needed,
# and with LVIS v1.0 dataset in 'data/lvis_v1'.

./tools/dist_test.sh configs/promptdet/promptdet_r50_fpn_sample1e-3_mstrain_1x_lvis_v1_self_train.py work_dirs/promptdet_r50_fpn_sample1e-3_mstrain_1x_lvis_v1_self_train.pth 4 --eval bbox segm
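
Here 4 is the number of GPUs used for distributed testing; adjust it to match your machine.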

Train

# download 'lvis_v1_train_seen.json' to 'data/lvis_v1/annotations'.

# train detector without self-training
./tools/dist_train.sh configs/promptdet/promptdet_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py 4

# train detector with self-training
./tools/dist_train.sh configs/promptdet/promptdet_r50_fpn_sample1e-3_mstrain_1x_lvis_v1_self_train.py 4

[0] Annotation file of the base categories: lvis_v1_train_seen.json.
[1] Note that we provide an EpochPromptDetRunner to fetch the data from multiple datasets alternately, as sketched below.
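
A toy illustration of the alternating fetch (not the actual EpochPromptDetRunner implementation):

def alternate_batches(lvis_loader, laion_loader):
    # yield batches from the two dataloaders in turn, while both have data
    for lvis_batch, laion_batch in zip(lvis_loader, laion_loader):
        yield lvis_batch
        yield laion_batch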

Models

For your convenience, we provide the following trained PromptDet models, reported with mask AP.

| Model | RPL | Self-training | Epochs | Scale Jitter | Input Size | AP_novel | AP_c | AP_f | AP | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline (manual prompt) | | | 12 | 640~800 | 800x800 | 7.4 | 17.2 | 26.1 | 19.0 | google |
| PromptDet_R_50_FPN_1x | ✓ | | 12 | 640~800 | 800x800 | 11.5 | 19.4 | 26.7 | 20.9 | google |
| PromptDet_R_50_FPN_1x | ✓ | ✓ | 12 | 640~800 | 800x800 | 19.5 | 18.2 | 25.6 | 21.3 | google |
| PromptDet_R_50_FPN_6x | ✓ | ✓ | 72 | 100~1280 | 800x800 | 21.7 | 23.2 | 29.6 | 25.5 | google |

[0] All results are obtained with a single model and without any test-time data augmentation such as multi-scale testing or flipping.
[1] Refer to the config files in configs/promptdet/ for more details.

Acknowledgement

Thanks to the MMDetection team for the wonderful open-source project!

Citation

If you find PromptDet useful in your research, please consider citing:

@inproceedings{feng2022promptdet,
    title={PromptDet: Towards Open-vocabulary Detection using Uncurated Images},
    author={Feng, Chengjian and Zhong, Yujie and Jie, Zequn and Chu, Xiangxiang and Ren, Haibing and Wei, Xiaolin and Xie, Weidi and Ma, Lin},
    booktitle={Proceedings of the European Conference on Computer Vision},
    year={2022}
}


promptdet's Issues

COCO embeddings

Hi,
Thank you for sharing your amazing work.

Can you please share the embeddings used for the COCO evaluation? LVIS v1 has only 59 categories in common with COCO. Alternatively, could you share the learned 1 + 1 prompt vectors so that they can be used on any dataset.

Thank you.

Question about the config file provided for inference

If we train with the currently provided config file, does it correspond to the setting in the paper without self-training, i.e., the regional prompt learning results in Table 2?
Thanks

Train log

Can you publish the training logs of the model?

Time cost of training

Hi, thanks for your great work. I would like to ask how long it takes to train a model, and how many GPUs you used. Thank you.

Category and Description issues

I have to change the PromptBBoxHead if I want to train on my own dataset,
but generating category_embeddings.pt requires a category_and_description.txt.
So I would like to know: how did you generate the category descriptions for your dataset?

code for regional prompt learning

Hi, I'm currently reproducing your work, but cannot find the code related to regional prompt learning.
Can you tell me where the code for the preprocessing and training of regional prompt learning is? (Sorry, I'm new to mmdetection, so it's hard to search.)
Thanks!

MMCV-full version issue

When trying to install mmdet==2.16.0, mmcv-full==1.7.1 is installed automatically, which makes it impossible to satisfy mmcv-full<1.4.0.
Can you release the full version information of your conda environment?

Welcome update to OpenMMLab 2.0


I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab 1.0 and 2.0 branches of each repo:

| Repo | OpenMMLab 1.0 branch | OpenMMLab 2.0 branch |
| --- | --- | --- |
| MMEngine | | 0.x |
| MMCV | 1.x | 2.x |
| MMDetection | 0.x, 1.x, 2.x | 3.x |
| MMAction2 | 0.x | 1.x |
| MMClassification | 0.x | 1.x |
| MMSegmentation | 0.x | 1.x |
| MMDetection3D | 0.x | 1.x |
| MMEditing | 0.x | 1.x |
| MMPose | 0.x | 1.x |
| MMDeploy | 0.x | 1.x |
| MMTracking | 0.x | 1.x |
| MMOCR | 0.x | 1.x |
| MMRazor | 0.x | 1.x |
| MMSelfSup | 0.x | 1.x |
| MMRotate | 1.x | 1.x |
| MMYOLO | | 0.x |

Attention: please create a new virtual environment for OpenMMLab 2.0.

Single image inference

While reproducing your work, I found that the Inference section only covers the LVIS validation dataset. Are there any scripts for single-image or single-video inference?

How to train the model?

Thanks for your nice work and precious time!
Could you give some examples of how to train the model using the existing config files in configs/promptdet?

Baseline training configs

Hi,

Thank you for sharing your work. I would like to know the training configuration used for the baseline reported in Table 2 of your paper. The implementation details in the paper specify a 1x schedule with an lr of 0.02; however, samples_per_gpu is set to 4 in the shared configuration.

The default training config in mmdet for Mask R-CNN with FPN on a 1x schedule is 8 GPUs and 2 samples per GPU, for an effective batch size of 16, with an lr of 0.02.

Could you please specify the number of GPUs, the batch size, and the corresponding lr used for your baseline.

Thank you.
