KGDet: Keypoint-Guided Fashion Detection (AAAI 2021)

This is an official implementation of the AAAI-2021 paper "KGDet: Keypoint-Guided Fashion Detection".

Architecture

Installation

To avoid dependency conflicts, please install this repo in a clean conda virtual environment.

First, enter the root directory of this repo. Install CUDA and PyTorch with conda.

conda install -c pytorch -c conda-forge pytorch==1.4.0 torchvision==0.5.0 cudatoolkit-dev=10.1 
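Optionally, you can sanity-check the environment with a short Python snippet (a minimal sketch; the exact output depends on your machine):

import torch
import torchvision

print(torch.__version__)          # expected: 1.4.0
print(torchvision.__version__)    # expected: 0.5.0
print(torch.cuda.is_available())  # should be True if CUDA 10.1 is set up correctly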

Then, install other dependencies with pip.

pip install -r requirements.txt

DeepFashion2API

cd deepfashion2_api/PythonAPI
pip install -e .

Main code

Our code is based on mmdetection, a clean, open-source project for benchmarking object detection methods.

cd ../../mmdetection
python setup.py develop
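Optionally, verify that the package is importable from the environment (a minimal check; the printed value depends on the bundled mmdetection):

import mmdet  # should succeed once `python setup.py develop` has run
print(mmdet.__version__)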

Now that the repo is ready, let's go back to the root directory.

cd ..

Data Preparation

DeepFashion2

If you need to run experiments on the entire DeepFashion2 dataset, please refer to DeepFashion2 for detailed guidance. Otherwise, you can skip to the Demo dataset subsection.

After downloading and unpacking the dataset, please create a soft link inside the code repository that points to the dataset's root directory.

ln -s <root dir of DeepFashion2> data/deepfashion2

Demo dataset

We provide a subset (32 images) of DeepFashion2 to enable quick experiments.

Checkpoints

The checkpoints can be fetched from this OneDrive link.

Experiments

Demo

Test with 1 GPU

./mmdetection/tools/dist_test.sh configs/kgdet_moment_r50_fpn_1x-demo.py checkpoints/KGDet_epoch-12.pth 1 --json_out work_dirs/demo_KGDet.json --eval bbox keypoints
  • Result files will be stored as work_dirs/demo_KGDet.json (a sketch for inspecting them follows this list).
  • If you only need the prediction results, you can drop --eval and its arguments.
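The output follows the COCO-style results convention, so it can be inspected with a few lines of Python. The snippet below is a minimal sketch under that assumption; depending on the mmdetection version, the predictions may be split into per-task files such as work_dirs/demo_KGDet.bbox.json, so adjust the path to whatever actually appears in work_dirs.

import json

# Load the detection results (COCO-style entries: image_id, category_id, bbox, score).
with open("work_dirs/demo_KGDet.bbox.json") as f:  # adjust to the actual file name
    results = json.load(f)

# Print the five highest-scoring detections.
for det in sorted(results, key=lambda d: d["score"], reverse=True)[:5]:
    x, y, w, h = det["bbox"]
    print(det["image_id"], det["category_id"], round(det["score"], 3), (x, y, w, h))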

DeepFashion2

Train with 4 GPUs

./mmdetection/tools/dist_train.sh configs/kgdet_moment_r50_fpn_1x-deepfashion2.py 4 --validate --work_dir work_dirs/TRAIN_KGDet
  • The running log and checkpoints will be stored in the work_dirs/TRAIN_KGDet directory, as specified by the --work_dir argument.
  • --validate invokes a validation stage after each training epoch.

Test with 4 GPUs

./mmdetection/tools/dist_test.sh configs/kgdet_moment_r50_fpn_1x-deepfashion2.py checkpoints/KGDet_epoch-12.pth 4 --json_out work_dirs/result_KGDet.json --eval bbox keypoints
  • Result files will be stored as work_dirs/result_KGDet.json.

Customization

If you would like to run our model on your own data, you can imitate the structure of demo_dataset (an image directory plus a JSON annotation file) and adjust the arguments in the configuration file.
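For reference, a custom annotation file can follow the COCO keypoint convention used by the demo dataset. The sketch below is only an illustration with hypothetical file names and a placeholder category; the exact fields your configuration expects should be checked against the JSON shipped with demo_dataset.

import json

# Minimal COCO-style annotation skeleton for custom data (placeholder values).
annotations = {
    "images": [
        {"id": 1, "file_name": "my_image_001.jpg", "width": 800, "height": 600},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [100, 150, 200, 300],  # [x, y, width, height]
            "area": 200 * 300,
            "iscrowd": 0,
            "keypoints": [120, 160, 2],    # (x, y, visibility) triplets
            "num_keypoints": 1,
        },
    ],
    "categories": [
        {"id": 1, "name": "my_category", "keypoints": ["kp1"]},
    ],
}

with open("data/my_dataset/annotations.json", "w") as f:
    json.dump(annotations, f)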

Acknowledgment

This repo is built upon RepPoints and mmdetection.

@inproceedings{qian2021kgdet,
  title={KGDet: Keypoint-Guided Fashion Detection},
  author={Qian, Shenhan and Lian, Dongze and Zhao, Binqiang and Liu, Tong and Zhu, Bohui and Li, Hai and Gao, Shenghua},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={3},
  pages={2449--2457},
  year={2021}
}

Issues

error

When I run ./mmdetection/tools/dist_test.sh configs/kgdet_moment_r50_fpn_1x-demo.py checkpoints/KGDet_epoch-12.pth 1 --json_out work_dirs/demo_KGDet.json --eval bbox keypoints

the following error appears:
bash: ./mmdetection/tools/dist_test.sh: /usr/bin/env: bad interpreter: Permission denied

What could be the cause of this?
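A possible cause, offered as an assumption rather than a confirmed answer: this error usually means the shell cannot execute the script, for example because it lost its executable bit or sits on a filesystem mounted without execute permission (such as a mounted Google Drive). Making the script executable with chmod +x mmdetection/tools/dist_test.sh, or invoking it explicitly through bash, typically works around it.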

The keypoint output file generated after testing has a v value of 1 for all 294 keypoints

Following your steps exactly, why is the visibility value of every keypoint in the generated JSON file 1.0? Could all the keypoints be appearing in the visualization because I haven't set something, or is there a problem with the prediction model? The test itself runs without problems, but it is strange that the output file contains 294 keypoints whose v values are all 1.

How do I prepare test data?

Sorry to bother you, but why does the JSON file for the test data also contain keypoints, segmentation, and other fields? If I want to test on my own data, how exactly should I create the JSON file? Is there any reference material I can consult?

Thanks.

How to visualise the bbox?

I was going through the visualization script in the repo but I am not able to understand how exactly I should visualize the bounding boxes.
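A hedged sketch, not the repo's own visualizer: assuming the result file follows the COCO-style detection format, boxes can be drawn with OpenCV. The paths, score threshold, and image_id below are placeholders.

import json
import cv2

with open("work_dirs/demo_KGDet.bbox.json") as f:   # adjust to the actual result file
    results = json.load(f)

img = cv2.imread("demo_dataset/image/000001.jpg")   # hypothetical image path
for det in results:
    if det["image_id"] != 1 or det["score"] < 0.3:  # keep confident boxes for one image
        continue
    x, y, w, h = map(int, det["bbox"])
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("vis_000001.jpg", img)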

Error occurs when running the Test with 1 GPU demo.

I followed all the instructions to install this repo, but when I run the Test with 1 GPU demo, the following error occurs.

Traceback (most recent call last):
  File "./mmdetection/tools/test.py", line 13, in <module>
    from mmdet.apis import init_dist
  File "/home/revolveai/projects/KGDet/mmdetection/mmdet/apis/__init__.py", line 2, in <module>
    from .inference import (inference_detector, init_detector, show_result,
  File "/home/revolveai/projects/KGDet/mmdetection/mmdet/apis/inference.py", line 10, in <module>
    from mmdet.core import get_classes
  File "/home/revolveai/projects/KGDet/mmdetection/mmdet/core/__init__.py", line 3, in <module>
    from .evaluation import *  # noqa: F401, F403
  File "/home/revolveai/projects/KGDet/mmdetection/mmdet/core/evaluation/__init__.py", line 5, in <module>
    from .eval_hooks import (CocoDistEvalmAPHook, CocoDistEvalRecallHook,
  File "/home/revolveai/projects/KGDet/mmdetection/mmdet/core/evaluation/eval_hooks.py", line 13, in <module>
    from mmdet import datasets
  File "/home/revolveai/projects/KGDet/mmdetection/mmdet/datasets/__init__.py", line 7, in <module>
    from .loader import DistributedGroupSampler, GroupSampler, build_dataloader
  File "/home/revolveai/projects/KGDet/mmdetection/mmdet/datasets/loader/__init__.py", line 1, in <module>
    from .build_loader import build_dataloader
  File "/home/revolveai/projects/KGDet/mmdetection/mmdet/datasets/loader/build_loader.py", line 8, in <module>
    from .sampler import DistributedGroupSampler, DistributedSampler, GroupSampler
  File "/home/revolveai/projects/KGDet/mmdetection/mmdet/datasets/loader/sampler.py", line 6, in <module>
    from mmcv.runner.utils import get_dist_info
ImportError: cannot import name 'get_dist_info'

Traceback (most recent call last):
  File "/home/revolveai/miniconda3/envs/pft_test/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/revolveai/miniconda3/envs/pft_test/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/revolveai/miniconda3/envs/pft_test/lib/python3.6/site-packages/torch/distributed/launch.py", line 263, in <module>
    main()
  File "/home/revolveai/miniconda3/envs/pft_test/lib/python3.6/site-packages/torch/distributed/launch.py", line 259, in main
    cmd=cmd)
subprocess.CalledProcessError: Command '['/home/revolveai/miniconda3/envs/pft_test/bin/python', '-u', './mmdetection/tools/test.py', '--local_rank=0', 'configs/kgdet_moment_r50_fpn_1x-demo.py', 'checkpoints/KGDet_epoch-12.pth', '--launcher', 'pytorch', '--json_out', 'work_dirs/demo_KGDet.json', '--eval', 'bbox', 'keypoints']' returned non-zero exit status 1.

The demo experiment cannot be run successfully

Hello.

Thank you for the great work.

I used your checkpoints to test the DeepFashion2 images, but I encountered some problems. I tried the following modifications:

  1. The file reppoints_detector_kp_gt.py is missing, so I commented out the import of RepPointsDetectorKpGT. I'm not sure whether this affects the model accuracy.
  2. In the file https://github.com/ShenhanQian/KGDet/blob/master/configs/kgdet_moment_r50_fpn_1x-deepfashion2.py, the neck type is FPN2, but there is no FPN2 module. I changed the FPN2 type to FPN and commented out select_out=[2].

Finally, I can run the demo. However, the output is not correct. Can you give some advice so that I can run the demo experiment successfully? Thanks.

Support for higher CUDA versions

Thank you for sharing such an amazing repo. I'm testing it, but I'm having trouble installing the dependencies because both Colab and my own laptop have CUDA 11.2 or higher, so I cannot install PyTorch and mmcv as described in your README. Do you have plans to support this?

Question about the dataset for training

Thank you for the great research. I have a question about the keypoints field in the dataset. Why is the number of keypoint values as large as 882, when the maximum number of keypoints in DeepFashion2 is 38?
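A plausible explanation, offered as an assumption based on the DeepFashion2 annotation format rather than an official answer: the annotation concatenates the keypoint slots of all 13 clothing categories, 294 keypoints in total, and stores each keypoint as an (x, y, visibility) triplet, so every instance carries 294 × 3 = 882 values even though only its own category's slots are populated.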

error: unrecognized arguments: --json_out work_dirs/demo_KGDet.json

So I am trying to run this notebook on Colab, and I tried to run the demo line on the test images.

  • I tried this:

    # single-gpu testing
    !python tools/test.py /content/drive/MyDrive/KGDet/KGDet/configs/kgdet_moment_r50_fpn_1x-demo.py /content/drive/MyDrive/KGDet/KGDet/checkpoints/KGDet_epoch-12.pth --json_out work_dirs/demo_KGDet.json --eval bbox keypoints

and got the error test.py: error: unrecognized arguments: --json_out work_dirs/demo_KGDet.json

If I remove json from --json_out, then it raises a pkl file error.

  • Alternatively, I tried ! ./tools/dist_test.sh /content/drive/MyDrive/KGDet/KGDet/configs/kgdet_moment_r50_fpn_1x-demo.py /content/drive/MyDrive/KGDet/KGDet/checkpoints/KGDet_epoch-12.pth 1 --json_out work_dirs/demo_KGDet.json --eval bbox keypoints

and got /bin/bash: ./tools/dist_test.sh: /usr/bin/env: bad interpreter: Permission denied

Is there a tutorial for running this on Colab?

I also tried following mmdetection's Colab notebook tutorial and tried this:

from mmdet.apis import inference_detector, init_detector, show_result_pyplot
config = '/content/drive/MyDrive/KGDet/KGDet/configs/kgdet_moment_r50_fpn_1x-demo.py'
checkpoint = '/content/drive/MyDrive/KGDet/KGDet/checkpoints/KGDet_epoch-12.pth'
model = init_detector(config, checkpoint, device='cuda:0')

but ended up getting KeyError: 'RepPointsDetectorKp is not in the models registry'
