
fashionformer's People

Contributors

lxtgh



fashionformer's Issues

About project license

Hello folks, that's a really nice project! I would like to use it in one of my projects, but I saw that this repository doesn't have a license file. Do you intend to add an open-source license to this project, or should we assume that the code published here can't be used elsewhere?

Thank you 😄

Documentation of the result

How does the mapping between fashion classes and fashion attributes work?
I see the bounding boxes belonging to the fashion class predictions,
and I see 100 attribute arrays.
However, the attributes do not correspond to the fashion classes when mapped over the bounding boxes.

Output of the model is not clear

The raw model output structure is not clear.

Let's say I run this code:

from mmdet.apis import (async_inference_detector, inference_detector,
                        init_detector, show_result_pyplot)

config="configs/fashionformer/fashionpedia/fashionformer_swin_b_mlvl_feat_6x.py"
checkpoint="fashionformer_swin_b_3x.pth"

# build the model from a config file and a checkpoint file
model = init_detector(config, checkpoint, device="cuda")
# test a single image
result = inference_detector(model, "test.jpg")

I am trying to plot the segmentation, classes, and attributes myself. However, the result does not make sense to me.

result is a list of length 3, why?

In : len(result)                                                                                                                                                   
Out: 3

All 3 of them are of length 46, I assume because there are 46 classes?

In : for res in result: 
...:     print(len(res))                                                                                                                                                          
46
46
46

Now, looking at each element of the result:
result[0] is a list of N×5 matrices. Why 5?

    In : for r in result[0]: 
    ...:     print(r.shape) 
    ...:                                                                                                                                                               
(3, 5)
(9, 5)
(2, 5)
...
(0, 5)
(0, 5)
(0, 5)

result[1] and result[2] are lists with the same per-class lengths:

    In : for r in result[1]:   # likewise for result[2]
    ...:     print(len(r))
3
9
2
...
0
0

Each sub-element of result[1] is a (1500, 1500) matrix, while each sub-element of result[2] is a vector of shape (294,).
What are these?
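
For reference, here is a minimal sketch of how one might unpack this structure, under the assumption that result is the usual mmdet-style tuple (bbox_results, mask_results, attr_results) of per-class lists: each (N, 5) row would be [x1, y1, x2, y2, score], each mask a full-resolution binary mask, and each (294,) vector a score per Fashionpedia attribute. The threshold and variable names are illustrative, not the repo's documented API.

import numpy as np

# Assumed unpacking: per-class boxes, masks, and attribute-score vectors.
bbox_results, mask_results, attr_results = result

for cls_id, cls_name in enumerate(model.CLASSES):            # 46 Fashionpedia categories
    bboxes = bbox_results[cls_id]                             # (N, 5): x1, y1, x2, y2, score
    masks = mask_results[cls_id]                              # N binary masks at image resolution
    attrs = attr_results[cls_id]                              # N attribute-score vectors, e.g. (294,)
    for bbox, mask, attr in zip(bboxes, masks, attrs):
        if bbox[4] < 0.3:                                     # hypothetical confidence threshold
            continue
        top_attrs = np.argsort(np.asarray(attr))[::-1][:5]    # indices of the top-5 attribute scores
        print(cls_name, bbox[:4], top_attrs.tolist())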

Hello, I ran the demo code, but all the bounding-box coordinates are zero

Hello, I ran the demo code, but all the bounding-box coordinates are zero, like this:
[array([[0. , 0. , 0. , 0. , 0.79686284]],
dtype=float32),
array([[0. , 0. , 0. , 0. , 0.05989303]],
dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([[0. , 0. , 0. , 0. , 0.0524485]],
dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([[0. , 0. , 0. , 0. , 0.02910043],
[0. , 0. , 0. , 0. , 0.02114534],
[0. , 0. , 0. , 0. , 0.01676908],
[0. , 0. , 0. , 0. , 0.01669254]],
dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([[0. , 0. , 0. , 0. , 0.02301841],
[0. , 0. , 0. , 0. , 0.01924651]],
dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([[0. , 0. , 0. , 0. , 0.03298811]],
dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([[0. , 0. , 0. , 0. , 0.01731287]],
dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([[0. , 0. , 0. , 0. , 0.89752316]],
dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([[0. , 0. , 0. , 0. , 0.06933805],
[0. , 0. , 0. , 0. , 0.05986979],
[0. , 0. , 0. , 0. , 0.02281323],
[0. , 0. , 0. , 0. , 0.01794562]],
dtype=float32),
array([[0. , 0. , 0. , 0. , 0.90336406],
[0. , 0. , 0. , 0. , 0.5355902 ],
[0. , 0. , 0. , 0. , 0.33188647],
[0. , 0. , 0. , 0. , 0.03402795],
[0. , 0. , 0. , 0. , 0.02057427],
[0. , 0. , 0. , 0. , 0.01929876]],
dtype=float32),
array([[0. , 0. , 0. , 0. , 0.77538824],
[0. , 0. , 0. , 0. , 0.7699898 ],
[0. , 0. , 0. , 0. , 0.04109826],
[0. , 0. , 0. , 0. , 0.03557413],
[0. , 0. , 0. , 0. , 0.03185431],
[0. , 0. , 0. , 0. , 0.03024532],
[0. , 0. , 0. , 0. , 0.02898962],
[0. , 0. , 0. , 0. , 0.02549006],
[0. , 0. , 0. , 0. , 0.02261182],
[0. , 0. , 0. , 0. , 0.02258267],
[0. , 0. , 0. , 0. , 0.02063485],
[0. , 0. , 0. , 0. , 0.02014134],
[0. , 0. , 0. , 0. , 0.01998736],
[0. , 0. , 0. , 0. , 0.01909333],
[0. , 0. , 0. , 0. , 0.01787589],
[0. , 0. , 0. , 0. , 0.01767974],
[0. , 0. , 0. , 0. , 0.01759748],
[0. , 0. , 0. , 0. , 0.01683186]],
dtype=float32),
array([[0. , 0. , 0. , 0. , 0.02470406],
[0. , 0. , 0. , 0. , 0.0189336 ],
[0. , 0. , 0. , 0. , 0.01676438]],
dtype=float32),
array([], shape=(0, 5), dtype=float32),
array([[0. , 0. , 0. , 0. , 0.03116137]],
dtype=float32),
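
If only the score column is populated while the coordinates stay at zero, one possible workaround is to recover boxes from the predicted masks. A sketch, assuming the masks in result[1] are valid and aligned row-for-row with the bbox arrays in result[0]:

import numpy as np

bbox_results, mask_results = result[0], result[1]
recovered = []                                           # (class_id, x1, y1, x2, y2, score)
for cls_id, masks in enumerate(mask_results):
    for i, mask in enumerate(masks):
        ys, xs = np.where(np.asarray(mask))              # pixels covered by the mask
        if xs.size == 0:                                 # empty mask, nothing to recover
            continue
        score = float(bbox_results[cls_id][i, 4])
        recovered.append((cls_id, int(xs.min()), int(ys.min()),
                          int(xs.max()), int(ys.max()), score))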

Model training duration

Hi, thanks for sharing your code! I spent more than 3 days training FashionFormer with the Swin-B backbone. The training config is configs/fashionformer/fashionpedia/fashionformer_swin_b_mlvl_feat_6x.py, and I used 8 V100 GPUs to train on Fashionpedia for 12 epochs. Is this training time normal? I also wonder how long the above configuration takes with the 3x schedule. Thanks a lot!

questions about Attribute Recognition

Hi, sorry to bother you. Your work is really magnificent. I have some questions about the attribute recognition part: is there any code in this project that outputs the attribute recognition results for each image? If not, could you explain how to get the attribute recognition output? I'm stuck on clothing attribute recognition and running out of time, and your project is the only chance for me now. This is really important to me, and if it is solved I can pay you back with money if you want. Thank you!!!
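
Not an official answer, but a sketch of how per-detection attributes could be read from the inference result, assuming result[2] holds per-class lists of attribute-score vectors aligned with the detections in result[0]; both thresholds below are assumptions:

import numpy as np

bbox_results, _, attr_results = result
ATTR_THRESHOLD = 0.5                                     # hypothetical cutoff on attribute scores

for cls_id, attrs in enumerate(attr_results):
    for det_id, attr_scores in enumerate(attrs):
        if bbox_results[cls_id][det_id, 4] < 0.3:        # hypothetical detection-score cutoff
            continue
        attr_ids = np.where(np.asarray(attr_scores) > ATTR_THRESHOLD)[0]
        print(f"class {cls_id}, detection {det_id}: attribute ids {attr_ids.tolist()}")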

About the data in ModaNet

Hi,

I find ModaNet hard to download. Is there an easy way to get the ModaNet dataset? Or could you provide a zip file of ModaNet?

Best regards,

TANG, shixiang

Run on custom data

Thanks for your excellent work. Could you please provide some instructions on how to run inference on custom inputs?
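
In the meantime, a minimal sketch of custom-image inference through mmdet's high-level API, reusing the Fashionpedia Swin-B config and checkpoint names quoted elsewhere in this tracker (treat the exact paths as assumptions):

from mmdet.apis import inference_detector, init_detector, show_result_pyplot

config = "configs/fashionformer/fashionpedia/fashionformer_swin_b_mlvl_feat_6x.py"
checkpoint = "fashionformer_swin_b_3x.pth"

model = init_detector(config, checkpoint, device="cuda:0")
result = inference_detector(model, "my_image.jpg")                # any custom image path
show_result_pyplot(model, "my_image.jpg", result, score_thr=0.3)  # visualize the detections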

KeyError: 'FashionFormer is not in the models registry'

I used your trained model with the config file 'fashionformer_r50_modanet.py' to run the demo on my own image, but I get the error below. How can I fix this issue?

(open-mmlab) /mnt/workspace/lyc/FashionFormer-main> PYTHONPATH='.' python demo/image_demo.py /mnt/workspace/lyc/mmdet/demo/demo1.jpg /mnt/workspace/lyc/FashionFormer-main/configs/fashionformer/modanet/fashionformer_r50_modanet.py /mnt/workspace/lyc/FashionFormer-main/fashionformer_r50_3x.pth
/home/pai/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/misc.py:77: UserWarning: projects.KFashion failed to import and is ignored. UserWarning)
/home/pai/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/misc.py:77: UserWarning: datasets failed to import and is ignored. UserWarning)
Traceback (most recent call last):
  File "demo/image_demo.py", line 68, in <module>
    main(args)
  File "demo/image_demo.py", line 34, in main
    model = init_detector(args.config, args.checkpoint, device=args.device)
  File "/home/pai/envs/open-mmlab/lib/python3.7/site-packages/mmdet/apis/inference.py", line 44, in init_detector
    model = build_detector(config.model, test_cfg=config.get('test_cfg'))
  File "/home/pai/envs/open-mmlab/lib/python3.7/site-packages/mmdet/models/builder.py", line 59, in build_detector
    cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg))
  File "/home/pai/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 237, in build
    return self.build_func(*args, **kwargs, registry=self)
  File "/home/pai/envs/open-mmlab/lib/python3.7/site-packages/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/home/pai/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 62, in build_from_cfg
    f'{obj_type} is not in the {registry.name} registry')
KeyError: 'FashionFormer is not in the models registry'
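
The two UserWarnings at the top are a likely cause: the repo's custom packages, which register the FashionFormer class with mmdet's models registry, failed to import and were silently skipped. A sketch of the mmcv mechanism involved; the module names are taken from the warnings above and may differ from the actual config:

# The config's custom_imports block must name the packages that register
# FashionFormer, and those packages must be importable from the working
# directory (hence running from the repo root with PYTHONPATH='.').
custom_imports = dict(
    imports=['projects.KFashion', 'datasets'],   # names taken from the warnings; verify against the config
    allow_failed_imports=False,                  # fail loudly instead of a silent UserWarning
)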

Can't tell whether the checkpoint and config combination is right

Hello, can you tell me the right configuration for the model checkpoints?

I'm trying it like this, but with a custom image I get an empty result image:

model = init_detector(
    config='configs/fashionformer/modanet/mask_rcnn_swin_b_modanet.py',
    checkpoint='/data/GarmentQC-dataset/fashionformer_r50_3x.pth',
    device='cpu',
)

mmdet==2.18.0
mmcv==1.3.18
mmcv-full==1.3.8 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
python 3.7

Also, is mmdet==2.18.0 the correct version? Looking at demo/image_demo.py, the out_file and palette parameters of show_result_pyplot() only exist starting from mmdet 2.24.0, so I'm wondering how to replicate your test: if I install mmdet==2.18.0 I get TypeError: show_result_pyplot() got an unexpected keyword argument 'palette', and if I upgrade mmdet to 2.24.0 I get different errors ;)
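
For what it's worth, the snippet above pairs a Swin-B Mask R-CNN config with weights whose filename suggests an R50 FashionFormer checkpoint. A sketch of a pairing where config and checkpoint describe the same architecture; the file names come from this tracker, so treat the exact pairing as an assumption:

from mmdet.apis import init_detector

# R50 FashionFormer ModaNet config with the R50 FashionFormer weights
model = init_detector(
    config='configs/fashionformer/modanet/fashionformer_r50_modanet.py',
    checkpoint='/data/GarmentQC-dataset/fashionformer_r50_3x.pth',
    device='cpu',
)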

Colab or hugging face version?

Thanks for your great work. Is there a Colab or Hugging Face version available for us to test our images? I'm really into your work, but I don't have a PC or GPU server that can run this code.

segmentation and label results of my own data

I'm trying to separate the clothes from the body, so I want to get the raw segmentation masks (without overlaying them on the original image) and the corresponding labels for my own data. I tried to modify the relevant function in image_demo.py, but it failed.
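
A possible sketch, assuming result unpacks as (bbox_results, mask_results, attr_results) with per-class lists: write each binary mask to its own image file together with its class label, with no overlay. The score threshold and file naming are illustrative:

import numpy as np
from PIL import Image

bbox_results, mask_results = result[0], result[1]
for cls_id, (bboxes, masks) in enumerate(zip(bbox_results, mask_results)):
    label = model.CLASSES[cls_id]
    for i, mask in enumerate(masks):
        if bboxes[i, 4] < 0.3:                                   # hypothetical score threshold
            continue
        mask_img = Image.fromarray(np.asarray(mask, dtype=np.uint8) * 255)
        mask_img.save(f"{label}_{i}.png")                        # one binary mask per detection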

train on deepfashion & fashionpedia

Dear author, thanks for your contribution; this work really inspires me a lot. To extend it, I want to first train the model on the DeepFashion and Fashionpedia datasets to check its efficiency, but I don't know how. Could you release more instructions on this? Thanks a lot.
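
Not official instructions, but a sketch under the assumption that the repo follows the standard mmdetection 2.x layout, where training is launched through tools/train.py (single GPU) or tools/dist_train.sh (multi-GPU), with the Fashionpedia config quoted elsewhere in this tracker:

python tools/train.py configs/fashionformer/fashionpedia/fashionformer_swin_b_mlvl_feat_6x.py
bash tools/dist_train.sh configs/fashionformer/fashionpedia/fashionformer_swin_b_mlvl_feat_6x.py 8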

Can't run demo code

Hello, when I run the demo code there were some errors.

python demo/image_demo.py ../dataset/test/03fc99c4deaf14724ef6277dec16b8e8.jpg work_dirs/attribute_mask_rcnn_swin_b_3x/attribute_mask_rcnn_swin_b_3x.py work_dirs/attribute_mask_rcnn_swin_b_3x/latest.pth

How can I solve this error?

Traceback (most recent call last):
  File "/home/kshan/anaconda3/envs/fashionformer/lib/python3.8/site-packages/mmcv/utils/misc.py", line 73, in import_modules_from_strings
    imported_tmp = import_module(imp)
  File "/home/kshan/anaconda3/envs/fashionformer/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'datasets'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "demo/image_demo.py", line 69, in <module>
    main(args)
  File "demo/image_demo.py", line 35, in main
    model = init_detector(args.config, args.checkpoint, device=args.device)
  File "/home/kshan/taehyung/mmdetection/mmdet/apis/inference.py", line 33, in init_detector
    config = mmcv.Config.fromfile(config)
  File "/home/kshan/anaconda3/envs/fashionformer/lib/python3.8/site-packages/mmcv/utils/config.py", line 343, in fromfile
    import_modules_from_strings(**cfg_dict['custom_imports'])
  File "/home/kshan/anaconda3/envs/fashionformer/lib/python3.8/site-packages/mmcv/utils/misc.py", line 80, in import_modules_from_strings
    raise ImportError
ImportError
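
The first traceback shows that the config's custom_imports block tries to import a local datasets package that Python cannot find. A likely workaround, assuming the package lives at the repository root as in the registry-error issue above, is to run the demo from the repo root with the root on PYTHONPATH:

PYTHONPATH='.' python demo/image_demo.py ../dataset/test/03fc99c4deaf14724ef6277dec16b8e8.jpg work_dirs/attribute_mask_rcnn_swin_b_3x/attribute_mask_rcnn_swin_b_3x.py work_dirs/attribute_mask_rcnn_swin_b_3x/latest.pth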
