fast_human_pose_estimation_pytorch's People

Contributors

dependabot[bot], yli150, yuanyuanli85


fast_human_pose_estimation_pytorch's Issues

Training on MPII: accuracy is very low

I ran "python example/mpii.py -a hg --stacks 8 --blocks 1 --checkpoint checkpoint/hg_s8_b1/" from your code. Compared with your log.txt, the loss decreases the same way as yours, but my accuracy is much too low. Is "python example/mpii.py -a hg --stacks 8 --blocks 1 --checkpoint checkpoint/hg_s8_b1/" the complete command?
(training log attached)

mobile=false

When I try to set --mobile=false, it doesn't work.
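A likely cause (an assumption, since the repo's parser code is not quoted here): if the flag is declared with argparse's `type=bool`, then any non-empty string, including "false", converts to True, so `--mobile=false` still enables mobile mode. A minimal sketch of the pitfall and a common workaround:

```python
import argparse

# Pitfall: type=bool does NOT parse the string "false" as False.
parser = argparse.ArgumentParser()
parser.add_argument("--mobile", type=bool, default=True)
args = parser.parse_args(["--mobile=false"])
print(args.mobile)  # True -- bool("false") is True for any non-empty string

# Workaround: convert the string explicitly.
def str2bool(v):
    return str(v).lower() in ("yes", "true", "t", "1")

parser2 = argparse.ArgumentParser()
parser2.add_argument("--mobile", type=str2bool, default=True)
args2 = parser2.parse_args(["--mobile=false"])
print(args2.mobile)  # False
```

With `str2bool`, both `--mobile=false` and `--mobile=0` disable the flag as a user would expect.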

Unable to export to ONNX

When I try to export as described in the README, it gives an error with the model from Google Drive:

python tools/mpii_export_to_onxx.py -a hg -s 2 -b 1 --num-classes 16 --mobile True --in_res 256  --checkpoint checkpoint/mpii_hg_s2_b1_mobile_fpd/model_best.pth.tar --out_onnx checkpoint/mpii_hg_s2_b1_mobile_fpd/model_best.onnx
==> creating model 'hg', stacks=2, blocks=1
=> loading checkpoint 'checkpoint/mpii_hg_s2_b1_mobile_fpd/model_best.pth.tar'
=> loaded checkpoint 'checkpoint/mpii_hg_s2_b1_mobile_fpd/model_best.pth.tar' (epoch 90)
Traceback (most recent call last):
  File "tools/mpii_export_to_onxx.py", line 72, in <module>
    main(parser.parse_args())
  File "tools/mpii_export_to_onxx.py", line 48, in main
    torch.onnx.export(model, dummy_input, args.out_onnx)
  File "/home/ubuntu/anaconda2/envs/fast-human/lib/python2.7/site-packages/torch/onnx/__init__.py", line 25, in export
    return utils.export(*args, **kwargs)
  File "/home/ubuntu/anaconda2/envs/fast-human/lib/python2.7/site-packages/torch/onnx/utils.py", line 131, in export
    strip_doc_string=strip_doc_string)
  File "/home/ubuntu/anaconda2/envs/fast-human/lib/python2.7/site-packages/torch/onnx/utils.py", line 363, in _export
    _retain_param_name, do_constant_folding)
  File "/home/ubuntu/anaconda2/envs/fast-human/lib/python2.7/site-packages/torch/onnx/utils.py", line 278, in _model_to_graph
    _disable_torch_constant_prop=_disable_torch_constant_prop)
  File "/home/ubuntu/anaconda2/envs/fast-human/lib/python2.7/site-packages/torch/onnx/utils.py", line 188, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "/home/ubuntu/anaconda2/envs/fast-human/lib/python2.7/site-packages/torch/onnx/__init__.py", line 50, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "/home/ubuntu/anaconda2/envs/fast-human/lib/python2.7/site-packages/torch/onnx/utils.py", line 589, in _run_symbolic_function
    return fn(g, *inputs, **attrs)
  File "/home/ubuntu/anaconda2/envs/fast-human/lib/python2.7/site-packages/torch/onnx/symbolic.py", line 130, in wrapper
    args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]
  File "/home/ubuntu/anaconda2/envs/fast-human/lib/python2.7/site-packages/torch/onnx/symbolic.py", line 90, in _parse_arg
    raise RuntimeError("Failed to export an ONNX attribute, "
RuntimeError: Failed to export an ONNX attribute, since it's not constant, please try to make things (e.g., kernel size) static if possible

Checking the demo result on the image sample.jpg

Sorry to bother you. I ran your demo and got an awful result, which confuses me. Did you get the same output as the image I attached below, or did I do something wrong? Looking forward to your reply.
[attached image: sample_with_hg]

Leeds JSON file?

Hi, I was hoping you could shed some light on where I can find the JSON annotation file for the Leeds dataset; I can only find the MPII one.

Thanks,

Training other student models (s4b2/s4b1) collapses after just a few epochs

Hi, I am reproducing your work and tried changing the target student model, e.g. to stacks=4, blocks=2 or stacks=4, blocks=1, manually setting num_features and inplanes:

python example/mpii_kd.py -a hg --stacks 4 --blocks 2 --features 32 --inplanes 8 --checkpoint checkpoint/hg_s2b1_f64in8_diqizeng_mobile_fpd --mobile True --teacher_stack 8 --teacher_checkpoint checkpoint/mpii_hg_s8_b1/model_best.pth.tar

This s4b2 model's KD training drops in validation accuracy at the 8th epoch, from 56% to 17%.

And with:
python example/mpii_kd.py -a hg --stacks 4 --blocks 1 --features 128 --inplanes 32 --checkpoint checkpoint/hg_s2b1_f64in8_diqizeng_mobile_fpd --mobile True --teacher_stack 8 --teacher_checkpoint checkpoint/mpii_hg_s8_b1/model_best.pth.tar

this command causes validation accuracy to drop at the 4th epoch, from 42% to 1.6%.

But with:
python example/mpii_kd.py -a hg --stacks 2 --blocks 1 --features 64 --inplanes 8 --checkpoint checkpoint/hg_s2b1_f64in8_diqizeng_mobile_fpd --mobile True --teacher_stack 8 --teacher_checkpoint checkpoint/mpii_hg_s8_b1/model_best.pth.tar

training looks normal.

Does the model structure affect this a lot? Do num_features and inplanes need to match a suitable stacks/blocks configuration?

Loss function

Why do you define two loss functions?
total_loss = loss_labeled + unkdloss_alpha * kdloss_unlabeled
total_loss = kdloss_alpha * tsloss + (1 - kdloss_alpha)*gtloss
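For context, a minimal sketch of how these two combinations typically fit together in pose distillation (variable names follow the two lines above; the exact semantics are assumed, not verified against the repo): the second line blends a teacher-student mimicry loss with a ground-truth loss on labeled images, while the first adds a KD term for unlabeled images, where the teacher output is the only supervision available.

```python
import torch
import torch.nn.functional as F

def kd_pose_loss(student_out, teacher_out, target=None,
                 kdloss_alpha=0.5, unkdloss_alpha=1.0):
    """Sketch of the two quoted loss combinations (assumed semantics).

    Labeled images: blend ground-truth MSE with teacher-mimic MSE.
    Unlabeled images: only the teacher provides a target, so the
    KD term is used alone, weighted separately.
    """
    tsloss = F.mse_loss(student_out, teacher_out)   # teacher-student loss
    if target is not None:
        gtloss = F.mse_loss(student_out, target)    # ground-truth loss
        return kdloss_alpha * tsloss + (1 - kdloss_alpha) * gtloss
    return unkdloss_alpha * tsloss

# Heatmap-shaped tensors: (batch, joints, height, width)
s = torch.randn(2, 16, 64, 64)
t = torch.randn(2, 16, 64, 64)
y = torch.randn(2, 16, 64, 64)
labeled_loss = kd_pose_loss(s, t, y)    # mix of GT and KD terms
unlabeled_loss = kd_pose_loss(s, t)     # KD term only
```

Under this reading, the two lines are not competing definitions: one applies per labeled batch, the other per unlabeled batch, and the totals are summed.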

Student model overfits very early in training

When I trained the student model under the supervision of the teacher model downloaded from the link in the README, together with the labelled data, the validation accuracy dropped sharply at the 8th epoch, and the best validation accuracy was only about 60%, far below the paper's result. Why does this happen, and how can I solve it? Would training for more epochs improve the result?

A simple question about the paper

As mentioned in the paper, the teacher network is chosen to be the original Hourglass network (ECCV 2016), and a student network with a customized architecture is then trained under it, yet the results show better performance than Newell et al., ECCV'16 (the teacher network). I can't understand this; could anyone explain?

Unlabeled dataset

Does the unlabeled dataset need an annotation file?
How should the unlabeled dataset be arranged?
