
alphapose's Introduction

News!

  • Nov 2022: AlphaPose paper is released! Check out the paper for more details about this project.
  • Sep 2022: Jittor version of AlphaPose is released! It achieves 1.45x speed up with resnet50 backbone on the training stage.
  • July 2022: v0.6.0 version of AlphaPose is released! HybrIK for 3D pose and shape estimation is supported!
  • Jan 2022: v0.5.0 version of AlphaPose is released! Stronger whole-body (face, hand, foot) keypoints! More models are available. Check out docs/MODEL_ZOO.md
  • Aug 2020: v0.4.0 version of AlphaPose is released! Stronger tracking! Includes whole-body (face, hand, foot) keypoints! Colab is now available.
  • Dec 2019: v0.3.0 version of AlphaPose is released! Smaller model, higher accuracy!
  • Apr 2019: MXNet version of AlphaPose is released! It runs at 23 fps on COCO validation set.
  • Feb 2019: CrowdPose is now integrated into AlphaPose!
  • Dec 2018: General version of PoseFlow is released! 3X faster and supports visualization of pose tracking results!
  • Sep 2018: v0.2.0 version of AlphaPose is released! It runs at 20 fps on COCO validation set (4.6 people per image on average) and achieves 71 mAP!

AlphaPose

AlphaPose is an accurate multi-person pose estimator, which is the first open-source system that achieves 70+ mAP (75 mAP) on the COCO dataset and 80+ mAP (82.1 mAP) on the MPII dataset. To match poses that correspond to the same person across frames, we also provide an efficient online pose tracker called Pose Flow. It is the first open-source online pose tracker that achieves both 60+ mAP (66.5 mAP) and 50+ MOTA (58.3 MOTA) on the PoseTrack Challenge dataset.

AlphaPose supports both Linux and Windows!


COCO 17 keypoints

Halpe 26 keypoints + tracking

SMPL + tracking

Results

Pose Estimation

Results on COCO test-dev 2015:

| Method                 | AP @0.5:0.95 | AP @0.5 | AP @0.75 | AP medium | AP large |
|------------------------|--------------|---------|----------|-----------|----------|
| OpenPose (CMU-Pose)    | 61.8         | 84.9    | 67.5     | 57.1      | 68.2     |
| Detectron (Mask R-CNN) | 67.0         | 88.0    | 73.1     | 62.2      | 75.6     |
| AlphaPose              | 73.3         | 89.2    | 79.1     | 69.0      | 78.6     |

Results on MPII full test set:

| Method              | Head | Shoulder | Elbow | Wrist | Hip  | Knee | Ankle | Ave  |
|---------------------|------|----------|-------|-------|------|------|-------|------|
| OpenPose (CMU-Pose) | 91.2 | 87.6     | 77.7  | 66.8  | 75.4 | 68.9 | 61.7  | 75.6 |
| Newell & Deng       | 92.1 | 89.3     | 78.9  | 69.8  | 76.2 | 71.6 | 64.7  | 77.5 |
| AlphaPose           | 91.3 | 90.5     | 84.0  | 76.4  | 80.3 | 79.9 | 72.4  | 82.1 |

More results and models are available in the docs/MODEL_ZOO.md.

Pose Tracking

Please read trackers/README.md for details.

CrowdPose

Please read docs/CrowdPose.md for details.

Installation

Please check out docs/INSTALL.md

Model Zoo

Please check out docs/MODEL_ZOO.md

Quick Start

  • Colab: We provide a colab example for your quick start.

  • Inference: Inference demo

./scripts/inference.sh ${CONFIG} ${CHECKPOINT} ${VIDEO_NAME} # ${OUTPUT_DIR}, optional

Inference SMPL (Download the SMPL model basicModel_neutral_lbs_10_207_0_v1.0.0.pkl from here and put it in model_files/).

./scripts/inference_3d.sh ./configs/smpl/256x192_adam_lr1e-3-res34_smpl_24_3d_base_2x_mix.yaml ${CHECKPOINT} ${VIDEO_NAME} # ${OUTPUT_DIR}, optional

For the high-level API, please refer to ./scripts/demo_api.py. To enable tracking, please refer to this page.

  • Training: Train from scratch
./scripts/train.sh ${CONFIG} ${EXP_ID}
  • Validation: Validate your model on MSCOCO val2017
./scripts/validate.sh ${CONFIG} ${CHECKPOINT}

Examples:

Demo using the FastPose model.

./scripts/inference.sh configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml pretrained_models/fast_res50_256x192.pth ${VIDEO_NAME}
#or
python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/
#or if you want to use yolox-x as the detector
python scripts/demo_inference.py --detector yolox-x --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/

Train FastPose on the MSCOCO dataset.

./scripts/train.sh ./configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml exp_fastpose

For more detailed inference options and examples, please refer to GETTING_STARTED.md
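For downstream use, the demo writes its predictions as JSON in the output directory; below is a minimal sketch of reading them (the file name alphapose-results.json and the COCO-style field names are assumptions here, so adjust them to whatever your --outdir actually contains):

import json

# Hypothetical output path; the demo writes its results into the directory given by --outdir.
with open("examples/res/alphapose-results.json") as f:
    detections = json.load(f)

for det in detections:
    # "keypoints" is assumed to be a flat [x1, y1, conf1, x2, y2, conf2, ...] list per person.
    kpts = det["keypoints"]
    xs, ys, confs = kpts[0::3], kpts[1::3], kpts[2::3]
    print(det["image_id"], "person score:", det.get("score"), "joints:", len(xs))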

Common issue & FAQ

Check out faq.md for frequently asked questions. If it cannot solve your problem, or if you find any bugs, don't hesitate to comment on GitHub or make a pull request!

Contributors

AlphaPose is based on RMPE (ICCV'17), authored by Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai and Cewu Lu; Cewu Lu is the corresponding author. Currently, it is maintained by Jiefeng Li*, Hao-Shu Fang*, Haoyi Zhu, Yuliang Xiu and Chao Xu.

The main contributors are listed in doc/contributors.md.

TODO

  • Multi-GPU/CPU inference
  • 3D pose
  • add tracking flag
  • PyTorch C++ version
  • Add model trained on mixture dataset (Check the model zoo)
  • dense support
  • small box easy filter
  • Crowdpose support
  • Speed up PoseFlow
  • Add stronger/light detectors (yolox is now supported)
  • High level API (check the scripts/demo_api.py)

We would really appreciate it if you could offer any help and become a contributor to AlphaPose.

Citation

Please cite these papers in your publications if it helps your research:

@article{alphapose,
  author = {Fang, Hao-Shu and Li, Jiefeng and Tang, Hongyang and Xu, Chao and Zhu, Haoyi and Xiu, Yuliang and Li, Yong-Lu and Lu, Cewu},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title = {AlphaPose: Whole-Body Regional Multi-Person Pose Estimation and Tracking in Real-Time},
  year = {2022}
}

@inproceedings{fang2017rmpe,
  title={{RMPE}: Regional Multi-person Pose Estimation},
  author={Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
  booktitle={ICCV},
  year={2017}
}

@inproceedings{li2019crowdpose,
    title={Crowdpose: Efficient crowded scenes pose estimation and a new benchmark},
    author={Li, Jiefeng and Wang, Can and Zhu, Hao and Mao, Yihuan and Fang, Hao-Shu and Lu, Cewu},
    booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
    pages={10863--10872},
    year={2019}
}

If you used the 3D mesh reconstruction module, please also cite:

@inproceedings{li2021hybrik,
    title={Hybrik: A hybrid analytical-neural inverse kinematics solution for 3d human pose and shape estimation},
    author={Li, Jiefeng and Xu, Chao and Chen, Zhicun and Bian, Siyuan and Yang, Lixin and Lu, Cewu},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={3383--3393},
    year={2021}
}

If you used the PoseFlow tracking module, please also cite:

@inproceedings{xiu2018poseflow,
  author = {Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
  title = {{Pose Flow}: Efficient Online Pose Tracking},
  booktitle={BMVC},
  year = {2018}
}

License

AlphaPose is freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please drop an e-mail to mvig.alphapose[at]gmail[dot]com and cc lucewu[at]sjtu[dot]edu[dot]cn. We will send the detailed agreement to you.


alphapose's Issues

How many fps?

How many fps can the whole algorithm achieve when run end-to-end (including the human detector part)?
Can AlphaPose run in real time?

Ask for updated paper

Hi, I noticed this edition of RMPE performs much better than before, improving from the 61% reported in the ICCV paper to 72%.

However, I just checked the paper link and found that it has not been updated yet. So when will the updated paper be released?

Finally, congratulations on your great work.

module 'hdf5' not found:No LuaRocks module found for hdf5

I met such a problem:
50%|█████████████████████████████████████████████████████████████████████████████████████████ | 2/4 [00:03<00:03, 1.75s/it]
Traceback (most recent call last):
File "demo-alpha-pose.py", line 176, in
num_boxes=demo(sess, net, im_name, xminarr,yminarr,xmaxarr,ymaxarr,results,score_file,index_file,num_boxes,inputpath, mode)
File "demo-alpha-pose.py", line 67, in demo
scores, boxes = im_detect(sess, net, im)
File "/home/yinruiying/Desktop/ring/env/AlphaPose/human-detection/tools/../lib/model/test.py", line 91, in im_detect
blobs, im_scales = _get_blobs(im,scale)
File "/home/yinruiying/Desktop/ring/env/AlphaPose/human-detection/tools/../lib/model/test.py", line 63, in _get_blobs
blobs['data'], im_scale_factors = _get_image_blob(im,scale)
File "/home/yinruiying/Desktop/ring/env/AlphaPose/human-detection/tools/../lib/model/test.py", line 35, in _get_image_blob
im_orig = im.astype(np.float32, copy=True)
AttributeError: 'NoneType' object has no attribute 'astype'
pose estimation with RMPE...
/home/yinruiying/torch/install/bin/luajit: /home/yinruiying/torch/install/share/lua/5.1/trepl/init.lua:389: module 'hdf5' not found:No LuaRocks module found for hdf5
no field package.preload['hdf5']
no file '/home/yinruiying/.luarocks/share/lua/5.1/hdf5.lua'
no file '/home/yinruiying/.luarocks/share/lua/5.1/hdf5/init.lua'
no file '/home/yinruiying/torch/install/share/lua/5.1/hdf5.lua'
no file '/home/yinruiying/torch/install/share/lua/5.1/hdf5/init.lua'
no file './hdf5.lua'
no file '/home/yinruiying/torch/install/share/luajit-2.1.0-beta1/hdf5.lua'
no file '/usr/local/share/lua/5.1/hdf5.lua'
no file '/usr/local/share/lua/5.1/hdf5/init.lua'
no file '/home/yinruiying/.luarocks/lib/lua/5.1/hdf5.so'
no file '/home/yinruiying/torch/install/lib/lua/5.1/hdf5.so'
no file '/home/yinruiying/torch/install/lib/hdf5.so'
no file './hdf5.so'
no file '/usr/local/lib/lua/5.1/hdf5.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'error'
/home/yinruiying/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
...e/yinruiying/Desktop/ring/env/AlphaPose/predict/util.lua:7: in main chunk
[C]: in function 'dofile'
main-alpha-pose.lua:7: in main chunk
[C]: in function 'dofile'
...ying/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
/home/yinruiying/Desktop/ring/env/local/lib/python2.7/dist-packages/h5py/init.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "parametric-pose-nms-MPII.py", line 256, in
get_result_json(args)
File "parametric-pose-nms-MPII.py", line 243, in get_result_json
test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
File "/home/yinruiying/Desktop/ring/env/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 269, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/yinruiying/Desktop/ring/env/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 99, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (unable to open file: name = '/home/yinruiying/Desktop/ring/env/AlphaPose/examples/results/POSE/test-pose.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
visualization...
Traceback (most recent call last):
File "json-video.py", line 63, in
with open(jsonpath) as f:
IOError: [Errno 2] No such file or directory: '/home/yinruiying/Desktop/ring/env/AlphaPose/examples/results/POSE/alpha-pose-results-forvis.json'

How can I deal with it? Thanks.

Some bugs in poseflow

Hey @YuliangXiu,
This is very good work.
But when I tried to read the code in PoseFlow, I found some bugs there.

I think the method 'cal_bbox_iou' is not correct. For example, when there is no overlap between boxA and boxB, the IoU returned by this function is wrong.
Thus, I think it could be modified like this:

def cal_bbox_iou(boxA, boxB):
    xA = max(boxA[0], boxB[0])
    yA = max(boxA[2], boxB[2])
    xB = min(boxA[1], boxB[1])
    yB = min(boxA[3], boxB[3])
    if xA < xB and yA < yB:
        interArea = (xB - xA + 1) * (yB - yA + 1)
        boxAArea = (boxA[1] - boxA[0] + 1) * (boxA[3] - boxA[2] + 1)
        boxBArea = (boxB[1] - boxB[0] + 1) * (boxB[3] - boxB[2] + 1)
        iou = interArea / float(boxAArea + boxBArea - interArea + 0.00001)
    else:
        iou = 0.
    return iou
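As a quick sanity check of the patched function above (assuming the [xmin, xmax, ymin, ymax] box layout implied by the indexing), disjoint boxes now yield zero IoU and identical boxes yield roughly one:

# Illustrative boxes only; run this right after the definition above.
box_a = [0, 10, 0, 10]    # xmin, xmax, ymin, ymax
box_b = [20, 30, 20, 30]  # no overlap with box_a
print(cal_bbox_iou(box_a, box_b))  # 0.0
print(cal_bbox_iou(box_a, box_a))  # ~1.0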

Is there something I am ignoring?

Can't find 'stn'

Your code has the statement require 'stn', but I can't find this module.
Thank you!

failed to build nms

Error message when running make under /AlphaPose/human-detection/lib/newnms:

python setup_linux.py build_ext --inplace
Traceback (most recent call last):
  File "setup_linux.py", line 56, in <module>
    CUDA = locate_cuda()
  File "setup_linux.py", line 51, in locate_cuda
    for k, v in cudaconfig.iteritems():
AttributeError: 'dict' object has no attribute 'iteritems'
Makefile:2: recipe for target 'all' failed
make: *** [all] Error 1

Is it because CUDA is not installed correctly? Any suggestions?
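For reference, the traceback points at a Python 2/3 incompatibility rather than a CUDA problem: dict.iteritems() was removed in Python 3, whereas dict.items() works in both. A self-contained sketch of the change needed in locate_cuda() (the cudaconfig values below are only illustrative):

cudaconfig = {'home': '/usr/local/cuda', 'nvcc': '/usr/local/cuda/bin/nvcc'}  # illustrative paths

# Python 2 only (raises AttributeError on Python 3):
#     for k, v in cudaconfig.iteritems():
# Works on both Python 2 and 3:
for k, v in cudaconfig.items():
    print(k, v)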

test-pose.h5 no such file

I found that I don't have this h5 file when I run run.sh. Where can I download it?

met a problem with ./run.sh

I installed TensorFlow (CPU & GPU) on Python 3 before; however, AlphaPose is based on Python 2, so I installed TensorFlow 1.6 with Python 2. When I run ./run.sh, the output is:
generating bbox from Faster RCNN... /home/ann/.local/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
2018-04-08 16:41:40.420039: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Loaded network ../output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.ckpt
/home/ann/AlphaPose/examples/demo/

0%| | 0/3 [00:00<?, ?it/s]unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
33%|███████████████ | 1/3 [00:31<01:02, 31.42s/it]unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
67%|██████████████████████████████ | 2/3 [00:59<00:29, 29.98s/it]unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
unknown error
100%|█████████████████████████████████████████████| 3/3 [01:28<00:00, 29.57s/it]
pose estimation with RMPE...
/home/ann/torch/install/bin/lua: /home/ann/torch/install/share/lua/5.2/trepl/init.lua:389: /home/ann/.luarocks/share/lua/5.2/hdf5/ffi.lua:56: expected align(#) on line 579
stack traceback:
[C]: in function 'error'
/home/ann/torch/install/share/lua/5.2/trepl/init.lua:389: in function 'require'
/home/ann/AlphaPose/predict/util.lua:7: in main chunk
[C]: in function 'dofile'
/home/ann/torch/install/share/lua/5.2/paths/init.lua:84: in function 'dofile'
main-alpha-pose.lua:7: in main chunk
[C]: in function 'dofile'
.../ann/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: in ?
/home/ann/.local/lib/python2.7/site-packages/h5py/init.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "parametric-pose-nms-MPII.py", line 256, in
get_result_json(args)
File "parametric-pose-nms-MPII.py", line 243, in get_result_json
test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
File "/home/ann/.local/lib/python2.7/site-packages/h5py/_hl/files.py", line 269, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/ann/.local/lib/python2.7/site-packages/h5py/_hl/files.py", line 99, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (unable to open file: name = '/home/ann/AlphaPose/examples/results/POSE/test-pose.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
visualization...
Traceback (most recent call last):
File "json-video.py", line 63, in
with open(jsonpath) as f:
IOError: [Errno 2] No such file or directory: '/home/ann/AlphaPose/examples/results/POSE/alpha-pose-results-forvis.json'
Maybe there are some problems with TF, but I have no idea about these instructions. What other modules must I install? The TF version (tf.__version__) is '1.6.0'.
Please help me, thank you.

About pose track code and dropping low confidence keypoints

Hi,
I am wondering when the tracking part of the code will be released?

Also, I have a question about the tracking paper: by "drop low-score keypoints before generating result", do you mean keeping only the joints whose confidence is higher than the threshold (e.g. keeping only 5 joints of a person), or ruling out the entire predicted person if its keypoint scores are quite low?

Also, did you filter out low-confidence bounding boxes before performing tracking, as the "Detect-and-Track" paper mentions?

Many thanks.

Facing problems during training my own dataset

Thanks for sharing your code publicly. When training on my own dataset, I face the following problems.

  1. "module 'stn' not found". I have read your paper 'RMPE: Regional Multi-Person Pose Estimation'; the 'addSSTN' option in 'opts.lua' should be set to true during the training process.

  2. In your 'AlphaPose/train/README.md' you said that "We finetune our model based on the pre-trained pyraNet model." Is it possible to train from scratch on my own dataset?

Thanks for your reply!

Unsupported HDF5 version: 1.10.1

I tried to run the demo and got the following errors:


generating bbox from Faster RCNN...
2018-02-27 20:27:32.093717: I tensorflow/core/common_runtime/gpu/gpu_device.cc:938] Found device 0 with properties:
name: Tesla K80
major: 3 minor: 7 memoryClockRate (GHz) 0.8235
pciBusID 0000:06:00.0
Total memory: 11.20GiB
Free memory: 11.13GiB
2018-02-27 20:27:32.093935: I tensorflow/core/common_runtime/gpu/gpu_device.cc:959] DMA: 0
2018-02-27 20:27:32.093947: I tensorflow/core/common_runtime/gpu/gpu_device.cc:969] 0: Y
2018-02-27 20:27:32.093960: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1028] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K80, pci bus id: 0000:06:00.0)
Loaded network ../output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.ckpt
/home/xxx/github/AlphaPose/examples/demo/

100%|█████████████████████████████████████████████████████████████████████████| 3/3 [00:06<00:00, 2.25s/it]
pose estimation with RMPE...
/usr/local/common/distro/install/bin/lua: ...l/common/distro/install/share/lua/5.1/trepl/init.lua:389: ...cal/common/distro/install/share/lua/5.1/hdf5/ffi.lua:71: Unsupported HDF5 version: 1.10.1
stack traceback:
[C]: in function 'error'
...l/common/distro/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
/home/xxx/github/AlphaPose/predict/util.lua:7: in main chunk
[C]: in function 'dofile'
...l/common/distro/install/share/lua/5.1/paths/init.lua:84: in function 'dofile'
main-alpha-pose.lua:7: in main chunk
[C]: in function 'dofile'
...distro/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: ?
Traceback (most recent call last):
File "parametric-pose-nms-MPII.py", line 256, in
get_result_json(args)
File "parametric-pose-nms-MPII.py", line 243, in get_result_json
test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
File "/anaconda/lib/python2.7/site-packages/h5py/_hl/files.py", line 269, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/anaconda/lib/python2.7/site-packages/h5py/_hl/files.py", line 99, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (unable to open file: name = '/home/xxx/github/AlphaPose/examples/results/POSE/test-pose.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
visualization...
Traceback (most recent call last):
File "json-video.py", line 63, in
with open(jsonpath) as f:
IOError: [Errno 2] No such file or directory: '/home/xxx/github/AlphaPose/examples/results/POSE/alpha-pose-results-forvis.json'

It seems like a version compatibility problem.
However, I am not familiar with LUA.
Could you kindly give me some advice?

A problem in running run.sh

Environment: Ubuntu 16.04; CUDA 9.0.176; cuDNN 7.0.5; TensorFlow 1.6.0 (GPU).
Referring to #10 and #3,
I have installed torch, luarocks, hdf5, etc., but there are still problems running...

name@name-All-Series:~/AlphaPose$ ./run.sh --indir examples/demo/ --outdir examples/results/ --vis
0
generating bbox from Faster RCNN...
/usr/local/lib/python2.7/dist-packages/h5py/init.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
2018-03-06 17:32:48.947257: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-03-06 17:32:49.022229: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-03-06 17:32:49.022485: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] Found device 0 with properties:
name: GeForce GTX 750 major: 5 minor: 2 memoryClockRate(GHz): 1.188
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.64GiB
2018-03-06 17:32:49.022502: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0
2018-03-06 17:32:49.244052: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1403 MB memory) -> physical GPU (device: 0, name: GeForce GTX 750, pci bus id: 0000:01:00.0, compute capability: 5.2)
Loaded network ../output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.ckpt
/home/name/AlphaPose/examples/demo/
0%| | 0/3 [00:00<?, ?it/s]2018-03-06 17:32:55.388581: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-03-06 17:32:55.500633: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-03-06 17:32:56.432382: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 922.50MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
67%|██████████████████████████████ | 2/3 [00:07<00:03, 3.52s/it]2018-03-06 17:33:01.331154: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-03-06 17:33:01.411505: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-03-06 17:33:01.501123: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.07GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-03-06 17:33:02.435389: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-03-06 17:33:02.543607: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
100%|█████████████████████████████████████████████| 3/3 [00:10<00:00, 3.45s/it]
pose estimation with RMPE...
/home/name/torch/install/bin/lua: /home/name/torch/install/share/lua/5.2/trepl/init.lua:389: /home/name/torch/install/share/lua/5.2/hdf5/ffi.lua:56: expected align(#) on line 579
stack traceback:
[C]: in function 'error'
/home/name/torch/install/share/lua/5.2/trepl/init.lua:389: in function 'require'
/home/name/AlphaPose/predict/util.lua:7: in main chunk
[C]: in function 'dofile'
/home/name/torch/install/share/lua/5.2/paths/init.lua:84: in function 'dofile'
main-alpha-pose.lua:7: in main chunk
[C]: in function 'dofile'
...oyer/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: in ?
/usr/local/lib/python2.7/dist-packages/h5py/init.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "parametric-pose-nms-MPII.py", line 256, in
get_result_json(args)
File "parametric-pose-nms-MPII.py", line 243, in get_result_json
test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 269, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 99, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (unable to open file: name = '/home/name/AlphaPose/examples/results/POSE/test-pose.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
visualization...
Traceback (most recent call last):
File "json-video.py", line 63, in
with open(jsonpath) as f:
IOError: [Errno 2] No such file or directory: '/home/name/AlphaPose/examples/results/POSE/alpha-pose-results-forvis.json'

So, how can I solve it?

Human Detection Performance

Using a GTX 1080 with 8 GB RAM what would you expect human detection inference performance to be?

I made some changes to demo-alpha-pose.py related to the TensorFlow session config. I removed allow_growth=True and replaced it with per_process_gpu_memory_fraction=4, and then I run 3 Docker instances. This has been an easy way in the past for me to maximize GPU utilization. I'm able to keep the GPU at 90%+ utilization based on watching watch -n 1 nvidia-smi. I did NOT adjust any of the config variables, and I removed writing to HDF5 files and instead push to MongoDB via RabbitMQ.

I'm only getting around 1.8-2 images per second, though (3264x1836 resolution, which gets scaled down since cfg.TEST.MAX_SIZE=1000). Is that expected? I believe I was getting closer to 3-4 images per second using this card while running inference on Mask R-CNN (Matterport's version). I'll test using my 1080 Ti's in a few days to see the performance bump with more CUDA cores and memory.
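For context, here is a minimal sketch of the TensorFlow 1.x session config being described (the memory fraction value below is illustrative; choose it according to how many processes share the GPU):

import tensorflow as tf  # TensorFlow 1.x API, as used by the human-detection code here

config = tf.ConfigProto()
# Cap this process at a fixed share of GPU memory instead of growing allocations on demand.
config.gpu_options.per_process_gpu_memory_fraction = 0.4
# The alternative mentioned above: allocate GPU memory lazily as needed.
# config.gpu_options.allow_growth = True
sess = tf.Session(config=config)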

How to deal with invisible keypoints?

Hi, MVIG group:
I'm quite new to human keypoint detection; thanks for your excellent work. I have a question: how do you deal with invisible keypoints during training? Also, I can't find the preprocessing code for MSCOCO that produces the annt.h5 file, so I'm confused.

No module named tqdm

When I run the command ./run.sh --indir examples/demo/ --outdir examples/results/ --vis, I get this error:
Traceback (most recent call last):
  File "demo-alpha-pose.py", line 30, in <module>
    from tqdm import tqdm
ImportError: No module named tqdm
But I have installed tqdm; I don't know what I did wrong. Thanks.

no such child 'xmin' for [HDF5Group 33554432 /]

Hi, I met such a problem:
Loaded network ../output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.ckpt
/home/yinruiying/Desktop/ring/env/AlphaPose/examples/demo/

50%|█████████████████████████████████████████████████████████████▌ | 2/4 [00:03<00:03, 1.53s/it]
Traceback (most recent call last):
File "demo-alpha-pose.py", line 176, in
num_boxes=demo(sess, net, im_name, xminarr,yminarr,xmaxarr,ymaxarr,results,score_file,index_file,num_boxes,inputpath, mode)
File "demo-alpha-pose.py", line 67, in demo
scores, boxes = im_detect(sess, net, im)
File "/home/yinruiying/Desktop/ring/env/AlphaPose/human-detection/tools/../lib/model/test.py", line 91, in im_detect
blobs, im_scales = _get_blobs(im,scale)
File "/home/yinruiying/Desktop/ring/env/AlphaPose/human-detection/tools/../lib/model/test.py", line 63, in _get_blobs
blobs['data'], im_scale_factors = _get_image_blob(im,scale)
File "/home/yinruiying/Desktop/ring/env/AlphaPose/human-detection/tools/../lib/model/test.py", line 35, in _get_image_blob
im_orig = im.astype(np.float32, copy=True)
AttributeError: 'NoneType' object has no attribute 'astype'
pose estimation with RMPE...
MPII
/home/yinruiying/torch/install/bin/luajit: /home/yinruiying/torch/install/share/lua/5.1/hdf5/group.lua:327: HDF5Group:read() - no such child 'xmin' for [HDF5Group 33554432 /]
stack traceback:
[C]: in function 'error'
/home/yinruiying/torch/install/share/lua/5.1/hdf5/group.lua:327: in function 'read'
...e/yinruiying/Desktop/ring/env/AlphaPose/predict/util.lua:39: in function 'loadAnnotations'
main-alpha-pose.lua:27: in main chunk
[C]: in function 'dofile'
...ying/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
/home/yinruiying/Desktop/ring/env/local/lib/python2.7/dist-packages/h5py/init.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "parametric-pose-nms-MPII.py", line 256, in
get_result_json(args)
File "parametric-pose-nms-MPII.py", line 243, in get_result_json
test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
File "/home/yinruiying/Desktop/ring/env/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 269, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/yinruiying/Desktop/ring/env/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 99, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (unable to open file: name = '/home/yinruiying/Desktop/ring/env/AlphaPose/examples/results/POSE/test-pose.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

How can I resolve it? Thanks...

problem when run run.sh

Unable to open file: name = '/home/zlc/network/alphapose/examples/results/pose/test-pose.h5
IOError: [Errno 2] No such file or directory: '/home/zlc/Network/AlphaPose/examples/results/POSE/alpha-pose-results-forvis.json'
When I run './run.sh --indir examples/demo/ --outdir examples/results/ --vis', I don't know where I can find these two files.

IOError: Unable to open file (Unable to open file: name = 'examples/results/pose/test-pose.h5'

@hjhhh3000
The run log is shown as follows:

generating bbox from Faster RCNN...
Traceback (most recent call last):
File "demo-alpha-pose.py", line 118, in
os.mkdir(args.outputpath)
OSError: [Errno 2] No such file or directory: 'examples/results/'
pose estimation with RMPE...
./run.sh: line 78: th: command not found
Traceback (most recent call last):
File "parametric-pose-nms-MPII.py", line 256, in
get_result_json(args)
File "parametric-pose-nms-MPII.py", line 243, in get_result_json
test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
File "/usr/lib/python2.7/dist-packages/h5py/_hl/files.py", line 272, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/usr/lib/python2.7/dist-packages/h5py/_hl/files.py", line 92, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/build/h5py-nQFNYZ/h5py-2.6.0/h5py/_objects.c:2577)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/build/h5py-nQFNYZ/h5py-2.6.0/h5py/_objects.c:2536)
File "h5py/h5f.pyx", line 76, in h5py.h5f.open (/build/h5py-nQFNYZ/h5py-2.6.0/h5py/h5f.c:1811)
IOError: Unable to open file (Unable to open file: name = 'examples/results/pose/test-pose.h5', errno = 2, error message = 'no such file or directory', flags = 0, o_flags = 0)

_tkinter.TclError: no display name and no $DISPLAY environment variable in json-video.py

When estimating keypoints with the '--vis' flag, the following error sometimes happens:

"_tkinter.TclError: no display name and no $DISPLAY environment variable" which is called back from the predict/json/json-video.py

This bug is likely related to the matplotlib version; in my case, it can be fixed with the method below.

Before this statement at the beginning of json-video.py:
import matplotlib.pyplot as plt

insert two lines:
import matplotlib
matplotlib.use('Agg')
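Put together, the top of json-video.py would then read (only the first two lines are new; the pyplot import is the existing statement quoted above):

import matplotlib
matplotlib.use('Agg')  # select a non-interactive backend before pyplot is imported
import matplotlib.pyplot as plt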

Error message when run.sh

Hi, I followed each step and got the following error when I ran the run.sh command as shown in the guide. Please help, thanks!

generating bbox from Faster RCNN...
Traceback (most recent call last):
File "demo-alpha-pose.py", line 31, in
from nets.vgg16 import vgg16
File "/home/chikiuso/Downloads/AlphaPose/human-detection/tools/../lib/nets/vgg16.py", line 11, in
import tensorflow.contrib.slim as slim
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/tensorflow/contrib/init.py", line 31, in
from tensorflow.contrib import factorization
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/tensorflow/contrib/factorization/init.py", line 24, in
from tensorflow.contrib.factorization.python.ops.gmm import *
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/tensorflow/contrib/factorization/python/ops/gmm.py", line 27, in
from tensorflow.contrib.learn.python.learn.estimators import estimator
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/tensorflow/contrib/learn/init.py", line 88, in
from tensorflow.contrib.learn.python.learn import *
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/tensorflow/contrib/learn/python/init.py", line 23, in
from tensorflow.contrib.learn.python.learn import *
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/init.py", line 25, in
from tensorflow.contrib.learn.python.learn import estimators
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/init.py", line 297, in
from tensorflow.contrib.learn.python.learn.estimators.dnn import DNNClassifier
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn.py", line 30, in
from tensorflow.contrib.learn.python.learn.estimators import dnn_linear_combined
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 31, in
from tensorflow.contrib.learn.python.learn.estimators import estimator
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 49, in
from tensorflow.contrib.learn.python.learn.learn_io import data_feeder
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_io/init.py", line 21, in
from tensorflow.contrib.learn.python.learn.learn_io.dask_io import extract_dask_data
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_io/dask_io.py", line 26, in
import dask.dataframe as dd
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/dask/dataframe/init.py", line 3, in
from .core import (DataFrame, Series, Index, _Frame, map_partitions,
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/dask/dataframe/core.py", line 40, in
pd.core.computation.expressions.set_use_numexpr(False)
AttributeError: 'module' object has no attribute 'expressions'
pose estimation with RMPE...
/home/chikiuso/torch/install/bin/lua: .../chikiuso/torch/install/share/lua/5.1/trepl/init.lua:389: ...me/chikiuso/torch/install/share/lua/5.1/hdf5/ffi.lua:56: expected align(#) on line 687
stack traceback:
[C]: in function 'error'
.../chikiuso/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
/home/chikiuso/Downloads/AlphaPose/predict/util.lua:7: in main chunk
[C]: in function 'dofile'
.../chikiuso/torch/install/share/lua/5.1/paths/init.lua:84: in function 'dofile'
main-alpha-pose.lua:7: in main chunk
[C]: in function 'dofile'
.../torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: ?
./run.sh: line 79: 15096 Segmentation fault (core dumped) CUDA_VISIBLE_DEVICES=${GPU_ID} th main-alpha-pose.lua predict ${INPUT_PATH} ${OUTPUT_PATH} ${GPU_NUM} ${BATCH_SIZE} ${DATASET}
/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/h5py/init.py:34: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "parametric-pose-nms-MPII.py", line 256, in
get_result_json(args)
File "parametric-pose-nms-MPII.py", line 243, in get_result_json
test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/h5py/_hl/files.py", line 271, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/chikiuso/.conda/envs/py27/lib/python2.7/site-packages/h5py/_hl/files.py", line 101, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/feedstock_root/build_artefacts/h5py_1491364442174/work/h5py-2.7.0/h5py/_objects.c:2846)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/feedstock_root/build_artefacts/h5py_1491364442174/work/h5py-2.7.0/h5py/_objects.c:2804)
File "h5py/h5f.pyx", line 78, in h5py.h5f.open (/feedstock_root/build_artefacts/h5py_1491364442174/work/h5py-2.7.0/h5py/h5f.c:2123)
IOError: Unable to open file (Unable to open file: name = '/home/chikiuso/downloads/alphapose/examples/results/pose/test-pose.h5', errno = 2, error message = 'no such file or directory', flags = 0, o_flags = 0)
visualization...
mkdir: cannot create directory ‘/home/chikiuso/Downloads/AlphaPose/examples/results//RENDER’: No such file or directory
Traceback (most recent call last):
File "json-video.py", line 63, in
with open(jsonpath) as f:
IOError: [Errno 2] No such file or directory: '/home/chikiuso/Downloads/AlphaPose/examples/results/POSE/alpha-pose-results-forvis.json'

Testing on my own videos

An error occurred when I used it to test my own video:

convert video to images...
./run.sh: line 55: ffmpeg: command not found
generating bbox from Faster RCNN...
/usr/local/lib/python2.7/dist-packages/h5py/init.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
2018-03-14 00:18:30.331615: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-14 00:18:30.331682: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-03-14 00:18:30.331704: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
Loaded network ../output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.ckpt
/home/hansheng/AlphaPose/video-tmp

0it [00:00, ?it/s]
pose estimation with RMPE...
MPII
HDF5-DIAG: Error detected in HDF5 (1.8.11) thread 140542779459456:
#000: ../../../src/H5Dio.c line 171 in H5Dread(): no output buffer
major: Invalid arguments to routine
minor: Bad value
/home/hansheng/torch/install/bin/luajit: /home/hansheng/torch/install/share/lua/5.1/hdf5/dataset.lua:78: HDF5DataSet:all() - failed reading data from [HDF5DataSet (83886081 /xmin DATASET)]
stack traceback:
[C]: in function 'error'
/home/hansheng/torch/install/share/lua/5.1/hdf5/dataset.lua:78: in function 'all'
/home/hansheng/AlphaPose/predict/util.lua:39: in function 'loadAnnotations'
main-alpha-pose.lua:27: in main chunk
[C]: in function 'dofile'
...heng/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00406670
/usr/local/lib/python2.7/dist-packages/h5py/init.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
parametric-pose-nms-MPII.py:104: UserWarning: loadtxt: Empty input file: "/home/hansheng/AlphaPose/examples/results/BBOX/score-proposals.txt"
scores_proposals = np.loadtxt(os.path.join(outputpath,"BBOX/score-proposals.txt"), ndmin=1)
parametric-pose-nms-MPII.py:18: UserWarning: loadtxt: Empty input file: "scores-proposals.txt"
proposal_scores = np.loadtxt("scores-proposals.txt", ndmin=1)
visualization...
0it [00:00, ?it/s]
rendering video...
./run.sh: line 96: ffmpeg: command not found

ModuleNotFoundError: No module named 'cpu_nms'

Hi, I followed the instructions to install AlphaPose, but when I run the test demo, this error comes out. How can I solve it? Thanks.
cetc@cetc-computer:/桌面/HD/AlphaPose-master/human-detection/lib$ make clean
rm -rf /.pyc
rm -rf /.so
cetc@cetc-computer:
/桌面/HD/AlphaPose-master/human-detection/lib$ make
python3 setup.py build_ext --inplace
running build_ext
skipping 'utils/bbox.c' Cython extension (up-to-date)
building 'utils.cython_bbox' extension
creating build
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/utils
{'gcc': ['-Wno-cpp', '-Wno-unused-function']}
gcc -pthread -B /home/cetc/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/cetc/anaconda3/lib/python3.6/site-packages/numpy/core/include -I/home/cetc/anaconda3/include/python3.6m -c utils/bbox.c -o build/temp.linux-x86_64-3.6/utils/bbox.o -Wno-cpp -Wno-unused-function
gcc -pthread -shared -B /home/cetc/anaconda3/compiler_compat -L/home/cetc/anaconda3/lib -Wl,-rpath=/home/cetc/anaconda3/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/utils/bbox.o -o /home/cetc/桌面/HD/AlphaPose-master/human-detection/lib/utils/cython_bbox.cpython-36m-x86_64-linux-gnu.so
skipping 'utils/nms.c' Cython extension (up-to-date)
building 'utils.cython_nms' extension
{'gcc': ['-Wno-cpp', '-Wno-unused-function']}
gcc -pthread -B /home/cetc/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/cetc/anaconda3/lib/python3.6/site-packages/numpy/core/include -I/home/cetc/anaconda3/include/python3.6m -c utils/nms.c -o build/temp.linux-x86_64-3.6/utils/nms.o -Wno-cpp -Wno-unused-function
gcc -pthread -shared -B /home/cetc/anaconda3/compiler_compat -L/home/cetc/anaconda3/lib -Wl,-rpath=/home/cetc/anaconda3/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/utils/nms.o -o /home/cetc/桌面/HD/AlphaPose-master/human-detection/lib/utils/cython_nms.cpython-36m-x86_64-linux-gnu.so
rm -rf build
cetc@cetc-computer:~/桌面/HD/AlphaPose-master$ ./run.sh --indir examples/demo/ --outdir examples/results/ --vis
0
generating bbox from Faster RCNN...
Traceback (most recent call last):
File "demo-alpha-pose.py", line 22, in
from newnms.nms import soft_nms
File "/home/cetc/桌面/HD/AlphaPose-master/human-detection/tools/../lib/newnms/nms.py", line 4, in
from cpu_nms import cpu_nms, cpu_soft_nms
ModuleNotFoundError: No module named 'cpu_nms'
pose estimation with RMPE...
./run.sh: line 78: th: command not found
Traceback (most recent call last):
File "parametric-pose-nms-MPII.py", line 256, in
get_result_json(args)
File "parametric-pose-nms-MPII.py", line 243, in get_result_json
test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
File "/home/cetc/anaconda3/lib/python3.6/site-packages/h5py/_hl/files.py", line 269, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/cetc/anaconda3/lib/python3.6/site-packages/h5py/_hl/files.py", line 99, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = '/home/cetc/桌面/HD/AlphaPose-master/examples/results/POSE/test-pose.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
visualization...
Traceback (most recent call last):
File "json-video.py", line 63, in
with open(jsonpath) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/cetc/桌面/HD/AlphaPose-master/examples/results/POSE/alpha-pose-results-forvis.json'

how to solve this problem

generating bbox from Faster RCNN...
Traceback (most recent call last):
File "demo-alpha-pose.py", line 22, in
from newnms.nms import soft_nms
File "/home/luoyuncen/AlphaPose/human-detection/tools/../lib/newnms/nms.py", line 3, in
from cpu_nms import cpu_nms, cpu_soft_nms
ImportError: No module named cpu_nms
pose estimation with RMPE...

./run.sh: line 78: th: command not found

Hi, when I run the command "./run.sh --indir examples/demo/ --outdir examples/results/ --vis", it returns the following error. I don't know what the 'th' command means; please give me some advice, thanks.

The entire error output is as follows:

pose estimation with RMPE...
./run.sh: line 78: th: command not found
Traceback (most recent call last):
File "parametric-pose-nms-MPII.py", line 256, in
get_result_json(args)
File "parametric-pose-nms-MPII.py", line 243, in get_result_json
test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 269, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 99, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (unable to open file: name = '/home/zzy/work/AlphaPose/examples/results/POSE/test-pose.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
visualization...
Traceback (most recent call last):
File "json-video.py", line 63, in
with open(jsonpath) as f:
IOError: [Errno 2] No such file or directory: '/home/zzy/work/AlphaPose/examples/results/POSE/alpha-pose-results-forvis.json

A little bug with np.loadtxt

There is proposal_scores = np.loadtxt("scores-proposals.txt") in parametric-pose-nms-MPII.py and parametric-pose-nms-COCO.py.
But when there is only one number in scores-proposals.txt (which means there is only one person in the picture),
proposal_scores[i] may fail, because proposal_scores.shape will be ().
So I changed the code to

proposal_scores = np.loadtxt("scores-proposals.txt", ndmin=1)

I tested it on CentOS 6, Python 2.7, NumPy 1.14.
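A small self-contained sketch of why ndmin=1 matters (plain NumPy behaviour; the file contents here are illustrative):

import numpy as np
from io import StringIO

# Stands in for a scores-proposals.txt containing a single score (one person in the image).
a = np.loadtxt(StringIO(u"0.97"))            # 0-d array: a.shape == (), so a[0] raises IndexError
b = np.loadtxt(StringIO(u"0.97"), ndmin=1)   # 1-d array: b.shape == (1,), so b[0] works
print(a.shape, b.shape)                      # prints: () (1,)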

Is SSTN used in the 'final_model.t7'?

I have read the RMPE paper, in which a symmetric spatial transformer network (SSTN) is used to enhance the SPPE. However, I didn't find the SSTN in 'final_model.t7' when drawing the network graph using nngraph, so I wonder whether the SSTN is used in the final model?

TypeError: integer argument expected, got float

I have followed the tutorial steps to test the demo, but I'm getting the following error:

$ ./run.sh --indir examples/demo/ --outdir examples/results/ --vis
0
generating bbox from Faster RCNN...
/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
2018-02-05 20:54:09.837963: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
Loaded network ../output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.ckpt
/home/daniel/Documentos/AlphaPose/examples/demo/

100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:48<00:00, 16.12s/it]
pose estimation with RMPE...
MPII	
 [======================================== 69/69 ======================================>]  Tot: 6s757ms | Step: 99ms    
----------Finished----------	
/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
visualization...
  0%|                                                                                                            | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "json-video.py", line 66, in <module>
    display_pose(inputpath, outputpath, imgname)    
  File "json-video.py", line 53, in display_pose
    fig.savefig(os.path.join(outputpath,'RENDER',imgname.split('/')[-1]),pad_inches = 0.0, bbox_inches=extent, dpi=12.91)
  File "/home/daniel/.local/lib/python2.7/site-packages/matplotlib/figure.py", line 1834, in savefig
    self.canvas.print_figure(fname, **kwargs)
  File "/home/daniel/.local/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 2267, in print_figure
    **kwargs)
  File "/home/daniel/.local/lib/python2.7/site-packages/matplotlib/backends/backend_agg.py", line 584, in print_jpg
    return background.save(filename_or_obj, format='jpeg', **options)
  File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 1675, in save
    save_handler(self, fp, filename)
  File "/usr/lib/python2.7/dist-packages/PIL/JpegImagePlugin.py", line 708, in _save
    ImageFile._save(im, fp, [("jpeg", (0, 0)+im.size, 0, rawmode)], bufsize)
  File "/usr/lib/python2.7/dist-packages/PIL/ImageFile.py", line 480, in _save
    e = Image._getencoder(im.mode, e, a, im.encoderconfig)
  File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 431, in _getencoder
    return encoder(mode, *args + extra)
TypeError: integer argument expected, got float

Sorry, but I don't know very much about Python and its libraries; does anyone know what's going on?

Failed to run demo

I think I have installed the package correctly! I am working on Ubuntu 16.04, TF-GPU v1.4. I literally followed the installation instructions. But when I ran the demo:
./run.sh --indir examples/demo/ --outdir examples/results/ --vis
I found this error message when the code tried to create a local save:

/home/ruoyuli/torch/install/bin/luajit: /home/ruoyuli/.luarocks/share/lua/5.1/hdf5/file.lua:10: HDF5File.__init() requires a fileID - perhaps you want HDF5File.create()?
stack traceback:
[C]: in function 'assert'
/home/ruoyuli/.luarocks/share/lua/5.1/hdf5/file.lua:10: in function '__init'
/home/ruoyuli/.luarocks/share/lua/5.1/torch/init.lua:91: in function </home/ruoyuli/.luarocks/share/lua/5.1/torch/init.lua:87>
[C]: in function 'open'
/home/ruoyuli/AlphaPose/predict/util.lua:34: in function 'loadAnnotations'
main-alpha-pose.lua:27: in main chunk
[C]: in function 'dofile'
...yuli/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
/home/ruoyuli/.conda/envs/tf_p2.7/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Traceback (most recent call last):
  File "parametric-pose-nms-MPII.py", line 256, in <module>
    get_result_json(args)
  File "parametric-pose-nms-MPII.py", line 243, in get_result_json
    test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
  File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
    h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
  File "/home/ruoyuli/.conda/envs/tf_p2.7/lib/python2.7/site-packages/h5py/_hl/files.py", line 269, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "/home/ruoyuli/.conda/envs/tf_p2.7/lib/python2.7/site-packages/h5py/_hl/files.py", line 99, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (unable to open file: name = '/home/ruoyuli/AlphaPose/examples/results/POSE/test-pose.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
visualization...
Traceback (most recent call last):
  File "json-video.py", line 63, in <module>
    with open(jsonpath) as f:
IOError: [Errno 2] No such file or directory: '/home/ruoyuli/AlphaPose/examples/results/POSE/alpha-pose-results-forvis.json'

Any suggestions? Thanks in advance!
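Note that the final IOError is a downstream symptom: the Torch stage writes POSE/test-pose.h5, so if it crashes (as with the hdf5 error above), the Python NMS stage has nothing to open. A small guard sketch, under that assumption, that makes the real failure obvious:

import os
import sys

def check_pose_output(outputpath):
    # The Torch pose-estimation stage is expected to write this file.
    h5path = os.path.join(outputpath, "POSE", "test-pose.h5")
    if not os.path.exists(h5path):
        sys.exit("Missing %s - the Torch pose-estimation step likely failed; "
                 "fix the hdf5/luarocks error above first." % h5path)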

module 'bit' not found

Hello, I installed Torch as mentioned in #10, but I still can't run the demo. The result looks like:
XX::~/AlphaPose$ ./run.sh --indir examples/demo/ --outdir examples/results/ --gpu 0,1,2,3 --batch 5
0
generating bbox from Faster RCNN...
2018-02-14 15:50:23.741917: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-02-14 15:50:23.741941: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-02-14 15:50:23.741946: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2018-02-14 15:50:23.741950: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-02-14 15:50:23.741954: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2018-02-14 15:50:23.976259: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:06:00.0
Total memory: 11.90GiB
Free memory: 11.29GiB
2018-02-14 15:50:24.133633: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x3fe2bc0 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2018-02-14 15:50:24.134315: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 1 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:0b:00.0
Total memory: 11.90GiB
Free memory: 11.75GiB
2018-02-14 15:50:24.135003: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 1
2018-02-14 15:50:24.135014: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y Y
2018-02-14 15:50:24.135018: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 1: Y Y
2018-02-14 15:50:24.135032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:06:00.0)
2018-02-14 15:50:24.135039: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:1) -> (device: 1, name: TITAN X (Pascal), pci bus id: 0000:0b:00.0)
Loaded network ../output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.ckpt
/home/icl/AlphaPose/examples/demo/

100%|████████████████████████████████████████████████████████| 3/3 [00:03<00:00, 1.18s/it]
pose estimation with RMPE...
/home/icl/torch/install/bin/lua: /home/icl/torch/install/share/lua/5.2/trepl/init.lua:389: /home/icl/torch/install/share/lua/5.2/trepl/init.lua:389: module 'bit' not found:No LuaRocks module found for bit
no field package.preload['bit']
no file '/home/icl/.luarocks/share/lua/5.2/bit.lua'
no file '/home/icl/.luarocks/share/lua/5.2/bit/init.lua'
no file '/home/icl/torch/install/share/lua/5.2/bit.lua'
no file '/home/icl/torch/install/share/lua/5.2/bit/init.lua'
no file '/home/icl/.luarocks/share/lua/5.1/bit.lua'
no file '/home/icl/.luarocks/share/lua/5.1/bit/init.lua'
no file '/home/icl/torch/install/share/lua/5.1/bit.lua'
no file '/home/icl/torch/install/share/lua/5.1/bit/init.lua'
no file './bit.lua'
no file '/home/icl/torch/install/share/luajit-2.1.0-beta1/bit.lua'
no file '/usr/local/share/lua/5.1/bit.lua'
no file '/usr/local/share/lua/5.1/bit/init.lua'
no file '/home/icl/.luarocks/lib/lua/5.2/bit.so'
no file '/home/icl/torch/install/lib/lua/5.2/bit.so'
no file '/home/icl/torch/install/lib/bit.so'
no file '/home/icl/.luarocks/lib/lua/5.1/bit.so'
no file '/home/icl/torch/install/lib/lua/5.1/bit.so'
no file './bit.so'
no file '/usr/local/lib/lua/5.1/bit.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'error'
/home/icl/torch/install/share/lua/5.2/trepl/init.lua:389: in function 'require'
/home/icl/AlphaPose/predict/util.lua:7: in main chunk
[C]: in function 'dofile'
/home/icl/torch/install/share/lua/5.2/paths/init.lua:84: in function 'dofile'
main-alpha-pose.lua:7: in main chunk
[C]: in function 'dofile'
.../icl/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: in ?
Traceback (most recent call last):
  File "parametric-pose-nms-MPII.py", line 256, in <module>
    get_result_json(args)
  File "parametric-pose-nms-MPII.py", line 243, in get_result_json
    test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
  File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
    h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
  File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 272, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 92, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-4rPeHA-build/h5py/_objects.c:2684)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-4rPeHA-build/h5py/_objects.c:2642)
  File "h5py/h5f.pyx", line 76, in h5py.h5f.open (/tmp/pip-4rPeHA-build/h5py/h5f.c:1930)
IOError: Unable to open file (Unable to open file: name = '/home/icl/alphapose/examples/results/pose/test-pose.h5', errno = 2, error message = 'no such file or directory', flags = 0, o_flags = 0)
I tried to Google it, but there is no module named "bit"... The "bit32" module has been installed with Lua 5.2.
So, could anyone give any suggestions?
Many thanks!

No module named 'gpu_nms'

Hi, I followed the instructions to install AlphaPose, but when I run the test demo, I get the error below. How can I solve this? Thanks.

~/project/AlphaPose$ ./run.sh --indir examples/demo/ --outdir examples/results/ --vis
0
generating bbox from Faster RCNN...
Traceback (most recent call last):
  File "demo-alpha-pose.py", line 22, in <module>
    from newnms.nms import soft_nms
  File "/nfs/home/longwei/project/AlphaPose/human-detection/tools/../lib/newnms/nms.py", line 4, in <module>
    from gpu_nms import gpu_nms
ImportError: No module named 'gpu_nms'
pose estimation with RMPE...
./run.sh: line 78: th: command not found
_frozen_importlib:321: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
Traceback (most recent call last):
  File "parametric-pose-nms-MPII.py", line 256, in <module>
    get_result_json(args)
  File "parametric-pose-nms-MPII.py", line 243, in get_result_json
    test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
  File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
    h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
  File "/nfs/home/longwei/anaconda2/envs/mask_rcnn_old/lib/python3.4/site-packages/h5py/_hl/files.py", line 269, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "/nfs/home/longwei/anaconda2/envs/mask_rcnn_old/lib/python3.4/site-packages/h5py/_hl/files.py", line 99, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = '/nfs/home/longwei/project/AlphaPose/examples/results/POSE/test-pose.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
visualization...
Traceback (most recent call last):
  File "json-video.py", line 63, in <module>
    with open(jsonpath) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/nfs/home/longwei/project/AlphaPose/examples/results/POSE/alpha-pose-results-forvis.json'
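An ImportError for gpu_nms usually means the compiled detector extensions were never built. A hedged diagnostic sketch that only checks whether a compiled gpu_nms shared object exists under human-detection/lib (the path is taken from the traceback above):

import os

def find_gpu_nms(lib_dir="human-detection/lib"):
    # Walk the detector's lib tree looking for a built gpu_nms extension (.so file).
    hits = []
    for root, _dirs, files in os.walk(lib_dir):
        hits += [os.path.join(root, f) for f in files
                 if f.startswith("gpu_nms") and f.endswith(".so")]
    return hits

if __name__ == "__main__":
    found = find_gpu_nms()
    if found:
        print("compiled gpu_nms found:", found)
    else:
        print("no compiled gpu_nms found - the detector's Cython/CUDA extensions likely still need to be built")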

about dataset

Hello, sorry to bother you so abruptly. I am a student at USTC. It seems that the official download channel for the PoseTrack dataset has been closed, and I need the data urgently. Could you share the dataset with me via a cloud drive or by email?

Questions about parallel SPPE?

Hey, in the parallel SPPE branch, how is the loss between the center-located ground-truth poses and the output calculated? And how is the center-located ground-truth pose image created? Is it just cropped from the original image?
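For reference, an illustrative numpy sketch (not AlphaPose's actual code) of the usual SPPE-style training loss: predicted joint heatmaps are compared against Gaussian heatmaps rendered at the ground-truth joint locations with a mean-squared error. Whether the parallel branch uses exactly this form is an assumption.

import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=2.0):
    # Render a 2D Gaussian peaked at (cx, cy) on an h x w grid.
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def heatmap_mse(pred, joints, sigma=2.0):
    # pred: (K, H, W) predicted heatmaps; joints: (K, 2) ground-truth (x, y) in heatmap coordinates.
    K, H, W = pred.shape
    target = np.stack([gaussian_heatmap(H, W, x, y, sigma) for x, y in joints])
    return np.mean((pred - target) ** 2)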

FPS

Hi, can you estimate how many FPS to expect on a GTX 1050?

bad argument #2 to '?' (out of bounds

I have tried several pictures in the demo without any problem. But when I run the demo on larger data, the program crashes with the following error:

0it [00:00, ?it/s]
pose estimation with RMPE...
MPII
/home/shiyx/torch/install/bin/luajit: bad argument #2 to '?' (out of bounds
at /home/shiyx/torch/pkg/torch/lib/TH/generic/THStorage.c:202)

PoseFlow : Posetrack Dataset decompression error

Hello,

First of all, thank you for sharing your implementation, which seems really promising! I tried to install it following the Readme instructions, but when I try to download the data from PoseTrack using:
cat *.tar | tar -xvf - -i
I run into an error.
It seems that some images are corrupted, so the decompression fails.
Did you have similar problems when decompressing the dataset? If so, how did you solve them?
Finally, is it necessary to download the whole PoseTrack dataset, or only a specific batch? Would it be possible to add that information to the Readme?

Thank you again for sharing your work,

Cheers
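One hedged way to tell corrupt downloads apart from a tar invocation problem is to walk the downloaded archives with Python's tarfile module and report which ones fail to read. The *.tar file pattern below is an assumption about how the parts are named.

import glob
import tarfile

for path in sorted(glob.glob("*.tar")):
    try:
        with tarfile.open(path) as tf:
            tf.getmembers()  # walking the index surfaces truncated or corrupt archives
        print("OK      ", path)
    except (tarfile.TarError, EOFError) as e:
        print("CORRUPT ", path, "-", e)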

Can I use camera as input?

Hello, the examples given so far only use images or video streams as input. Is real-time capture from a camera supported, so the frames can be processed directly? Thanks!
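For reference, a minimal sketch (not a built-in AlphaPose code path) of how camera frames can be grabbed with OpenCV and handed to a per-frame pose function. estimate_pose is a hypothetical placeholder for whatever single-image inference entry point you use.

import cv2

def run_camera(estimate_pose, device=0):
    cap = cv2.VideoCapture(device)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        poses = estimate_pose(frame)  # run pose estimation on this frame
        # ... draw `poses` on `frame` or push them to a tracker here ...
        cv2.imshow("pose", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()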

main-alpha-pose.lua:96: 'for' step must be a number

I've installed TensorFlow, Torch, HDF5, etc., but there is still a problem when running:

./run.sh --indir examples/demo --outdir examples/results/ --gpu 0 --batch size 3
0
generating bbox from Faster RCNN...
/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
2018-03-16 10:43:14.883830: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-03-16 10:43:15.014889: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-03-16 10:43:15.015286: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7845
pciBusID: 0000:01:00.0
totalMemory: 5.93GiB freeMemory: 5.66GiB
2018-03-16 10:43:15.015300: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
Loaded network ../output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.ckpt
/home/tiger/work/fall_detection/AlphaPose-master/./examples/demo

100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:04<00:00, 1.41s/it]
pose estimation with RMPE...
MPII
/home/tiger/torch/install/bin/luajit: main-alpha-pose.lua:96: 'for' step must be a number
stack traceback:
main-alpha-pose.lua:96: in function 'loop'
main-alpha-pose.lua:176: in main chunk
[C]: in function 'dofile'
...iger/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Traceback (most recent call last):
  File "parametric-pose-nms-MPII.py", line 256, in <module>
    get_result_json(args)
  File "parametric-pose-nms-MPII.py", line 243, in get_result_json
    test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
  File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
    h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
  File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 271, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 101, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2840)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2798)
  File "h5py/h5f.pyx", line 78, in h5py.h5f.open (/tmp/pip-nCYoKW-build/h5py/h5f.c:2117)
IOError: Unable to open file (Unable to open file: name = '/home/tiger/work/fall_detection/alphapose-master/./examples/results/pose/test-pose.h5', errno = 2, error message = 'no such file or directory', flags = 0, o_flags = 0)

Any suggestions? Thanks in advance.

run ./run.sh error

/home/liutl/torch7/install/bin/luajit: /home/liutl/torch7/install/share/lua/5.1/trepl/init.lua:389: module 'threads' not found:No LuaRocks module found for threads
no field package.preload['threads']
no file '/home/liutl/.luarocks/share/lua/5.1/threads.lua'
no file '/home/liutl/.luarocks/share/lua/5.1/threads/init.lua'
no file '/home/liutl/torch7/install/share/lua/5.1/threads.lua'
no file '/home/liutl/torch7/install/share/lua/5.1/threads/init.lua'
no file './threads.lua'
no file '/home/liutl/torch7/install/share/luajit-2.1.0-beta1/threads.lua'
no file '/usr/local/share/lua/5.1/threads.lua'
no file '/usr/local/share/lua/5.1/threads/init.lua'
no file '/home/wanggw/.luarocks/share/lua/5.1/threads.lua'
no file '/home/wanggw/.luarocks/share/lua/5.1/threads/init.lua'
no file '/home/liutl/.luarocks/lib/lua/5.1/threads.so'
no file '/home/liutl/torch7/install/lib/lua/5.1/threads.so'
no file './threads.so'
no file '/usr/local/lib/lua/5.1/threads.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file '/home/wanggw/.luarocks/lib/lua/5.1/threads.so'
stack traceback:
[C]: in function 'error'
/home/liutl/torch7/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
main-alpha-pose.lua:1: in main chunk
[C]: in function 'dofile'
...utl/torch7/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00406620
Traceback (most recent call last):
  File "parametric-pose-nms-MPII.py", line 256, in <module>
    get_result_json(args)
  File "parametric-pose-nms-MPII.py", line 243, in get_result_json
    test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
  File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
    h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
  File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 269, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 99, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (unable to open file: name = '/home/wanggw/Work/AlphaPose-master/AlphaPose-master/examples/results/POSE/test-pose.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
visualization...
Traceback (most recent call last):
  File "json-video.py", line 63, in <module>
    with open(jsonpath) as f:
IOError: [Errno 2] No such file or directory: '/home/wanggw/Work/AlphaPose-master/AlphaPose-master/examples/results/POSE/alpha-pose-results-forvis.json'
wanggw@liutl:~/Work/AlphaPose-master/AlphaPose-master$

'--sep' doesn't work

Hello, when I turn on the --sep flag, all results are still saved in one file, alpha-pose-results.json.
Only a single .json file exists in the sep-json folder.

./run.sh --indir ../data --outdir ../alphapose --vis --sep --gpu 0,1 --batch 5

Can you help me separate the results? Thanks in advance!
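In the meantime, a hedged workaround sketch that splits the combined alpha-pose-results.json into one file per image. It assumes each entry carries an "image_id" field naming the source image; adjust the key to whatever your results file actually uses.

import json
import os
from collections import defaultdict

def split_results(json_path, out_dir):
    with open(json_path) as f:
        results = json.load(f)
    # Group detections by the image they came from.
    per_image = defaultdict(list)
    for entry in results:
        per_image[entry["image_id"]].append(entry)
    os.makedirs(out_dir, exist_ok=True)
    for image_id, entries in per_image.items():
        name = os.path.splitext(os.path.basename(str(image_id)))[0] + ".json"
        with open(os.path.join(out_dir, name), "w") as f:
            json.dump(entries, f)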

some questions about poseflow

@YuliangXiu
Hey, I have some questions about running the PoseFlow part of this repo to track pedestrians. Could you give me a hand?

  1. The globally optimal solution is obtained by the Hungarian algorithm, which solves a bipartite matching problem. Thus, when sets A and B have the same number of nodes, every node in A will be assigned a node from B no matter what the similarity is, even if that similarity is negative, which can cause many mismatches. So I'm wondering whether a threshold is needed after line 120 in tracker.py to keep only convincing matches (see the sketch after this list).

  2. I see that a margin of 10 frames is set for recovering a lost target, and for the lost target the box and pose from the latest detection are copied. However, the matching points obtained by DeepMatching are between successive frames, so I think this copy operation may lead to mismatches in some cases. Am I right?
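Regarding question 1, a minimal sketch of the thresholding idea (an assumption about how it could be added, not PoseFlow's actual code): run the Hungarian assignment on the similarity matrix, then keep only pairs whose similarity clears a minimum value, leaving the rest unmatched.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_with_threshold(similarity, min_sim=0.0):
    # similarity: (len(A), len(B)) score matrix; the Hungarian solver minimises cost,
    # so negate the similarities to maximise them instead.
    similarity = np.asarray(similarity, dtype=float)
    rows, cols = linear_sum_assignment(-similarity)
    # Keep only assignments whose similarity clears the threshold; the rest stay unmatched.
    return [(r, c) for r, c in zip(rows, cols) if similarity[r, c] > min_sim]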
