
convolutional-pose-machines-tensorflow's People

Contributors

argentumwalker, elevanth, hyandell, timctho, xiaoyongzhu


convolutional-pose-machines-tensorflow's Issues

Error

I got this error; how do I fix it?
Traceback (most recent call last):
  File "C:/Users/dody/Desktop/convolutional-pose-machines-tensorflow-master/demo_cpm_hand.py", line 430, in <module>
    tf.app.run()
  File "C:\Python36\lib\site-packages\tensorflow\python\platform\app.py", line 124, in run
    _sys.exit(main(argv))
  File "C:/Users/dody/Desktop/convolutional-pose-machines-tensorflow-master/demo_cpm_hand.py", line 106, in main
    model = cpm_hand.CPM_Model(FLAGS.stages, FLAGS.joints + 1)
TypeError: __init__() missing 2 required positional arguments: 'stages' and 'joints'

What is the training data like?

I find that if the hand appears small in the image, as when someone is playing the piano, the model does not work well.
Also, right-hand detection works better than left-hand detection.

Some problems with the code

In your code, in demo_cpm_hand.py, there is:
"model = cpm_hand.CPM_Model(FLAGS.stages, FLAGS.joints + 1)
model.build_model(input_data, center_map, 1)"
Both of these lines are wrong.
PyCharm gives me: 'CPM_Model' object has no attribute 'build_model'

Could you update the code?

Training to replicate the weights provided in the pickle file.

What dataset was the body model trained on? How many images were used for training? Was a custom dataset used or a public one? And does the value of the Gaussian variance (for the ground-truth joint Gaussian) affect the result?

Based on the limb connections in the code, it doesn't look like the dataset used for the provided weights is public: only the LSP dataset has 14 joints, and it does not follow the same connection order. MPII has 16 joints, and FLIC has 7 joints covering only the upper body. If LSP was used for training, why is the joint order different?

Some questions about using these models in Unity3D

Sorry, I have a relatively limited understanding of neural nets to begin with, but I am trying to integrate your CNN into Unity3D using their new 'ML Agents' interface. To implement this, the Unity docs say I need a .bytes file of the trained network to use in their interface. I know you supplied the weights to run the demo scripts (again, new territory for me, as I'm more of a .NET person than a Python one!), but I was wondering how I would obtain such a .bytes file. Of course, if I do manage to implement this, I would be happy to share that particular implementation; this is the strongest pose estimation project I have managed to find on the web so far. If anyone has any suggestions on how to do this, or other ways I could achieve it, I would welcome anything you may have.
Link to Unity docs for ML Agents: https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Getting-Started-with-Balance-Ball.md

Thanks.
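Not an authoritative answer, but as far as I understand it, the .bytes asset that the old TensorFlowSharp-based ML Agents flow loads is simply a frozen GraphDef (.pb) saved with a .bytes extension. A rough sketch, assuming the provided checkpoint can be restored and that the output node matches the name used elsewhere in this repo's config (both the checkpoint path and the node name below are assumptions):

import tensorflow as tf

CHECKPOINT = 'models/weights/cpm_hand.ckpt'    # hypothetical checkpoint path
OUTPUT_NODE = 'stage_3/mid_conv7/BiasAdd'      # node name from config.py, without the ':0' suffix

with tf.Session() as sess:
    # Rebuild the graph from the checkpoint's meta file and restore the weights.
    saver = tf.train.import_meta_graph(CHECKPOINT + '.meta')
    saver.restore(sess, CHECKPOINT)
    # Freeze variables into constants so the graph is self-contained.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, tf.get_default_graph().as_graph_def(), [OUTPUT_NODE])
    # Unity only needs the serialized bytes, so write them out with a .bytes extension.
    with open('cpm_hand.bytes', 'wb') as f:
        f.write(frozen.SerializeToString())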

Ensemble_data_generator module is missing

Please can you upload the Ensemble_data_generator module? If it is something that cannot be shared, can you at least give us a short description of it (what it does, and its input/output parameters)?

Initialisation of the Kalman Filter

When using the Kalman Filter, the initial values seem to be set to the upper left corner of the image. It requires a few frames for the pose estimation to move away from the upper corner and converge to realistic estimates. Is there any way to fix this issue? It appears that in the initial frames, all joints are assumed to be at (0,0). I am using the demo_cpm_body file.

Thanks so much for releasing this, I have found it very useful.

[Screenshots: vlcsnap-2018-07-24-14h32m18s267, vlcsnap-2018-07-24-14h32m57s590]
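A possible workaround, sketched below under the assumption that the demo uses one OpenCV cv2.KalmanFilter per joint (the function and variable names here are illustrative, not the repo's): seed each filter's state with the first detected joint position instead of leaving it at the default zeros, so the early frames do not start from the upper-left corner.

import cv2
import numpy as np

def init_joint_kalman(first_xy):
    """Create a constant-velocity Kalman filter seeded at the first detection (x, y)."""
    kf = cv2.KalmanFilter(4, 2)  # state: [x, y, dx, dy], measurement: [x, y]
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-2
    # Seed the state at the first measured joint position instead of (0, 0).
    kf.statePre = np.array([[first_xy[0]], [first_xy[1]], [0], [0]], dtype=np.float32)
    kf.statePost = kf.statePre.copy()
    return kf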

Error running demo_cpm_hand.py

I get the following error:

ValueError: Dimension 3 in both shapes must be equal, but are 23 and 22. Shapes are [1,1,512,23] and [1,1,512,22]. for 'Assign_32' (op: 'Assign') with input shapes: [1,1,512,23], [1,1,512,22].

This happens when I pass finetune=False in load_weights.

Some questions about heatmaps

Hey, I read the paper first and had many questions, then I read your code, which helped a lot. But I still have some questions; can you help me?
1. From the code (cpm_hand_v2.py, build_loss), I still can't work out how you determined the size of the predicted heatmaps and the ground-truth heatmaps. Could you tell me how?
2. According to the paper, the loss function is the L2 loss. But in my understanding, each pixel in the predicted heatmap represents an area of the input image, whereas the ground-truth heatmap is just a Gaussian kernel in which each pixel represents only one pixel of the input image, so why can we compute an L2 loss between them?
Thanks for your time!
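For what it's worth, here is how I read the usual CPM setup (a sketch under my own assumptions, not the repo's exact build_loss code): the ground-truth Gaussian is rendered directly at the heatmap resolution, so each of its pixels covers the same image region as the corresponding pixel of the predicted heatmap, and an element-wise L2 loss between the two maps is well defined.

import numpy as np
import tensorflow as tf

def make_gt_heatmap(joint_xy, input_size=368, heatmap_size=46, sigma=3.0):
    """Render a Gaussian centered at joint_xy (in input-image pixels) on the heatmap grid."""
    stride = input_size / heatmap_size
    xs = (np.arange(heatmap_size) + 0.5) * stride   # heatmap pixel centers in image coordinates
    ys = (np.arange(heatmap_size) + 0.5) * stride
    xx, yy = np.meshgrid(xs, ys)
    d2 = (xx - joint_xy[0]) ** 2 + (yy - joint_xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2)).astype(np.float32)

def stage_l2_loss(pred_heatmaps, gt_heatmaps):
    """Per-stage L2 loss for intermediate supervision; both tensors are [batch, h, w, channels]."""
    return tf.nn.l2_loss(pred_heatmaps - gt_heatmaps)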

Why can I not reach 30 FPS?

I also run on an NVIDIA GPU, but I only get 5 FPS. Could you tell me your TensorFlow environment, or give me some suggestions?

Does color matter?

Hi, when I try with a black glove worn, the software does not yield any output. Is it because of the color, or because the glove hides/masks the joints of the hand?

Thanks in advance.

Error running demo_cpm_body.py

When I run demo_cpm_body.py, I get the following problem:

absl.flags._exceptions.IllegalFlagValueError: flag --kalman_noise=0.03: Expect argument to be a string or int, found <class 'float'>

I don't know how to solve it. I hope someone can help. Thanks!
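A guess at the cause (I have not checked the repo's current flag definitions, so treat this as an assumption): the kalman_noise flag is declared with DEFINE_integer, so absl's integer parser rejects the float default 0.03. Declaring it as a float flag accepts that value; a minimal sketch:

import tensorflow as tf

# Hypothetical fix: declare kalman_noise as a float flag so a default of 0.03 is accepted.
tf.app.flags.DEFINE_float('kalman_noise', 0.03, 'Kalman filter noise coefficient')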

AssertionError: stage_3/mid_conv7/BiasAdd:0 is not in graph

When I tried to run the script run_freeze_graph.py, I got the error "AssertionError: stage_3/mid_conv7/BiasAdd:0 is not in graph".

The parameter is set in config.py as output_node_names = 'stage_3/mid_conv7/BiasAdd:0', but I could not verify whether that node exists. Thank you.
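One thing worth checking (an assumption on my part, not a confirmed fix): graph-freezing utilities generally expect node names, while 'stage_3/mid_conv7/BiasAdd:0' is a tensor name; dropping the ':0' suffix is often enough to get past the "is not in graph" assertion. To list the node names that actually exist in the checkpointed graph, something like the sketch below can help (the checkpoint path is a placeholder):

import tensorflow as tf

with tf.Session() as sess:
    # Load the graph structure from the checkpoint's meta file (path is hypothetical).
    saver = tf.train.import_meta_graph('models/weights/cpm_hand.ckpt.meta')
    for node in tf.get_default_graph().as_graph_def().node:
        if 'BiasAdd' in node.name:
            print(node.name)   # look for something like stage_3/mid_conv7/BiasAdd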

Missing arguments?

The body demo runs well, but when I run demo_cpm_hand.py I get this error:

Traceback (most recent call last):
  File "demo_cpm_hand.py", line 430, in <module>
    tf.app.run()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "demo_cpm_hand.py", line 106, in main
    model = cpm_hand.CPM_Model(FLAGS.stages, FLAGS.joints + 1)
TypeError: __init__() missing 2 required positional arguments: 'stages' and 'joints'

I followed the instructions, downloaded the model, put the weights file in place, and didn't change anything.
I'm using Python 3.6 on Windows 10, TensorFlow 1.7, and OpenCV 3.

Hand keypoints regression

Hi @timctho,

When training the model to detect hand keypoints, do you distinguish whether a keypoint belongs to the left or the right hand, and which finger it is on? Thanks!

AttributeError: 'module' object has no attribute 'make_gaussian'

When I run the modified version of the create_cpm_tfr_fulljoints.py script in the utils folder to train my custom model, it throws an error. There is no make_gaussian defined in utils; it is defined in cpm_utils.py instead. However, the function defined in cpm_utils.py accepts 3 arguments, while the make_gaussian call in create_cpm_tfr_fulljoints.py passes 4 arguments, which causes the error.
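For reference, the commonly used 3-argument version of this helper looks like the sketch below (my own rendering of the usual convention, not necessarily identical to the repo's cpm_utils.py); the 4-argument call in the TFRecord script would need to be adapted to whichever signature the repo actually ships.

import numpy as np

def make_gaussian(size, fwhm=3, center=None):
    """Return a size x size map containing a 2-D Gaussian of the given FWHM at `center`."""
    x = np.arange(0, size, 1, np.float32)
    y = x[:, np.newaxis]
    if center is None:
        x0 = y0 = size // 2
    else:
        x0, y0 = center
    return np.exp(-4 * np.log(2) * ((x - x0) ** 2 + (y - y0) ** 2) / fwhm ** 2)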

How can performance be improved?

Hi,

Thank you for sharing the repository.
It's great there's a demo ready to run already.

Unfortunately I ran into some memory issues with TensorFlow 1.6 on a 2GB GPU.
I could run the demo using TensorFlow 1.4 on the CPU, but I only get about 1 fps.
I've tried reducing the webcam resolution as much as possible, but got no visible improvement.

How did you manage to get ~40 fps?

Thank you,
George

Save Trained Model using Keras as "model.pb"

Hi, thanks for an awesome project. I am trying to save the trained model by loading the weights that you have provided and saving the model from the session using tf.contrib.keras.save_model.
This requires me to provide the input placeholder, which is input_placeholder, and the output placeholder, which I am unable to find. Can anyone please help me with what the name of the output placeholder would be for this model?
I am using the run_demo_hand_with_tracker.py file for this.
Thanks

Ensemble_data_generator not found

I don't see the Ensemble_data_generator file, so it gives this error:
ModuleNotFoundError: No module named 'Ensemble_data_generator'

Can you please provide that too? Thanks so much for your efforts.

Help needed generating a frozen graph (protobuf .pb format) from the shared TensorFlow checkpoints

Dear @timctho ,
I want to generate a frozen graph from the TensorFlow checkpoint file that you have supplied; it would be great if you could help me.
I am using the following code to do that, but I am getting an error. I need help with the model argument (see the snippet below), i.e. which model to use. If I use cpm_hand, it throws an error stating: NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Code snippet for generating frozen graph

import tensorflow as tf
import argparse
from networks import get_network
import os

from pprint import pprint

os.environ['CUDA_VISIBLE_DEVICES'] = ''
parser = argparse.ArgumentParser(description='Tensorflow Pose Estimation Graph Extractor')
parser.add_argument('--model', type=str, default='cpm_hand', help='')
parser.add_argument('--size', type=int, default=224)
parser.add_argument('--checkpoint', type=str, default='', help='checkpoint path')
parser.add_argument('--output_node_names', type=str, default='Convolutional_Pose_Machine/stage_5_out')
parser.add_argument('--output_graph', type=str, default='./model.pb', help='output_freeze_path')

args = parser.parse_args()

input_node = tf.placeholder(tf.float32, shape=[1, args.size, args.size, 3], name="image")

with tf.Session() as sess:
    net = get_network(args.model, input_node, trainable=False)
    saver = tf.train.Saver()
    saver.restore(sess, args.checkpoint)
    input_graph_def = tf.get_default_graph().as_graph_def()
    output_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,                               # the session holding the restored variables
        input_graph_def,                    # graph definition to retrieve the nodes from
        args.output_node_names.split(","))  # output node names to keep in the frozen graph

with tf.gfile.GFile(args.output_graph, "wb") as f:
    f.write(output_graph_def.SerializeToString())

About training data: what is bg_img_dir?

I am trying to train my own hand pose model with your contribution, thanks. While reading your code, I got a little confused about bg_img_dir (in config.py). Can you tell me what it is, please?

demo_cpm_hand.py does not work.

I just downloaded your project and ran demo_cpm_hand.py, but it does not work.
Line 106 should be replaced with "model = cpm_hand.CPM_Model(FLAGS.input_size, FLAGS.hmap_size, FLAGS.stages, FLAGS.joints + 1)" and line 107 should be removed.
But I do not know how to fix what comes after that.
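For anyone hitting the same TypeError, the replacement described above would look roughly like this (assuming, as the issue suggests, that the current cpm_hand.CPM_Model constructor takes input_size, hmap_size, stages, and joints):

# demo_cpm_hand.py, around line 106: build the model with the newer constructor signature
# and drop the separate model.build_model(...) call on the following line.
model = cpm_hand.CPM_Model(FLAGS.input_size, FLAGS.hmap_size, FLAGS.stages, FLAGS.joints + 1)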

Problem running run_demo: "TypeError: __init__() got an unexpected keyword argument"

Hi,
I would like to train your net with my own data, but first I'm trying to launch your run_demo.py to test it, and I always get the same problem:
"TypeError: __init__() got an unexpected keyword argument"
I then realised that the input arguments of the net (cpm_body, for example) are not the same as the parameters passed by the run_demo.py script.
What can I do to modify it?
Thanks

Some questions about heatmap

I found that the output of the 22nd channel is weird; it is not the background of the source image.
[22nd-channel heatmap output]
The original image:
[original image]

Could you explain this?

convolutional-pose-machines-tensorflow

How can I fix this error?

Traceback (most recent call last):
  File "D:/convolutional-pose-machines-tensorflow-master/convolutional-pose-machines-tensorflow-master/demo_cpm_hand.py", line 431, in <module>
    tf.app.run()
  File "C:\Users\Khin Sabai\PycharmProjects\PythonRun\venv\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "D:/convolutional-pose-machines-tensorflow-master/convolutional-pose-machines-tensorflow-master/demo_cpm_hand.py", line 106, in main
    model = cpm_hand.CPM_Model(FLAGS.stages, FLAGS.joints + 1)
TypeError: __init__() missing 2 required positional arguments: 'stages' and 'joints'

Some problems with training

I used create_cpm_tfr_fulljoints.py to create the TFRecord correctly, but I can't find how to use it in the training script.
Any help?

Dataset used for training

Which dataset was the model named Hand Pose Model (tf) on the website trained on?
Thank you.

License?

Hi Tim,

What license are you making this available under? Is it Apache 2.0 like Tensorflow?

Import Error: Ensemble_data_generator

I think there must be an ensemble_data_generator.py file, needed for training on one's own data, which has not been uploaded to the repo yet. Can you please commit that file to the repo?

TensorFlow settings

Hello, I am having an issue trying the demo on a Jetson TX2. The issue is with protobuf:

libprotobuf FATAL google/protobuf/stubs/common.cc:61] This program requires version 3.4.0 of the Protocol Buffer runtime library, but the installed version is 2.6.1.  Please update your library.  If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library.  (Version verification failed in "external/protobuf_archive/src/google/protobuf/any.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what():  This program requires version 3.4.0 of the Protocol Buffer runtime library, but the installed version is 2.6.1.  Please update your library.  If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library.  (Version verification failed in "external/protobuf_archive/src/google/protobuf/any.pb.cc".

So how can I fix this, or change the settings to use that version?

Thanks!

How can two hands be recognized and posed?

For example, I have this picture:

[image]
It is recognized as:
[image]
When it is cut into two separate hands, it works OK:
[image]

[image]
It looks like the number of joints is set to 21, so one hand has about 20 points. If there are multiple hands, e.g. 2 hands, how can I recognize them successfully? Would changing the joint-number setting work?

Some questions about training

Hey, I used a different network architecture (ResNet-50) but the same method as you. When I ran training, I had 2 questions:
1. During training, at first the loss went down quickly, from 10e7 to 10e4, but after that it oscillates and does not decrease much further. When you trained the net, how did the loss change? Does my loss change normally? And what was the value of the loss after your training finished? (By the way, the learning rate and some other parameters are copied from your config.py.)
2. I'm not sure how to make the heatmap for the background class. In your code, I see you make an n x n matrix whose values are all 1, and then you subtract all the other heatmaps' values element-wise. Also, is gaussian_variance an important parameter? When I made it larger, the net seemed to have more difficulty converging.
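My reading of the background-channel construction mentioned in question 2, as a small sketch (not the repo's exact code; the final clipping to [0, 1] is my own addition):

import numpy as np

def make_background_heatmap(joint_heatmaps):
    """joint_heatmaps: array of shape [num_joints, h, w] with values in [0, 1]."""
    # Start from an all-ones map and subtract every joint heatmap element-wise,
    # so pixels near joints end up with low background probability.
    background = np.ones(joint_heatmaps.shape[1:], dtype=np.float32)
    for hm in joint_heatmaps:
        background -= hm
    return np.clip(background, 0.0, 1.0)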
