
Comments (9)

YHaooo-4508 commented on July 28, 2024

> @bmmtstb: I tried for a long time to predict keypoints on a custom dataset … but the results are pretty much garbage. … Additionally, I cannot reach the speeds reported in the original paper.

Your reply has saved me a lot of time. I originally intended to use a depth camera for real-time inference, but now it seems unnecessary.

It's a novel idea to use anchors for HPE, but I feel there are many unreasonable design choices. First, the computation "anchor coordinates plus offsets, multiplied by weights" could almost be replaced by "anchor coordinates multiplied by weights". Second, for the depth (z coordinate), the network predicts a depth value at every anchor point (11 × 11 × 16 anchors × 14 joints) and then takes a weighted sum to obtain the final keypoint depth. Why not directly weight and sum the depth values at the anchor points themselves?
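For reference, here is a minimal PyTorch sketch of the anchor-weighted aggregation being discussed, under my reading of the paper; the tensor names and shapes are illustrative assumptions, not the repository's actual API.

```python
import torch

def a2j_aggregate(anchors, response_logits, offsets, depths):
    """Sketch of A2J-style keypoint aggregation (shapes are assumptions).

    anchors:         (A, 2)     fixed anchor-point (x, y) positions, e.g. A = 11*11*16
    response_logits: (A, J)     predicted anchor responses per joint, e.g. J = 14
    offsets:         (A, J, 2)  predicted in-plane offsets from each anchor to each joint
    depths:          (A, J)     predicted depth of each joint at each anchor
    """
    # Normalize responses into per-joint weights over all anchors.
    w = torch.softmax(response_logits, dim=0)                             # (A, J)
    # In-plane estimate: weighted sum of (anchor position + offset).
    xy = (w.unsqueeze(-1) * (anchors.unsqueeze(1) + offsets)).sum(dim=0)  # (J, 2)
    # Depth estimate: weighted sum of the per-anchor predicted depths.
    z = (w * depths).sum(dim=0)                                           # (J,)
    # Dropping `offsets` here reduces xy to the plain weighted average of fixed
    # anchor positions, which is the simplification the comment above proposes.
    return xy, z
```

Whether the offset term is redundant likely depends on how peaked the learned weights become: with a single dominant anchor per joint, the offset is the only source of sub-grid precision.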


logic03 commented on July 28, 2024

Can you provide the training code for the ITOP dataset, just like the code provided for the NYU dataset?


zhangboshen commented on July 28, 2024

@logic03, sorry, the ITOP training code is kind of a mess and would take some effort to reorganize, but most of the ITOP training details are similar to the NYU code, except that we use a bounding box instead of center points.
As for the poor performance on your data, my guess is also that the MEAN and STD of your images are very different from those of the ITOP dataset; maybe you can train your own model.
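A minimal sketch of that suggestion, assuming the goal is to shift and scale a custom depth frame so its statistics match the training-time MEAN and STD; the constants and the zeros-as-invalid convention below are illustrative assumptions, not values from this repository.

```python
import numpy as np

# Placeholder statistics; substitute the MEAN/STD actually used for ITOP training.
TRAIN_MEAN, TRAIN_STD = 3.0, 2.0  # assumed values, in the training data's depth units

def match_depth_stats(depth, train_mean=TRAIN_MEAN, train_std=TRAIN_STD):
    """Standardize a custom depth frame toward the training distribution."""
    depth = depth.astype(np.float32)
    valid = depth > 0                       # assume zero encodes missing depth
    mu = depth[valid].mean()
    sigma = max(depth[valid].std(), 1e-6)   # guard against a constant frame
    out = np.zeros_like(depth)
    out[valid] = (depth[valid] - mu) / sigma * train_std + train_mean
    return out
```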


Shreyas-NR commented on July 28, 2024

Hi @logic03,
Were you able to use this model to predict joints on a custom dataset?
I'm also trying to pass a depth frame in alongside the ITOP_side data, changing the mean value so that the input depth frame matches the ITOP_side dataset. Unfortunately, the results are very bad.
Could you tell me whether you managed to get any further with this?


bmmtstb commented on July 28, 2024

I tried for a long time to predict keypoints on a custom dataset: I cleaned up the code and modified my depth images to be as close to the ITOP_side ones as possible, but the results are pretty much garbage. My guess is that the model overfitted the ITOP data. Did anyone train a more general model on multiple datasets? As far as I can see, none of the models were tested on datasets other than their training set. Different data, yes, but not different datasets.

Additionally, I cannot reach the speeds reported in the original paper. I get roughly 10 iterations per second, and that is on a better GPU, with my fully optimized torch-only code, better data loading, and no output. With e.g. YOLOv3 predicting human bounding boxes, I get not much more than 8 iterations per second.
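For anyone checking their numbers against the paper's reported speed, a minimal throughput-measurement sketch; the input shape, warm-up count, and iteration count are assumptions for illustration, and `model` stands in for whatever network is under test.

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, input_shape=(1, 1, 288, 288), n_warmup=20, n_iters=200):
    """Rough single-stream inference throughput (iterations/second)."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)

    for _ in range(n_warmup):          # warm up kernels before timing
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()       # don't time queued, unfinished kernels

    t0 = time.perf_counter()
    for _ in range(n_iters):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    return n_iters / (time.perf_counter() - t0)
```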


git-xuefu commented on July 28, 2024

> @YHaooo-4508: Your reply has saved me a lot of time. I originally intended to use a depth camera for real-time inference, but now it seems unnecessary. …

Hi, have you used other models to do real-time inference successfully?


YHaooo-4508 commented on July 28, 2024

> @git-xuefu: Hi, have you used other models to do real-time inference successfully?

Currently there are few depth-based algorithms, but there are many RGB-based 3D HPE algorithms, such as RLE, PoseFormer, MotionBERT, and so on.


git-xuefu commented on July 28, 2024

> @YHaooo-4508: Currently there are few depth-based algorithms, but there are many RGB-based 3D HPE algorithms, such as RLE, PoseFormer, MotionBERT, and so on.

Thanks for your reply, it helped me a lot. One more question: do you know of an RGBD-based algorithm that can be used for real-time inference?


YHaooo-4508 commented on July 28, 2024

> @git-xuefu: Thanks for your reply, it helped me a lot. One more question: do you know of an RGBD-based algorithm that can be used for real-time inference?

There is little research in this area using depth maps, and I don't know of an RGBD-based algorithm that achieves both fast and accurate results. From my paper search, A2J appears to be the algorithm closest to your requirements.

