pengfeiren96 / srn Goto Github PK
[BMVC 2019] Code for "SRN: Stacked Regression Network for Real-time 3D Hand Pose Estimation"
Hi, I have been reading your paper and code recently. It is great work. Could you provide the training code? Thank you.
Thanks for sharing the realtime code. May I know how you obtained the seen and unseen results for the Hands2017 dataset? I'm looking at the dataset but I'm not able to identify the subject IDs of the train and test images.
Hi,
Could you provide pretrained models of NYU, MSRA and ICVL?
Hello and thanks for sharing your work.
According to Section 5.3 of your paper the model size is 21.3 MB. However, the checkpoint has a size of 128 MB. Is it a typo in the paper or am I missing something here? Thanks in advance.
Hi, in the paper it is mentioned that the re-parameterization module takes the joint coordinates and the depth as inputs and outputs the 3D heat maps and unit vector fields. Do the joint coordinates here refer to the predicted joint coordinates from the previous regression module?
Thank you.
Hi, I'm running the pretrained model on the DHG-14/28 and SHREC'17 hand datasets and it's working perfectly. I was able to get the world coordinates in X and Y with an error of ~15 mm in my dataset's world coordinate system.
However, I'm confused about the depth returned by SRN. In the "Coordinate Decoupling" section of the paper, it's mentioned that SRN predicts scaled image coordinates and their corresponding depths in a decoupled manner. My question is: are these depths in the pixel values of the depth images, or are they in millimetres?
If I assume they are depth-image pixel values and compare them to my dataset's depth values in the world coordinate system (which are in meters), I get an average scaling factor of 0.00092, which I presume is the depth scale of the RealSense camera used by my dataset's creators.
Multiplying SRN's depth output by 0.00092 (px -> m) gives me a better depth estimate than dividing by 1000 (mm -> m) when tested against my dataset's world depths. But if the returned depth values really are in millimetres, I guess it would be better to multiply by 0.001 to get meters directly and stay true to SRN's output format.
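The two unit interpretations discussed above can be sketched as follows. This is a minimal, hypothetical comparison, not SRN's actual post-processing; the example depth value and the 0.00092 factor are taken from the comment above and are assumptions about one particular dataset/camera:

```python
# Hypothetical sketch: converting an SRN depth prediction to meters under
# the two interpretations discussed above. Names and values are assumptions.

REALSENSE_DEPTH_SCALE = 0.00092  # empirical px -> m factor from the comment above
MM_TO_M = 0.001                  # assumes SRN outputs millimetres

def depth_to_meters(srn_depth, scale):
    """Convert a raw SRN depth prediction to meters using a given scale."""
    return srn_depth * scale

srn_depth = 850.0  # example raw depth value (assumed)
as_pixels_m = depth_to_meters(srn_depth, REALSENSE_DEPTH_SCALE)
as_mm_m = depth_to_meters(srn_depth, MM_TO_M)
```

Comparing both converted values against the ground-truth world depths of one's own dataset (as done above) is a reasonable way to determine which interpretation applies.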
Hello! Your results are very promising. When will you release the code? Thank you!
Hello! I'm trying to use your NN with images acquired from my Kinect V2.
I started in offline mode: I simply acquired some depth images from the Kinect, saved them in the data/kinect2 folder, and then ran the realtime.py script.
However, I wasn't successful. This is an example of the results I get (I think the network is just guessing and drawing some random joints):
I dug into the code to check whether my images differed from yours and found that the depth values were different, so I rescaled mine to be in a range of about 0-3000, and what I get is this:
instead of this (yours):
Can you give me any suggestions, please?
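For reference, the rescaling step described above can be sketched like this. This is only an illustrative guess at what "rescale to about 0-3000" might look like; Kinect V2 normally reports depth in millimetres already, and the function name, the valid-pixel convention, and the target range are all assumptions, not part of the SRN code:

```python
import numpy as np

def rescale_depth(depth, target_max=3000.0):
    """Linearly map a depth image's valid (nonzero) values into [0, target_max].

    Assumes 0 marks missing depth, as is common for Kinect depth frames.
    """
    depth = depth.astype(np.float32)
    valid = depth > 0
    if not valid.any():
        return depth
    d_max = depth[valid].max()
    out = np.zeros_like(depth)
    out[valid] = depth[valid] / d_max * target_max
    return out
```

If the network was trained on depth maps in millimetres, a plain unit conversion (rather than a per-frame linear rescale) may be the more faithful fix.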
Moreover, I have some questions:
Thanks in advance!
Hi,
Could you recommend a useful method for 3D hand pose estimation?
Ideally one that is good in both accuracy and speed, like MediaPipe (but that only handles 2D RGB).
Thanks so much!