
Comments (11)

xinghaochen commented on June 9, 2024

Hi, if you use depth cameras other than SR300, the only thing you have to deal with is changing the intrinsic parameters in hand_pose_estimator.cpp or realsense_realtime_demo_librealsense2.py.

The default values are the intrinsic parameters of the Intel SR300 camera; you can replace them with those of the D415 or D435.
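As an illustration, swapping the intrinsics in the Python demo might look like the sketch below. The variable names `ux` and `uy` follow the ones mentioned later in this thread for realsense_realtime_demo_librealsense2.py; the SR300 defaults shown are placeholder values, not the script's actual numbers.

```python
# Sketch only: variable names follow the ux/uy mentioned in this thread;
# the SR300 default values below are illustrative placeholders.
fx, fy = 463.9, 463.9       # assumed SR300-like focal lengths (pixels)
ux, uy = 320.0, 240.0       # assumed SR300-like principal point

# Swap in the calibrated intrinsics of your own camera, e.g. a D435:
fx, fy = 385.13, 385.13
ux, uy = 316.802, 241.818
```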

Another tip for the real-time demo: since we use a naive depth-thresholding strategy to detect and segment the hand, you have to put your right hand in front of the camera and try to avoid a cluttered foreground and any redundant arm region around the hand.
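The naive depth-thresholding strategy can be sketched like this (a minimal NumPy illustration assuming depth in millimetres; not the project's actual code):

```python
import numpy as np

def segment_hand(depth, lower=0, upper=650):
    # Keep only pixels whose depth falls inside (lower, upper) mm;
    # everything else (background, far objects, invalid zeros) is dropped.
    return (depth > lower) & (depth < upper)

# Toy frame: a 'hand' patch at 300 mm in front of a wall at 1200 mm.
depth = np.full((4, 4), 1200, dtype=np.uint16)
depth[1:3, 1:3] = 300
mask = segment_hand(depth, lower=180, upper=350)  # only the hand patch remains
```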

Currently I am not quite sure if we can release the pre-trained models on the HANDS17 dataset online, due to the dataset's license. We may contact the authors of the dataset to clarify this in the future.

from pose-ren.

xinghaochen commented on June 9, 2024

@joohaeng Hi, in case you are still interested: I have just released the pre-trained models on the HANDS17 dataset online.


xinghaochen commented on June 9, 2024

@Suraj520 Would you mind checking the original depth image from the camera to see if the camera is working properly?
According to your screenshot, the depth image is quite poor and the hand can hardly be recognized even by eye. In the display function, I manually discard depth values (> 1500) that are too far from the camera. Did you happen to put your hand too far away?
Also, if you set the parameters lower_ and upper_ to 180 and 350, you should make sure your hand is within [180 mm, 350 mm] of the camera, otherwise the hand segmentation will fail.
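For reference, the display step described above (discarding depth values beyond 1500 mm before visualization) might look roughly like this; the function name and normalization are hypothetical, not the project's exact code:

```python
import numpy as np

def depth_for_display(depth, max_mm=1500):
    # Zero out values beyond max_mm, then normalize to 8-bit for display.
    d = np.where(depth > max_mm, 0, depth).astype(np.float32)
    if d.max() > 0:
        d = d * (255.0 / d.max())
    return d.astype(np.uint8)

depth = np.array([[300, 1600], [900, 1500]], dtype=np.uint16)
img = depth_for_display(depth)  # the 1600 mm pixel is discarded (mapped to 0)
```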


joohaeng commented on June 9, 2024

I could run the demo with the SR300, which shows great performance, as in your demo images.

I will also test with the D400 cameras using proper intrinsic parameters. By the way, do you expect the D400 to give some performance gain in pose estimation?

I already have the HANDS17 dataset. Rather than waiting for your release, I may try to train on this dataset myself. Do you have any guidance on the training procedure for HANDS17?

Thank you again.


xinghaochen commented on June 9, 2024

I am not quite familiar with the D400 cameras, but I think they will bring some performance gain if they can provide better depth images than the SR300.
You can find details of the training procedure in our paper.


joohaeng commented on June 9, 2024

I could run the real-time demo for D435 with the following intrinsic values after calibration:
stream.depth: width: 640, height: 480, ppx: 316.802, ppy: 241.818, fx: 385.13, fy: 385.13

The performance was as good as SR300, I guess.

By the way, what would be proper values for lower_ and upper_? The defaults are 0 and 650, but I changed them to 180 and 350.

Thanks again.
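As a quick sanity check on those intrinsics, the standard pinhole back-projection (generic camera geometry, not project-specific code) maps a depth pixel to a 3D camera-frame point:

```python
def pixel_to_3d(u, v, z_mm, fx, fy, ppx, ppy):
    # Standard pinhole model: back-project pixel (u, v) with depth z_mm
    # (millimetres) into the camera coordinate frame.
    x = (u - ppx) * z_mm / fx
    y = (v - ppy) * z_mm / fy
    return x, y, z_mm

# D435 values from the calibration above:
fx = fy = 385.13
ppx, ppy = 316.802, 241.818
# A pixel at the principal point maps straight down the optical axis:
x, y, z = pixel_to_3d(316.802, 241.818, 300.0, fx, fy, ppx, ppy)
```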


xinghaochen commented on June 9, 2024

Thanks for sharing the intrinsic parameters for the D435 camera! Glad to hear that the demo works well with the D435.

The values of lower_ and upper_ are just used for hand segmentation via depth thresholding. It's fine to change them to any other values as long as the hand is properly segmented.


joohaeng commented on June 9, 2024

@joohaeng Hi, in case you are still interested: I have just released the pre-trained models on the HANDS17 dataset online.

That's HUGE! I was testing the MSR models to recognize ASL, but found some limitations for subtle finger postures. I am curious to see how HANDS17 performs on the harder cases. Thanks a lot!


Suraj520 commented on June 9, 2024

I could run the real-time demo for D435 with the following intrinsic values after calibration:
stream.depth: width: 640, height: 480, ppx: 316.802, ppy: 241.818, fx: 385.13, fy: 385.13

The performance was as good as SR300, I guess.

By the way, what should be proper values for lower_ and upper_? The default values are 0 and 650. But, I changed them to 180 and 350.

Thanks again.

Hi @joohaeng, I am trying to run the Pose-REN demo with an Intel RealSense D435, and @xinghaochen redirected me to the camera intrinsic parameters you mentioned. I have a small doubt: are the ux and uy on line 53 of realsense_realtime_demo_librealsense2.py the same as the ppx and ppy values you mentioned above?
I modified the scripts according to your suggestion, using the D435 intrinsic parameters, but the hand segmentation and pose estimation results are not appreciable. (P.S. I am following all the other instructions from @xinghaochen: right hand, uncluttered foreground, etc.)

Looking forward to your help!

(Attaching my detection results using the ICVL pre-trained models on an Intel RealSense D435.)


joohaeng commented on June 9, 2024

Hi @joohaeng, I am trying to run the Pose-REN demo with an Intel RealSense D435, and @xinghaochen redirected me to the camera intrinsic parameters you mentioned. I have a small doubt: are the ux and uy on line 53 of realsense_realtime_demo_librealsense2.py the same as the ppx and ppy values you mentioned above?
I modified the scripts according to your suggestion, using the D435 intrinsic parameters, but the hand segmentation and pose estimation results are not appreciable. (P.S. I am following all the other instructions from @xinghaochen: right hand, uncluttered foreground, etc.)

Looking forward to your help!

(Attaching my detection results using the ICVL pre-trained models on an Intel RealSense D435.)

Hi @Suraj520. My experiment with D435 was a month ago, and I've got a bad memory. ^_^

I'll check my environment and get back to you soon. Thanks.


Suraj520 commented on June 9, 2024

@Suraj520 Would you mind checking the original depth image from the camera to see if the camera is working properly?
According to your screenshot, the depth image is quite poor and the hand can hardly be recognized even by eye. In the display function, I manually discard depth values (> 1500) that are too far from the camera. Did you happen to put your hand too far away?
Also, if you set the parameters lower_ and upper_ to 180 and 350, you should make sure your hand is within [180 mm, 350 mm] of the camera, otherwise the hand segmentation will fail.

Thanks a lot @xinghaochen, your suggestions helped me solve the bug! :)

