
Comments (10)

zxz267 commented on June 15, 2024

Traceback (most recent call last):
  File "/home/yyh_file/AvatarJLM-main/vis.py", line 94, in <module>
    main(opt)
  File "/home/yyh_file/AvatarJLM-main/vis.py", line 75, in main
    avg_error = evaluate(opt, logger, model, test_loader, save_animation=1)
  File "/home/yyh_file/AvatarJLM-main/test.py", line 41, in evaluate
    vis.save_animation(body_pose=body_parms_gt['body'], savepath=save_video_path_gt, bm=model.bm, fps=60, resolution=(800, 800))
  File "/home/yyh_file/AvatarJLM-main/utils/utils_visualize.py", line 174, in save_animation
    mv = MeshViewer(width=imw, height=imh, use_offscreen=True)
  File "/home/yyh_file/AvatarJLM-main/body_visualizer/mesh/mesh_viewer.py", line 59, in __init__
    self.viewer = pyrender.OffscreenRenderer(*self.figsize)
  File "/home/.conda/envs/AvatarJLM/lib/python3.9/site-packages/pyrender/offscreen.py", line 31, in __init__
    self._create()
  File "/home/.conda/envs/AvatarJLM/lib/python3.9/site-packages/pyrender/offscreen.py", line 137, in _create
    egl_device = egl.get_device_by_index(device_id)
  File "/home/.conda/envs/AvatarJLM/lib/python3.9/site-packages/pyrender/platforms/egl.py", line 83, in get_device_by_index
    raise ValueError('Invalid device ID ({})'.format(device_id, len(devices)))
ValueError: Invalid device ID (0)

Thanks for your work, but unfortunately there seems to be a problem with the pose visualization code. Have you encountered this problem, and if so, how can it be solved? Thank you for your reply!

Apologies for the delay in responding. You can try to resolve this issue by following this link: GitHub Issue #79.
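
For reference, a common workaround for this EGL error (not necessarily what the linked issue suggests) is to force pyrender onto a software rendering backend before it is imported. A minimal sketch, assuming OSMesa is installed:

    # Minimal sketch, assuming OSMesa is installed: select pyrender's OSMesa
    # software backend instead of EGL, which avoids the EGL device enumeration
    # that raises "Invalid device ID (0)" on headless machines. The environment
    # variable must be set before pyrender is first imported.
    import os
    os.environ['PYOPENGL_PLATFORM'] = 'osmesa'  # or 'egl' with a valid EGL_DEVICE_ID

    import pyrender  # import only after the backend is selected

    renderer = pyrender.OffscreenRenderer(viewport_width=800, viewport_height=800)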


wwwpkol commented on June 15, 2024


Thanks for your reply; I have resolved this issue. However, a new problem has come up: I used OpenVR to obtain the rotation and position of the HTC Vive devices, but after preprocessing the data, the posture predicted by the network was completely incorrect. Do you have any reference for preprocessing VR data?
[image]


zxz267 commented on June 15, 2024


I do not have any experience using OpenVR to obtain the tracking signals of the HTC Vive. The results may indicate that the coordinate system of the training data differs from that of your preprocessed data.
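
For illustration, a minimal, hypothetical sketch of such an alignment: OpenVR reports poses in a right-handed, y-up world frame, while the AMASS/SMPL training data is z-up, so each tracked rotation and position would need to be re-expressed before being fed to the network. The transform below only illustrates the idea; it is not the authors' actual preprocessing.

    # Hypothetical sketch: re-express an OpenVR pose (right-handed, y-up world
    # frame) in the z-up world frame used by AMASS/SMPL training data. This
    # illustrates the idea only; it is not the authors' actual preprocessing.
    import numpy as np

    # Change of basis mapping y-up coordinates to z-up: +90 degrees about x.
    C = np.array([[1.0, 0.0,  0.0],
                  [0.0, 0.0, -1.0],
                  [0.0, 1.0,  0.0]])

    def openvr_to_zup(R_device, t_device):
        """R_device: 3x3 rotation, t_device: 3-vector, in OpenVR's world frame."""
        return C @ R_device, C @ t_device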


wwwpkol commented on June 15, 2024


Thanks for your reply. What did you use to obtain the data from the headset and controller?


zxz267 commented on June 15, 2024


We use the tools provided by our colleagues to record the data from the PICO devices.


wwwpkol commented on June 15, 2024

Thank you for your reply.


wwwpkol commented on June 15, 2024
[image]
The current coordinate system is correct, but all postures predicted with the open-source network model have severe jitter. I suspect it is a problem with the HTC Vive I am using, and I plan to switch to a PICO device. Could you please share the data recording tool and preprocessing code? Thanks.


zxz267 commented on June 15, 2024


If you aim to use PICO devices for testing, you might consider the PICO SDK for developers, which could potentially help with recording tracking data (although I must admit that I am not well-versed in this particular aspect). Unfortunately, our tools and the corresponding code are currently unavailable for release.
Regarding the jitter, it may stem from discrepancies between the joint alignments of the SMPL-H model and the tracking devices. To mitigate this problem to a certain degree, we apply device-specific empirical transformations.
Additionally, we also observe some jitter when testing on the devices, which can be attributed in part to differences between the training data and the test data.
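
As a generic mitigation that the thread does not itself propose, frame-to-frame jitter in the predicted joint positions can also be damped with simple temporal smoothing, for example an exponential moving average:

    # Generic jitter mitigation (not the authors' method): exponential moving
    # average over per-frame predictions. Suitable for joint positions;
    # rotations would need quaternion interpolation instead. A smaller alpha
    # gives smoother output at the cost of more latency.
    import numpy as np

    def ema_smooth(frames, alpha=0.3):
        """frames: (T, D) array of per-frame predictions."""
        out = np.empty_like(frames)
        out[0] = frames[0]
        for t in range(1, len(frames)):
            out[t] = alpha * frames[t] + (1.0 - alpha) * out[t - 1]
        return out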


XuxinBlue commented on June 15, 2024


Hello, I am very interested in your research in this area, and I have several questions for you:
1. What software or SDK did you use to obtain the position and pose parameters of the headset and controllers?
2. Your demo is very stable, perhaps thanks to some data preprocessing; could you please describe your specific approach?
3. If that is too much to explain, could you give us a copy of a preprocessed JSON (or other format) file holding the parameter data, together with the corresponding recorded video? Thank you very much!


zxz267 commented on June 15, 2024


  1. While I am not well-versed in this specific aspect, it is plausible that there is a way to extract tracking signals from the official SDK; however, I cannot confirm this with certainty. The tools I use for this purpose were developed by our team and may indeed interface with the SDK. Unfortunately, these proprietary tools are not available for release.
  2. We apply device-specific empirical rigid transformations to translate the raw sensor signals into data corresponding to the head and hand joints (see the sketch below).
  3. You can access some of our testing samples through this link. Please note that these samples do not include accompanying recorded videos. At present, we are unable to provide test samples with synchronized video recordings.
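
To make item 2 concrete, here is a hypothetical sketch of such a device-specific rigid transformation, mapping a tracked headset pose to the SMPL-H head-joint pose; R_offset and t_offset are made-up placeholders, since the authors' calibrated values are not public.

    # Hypothetical sketch of a device-specific empirical rigid transformation:
    # composing the tracked headset pose with a fixed device-to-joint offset.
    # R_offset and t_offset are illustrative placeholders, not the authors'
    # calibrated values.
    import numpy as np

    R_offset = np.eye(3)                      # placeholder rotation offset
    t_offset = np.array([0.0, -0.08, -0.10])  # placeholder offset (meters), in the device's local frame

    def device_to_joint(R_device, t_device):
        """Map a tracked device pose to the corresponding body-joint pose."""
        R_joint = R_device @ R_offset
        t_joint = t_device + R_device @ t_offset
        return R_joint, t_joint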

