kevinltt / video2bvh
Extracts human motion from video and saves it as a BVH mocap file.
License: MIT License
It seems that the current version of the xyz-to-bvh code cannot handle the self-rotation of the shoulder, arm, and hand well. I am wondering how to utilize the limited skeleton information (for example, on the Human3.6M dataset) to estimate the self-rotation of the above-mentioned parts. Thanks!
So far I have been able to execute the code up to where I need to do 3D estimation. I am following the steps listed in the Google Colab notebook (https://colab.research.google.com/gist/eyaler/886ecdff6429ef9cd585d5dc8fdd51ec/openpose-video2bvh.ipynb#scrollTo=Oq64m1nPHm5E).
When I run the Convert 3D pose from camera coordinates to world coordinates block, I get this error: RuntimeError: CUDA out of memory. Tried to allocate 828.00 MiB (GPU 0; 6.00 GiB total capacity; 1.78 GiB already allocated; 0 bytes free; 1.81 GiB reserved in total by PyTorch)
I have tried several suggestions found online, including calling torch.cuda.empty_cache().
Does anyone have a solution to this?
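One common workaround for out-of-memory errors, beyond empty_cache(), is to run inference on the frame sequence in chunks rather than all at once. A minimal sketch of the pattern in numpy, where run_model is a hypothetical stand-in for the actual estimator's forward pass:

```python
import numpy as np

def run_model(batch):
    # Stand-in for the real 2D-to-3D network call (hypothetical
    # placeholder); replace with your estimator's forward pass.
    return batch * 2.0

def estimate_in_chunks(poses_2d, chunk_size=64):
    """Process the frame sequence in fixed-size chunks so peak
    memory stays bounded instead of growing with video length."""
    outputs = []
    for start in range(0, len(poses_2d), chunk_size):
        chunk = poses_2d[start:start + chunk_size]
        outputs.append(run_model(chunk))
    return np.concatenate(outputs, axis=0)

frames = np.ones((150, 17, 2))   # 150 frames, 17 joints, (x, y)
result = estimate_in_chunks(frames)
print(result.shape)              # (150, 17, 2)
```

Caveat: temporal models such as the 243-frame VideoPose3D checkpoint need overlapping padding between chunks so each frame keeps its full receptive field; this sketch omits that detail.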
Hello author, I have two questions:
1. When computing the direction cosine matrix, how are the two required vectors determined? How should the vectors be chosen to guarantee the correctness of the bone-rotation Euler angles?
2. What is the rationale behind the different rotation orders defined for different joints?
I'm a graphics novice; any guidance would be appreciated.
Hello author, I have been studying this code recently. Everything else is clear to me, but I have some questions about how x_dir/y_dir/z_dir are chosen in the euler-map function. For example:
if joint == 'Hip':
    x_dir = pose[index['LeftHip']] - pose[index['RightHip']]
    y_dir = None
    z_dir = pose[index['Spine']] - pose[joint_idx]
    order = 'zyx'
Why is x_dir LeftHip - RightHip, rather than LeftHip - Hip or Hip - RightHip?
And here:
elif joint == 'Thorax':
    x_dir = pose[index['LeftShoulder']] - pose[index['RightShoulder']]
    y_dir = None
    z_dir = pose[joint_idx] - pose[index['Spine']]
    order = 'zyx'
Why can't z_dir be Neck - Thorax?
Also, please check whether the following understanding of mine is correct:
As I understand it, if the coordinate of joint i at frame t is A_i and its parent is A_p, then what we are solving for is the rotation matrix R such that
A_i - A_p = A_offset R
Here R contains 9 parameters, and the equation above gives three constraints; each row of R being a unit vector adds three more, and pairwise orthogonality of the rows adds another three, so there are nine constraints in total. I have not worked through the case of arbitrary coordinate transforms, but the system above should have only finitely many solutions; I am not sure whether the solution is unique. If I can solve it this way, can it replace the euler-map function? And if the solution is not unique, does that mean the euler-map function is not unique either?
Separately, I tried a simple 2D case myself. I found that if I choose a coordinate axis parallel to the bone direction, the offset must be parallel to it as well, otherwise the computed motion deviates, rotating by one extra Euler angle. But in my task the initial directions are not exactly parallel to a coordinate axis as they are in your code, and for certain reasons I cannot force them to be axis-aligned, so this euler-map approach does not seem to work. Do you have any good suggestions?
Thanks!
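On the uniqueness question above: a single bone vector leaves one degree of freedom, the twist about the bone, undetermined, which is exactly why a second direction vector is needed. A small numpy check (illustrative only, not code from the repo) showing two different rotations that map the same rest offset to the same bone vector:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

offset = np.array([0.0, 0.0, 1.0])   # bone offset along z in the rest pose
# Two different rotations: the second adds a twist about the bone axis.
R1 = rot_x(0.5)
R2 = rot_x(0.5) @ rot_z(1.0)         # extra roll about the offset direction
# Both map the offset to the same world-space bone vector...
print(np.allclose(R1 @ offset, R2 @ offset))   # True
# ...yet the matrices differ, so the bone vector alone does not
# pin down R; hence the second direction vector in euler-map.
print(np.allclose(R1, R2))                     # False
```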
The demo shows body capture but not face and hands. Just wondering, can these also be converted from OpenPose JSON files?
Hi, thanks for your great work.
I tried the example and everything looks good until the last step.
The output .mp4:
The pose2d_file:
outputfile.zip
What could be wrong? Thanks for your help.
Can you give me some advice about converting finger 3D coordinates to a .bvh file?
How can I set cam_id and subject for different videos?
And can cameras.h5 stay the same for different videos, or do I need to set it per video?
The generated BVH uses the CMU skeleton, which does not match 3ds Max. After renaming the joints I tried importing it onto a 3ds Max Biped skeleton; in bvhacker the skeleton looks normal,
but after importing into 3ds Max the bones are all squeezed together.
Through experimentation, I found this is related to initial_directions.
video2bvh/bvh_skeleton/cmu_skeleton.py
Line 88 in 3948283
When I changed initial_directions, reordering each entry to [y, z, x], i.e.:
for key, item in initial_directions.items():
    initial_directions[key] = [item[1], item[2], item[0]]
video2bvh/bvh_skeleton/h36m_skeleton.py
Line 212 in 87f6135
Why is the order 'xzy' here? What is the difference between 'xzy' and 'zyx'?
I have been searching for this for two hours.
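On the 'xzy' vs 'zyx' question: the order string says in which sequence the single-axis rotations are composed, and the same three angles produce different overall rotations under different orders. A quick numpy check (illustrative only):

```python
import numpy as np

def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

ax, ay, az = 0.3, 0.5, 0.7
R_xzy = rx(ax) @ rz(az) @ ry(ay)   # 'xzy' composition
R_zyx = rz(az) @ ry(ay) @ rx(ax)   # 'zyx' composition
print(np.allclose(R_xzy, R_zyx))   # False: order changes the result
```

My reading of the repo (an interpretation, not confirmed by the author) is that each joint's order is chosen so the first axis is the one most reliably recovered from the measured direction vectors, e.g. the x axis along the arm for shoulders and elbows.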
In the demo file, under Initialize 3D pose estimator, in the second line:
e3d = estimator_3d.Estimator3D(
    config_file='models/openpose_video_pose_243f/video_pose.yaml',
    checkpoint_file='models/openpose_video_pose_243f/best_58.58.pth'
)
What does models/openpose_video_pose_243f/video_pose.yaml refer to?
It is throwing a FileNotFoundError.
No errors arise in the previous code block.
I searched for the files everywhere; there is no lead and no reference to them in the code.
Am I doing anything wrong?
Solved.
Hi @KevinLTT ,
Thanks for your work! I was stuck on an issue when using pose2bvh to convert our 3D pose to BVH for motion retargeting: the retargeting result was bad, so I compared the inputs, our BVH and the Mixamo BVH. Here are their T-poses:
Picture 1 is our generated T-pose and picture 2 is Mixamo's. It seems the local axes in the legs are reversed; how can I convert the legs' local axes to match Mixamo's? When using pose2bvh I made one change: I used a right-handed coordinate system instead of a left-handed one.
Sincerely looking forward to your reply, thanks!
regards,
summer Gao
Hello, Thank you for your great work
Can you provide a Colab notebook demo? It would be helpful for everyone and make this repo easier to use.
Thank you
Hello, after installing OpenPose I am unable to resolve the error ImportError: cannot import name 'pyopenpose'. How were you able to run it while developing the Colab? Please help!
Hi, in cmu_skeleton.py you have also implemented get_bvh_header and pose2euler, but not in openpose_skeleton.py. Do you have a plan to develop these functions for the OpenPose skeleton?
video2bvh/bvh_skeleton/h36m_skeleton.py
Line 75 in 87f6135
z axis up, x axis right, y axis towards the screen?
Thanks for releasing the nice code.
In
video2bvh/bvh_skeleton/coco_skeleton.py
Line 32 in 312d18f
we can see that Nose has two children: LeftEye and RightEye
'Nose': ['LeftEye', 'RightEye'],
The initial direction of LeftEye seems to have two nonzero components, along x and z.
What is the best practice to handle this case? Should we introduce a pseudo-joint MidEye?
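One workable approach, sketched here under the assumption that you control the skeleton definition (this is not code from the repo): insert a pseudo-joint at the midpoint of the eyes, so that Nose->MidEye is purely along one axis and MidEye->LeftEye purely along another.

```python
import numpy as np

def add_mid_eye(pose, left_eye_idx, right_eye_idx):
    """Append a pseudo keypoint halfway between the eyes.
    Returns the augmented pose and the new joint's index."""
    mid = (pose[left_eye_idx] + pose[right_eye_idx]) / 2.0
    return np.vstack([pose, mid]), len(pose)

pose = np.array([[0.0, 0.0, 0.0],    # Nose
                 [0.3, 0.0, 0.2],    # LeftEye
                 [-0.3, 0.0, 0.2]])  # RightEye
pose, mid_eye_idx = add_mid_eye(pose, 1, 2)
print(pose[mid_eye_idx])             # [0.  0.  0.2]
```

With this layout, each child joint again has a one-dimensional initial direction, which is what the initial_directions table expects.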
Hi,
I am now trying to convert a file storing the 3D coordinates of a pose to BVH, and then visualize it in Blender. However, after converting, I noticed that the coordinate system in Blender is right-handed, while math3d.py uses the left-handed rule for rotation calculation. I am new to 3D math and do not know how to edit the code for a right-handed system. Is there any clue that can help me re-edit the code?
Thank you so much!
I get the error "cannot import name 'pyopenpose'" when I execute the first part. What should I do?
I want to use coco_skeleton, but its class seems incomplete compared to the other two.
I skimmed the issues,
and it seems many people are asking this kind of question...
Could you write a tutorial blog post or something similar?
Hello author, in the final "Convert 3D pose to BVH" step of the demo, you used the same .npy file to generate .bvh files in both the Human3.6M and cmu_skeleton styles. But the Human3.6M and cmu_skeleton styles are not the same, and the keypoint orders in keypoint2index differ as well. Do you resolve this here by matching on the joint names?
I have 3D coordinate data (.h5 file)
I want to convert the data of these coordinates into a bvh file.
What should I do?
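Assuming the .h5 file can be loaded (e.g. with h5py) into a (frames, joints, 3) array whose joint order matches one of the repo's skeleton definitions, the conversion should reduce to a single call to that skeleton's poses2bvh method. A hedged sketch; the dataset name, and the exact class/method names, are assumptions to verify against the repo and your file:

```python
import numpy as np

# With h5py installed, loading would look like:
#     import h5py
#     with h5py.File('poses.h5', 'r') as f:
#         poses_3d = f['poses'][:]      # dataset name is an assumption
poses_3d = np.zeros((10, 17, 3))        # stand-in: 10 frames, 17 joints

# Expected layout: one (joints, 3) pose per frame, joint order matching
# the target skeleton's keypoint2index mapping.
assert poses_3d.ndim == 3 and poses_3d.shape[2] == 3

# With the real repo, the conversion would then be roughly:
#     from bvh_skeleton import h36m_skeleton
#     h36m_skeleton.H36mSkeleton().poses2bvh(poses_3d, output_file='out.bvh')
print(poses_3d.shape)                   # (10, 17, 3)
```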
After generating the BVH and importing it into Blender with MakeWalk, the model is deformed.
Hi,
Do you know offhand which of 3d-pose-baseline or VideoPose3D is faster/lighter? I'm looking to do 2D-to-3D pose inference on an Android device in real time. I tried VideoPose3D but it was way too slow; I'm wondering if I should also try 3d-pose-baseline.
Hi,
Thanks for the great work for retargeting. In addition to pose keypoints, is it possible to add face and hand keypoints?
Thanks.
Would you have any interest in building a Docker image for this excellent repo?
I know your repo from OpenMMD.
Thanks.
Looking forward to any replies.
A very good project, thanks! I have a question: after importing the exported BVH into Blender 2.8, the size is too large. Where can I set the BVH scale? Many thanks!
Thanks Kevin for your fantastic work!
I came across an issue when analysing videos. When a person walks towards the camera, the 3D coordinates on the y axis do not change, or change only slightly. Could I ask whether there is any setting to improve the performance?
I use OpenPose with maximum accuracy settings to generate the 2D data and the VideoPose3D pre-trained model to convert it to 3D. The video is very simple: a person walks towards the camera over a distance of around 7 meters.
Thanks again!
Looking forward to hearing from you!
Aaron
Hello, how should the spatial coordinates represented by left_hand and right_hand be calculated?
while stack:
    node = stack.pop()
    joint = node.name
    joint_idx = self.keypoint2index[joint]
    if node.is_root:
        channel.extend(pose[joint_idx])
    index = self.keypoint2index
    order = None
    if joint == 'Hips':
        x_dir = pose[index['LeftUpLeg']] - pose[index['RightUpLeg']]
        y_dir = None
        z_dir = pose[index['Spine']] - pose[joint_idx]
        order = 'zyx'
    elif joint in ['RightUpLeg', 'RightLeg']:
        child_idx = self.keypoint2index[node.children[0].name]
        x_dir = pose[index['Hips']] - pose[index['RightUpLeg']]
        y_dir = None
        z_dir = pose[joint_idx] - pose[child_idx]
        order = 'zyx'
    elif joint in ['LeftUpLeg', 'LeftLeg']:
        child_idx = self.keypoint2index[node.children[0].name]
        x_dir = pose[index['LeftUpLeg']] - pose[index['Hips']]
        y_dir = None
        z_dir = pose[joint_idx] - pose[child_idx]
        order = 'zyx'
    elif joint == 'Spine':
        x_dir = pose[index['LeftUpLeg']] - pose[index['RightUpLeg']]
        y_dir = None
        z_dir = pose[index['Spine1']] - pose[joint_idx]
        order = 'zyx'
    elif joint == 'Spine1':
        x_dir = pose[index['LeftArm']] - pose[index['RightArm']]
        y_dir = None
        z_dir = pose[joint_idx] - pose[index['Spine']]
        order = 'zyx'
    elif joint == 'Neck1':
        x_dir = None
        y_dir = pose[index['Spine1']] - pose[joint_idx]
        z_dir = pose[index['HeadEndSite']] - pose[index['Spine1']]
        order = 'zxy'
    elif joint == 'LeftArm':
        x_dir = pose[index['LeftForeArm']] - pose[joint_idx]
        y_dir = pose[index['LeftForeArm']] - pose[index['LeftHand']]
        z_dir = None
        order = 'xzy'
    elif joint == 'LeftForeArm':
        x_dir = pose[index['LeftHand']] - pose[joint_idx]
        y_dir = pose[joint_idx] - pose[index['LeftArm']]
        z_dir = None
        order = 'xzy'
    elif joint == 'RightArm':
        x_dir = pose[joint_idx] - pose[index['RightForeArm']]
        y_dir = pose[index['RightForeArm']] - pose[index['RightHand']]
        z_dir = None
        order = 'xzy'
    elif joint == 'RightForeArm':
        x_dir = pose[joint_idx] - pose[index['RightHand']]
        y_dir = pose[joint_idx] - pose[index['RightArm']]
        z_dir = None
        order = 'xzy'
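The pattern in the branches above is always the same: two measured direction vectors fix a plane, and cross products rebuild a full orthonormal frame from them. A self-contained numpy sketch of that construction for the 'zyx' case (illustrative and simplified; the repo's math3d helper may differ in names and conventions):

```python
import numpy as np

def frame_from_dirs_zyx(x_dir, z_dir):
    """Build an orthonormal frame: trust z_dir fully, use x_dir only
    to fix the plane, then re-derive x so all axes are orthogonal."""
    z = z_dir / np.linalg.norm(z_dir)
    y = np.cross(z, x_dir)              # normal of the measured xz plane
    y /= np.linalg.norm(y)
    x = np.cross(y, z)                  # guaranteed orthogonal to y and z
    return np.stack([x, y, z])          # rows are the new axes

# Example: hips frame from the left-right hip vector and the spine vector.
x_dir = np.array([1.0, 0.2, 0.0])       # LeftHip - RightHip (not exact x)
z_dir = np.array([0.0, 0.0, 1.0])       # Spine - Hips
frame = frame_from_dirs_zyx(x_dir, z_dir)
print(np.round(frame @ frame.T, 6))     # identity: frame is orthonormal
```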
Hi, I'm working on a coordinates-to-BVH converter. Can you please provide the intuition behind the axis choices and direction calculations for the skeletons?
Thanks for your work!
I followed your 'demo.ipynb' and made my own bvh file.
But it seems not to work well, even when I use your 'cvh.bvh' file.
I downloaded Blender, MakeHuman, and MakeWalk.
First I load my .mhx2 file (I use the CMU MB skeleton),
then 'Load and Retarget' your 'cvh.bvh',
and play...
(sorry, I can't attach a GIF or video file)
How do I fix this error?
Hi @KevinLTT ,
Thanks for the code. Works well.
With respect to the OpenPose face and hand keypoints: since you are using the SMPL model (which does not include face and hand keypoints) to generate the mesh, and you are selecting specific 3D keypoints to write to the CSV file, I don't think the resulting BVH will have facial expressions or hand motions. Correct me if I am wrong. (I tried, and it didn't work.)
Also, let me know which files have to be updated to enable facial expressions and hand motions in the output BVH.
Thanks.
Hello, after installing OpenPose I am unable to resolve the error NotImplementedError: cannot instantiate 'PosixPath' on your system. Please help!
Hello teacher, one thing I don't quite understand in your code: when computing the Shoulder and Elbow joints, why use the x and y direction vectors to compute the direction cosine matrix? According to the initial pose definition, the plane should be determined by the x and z direction vectors; the LeftWrist->LeftElbow vector should have no y component in the initial pose. I have the same doubt about the y and z vectors chosen for the Neck. I modified the code according to my understanding, but the result was incorrect. Perhaps my understanding is wrong, but I don't know where the problem lies. I hope you can explain.
elif joint == 'Neck':
    x_dir = None
    y_dir = pose[index['Thorax']] - pose[joint_idx]
    z_dir = pose[index['HeadEndSite']] - pose[index['Thorax']]
    order = 'zxy'
elif joint == 'LeftShoulder':
    x_dir = pose[index['LeftElbow']] - pose[joint_idx]
    y_dir = pose[index['LeftElbow']] - pose[index['LeftWrist']]
    z_dir = None
    order = 'xzy'
I'm really interested in your excellent repo. However, I cannot successfully deploy an OpenPose environment, owing to the particular constraints of the server environment I am using. So I am trying to use another keypoint detection network to produce the keypoints. I hope you can provide some intermediate 2D/3D keypoint files.
Thanks.
Looking forward to any replies.
The documentation says OpenPose is used to detect 25 keypoints, but the skeleton appears to contain only 17 keypoints?
The BVH file can't be used in DAZ Studio.
Any update for real time demo?
I am currently looking into obtaining the pose from a video/image and then converting it into BVH format. I am looking at using MediaPipe for 3D world pose estimation, but it seems the MediaPipe skeleton and the OpenPose skeleton are a bit different.
For example, the MediaPipe skeleton mapping that I am currently working on is as follows:
self.keypoint2index = {
    'mid_hip': 0,
    'right_hip': 1,
    'right_knee': 2,
    'right_ankle': 3,
    'right_foot_index': 4,
    'left_hip': 5,
    'left_knee': 6,
    'left_ankle': 7,
    'left_foot_index': 8,
    'spine': 9,
    'mid_shoulder': 10,
    'left_shoulder': 11,
    'left_elbow': 12,
    'left_wrist': 13,
    'right_shoulder': 14,
    'right_elbow': 15,
    'right_wrist': 16,
    'right_foot_EndSite': -1,
    'left_foot_EndSite': -1,
    'left_wrist_EndSite': -1,
    'right_wrist_EndSite': -1,
    'mid_shoulder_EndSite': -1
}
I also noticed that when I view the BVH in Blender, the axes need to be rotated by 360 degrees to see the model in the correct pose. In my code, the files bvh_helper and math3d are the same as yours; the only changes I made were in cmu_skeleton before line 119. The changes are mentioned below:
self.index2keypoint = {v: k for k, v in self.keypoint2index.items()}
self.keypoint_num = len(self.keypoint2index)

# Create the parent-children dictionary
self.children = {
    'mid_hip': ['left_hip', 'spine', 'right_hip'],
    'left_hip': ['left_knee'],
    'left_knee': ['left_ankle'],
    'left_ankle': ['left_foot_index'],
    'left_foot_index': ['left_foot_EndSite'],
    'left_foot_EndSite': [],
    'spine': ['mid_shoulder'],
    'mid_shoulder': ['left_shoulder', 'right_shoulder', 'mid_shoulder_EndSite'],
    'mid_shoulder_EndSite': [],
    'left_shoulder': ['left_elbow'],
    'left_elbow': ['left_wrist'],
    'left_wrist': ['left_wrist_EndSite'],
    'left_wrist_EndSite': [],
    'right_shoulder': ['right_elbow'],
    'right_elbow': ['right_wrist'],
    'right_wrist': ['right_wrist_EndSite'],
    'right_wrist_EndSite': [],
    'right_hip': ['right_knee'],
    'right_knee': ['right_ankle'],
    'right_ankle': ['right_foot_index'],
    'right_foot_index': ['right_foot_EndSite'],
    'right_foot_EndSite': []
}
self.parent = {self.root: None}
for parent, children in self.children.items():
    for child in children:
        self.parent[child] = parent

self.left_joints = [
    joint for joint in self.keypoint2index
    if 'left_' in joint
]
self.right_joints = [
    joint for joint in self.keypoint2index
    if 'right_' in joint
]

# Create T-pose and define the direction from parent to child joint
# What if the direction is not aligned with any axis????? I don't know, may be use sin, cos to represent it???????????????????
self.initial_directions = {
    'mid_hip': [0, 0, 0],
    'right_hip': [-1, 0, 0],
    'right_knee': [0, 0, -1],
    'right_ankle': [0, 0, -1],
    'right_foot_index': [0, -1, 0],
    'right_foot_EndSite': [0, -1, 0],
    'left_hip': [1, 0, 0],
    'left_knee': [0, 0, -1],
    'left_ankle': [0, 0, -1],
    'left_foot_index': [0, -1, 0],
    'left_foot_EndSite': [0, -1, 0],
    'spine': [0, 0, 1],
    'mid_shoulder': [0, 0, 1],
    'mid_shoulder_EndSite': [0, 0, 1],
    'left_shoulder': [1, 0, 0],
    'left_elbow': [1, 0, 0],
    'left_wrist': [1, 0, 0],
    'left_wrist_EndSite': [1, 0, 0],
    'right_shoulder': [-1, 0, 0],
    'right_elbow': [-1, 0, 0],
    'right_wrist': [-1, 0, 0],
    'right_wrist_EndSite': [-1, 0, 0]
}
I would appreciate any pointers on what can be done to correct the final pose in my case.
@hanhailangya @KevinLTT
Seniors, regarding this statement: in the initial pose the RightLeg->RightUpLeg vector is parallel to the z axis, so it is taken as the z-axis direction in the computation. Meanwhile, in the initial pose the Hips->RightUpLeg vector is parallel to the x axis; RightLeg->RightUpLeg and Hips->RightUpLeg together span the xz plane, so their cross product gives the y-axis direction, and finally the cross product of the z and y axes gives the x-axis direction, hence the computation order 'zyx'.
- Since the xz plane is already determined and the cross product already gives the y-axis direction, doesn't that fix the new xyz frame? Why take the cross product of the z and y axes again to get the x axis? Isn't that new x the same as the originally chosen Hips->RightUpLeg vector?
- I am also confused about About direction vector #14 (comment).
In the initial rest pose: video2bvh/bvh_skeleton/cmu_skeleton.py
Lines 232 to 236 in 312d18f
I would be grateful for an explanation.
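To the first question above: the re-derived x is generally not the raw Hips->RightUpLeg vector, because the estimated pose is noisy and the two measured vectors are rarely exactly perpendicular; the second cross product is an orthogonalization step. A numeric check (illustrative only):

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

z = unit(np.array([0.0, 0.1, 1.0]))      # RightUpLeg - RightLeg (noisy)
x_raw = unit(np.array([1.0, 0.0, 0.1]))  # RightUpLeg - Hips (noisy)
y = unit(np.cross(z, x_raw))             # normal of the measured xz plane
x = np.cross(y, z)                       # orthogonalized x axis

print(np.dot(x_raw, z))   # nonzero: raw axes are not perpendicular
print(np.dot(x, z))       # ~0: re-derived x is exactly orthogonal to z
```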
Hello friend,
how did you visualize this figure? Did you open it in Blender, or did you use some plugin?
Originally posted by @Kismetzc in #13 (comment)
Hi
I have a problem using video2bvh: when I try to run demo.ipynb I get this error in the first cell:
ModuleNotFoundError                       Traceback (most recent call last)
in
----> 1 from pose_estimator_2d import openpose_estimator
      2 from pose_estimator_3d import estimator_3d
      3 from utils import smooth, vis, camera
      4 from bvh_skeleton import openpose_skeleton, h36m_skeleton, cmu_skeleton
      5

~/video2bvh/pose_estimator_2d/openpose_estimator.py in
      1 from .estimator_2d import Estimator2D
----> 2 from openpose import pyopenpose as op
      3
      4
      5 class OpenPoseEstimator(Estimator2D):

ModuleNotFoundError: No module named 'openpose'
But OpenPose is installed and working. I think I forgot to do something; do I have to specify where my OpenPose folder is located?
Thanks for your work
Hello author, a novice question: what is the basis for the initial T-pose coordinates?
I want to write a coco_skeleton module, but I don't have the relevant background knowledge.
Hi, thanks for your great work! I am an animation novice. Recently I have been busy converting human pose keypoint numpy matrices to BVH files, and I find your code (especially cmu_skeleton.py) inspiring. Since I'm a novice, I don't understand what pose2euler does. What does this function do to convert keypoints to Euler angles? Could you please give a general description?
Hi, I would like to use my own skeleton; however, some of my initial_directions are not parallel to any coordinate axis. So my question is: how do I amend the result?
For example:
I changed 'LeftElbow': [1, 0, 0] to 'LeftElbow': [0.718, 0.718, 0], which amounts to a 45-degree rotation around the z axis in the local coordinate system. If the XYZ rotation parameters in the .bvh were [a, b, c] before, how can I get the new parameters [a', b', c'] for the new local coordinate system?
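One way to reason about the question above (a sketch of the math, not the repo's API): if the new rest direction is the old one rotated by a fixed local rotation R_off, then a consistent corrected rotation is R_new = R_old @ R_off^-1, since R_new must send the new offset to the same world-space bone vector; converting R_new back to Euler angles in the joint's order then gives [a', b', c']. A numpy check:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

offset_old = np.array([1.0, 0.0, 0.0])   # old rest direction
R_off = rot_z(np.pi / 4)                 # 45-degree local change
offset_new = R_off @ offset_old          # ~[0.707, 0.707, 0]

R_old = rot_z(0.3)                       # some existing joint rotation
R_new = R_old @ R_off.T                  # candidate corrected rotation
# (the transpose of a rotation matrix is its inverse)

# Both parameterizations reproduce the same world-space bone vector.
print(np.allclose(R_old @ offset_old, R_new @ offset_new))   # True
```

Caveat: this fixes only the bone direction; any residual twist about the bone still needs the direction-vector machinery, so treat the formula as a starting point rather than a complete answer.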
Hey @KevinLTT,
Pleased to see this amazing work.
I need your help.
Actually, I want to use your code just to convert 3D joint data to a .bvh file.
Can I know the exact format of your 3D joint data (3d_pose.npy), so that I can convert my data into this format?
Or could you please provide a sample '3d_pose.npy' file if possible?
Hey! I would like to use this repository for educational purposes. Is that possible? Could you add a license to the repository to clarify that? Thanks in advance.