yhw-yhw / SHOW
This is the codebase for SHOW in Generating Holistic 3D Human Motion from Speech [CVPR 2023].
License: Other
I am trying to run SHOW to get a rendering of a face portrait video. It is showing the following error:
2023-08-04 12:49:04 | INFO | SHOW.load_assets:48 - mmpose det length before: 1
2023-08-04 12:49:04 | INFO | SHOW.load_assets:53 - no whole person detected
2023-08-04 12:49:04 | ERROR | SHOW.load_assets:122 - max_person_crop_im is None
The model is also trying to import OpenPifPaf, which was not installed during setup because it is commented out in modules/PyMAF/requirements.txt.
Here is a sample image from the video attached:
Is there a way to run it without OpenPifPaf and to resolve this error?
The dataset description [https://github.com/yhw-yhw/SHOW#dataset-description] states that the hand pose shape is (bs, 12), but each hand consists of 15 joints, which should give a shape of (bs, 45), right? Can you please explain the discrepancy?
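A likely explanation, worth confirming against the code: SMPL-X models commonly store hand poses as PCA coefficients rather than per-joint axis-angles, and with 12 PCA components the stored shape is (bs, 12). The (bs, 45) axis-angle pose is recovered by projecting back through the PCA basis. A minimal NumPy sketch with stand-in values for the mean and basis (the real ones live in the SMPL-X model files):

```python
import numpy as np

# Stand-ins for the SMPL-X hand pose mean and PCA basis; the real
# arrays come from the model's npz/pkl files.
rng = np.random.default_rng(0)
bs = 4
hand_mean = rng.normal(size=45)              # mean axis-angle hand pose
hand_components = rng.normal(size=(12, 45))  # PCA basis, 12 components

pca_coeffs = rng.normal(size=(bs, 12))       # what the dataset stores: (bs, 12)
full_hand_pose = hand_mean + pca_coeffs @ hand_components  # (bs, 45)

print(full_hand_pose.shape)  # (4, 45)
```

In the smplx library this corresponds to constructing the model with `use_pca=True` and a small `num_pca_comps`; setting `use_pca=False` makes the model expect the full (bs, 45) representation instead.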
Could you please share the mmcv-full, mmpose, and mmdet versions? These libraries conflict with each other. According to the README, the installed libraries are mmcv-full==1.7.2 and mmpose==1.3.1.
From file all.pkl
I can get the parameters:
vertices
joints
full_pose
global_orient
transl
v_shaped
betas
body_pose
left_hand_pose
right_hand_pose
expression
jaw_pose
How do I convert these to poses and trans for the SMPL-X model?
Thank you!
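For what it's worth, a hedged sketch of one plausible conversion: the key names below follow the parameter list above, and the concatenation order (global_orient, body_pose, jaw, eyes, hands) mirrors the common 165-dimensional SMPL-X pose layout, but verify both against your all.pkl before relying on it. The dict here stands in for `pickle.load(open('all.pkl', 'rb'))`:

```python
import numpy as np

bs = 2
data = {  # stand-in for the contents of all.pkl
    'global_orient':   np.zeros((bs, 3)),
    'body_pose':       np.zeros((bs, 63)),
    'jaw_pose':        np.zeros((bs, 3)),
    'left_hand_pose':  np.zeros((bs, 45)),   # expand PCA coeffs first if shape is (bs, 12)
    'right_hand_pose': np.zeros((bs, 45)),
    'transl':          np.zeros((bs, 3)),
}
# Eye poses are not in the parameter list above, so zero them here.
leye = reye = np.zeros((bs, 3))

poses = np.concatenate([
    data['global_orient'], data['body_pose'], data['jaw_pose'],
    leye, reye, data['left_hand_pose'], data['right_hand_pose'],
], axis=1)  # (bs, 165), the usual flattened SMPL-X pose vector
trans = data['transl']

print(poses.shape, trans.shape)
```

If your downstream tool expects a different joint ordering or rotation representation (e.g. rotation matrices instead of axis-angle), adjust the concatenation accordingly.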
It seems that stage 1 expects ground-truth OpenPose values to run on a video, but it also calculates the OpenPose values if they don't exist.
However, in practice, if there is no all.pkl in test/demo_video/ours, it does not detect any GT values, and stage 1 ends here:
Lines 218 to 224 in 06447d4
What steps do we need to run Show on a custom video from scratch?
Thanks for your brilliant work!
I noticed that you used SMPLX_MALE_shape2019_exp2020.npz as the SMPL-X parameters, but I cannot find this model at https://smpl-x.is.tue.mpg.de/download.php. Can you share the process used to obtain this file? How do I get SMPLX_FEMALE_shape2019_exp2020.npz?
I noticed the author is having problems with the mmcv library in Colab.
mmhuman3d requires mmcv>=1.3.17,<1.6.1, but according to https://mmcv.readthedocs.io/en/latest/get_started/installation.html#install-with-pip, the current Colab environment (torch==1.13.1, CUDA cu116) only supports mmcv==1.7.0.
So I decided to build mmcv==1.6.0 from source, following the instructions here:
https://mmcv.readthedocs.io/en/latest/get_started/build.html
!curl -LO https://github.com/open-mmlab/mmcv/archive/refs/tags/v1.6.0.tar.gz
!tar xzf v1.6.0.tar.gz
%cd mmcv-1.6.0
!pip install -r requirements/optional.txt
!MMCV_WITH_OPS=1 pip install -e . -v
Takes about 30 minutes
Can you upload the PyMAF-X_model_checkpoint.pt file?
Good afternoon. Maybe I was inattentive when reading the documentation, but how do I run your model on an audio file?
OpenPose is good, but it is not easy to use as a library, since it is hard to call from Python scripts; it is only useful when preparing data.
If inference needs pose data, it is better to use your own pose model, or mmpose or AlphaPose.
Where can I find the PyMAF-X_model_checkpoint.pt file?
Awesome work! When I execute "wget https://www.dropbox.com/s/gqdcu51ilo44k3i/models.zip?dl=0 -O models.zip", it returns 403 Forbidden.
Hi everyone, I have a question. When training a pixel autoregressive model, I encountered the following two problems:
2. The autoregressive model suffers from identity leakage: for example, when generating for speaker A, speaker B's motions and gestures appear.
Does anyone have strategies for solving these problems? Thanks!
Is there an easy way to run the pipeline on the face only, without body/hand optimization?
Is TalkShow able to generate realistic expressions and demos with Mandarin inputs?
Hi, I tried to run SHOW on the demo video but I got some errors, as below:
WARNING | SHOW.utils:95 - not exist: /home/dell/projects/talkingface/SHOW/test/demo_video/ours/final_metric.json
WARNING | stage1_main:135 - final_losses_json_path not valid
WARNING | stage1_main:139 - ours_pkl_file_path not exists
WARNING | stage1_main:222 - op_valid_flag is all False, skipping
WARNING | stage2_main:108 - bs_at_a_time: 14
WARNING | stage2_main:138 - ours_pkl_file_path not exists: /home/dell/projects/talkingface/SHOW/test/demo_video/ours/all.pkl
Hi,
I see that the author can run SHOW on Windows, but I am having problems installing PyTorch3D on Windows.
Do you have any instructions for me?
Thank you very much!
Hello author, I deployed torch 1.12 and CUDA 11.3 following your installation guide. Under these versions, which versions of mmcv, mmpose, and mmdet did you use? Could you give a working reference? Thank you very much!
I want to download the videos used in the paper, but most of the YouTube links seem invalid. Could you release the videos or send me working links? I tried download_youtube.py and only managed to download no more than 11w seconds, less than the 27 hours described in the paper.
How do I convert SMPLX_MALE_shape2019_exp2020.npz to shape2020_neutral?
I ran it on another video, but it has this issue. How can I fix it?
Thank you for your thorough and comprehensive work! It is very solid.
But I found some differences between the paper and the code's default implementation.
For the default config, I found that:
I am interested in understanding the reasons behind these design choices and the insights behind them.
Thank you for your time and kindness!
Thanks to the author team for this great work!
I fixed some bugs in Colab to make it work. I have detailed notes.
Thank you very much!
Greetings! How do I crop the videos according to SHOW_intervals_subject4.csv?
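One possible approach, sketched under assumptions: the actual column names in SHOW_intervals_subject4.csv are not shown here, so `video_fn`, `start_time`, and `end_time` below are placeholders to adjust against the real header. The script only builds ffmpeg command strings; run them with `subprocess` once the columns match:

```python
import csv
import io
import shlex

# Stand-in for open('SHOW_intervals_subject4.csv'); column names are
# hypothetical and must be matched to the real file.
sample_csv = io.StringIO(
    "video_fn,start_time,end_time\n"
    "subject4.mp4,00:00:05,00:00:12\n"
    "subject4.mp4,00:01:30,00:01:44\n"
)

commands = []
for i, row in enumerate(csv.DictReader(sample_csv)):
    # -ss/-to select the interval; re-encode video for frame-accurate cuts.
    cmd = (
        f"ffmpeg -i {shlex.quote(row['video_fn'])} "
        f"-ss {row['start_time']} -to {row['end_time']} "
        f"-c:v libx264 -c:a copy clip_{i:04d}.mp4"
    )
    commands.append(cmd)

print(commands[0])
```

Stream-copying video (`-c:v copy`) would be faster but can only cut on keyframes, so intervals may be slightly off.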
Hello, @yhw-yhw
I observed that SHOW sets the initial focal length to 5000. Can this value vary? I ask because I intend to use pre-calibrated camera intrinsics.
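For context, a hedged sketch of what swapping in calibrated intrinsics amounts to: a fixed focal length like 5000 (in pixels, with the principal point at the image center) defines a pinhole camera matrix K, and pre-calibrated values would simply replace those entries. The helper below is hypothetical; SHOW's actual camera code may organize this differently:

```python
import numpy as np

def make_intrinsics(fx, fy, cx, cy):
    """Build a 3x3 pinhole camera matrix K."""
    return np.array([
        [fx,  0.0, cx],
        [0.0, fy,  cy],
        [0.0, 0.0, 1.0],
    ])

H, W = 1080, 1920
# Fixed-focal default in the style SHOW appears to use.
K_default = make_intrinsics(5000.0, 5000.0, W / 2, H / 2)
# Example pre-calibrated values (made up for illustration).
K_calib = make_intrinsics(1450.3, 1452.1, 962.7, 541.8)

print(K_default[0, 0], K_calib[0, 0])
```

Note that changing the focal length also changes the recovered camera-to-body translation (a longer focal length pushes the estimated subject farther away), so the two cannot be swapped without re-optimizing the translation.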