
glamr's Introduction

GLAMR

This repo contains the official PyTorch implementation of our paper:

GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras
Ye Yuan, Umar Iqbal, Pavlo Molchanov, Kris Kitani, Jan Kautz
CVPR 2022 (Oral)
website | paper | video

Overview

News

  • [08/10/22]: Demos for multi-person videos are added (Thanks to Haofan Wang)!
  • [08/01/22]: Demos for dynamic and static videos are released!

Table of Contents

Installation

Environment

  • Tested OS: macOS, Linux
  • Python >= 3.7
  • PyTorch >= 1.8.0
  • HybrIK (used in demo)

Dependencies

  1. Clone this repo recursively:
    git clone --recursive https://github.com/NVlabs/GLAMR.git
    
    This will fetch the submodule HybrIK.
  2. Follow HybrIK's installation instructions and download its models.
  3. Install PyTorch 1.8.0 with the correct CUDA version.
  4. Install system dependencies (Linux only):
    source install.sh
    
  5. Install python dependencies:
    pip install -r requirements.txt
    
  6. Download SMPL models & joint regressors and place them in the data folder. You can obtain the models by following SPEC's instructions here.
  7. Download pretrained_w_cam.pth from Google Drive.
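
If you want to verify the downloaded checkpoint before running the demos, here is a minimal sketch; where the file should be placed is an assumption here, so adjust the path to wherever the GLAMR configs expect it:

import torch

# Sanity-check the downloaded checkpoint (illustrative only; the file location
# below is an assumption).
ckpt = torch.load("pretrained_w_cam.pth", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    # Print a few top-level keys to confirm the file loaded correctly.
    print(list(ckpt.keys())[:10])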

Pretrained Models

Demo

We provide demos for single- and multi-person videos with both dynamic and static cameras.

Dynamic Videos

Run the following command to test GLAMR on a single-person video with a dynamic camera:

python global_recon/run_demo.py --cfg glamr_dynamic \
                                --video_path assets/dynamic/running.mp4 \
                                --out_dir out/glamr_dynamic/running \
                                --save_video

This will output results to out/glamr_dynamic/running. Result videos will be saved to out/glamr_dynamic/running/grecon_videos. Additional dynamic test videos can be found in assets/dynamic. More video comparisons with HybrIK are available here.
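
If you want to inspect the saved outputs programmatically, here is a minimal sketch; the exact pickle file names under the output directory are an assumption, so it simply lists whatever pickles it finds:

import glob
import pickle

# List and peek into any pickled results under the demo output directory.
out_dir = "out/glamr_dynamic/running"
for path in sorted(glob.glob(f"{out_dir}/**/*.pkl", recursive=True)):
    with open(path, "rb") as f:
        data = pickle.load(f)
    summary = list(data.keys()) if isinstance(data, dict) else type(data).__name__
    print(path, summary)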

Static Videos

Run the following command to test GLAMR on a single-person video with a static camera:

python global_recon/run_demo.py --cfg glamr_static \
                                --video_path assets/static/basketball.mp4 \
                                --out_dir out/glamr_static/basketball \
                                --save_video

This will output results to out/glamr_static/basketball. Result videos will be saved to out/glamr_static/basketball/grecon_videos. Additional static test videos can be found in assets/static. More video comparisons with HybrIK are available here.

Multi-Person Videos

Use the --multi flag and the glamr_static_multi config in the above demos to test GLAMR on a multi-person video:

python global_recon/run_demo.py --cfg glamr_static_multi \
                                --video_path assets/static/basketball.mp4 \
                                --out_dir out/glamr_static_multi/basketball \
                                --save_video \
                                --multi

This will output results to out/glamr_static_multi/basketball. Result videos will be saved to out/glamr_static_multi/basketball/grecon_videos.

Datasets

We use three datasets: AMASS, 3DPW, and Dynamic Human3.6M. Please download them from their official websites and place them in the datasets folder with the following structure:

${GLAMR_ROOT}
|-- datasets
|   |-- 3DPW
|   |-- amass
|   |-- H36M
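
As a quick sanity check before preprocessing, here is a minimal sketch (folder names as in the layout above) that verifies the dataset folders are in place:

from pathlib import Path

# Check that the expected dataset folders exist under the repo root.
root = Path(".")  # ${GLAMR_ROOT}
for name in ("datasets/3DPW", "datasets/amass", "datasets/H36M"):
    status = "found" if (root / name).is_dir() else "MISSING"
    print(f"{name}: {status}")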

Evaluation

First, run GLAMR on the test set of the dataset you want to evaluate. For example, to run GLAMR on the 3DPW test set:

python global_recon/run_dataset.py --dataset 3dpw --cfg glamr_3dpw --out_dir out/3dpw

Next, evaluate the results generated by GLAMR:

python global_recon/eval_dataset.py --dataset 3dpw --results_dir out/3dpw

Similarly, to evaluate on Dynamic Human3.6M, replace 3dpw with h36m in the dataset and config arguments.
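
For convenience, here is a minimal sketch that runs reconstruction and evaluation on both test sets back to back; the glamr_h36m config name is an assumption based on the substitution described above:

import subprocess

# Run GLAMR and its evaluation on the 3DPW and Dynamic Human3.6M test sets.
# Note: the "glamr_h36m" config name is assumed, not confirmed by the repo docs.
for dataset, cfg in [("3dpw", "glamr_3dpw"), ("h36m", "glamr_h36m")]:
    subprocess.run(["python", "global_recon/run_dataset.py",
                    "--dataset", dataset, "--cfg", cfg,
                    "--out_dir", f"out/{dataset}"], check=True)
    subprocess.run(["python", "global_recon/eval_dataset.py",
                    "--dataset", dataset,
                    "--results_dir", f"out/{dataset}"], check=True)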

AMASS

The following command processes the original AMASS dataset into the processed version used by the code:

python preprocess/preprocess_amass.py

3DPW

The following command processes the original 3DPW dataset into the processed version used by the code:

python preprocess/preprocess_3dpw.py

Dynamic Human3.6M

Please refer to this doc for generating the Dynamic Human3.6M dataset.

Motion Infiller

To train the motion infiller:

python motion_infiller/train.py --cfg motion_infiller_demo --ngpus 1

where we use the config motion_infiller_demo.


To visualize the trained motion infiller on test data:

python motion_infiller/vis_motion_infiller.py --cfg motion_infiller_demo --num_seq 5

where num_seq is the number of sequences to visualize. This command will save result videos to out/vis_motion_infiller.

Trajectory Predictor

To train the trajectory predictor:

python traj_pred/train.py --cfg traj_pred_demo --ngpus 1

where we use the config traj_pred_demo.


To visualize the trained trajectory predictor on test data:

python traj_pred/vis_traj_pred.py --cfg traj_pred_demo --num_seq 5

where num_seq is the number of sequences to visualize. This command will save result videos to out/vis_traj_pred.

Joint Motion Infiller and Trajectory Predictor

For ease of use, we also define a joint (wrapper) model of the motion infiller and trajectory predictor, i.e., a model that merges the motion infilling and trajectory prediction stages. The joint model is composed of the pretrained motion infiller and trajectory predictor and is just a convenient abstraction (see the sketch below). We can define the joint model using config files such as joint_motion_traj_demo. The joint model is also used in the global optimization stage.
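
To make the idea concrete, here is a schematic sketch of such a wrapper; it is illustrative only and does not reproduce the repo's actual MotionTrajJointModel class or its interface:

import torch

class JointMotionTraj:
    """Illustrative wrapper that chains a pretrained motion infiller and trajectory predictor."""

    def __init__(self, motion_infiller, traj_predictor):
        # Both sub-models are assumed to be pretrained and used as-is (no training here).
        self.motion_infiller = motion_infiller
        self.traj_predictor = traj_predictor

    @torch.no_grad()
    def __call__(self, poses, visibility_mask):
        # 1) Fill in body poses for occluded/missing frames.
        full_poses = self.motion_infiller(poses, visibility_mask)
        # 2) Predict the global root trajectory from the completed local motion.
        traj = self.traj_predictor(full_poses)
        return full_poses, traj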


To visualize the joint model's results:

python motion_infiller/vis_motion_traj_joint_model.py --cfg joint_motion_traj_demo --num_seq 5

where num_seq is the number of sequences to visualize. This command will save result videos to out/vis_motion_traj_joint_model.

Citation

If you find our work useful in your research, please cite our paper GLAMR:

@inproceedings{yuan2022glamr,
    title={GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras},
    author={Yuan, Ye and Iqbal, Umar and Molchanov, Pavlo and Kitani, Kris and Kautz, Jan},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2022}
}

License

Please see the license for further details.

glamr's People

Contributors

haofanwang, jonathanlehner, khrylx, take2rohit, umariqb


glamr's Issues

The visualization

Thank you for releasing the code. During my re-implementation, I found that the visualization is abnormal: all of the saved images are blue without anything else (for both the motion infiller and the trajectory predictor). How can I address this problem?

Missing a 4th representation HybrIK(World)

Hi!
Thanks for the excellent work.

In this gif: https://github.com/NVlabs/GLAMR/blob/main/docs/basketball_glamr_vs_hybrik.gif
a direct comparison between GLAMR and HybrIK in world and camera projection was presented.

I analyzed global_recon/run_demo.py with some attention and didn't find a direct way to generate this 4th representation (HybrIK(World)), only GLAMR (Cam), GLAMR (World) and HybrIK (Cam).

My question is:
Is there any way to get this 4th representation from the data collected in run_demo.py? Or should this process be performed while estimating the pose and shape generated by HybrIK in run_pose_est_on_video(...)?

I believe it would be interesting to include in the project the code snippet used to generate this 4th representation. And as such, I would be grateful if someone could publish it.

Thanks!

Code release

Kudos for the great work!

Do you plan to release the code of the model or only the code to generate the dataset?

Best,

wrong output

I cloned the repository and followed the installation instructions, but the output is wrong.
[Screenshot from pexels-walking-dynamic-5738706_seed1_sbs_all.mp4]

How are the predicted 54 joint points converted into the pose parameters required for SMPL?

When I test GLAMR on a single-person video with a dynamic camera by running this command:

python global_recon/run_demo.py --cfg glamr_static \
                                --video_path assets/static/basketball.mp4 \
                                --out_dir out/glamr_static/basketball \
                                --save_video

I found an error at

new_dict['smpl_pose'] = smpl_pose_wroot[:, 1:].reshape(-1, 69)

Exception has occurred: ValueError (note: the full exception trace is shown but execution is paused at: _run_module_as_main)
cannot reshape array of size 47700 into shape (69)

The shape of smpl_pose_wroot is 300x54x3, but shape 69 looks like 23x3.

I tried to select the first 24 joints as the parameters SMPL needs, like:
new_dict['smpl_pose'] = smpl_pose_wroot[:, 1:24].reshape(-1, 69)
but the visualization result looks very strange.

So I want to know: how are the predicted 54 joints converted into the pose parameters required by SMPL?

install.sh error

Hi,
When I deployed this on my Ubuntu 20.04 platform and ran source install.sh, it showed that URL access was denied, so I checked the URL, found the right URL here, and tried again. But it still shows an error:

...
./mesa_18.3.3-0.deb not debian package

Could you give me some advice?

Is it possible to use GLAMR by providing a dataset of videos?

Hi everyone!

First of all congrats for your amazing work!

My question is simple: is it possible to use GLAMR by providing a dataset of videos?

Another question: has any further improvement been achieved on this amazing work?

If it is not possible to feed a dataset of videos to this model, is there any other option?

Thank you so much for your time!

Errors may occur in multi-person mode due to occlusion

I want the motion to be filled in automatically when one person is occluded and invisible in multi-person mode, just like in the video you show, but the following error occurs:

  File "D:\python\scripts\GLAMR-main\global_recon\models\global_recon_model.py", line 126, in init_data
    f = interp1d(vis_ind.astype(np.float32), new_dict[key], axis=0, assume_sorted=True, fill_value="extrapolate")
  File "N:\anaconda3\lib\site-packages\scipy\interpolate\interpolate.py", line 561, in __init__
    raise ValueError("x and y arrays must have at "
ValueError: x and y arrays must have at least 2 entries

@Khrylx @haofanwang

cannot reshape into shape 69

Hi, I was able to install all the dependencies and run HybrIK, but I am blocked by the following line:
new_dict['smpl_pose'] = smpl_pose_wroot[:, 1:].reshape(-1, 69)

ValueError: cannot reshape array of size 47700 into shape (69)

I even tried different videos from the repo and it is still the same issue.

An error when running the demo

Hi, I faced an error when I ran the dynamic demo. Could anyone help? And what does the function find_last_version() do?

[error screenshot]

And the folder is empty. I don't know what's wrong.

[screenshot]

TypeError: __init__() got multiple values for argument 'cfg'

When I ran the demo using

python global_recon/run_demo.py --cfg glamr_dynamic \
                                --video_path assets/dynamic/running.mp4 \
                                --out_dir out/glamr_dynamic/running \
                                --save_video

I encountered the following error:

  File "/home/user/GLAMR/global_recon/run_demo.py", line 59, in <module>
    grecon_model = model_dict[cfg.grecon_model_name](cfg, device, log)
  File "/home/user/GLAMR/global_recon/models/global_recon_model.py", line 67, in __init__
    self.load_model()
  File "/home/user/GLAMR/global_recon/models/global_recon_model.py", line 72, in load_model
    self.mt_model = MotionTrajJointModel(self.mt_cfg, self.device, self.log)
  File "/home/user/GLAMR/motion_infiller/models/motion_traj_joint_model.py", line 29, in __init__
    self.load_motion_infiller()
  File "/home/user/GLAMR/motion_infiller/models/motion_traj_joint_model.py", line 44, in load_motion_infiller
    self.mfiller = mfiller_model_dict[self.mfiller_cfg.model_name].load_from_checkpoint(model_cp, cfg=self.mfiller_cfg, strict=False)
  File "/home/user/anaconda3/envs/hybrik/lib/python3.9/site-packages/pytorch_lightning/core/saving.py", line 169, in load_from_checkpoint
    model = cls._load_model_state(checkpoint, *args, **kwargs)
  File "/home/user/anaconda3/envs/hybrik/lib/python3.9/site-packages/pytorch_lightning/core/saving.py", line 205, in _load_model_state
    model = cls(*cls_args, **cls_kwargs)
TypeError: __init__() got multiple values for argument 'cfg'

Before this, I referred to Making GLAMR actually work and solving all errors! to configure the environment and did not modify the code.
Can someone help me?

Worse Than Original HybrIK

Hello, thank you for your great work. I have run GLAMR on some videos. However, I found that the results of GLAMR are even worse than those of the original HybrIK repo. Although the GLAMR repo provides results for both HybrIK and GLAMR, they are much worse than the original HybrIK repo's results. What is the difference between the HybrIK used in GLAMR and the original HybrIK? Thank you.

File not found [solved]

I encountered a problem and can't find the pose.pkl file. Can someone help me?

FileNotFoundError: [Errno 2] No such file or directory: 'out/glamr_dynamic/running/pose_est/pose.pkl'

VTK and PYVISTA errors

Hi, I followed the instructions to prepare the env.

packages in environment at /home/kevinq/anaconda3/envs/glamr:

Name Version Build Channel

_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
absl-py 1.3.0 py37h06a4308_0
aiohttp 3.8.3 py37h5eee18b_0
aiosignal 1.2.0 pyhd3eb1b0_0
appdirs 1.4.4 pyhd3eb1b0_0
async-timeout 4.0.2 py37h06a4308_0
asynctest 0.13.0 py_0
attrs 22.1.0 py37h06a4308_0
blas 1.0 mkl
blinker 1.4 py37h06a4308_0
brotlipy 0.7.0 py37h27cfd23_1003
bzip2 1.0.8 h7b6447c_0
c-ares 1.19.0 h5eee18b_0
ca-certificates 2022.12.7 ha878542_0 conda-forge
cachetools 4.2.2 pyhd3eb1b0_0
certifi 2022.12.7 pyhd8ed1ab_0 conda-forge
cffi 1.15.1 py37h5eee18b_3
cftime 1.6.0 py37hda87dfa_1 conda-forge
charset-normalizer 2.0.4 pyhd3eb1b0_0
click 8.0.4 py37h06a4308_0
cryptography 39.0.1 py37h9ce1e76_0
cudatoolkit 11.1.1 ha002fc5_10 conda-forge
curl 7.88.1 h5eee18b_0
cycler 0.11.0 pypi_0 pypi
expat 2.4.9 h6a678d5_0
ffmpeg 4.2.2 h20bf706_0
filterpy 1.4.5 pypi_0 pypi
flit-core 3.6.0 pyhd3eb1b0_0
fonttools 4.38.0 pypi_0 pypi
freetype 2.12.1 h4a9f257_0
frozenlist 1.3.3 py37h5eee18b_0
fsspec 2022.11.0 py37h06a4308_0
future 0.18.3 py37h06a4308_0
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
google-auth 2.6.0 pyhd3eb1b0_0
google-auth-oauthlib 0.4.1 py_2
grpcio 1.42.0 py37hce63b2e_0
h5py 2.9.0 nompi_py37hcafd542_1103 conda-forge
hdf4 4.2.13 h3ca952b_2
hdf5 1.10.4 hb1b8bf9_0
icu 58.2 he6710b0_3
idna 3.4 py37h06a4308_0
imageio 2.9.0 pyhd3eb1b0_0
importlib-metadata 6.1.0 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561
jpeg 9b h024ee3a_2
jsoncpp 0.10.6 1 conda-forge
kiwisolver 1.4.4 pypi_0 pypi
krb5 1.19.4 h568e23c_0
lame 3.100 h7b6447c_0
ld_impl_linux-64 2.38 h1181459_1
libcurl 7.88.1 h91b91d3_0
libedit 3.1.20221030 h5eee18b_0
libev 4.33 h7f8727e_1
libffi 3.4.2 h6a678d5_6
libgcc-ng 11.2.0 h1234567_1
libgfortran-ng 7.5.0 ha8ba4b0_17
libgfortran4 7.5.0 ha8ba4b0_17
libgomp 11.2.0 h1234567_1
libidn2 2.3.2 h7f8727e_0
libnetcdf 4.6.1 h11d0813_2
libnghttp2 1.46.0 hce63b2e_0
libogg 1.3.5 h27cfd23_1
libopus 1.3.1 h7b6447c_0
libpng 1.6.39 h5eee18b_0
libprotobuf 3.20.3 he621ea3_0
libssh2 1.10.0 h8f2d780_0
libstdcxx-ng 11.2.0 h1234567_1
libtasn1 4.16.0 h27cfd23_0
libtheora 1.1.1 h7f8727e_3
libtiff 4.1.0 h2733197_1
libunistring 0.9.10 h27cfd23_0
libuv 1.44.2 h5eee18b_0
libvorbis 1.3.7 h7b6447c_0
libvpx 1.7.0 h439df22_0
libxml2 2.9.14 h74e7548_0
llvmlite 0.39.1 pypi_0 pypi
lz4-c 1.8.3 he1b5a44_1001 conda-forge
markdown 3.4.1 py37h06a4308_0
markupsafe 2.1.1 py37h7f8727e_0
matplotlib 3.5.3 pypi_0 pypi
meshio 4.4.2 pyhd8ed1ab_0 conda-forge
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py37h7f8727e_0
mkl_fft 1.3.1 py37hd3c417c_0
mkl_random 1.2.2 py37h51133e4_0
multi-person-tracker 0.1 dev_0
multidict 6.0.2 py37h5eee18b_0
ncurses 6.4 h6a678d5_0
netcdf4 1.4.2 py37h808af73_0
nettle 3.7.3 hbbd107a_1
ninja 1.10.2 h06a4308_5
ninja-base 1.10.2 hd09550d_5
numba 0.56.4 pypi_0 pypi
numpy 1.21.5 py37h6c91a56_3
numpy-base 1.21.5 py37ha15fc14_3
oauthlib 3.2.0 pyhd3eb1b0_1
olefile 0.46 py37_0
openh264 2.1.1 h4ff587b_0
openssl 1.1.1t h7f8727e_0
packaging 22.0 py37h06a4308_0
pillow 9.4.0 pypi_0 pypi
pip 22.3.1 py37h06a4308_0
protobuf 3.20.3 py37h6a678d5_0
pyasn1 0.4.8 pyhd3eb1b0_0
pyasn1-modules 0.2.8 py_0
pycparser 2.21 pyhd3eb1b0_0
pydeprecate 0.3.0 pyhd8ed1ab_0 conda-forge
pyjwt 2.4.0 py37h06a4308_0
pyopenssl 23.0.0 py37h06a4308_0
pyparsing 3.0.9 pypi_0 pypi
pysocks 1.7.1 py37_1
python 3.7.16 h7a1cb2a_0
python-dateutil 2.8.2 pypi_0 pypi
python_abi 3.7 2_cp37m conda-forge
pytorch 1.8.0 py3.7_cuda11.1_cudnn8.0.5_0 pytorch
pytorch-lightning 1.3.5 pyhd8ed1ab_0 conda-forge
pyvista 0.31.3 pyhd8ed1ab_2 conda-forge
pyyaml 5.4.1 py37h5e8e339_1 conda-forge
readline 8.2 h5eee18b_0
requests 2.28.1 py37h06a4308_0
requests-oauthlib 1.3.0 py_0
rsa 4.7.2 pyhd3eb1b0_1
scipy 1.7.3 pypi_0 pypi
scooby 0.7.1 pyhd8ed1ab_0 conda-forge
setuptools 65.6.3 py37h06a4308_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.41.1 h5eee18b_0
tbb 2021.7.0 hdb19cb5_0
tensorboard 2.6.0 py_1
tensorboard-data-server 0.6.1 py37h52d8a92_0
tensorboard-plugin-wit 1.8.1 py37h06a4308_0
tk 8.6.12 h1ccaba5_0
torchaudio 0.8.0 py37 pytorch
torchmetrics 0.10.3 pyhd8ed1ab_0 conda-forge
torchvision 0.9.0 py37_cu111 pytorch
tqdm 4.64.1 py37h06a4308_0
transforms3d 0.4.1 pyhd8ed1ab_0 conda-forge
typing-extensions 4.4.0 py37h06a4308_0
typing_extensions 4.4.0 py37h06a4308_0
urllib3 1.26.14 py37h06a4308_0
vtk 8.2.0 py37haa4764d_200
werkzeug 2.2.2 py37h06a4308_0
wheel 0.38.4 py37h06a4308_0
x264 1!157.20191217 h7b6447c_0
xz 5.2.10 h5eee18b_1
yaml 0.2.5 h7b6447c_0
yarl 1.8.1 py37h5eee18b_0
yolov3 0.1 dev_0
zipp 3.11.0 py37h06a4308_0
zlib 1.2.13 h5eee18b_0
zstd 1.4.4 h3b9ef0a_2 conda-forge

However, I keep encountering package errors and have located that the problem is caused by "import pyvista":

Traceback (most recent call last):
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/vtkmodules/vtkIOParallel.py", line 5, in <module>
    from .vtkIOParallelPython import *
ImportError: libjsoncpp.so.19: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/pyvista/_vtk.py", line 382, in <module>
    from vtk.vtkCommonKitPython import (buffer_shared,
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/vtk.py", line 32, in <module>
    all_spec.loader.exec_module(all_m)
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/vtkmodules/all.py", line 83, in <module>
    from .vtkIOParallel import *
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/vtkmodules/vtkIOParallel.py", line 9, in <module>
    from vtkIOParallelPython import *
ModuleNotFoundError: No module named 'vtkIOParallelPython'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/vtkmodules/vtkIOParallel.py", line 5, in <module>
    from .vtkIOParallelPython import *
ImportError: libjsoncpp.so.19: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "", line 1, in <module>
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/pyvista/__init__.py", line 11, in <module>
    from pyvista.plotting import *
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/pyvista/plotting/__init__.py", line 7, in <module>
    from .helpers import plot, plot_arrows, plot_compare_four, plot_itk
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/pyvista/plotting/helpers.py", line 7, in <module>
    from pyvista.utilities import is_pyvista_dataset
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/pyvista/utilities/__init__.py", line 2, in <module>
    from .errors import (GPUInfo, Observer, Report,
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/pyvista/utilities/errors.py", line 12, in <module>
    from pyvista import _vtk
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/pyvista/_vtk.py", line 386, in <module>
    from vtk.vtkCommonCore import (buffer_shared,
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/vtk.py", line 32, in <module>
    all_spec.loader.exec_module(all_m)
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/vtkmodules/all.py", line 83, in <module>
    from .vtkIOParallel import *
  File "/home/kevinq/anaconda3/envs/glamr/lib/python3.7/site-packages/vtkmodules/vtkIOParallel.py", line 9, in <module>
    from vtkIOParallelPython import *
ModuleNotFoundError: No module named 'vtkIOParallelPython'

Would you mind helping me solve this?
Thanks!

Questions regarding evaluation code

Thank you for this great work and making the code publicly available!

I'm running your evaluation code and there's certain parts that I don't understand. Specifically:

  • Why do you rotate the 3DPW poses by 90 degrees around the x-axis (as found here)? Is this a 3DPW thing or should this be done with all SMPL poses?
  • What exactly does the function convert_traj_world2heading do in get_aligned_orient_trans?

Thanks for your help!

Demos on unmovable camera videos?

Since GLAMR can handle videos from moving cameras, could it work even better on videos from static (unmoving) cameras? Is there any such demo video?

Usage of motion infiller

I'm using another 3D pose estimation method instead of HybrIK, and I have several questions.

  1. Does GLAMR depend on HybrIK? Can it be replaced by other recent pose estimation methods?

  2. Is it possible to use the pre-trained motion infiller alone, given a sequence of SMPL poses? For multi-person tracking, it's common that one person is occluded for several frames.

Linked model licensing

Just wanted to clarify that the trained models are NVIDIA licensed as well since the repository falls under the same license. For example, that would mean that the demos fall under the NVIDIA source license.

Evaluation code

Hi Yeyuan,

would it be possible to provide the evaluation code?

Best,
Jonathan

camera position is in xyz or zxy?

Hi, can I ask about this line in visualizer3d.py: self.pl.camera.position = (self.distance, 0, 0)? Is the camera position parameter in xyz or zxy order?

pose.pkl not getting generated/dumped

Hi,

I just started using GLAMR and am working to get it up and running on my local machine. While running a demo, I am getting the following error: FileNotFoundError: [Errno 2] No such file or directory: 'out/glamr_dynamic/running/pose_est/pose.pkl'. I can see that this particular file is not being dumped or generated.

What could be the issue?

Transformer Decoder

Hi,

Thank you for presenting this great work! I have a question about the transformer decoder part. I found that the transformer model you use does not have a separate inference process; the transformer decoder does not seem to make predictions step by step. Is my understanding correct?

Thanks,
