Official implementation of the paper:
Scene-aware Egocentric 3D Human Pose Estimation
Jian Wang, Diogo Luvizon, Weipeng Xu, Lingjie Liu, Kripasindhu Sarkar, Christian Theobalt
CVPR 2023
- Create a new anaconda environment:

      conda create -n sceneego python=3.9
      conda activate sceneego
- Install PyTorch 1.13.1 following the instructions at https://pytorch.org/get-started/previous-versions/.
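  For example, the pip command listed on that page for PyTorch 1.13.1 with CUDA 11.7 is the following; pick the variant that matches your CUDA version:

      pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117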
- Install the other dependencies:

      pip install -r requirements.txt
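  To verify the environment, a quick import check can be run (this assumes open3d is among the requirements, since the demo uses the Open3D visualizer):

      python -c "import torch, open3d; print(torch.__version__, torch.cuda.is_available(), open3d.__version__)"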
- Download the pre-trained pose estimation model and put it under:

      models/sceneego/checkpoints
- Run:

      python demo.py --config experiments/sceneego/test/sceneego.yaml --img_dir data/demo/imgs --depth_dir data/demo/depths --output_dir data/demo/out --vis True

  The result is shown in the Open3D visualizer, and the predicted poses are saved under data/demo/out.
- The predicted pose for each frame is saved as a pkl file (e.g. img_001000.jpg.pkl). To visualize a predicted result, run:

      python visualize.py --img_path data/demo/imgs/img_001000.jpg --depth_path data/demo/depths/img_001000.jpg.exr --pose_path data/demo/out/img_001000.jpg.pkl

  The result is shown in the Open3D visualizer.
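The exact structure stored in the pkl file is defined by the repository's saving code; as a rough sketch (the dict/array layout below is an assumption, not a documented format), a saved file can be inspected with:

    import pickle

    import numpy as np

    # Load one predicted pose produced by demo.py.
    with open("data/demo/out/img_001000.jpg.pkl", "rb") as f:
        pose = pickle.load(f)

    # The saved object may be a plain array of 3D joint positions or a
    # dict of arrays; print its structure to see what was exported.
    if isinstance(pose, dict):
        for key, value in pose.items():
            print(key, np.asarray(value).shape)
    else:
        print(type(pose).__name__, np.asarray(pose).shape)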