

UHC: Universal Humanoid Controller

TODOS:

  • Tutorial for smpl_robot
  • Runnable and trainable code for the SMPL model for motion imitation
  • Data processing code for the AMASS dataset
  • Perpetual Humanoid Controller Support (No RFC model). -> Please refer to PHC-MJX.

News 🚩

[Feb 24, 2024] PHC-MJX and SMPLSim have been released. They port this codebase to the newest MuJoCo and incorporate lessons learned in PHC.

[Oct 15, 2023] PHC (motion tracking in Isaac Gym without residual force) has been released! Please go to PHC for the latest on motion tracking!

[June 5, 2023] Since there is growing interest in using smpl_robot in Isaac Gym (e.g., in Trace & Pace), I uploaded smpl_local_robot.py, which is compatible with Isaac Gym.

[March 31, 2023] Added the implicit-shape model (implicit RFC + training with different SMPL body shapes) and AMASS data processing code.

[February 24, 2023] Full code is now runnable.

[February 22, 2023] Evaluation code released.

Introduction

In this project, we develop the Universal Humanoid Controller used in our projects embodiedpose, kin_poly, and agent_design. It is a physics-based humanoid controller trained with reinforcement learning to imitate reference human motion. UHC is task-agnostic and takes only reference frames as input. This repository uses the MuJoCo simulator. The controller relies heavily on residual force control to keep the humanoid stable, and we are actively working on reducing this dependency. Here are a few highlights of the controller:

  • Supports controlling humanoids constructed from SMPL, SMPL-H, and SMPL-X models, of all genders and body shapes.
  • Causal: takes only a single frame of reference motion as input.
  • Supports optimizing the humanoid's body shape parameters based on Transform2Act and agent_design.
  • Can simulate multiple humanoids in the same scene, though only as a proof of concept.

Dependencies

To create the environment, follow these instructions:

  1. Create a new conda environment and install PyTorch:
conda create -n uhc python=3.8
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia # install pytorch
pip install -r requirements.txt
  2. Download and set up MuJoCo 2.1.0: Mujoco
wget https://github.com/deepmind/mujoco/releases/download/2.1.0/mujoco210-linux-x86_64.tar.gz
tar -xzf mujoco210-linux-x86_64.tar.gz
mkdir ~/.mujoco
mv mujoco210 ~/.mujoco/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco210/bin
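
To make the library path persistent and check that the Python binding loads, the following may help (a minimal sanity check, assuming the mujoco-py binding that this codebase imports; see the issues below):

echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco210/bin' >> ~/.bashrc
python -c "from mujoco_py import load_model_from_path; print('mujoco-py OK')"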

SMPL Robot

The SMPL Robot, an adaptation of the Robot class from Transform2Act, is an automatic humanoid-generation class that supports the SMPL, SMPL-H, and SMPL-X models. It creates a humanoid XML model for the MuJoCo simulator and can generate humanoids of different genders and body shapes. It supports both capsule-based and mesh-based models. We use SMPL Robot to create SMPL models on the fly when training UHC with different body shapes. To run SMPL Robot, use:


python uhc/smpllib/smpl_robot.py

To generate Isaac Gym compatible humanoid models, use:

python uhc/smpllib/smpl_local_robot.py 

This script supports the mesh and non-mesh based humanoids used in Trace & Pace. There are a number of settings you can play with, such as "upright_start", which generates a humanoid that starts in an upright position facing the x-direction (the default SMPL humanoid faces upward).
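
For reference, a configuration along these lines can be passed to the Robot class (a hypothetical sketch; the key names mirror a user-reported config in the issues below, and the defaults may differ across versions):

# Hypothetical usage sketch; keys taken from a user-reported robot_cfg (see issues below).
from uhc.smpllib.smpl_local_robot import Robot

robot_cfg = {
    "model": "smpl",        # "smpl", "smplh", or "smplx"
    "mesh": True,           # mesh-based vs. capsule-based geometry
    "upright_start": True,  # start upright, facing the x-direction
    "body_params": {},
    "joint_params": {},
    "geom_params": {},
    "actuator_params": {},
}
smpl_robot = Robot(robot_cfg)  # constructor usage as shown in the traceback below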

You will need to download the SMPL model files from SMPL, SMPL-H, and SMPL-X, and unzip them into the data/smpl folder. Please download the v1.1.0 version, which contains the neutral humanoid. Rename the files basicmodel_neutral_lbs_10_207_0_v1.1.0, basicmodel_m_lbs_10_207_0_v1.1.0.pkl, and basicmodel_f_lbs_10_207_0_v1.1.0.pkl to SMPL_NEUTRAL.pkl, SMPL_MALE.pkl, and SMPL_FEMALE.pkl. The file structure should look like this:


|-- data
    |-- smpl
        |-- SMPL_FEMALE.pkl
        |-- SMPL_MALE.pkl
        |-- SMPL_NEUTRAL.pkl
        |-- SMPLH_FEMALE.pkl
        |-- SMPLH_MALE.pkl
        |-- SMPLH_NEUTRAL.pkl
        |-- SMPLX_FEMALE.pkl
        |-- SMPLX_MALE.pkl
        |-- SMPLX_NEUTRAL.pkl
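
To check that the files are discoverable, a quick load through the smplx package (which this repo's SMPL layer builds on) might look like this (a minimal sketch; smplx resolves data/smpl/SMPL_NEUTRAL.pkl from the root path and model type):

# Minimal sanity check using the smplx package.
import smplx

# smplx.create(root, model_type, gender) looks for root/smpl/SMPL_NEUTRAL.pkl here.
model = smplx.create("data", model_type="smpl", gender="neutral")
print(model)  # prints the SMPL module if the files load correctly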
        

Data processing for training & evaluating UHC

UHC is trained on the AMASS dataset. First, download the AMASS dataset from AMASS. Then, run the following script on the unzipped data:

python uhc/data_process/process_amass_raw.py

which dumps the data into the amass_db_smplh.pt file. Then, run:

python uhc/data_process/process_amass_db.py

For processing your own SMPL data for evaluation, refer to:

python uhc/data_process/process_smpl_data.py
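
As a rough orientation, the processed data is a dictionary of sequences of SMPL parameters; a sketch of the assumed layout is below (the key names are illustrative, so consult process_smpl_data.py for the exact format):

# Illustrative layout only; the exact keys are defined in
# uhc/data_process/process_smpl_data.py.
import numpy as np

N = 150  # e.g., a 5-second clip at 30 fps
sequence = {
    "pose_aa": np.zeros((N, 72)),  # per-frame SMPL axis-angle pose (24 joints x 3)
    "trans": np.zeros((N, 3)),     # per-frame root translation in meters
    "beta": np.zeros(10),          # SMPL shape parameters
    "gender": "neutral",
}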

Trained models

Download pretrained models:

You can also use the download_data script to download the models:

bash download_data.sh

Evaluation

python scripts/eval_uhc.py --cfg uhc_implicit --epoch 19000 --data sample_data/amass_copycat_take5_test_small.pkl
python scripts/eval_uhc.py --cfg uhc_implicit_shape --epoch 4700 --data sample_data/amass_copycat_take5_test_small.pkl
python scripts/eval_uhc.py --cfg uhc_explicit --epoch 5000 --data sample_data/amass_copycat_take5_test_small.pkl

For computing statistics (MPJPE, success rate, etc.), use the --mode stats flag.
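
For example:

python scripts/eval_uhc.py --cfg uhc_implicit --epoch 19000 --data sample_data/amass_copycat_take5_test_small.pkl --mode stats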

Training models

python scripts/train_uhc.py --cfg uhc_implicit_shape

Viewer Shortcuts

Keyboard    Function
Q           Next sample
Space       Pause
B           Hide expert
N           Separate expert and imitation
M           Hide imitation
T           Record screenshot
V           Record video

Citation

If you find our work useful in your research, please cite our papers embodiedpose, kin_poly, and agent_design.

@inproceedings{Luo2022EmbodiedSH,
  title={Embodied Scene-aware Human Pose Estimation},
  author={Zhengyi Luo and Shun Iwase and Ye Yuan and Kris Kitani},
  booktitle={Advances in Neural Information Processing Systems},
  year={2022}
}

@inproceedings{Luo2021DynamicsRegulatedKP,
  title={Dynamics-Regulated Kinematic Policy for Egocentric Pose Estimation},
  author={Zhengyi Luo and Ryo Hachiuma and Ye Yuan and Kris Kitani},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}

@article{Luo2022FromUH,
  title={From Universal Humanoid Control to Automatic Physically Valid Character Creation},
  author={Zhengyi Luo and Ye Yuan and Kris M. Kitani},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.09286}
}

References

This repository is built on top of the following amazing repositories:

  • Part of the UHC code is from: rfc
  • SMPL models and layer are from: SMPL-X model
  • Feature extractors are from: SPIN
  • NN modules are from (khrylib): DLOW


Issues

Visualizing in Isaac Gym

Howdy--

Thanks for all your work on this!!

I'm trying to get this to work in Isaac Gym, and my first goal is to visualize your agent embodying AMASS data inside the Isaac Gym simulated environment.

When I uploaded your test_good.xml file and loaded it as an asset in Isaac Gym, and animated it through its range of motion, I got some weird behavior (see attached video, where I used the joint-monkey.py script that comes with the IsaacGym installation. This script reads in an asset file and moves the asset through all the ranges of motion afforded to them in their .xml construction: https://drive.google.com/file/d/1FCvmDjmMSqoeGs72bZ_-i_gfvVIP46aw/view?usp=sharing)

The behavior:

  1. Lots of spacing between the joints (e.g. the arm isn't touching the torso, the femur isn't touching the pelvis)
  2. Where the kneecap should be, there appears to be an ankle; e.g., the agent's toes wiggle at the kneecap (see the video)

I wanted to ask if this behavior is expected, and if you have other tips for how to get this up and running on Isaac Gym... thanks for your help!!

Cheers
Nate

Unable to run process_amass_db.py

Hi Zhengyi,

Thanks for sharing this repo.

I am trying to set up training with this codebase. However, I am a bit confused about how to set up the models for running process_amass_db.py.

In particular, it wasn't clear to me how to set up the models for SMPL-H. In process_amass_db.py you used the SMPLH parser for male, female, and neutral, but I am unsure how to obtain those models.

There are a few issues with the current instructions.

  1. The link in the README.md for SMPL-H is invalid. I believe you mean https://mano.is.tue.mpg.de/index.html as opposed to https://smpl.is.tue.mpg.de/downloads?
  2. In the MANO download section, there are both the Model & Code download and the Extended SMPL+H model download. In the former, I can find some pkl files named SMPLH_female and SMPLH_male, but there is no neutral model. In the Extended SMPL+H download I can find male, female, and neutral, but those are in npz format instead of pkl. Can you kindly clarify how to prepare the models used for training the UHC controller?

Best,
Yicheng

size mismatch when running process_amass_db.py

Hi Zhengyi,
When running process_amass_db.py, I am facing a data size mismatch. How can I fix this? I think I have converted the SMPL-H model to the correct format.

[screenshot of the size-mismatch error]

Is it possible to use UHC to denoise noisy MoCap sequences?

Hi @ZhengyiLuo, thanks for sharing this wonderful work. The pretrained model works fine on my optical mocap data.

But when I use UHC to test some noisy mocap sequences (e.g., with foot sliding or penetration), the results tend to fall down. I tried lowering body_diff_thresh_test; the humanoid falls less but becomes more jittery. I also tried filtering to smooth out the jitter, but then the sliding gets worse.

I have noticed your new PHC work: the falling problem is solved, but the jitter still seems to exist. There seems to be a trade-off between foot sliding and body jitter. I hope you can give some advice, thanks~

Rendered results are not synchronized with the original video

Hi @ZhengyiLuo, I get my mocap data from RGB video and use the trained model to test it, but the rendered results seem to drop some frames, and the motion is not synchronized with the original video (it usually moves faster than the video).
The motion pkl file has the same number of frames as the video file, both at 30 fps. The rendered imitation and expert results are synchronized with each other, but neither is synchronized with the video; it seems this is caused by the MuJoCo renderer.
Hope you can help, thanks~

SMPL-H robot for Isaac Gym

I used the code to create the Isaac Gym robot for SMPL-H. The R_Thumb3 joint is abnormal when using "capsule" geometry and GAINS "R_Thumb3": [100, 10, 1, 150].
[screenshot of the deformed R_Thumb3]

Is mujoco_py not available?

I'm trying to run process_amass_db.py after running process_amass_raw.py.

I have tried everything, but the import below raises various errors.
Is mujoco-py unavailable now due to the new MuJoCo 3.0?

from mujoco_py import load_model_from_path

error 1: fixed by downgrading Cython

Cython.Compiler.Errors.CompileError: /home/qkrwlgh0314/miniconda3/envs/uhc-new/lib/python3.8/site-packages/mujoco_py/cymj.pyx

error 2:
I have tried various gcc versions but failed to fix this error:

    spawn(cmd, dry_run=self.dry_run, **kwargs)
  File "/home/qkrwlgh0314/miniconda3/envs/uhc2/lib/python3.8/site-packages/setuptools/_distutils/spawn.py", line 70, in spawn
    raise DistutilsExecError(
distutils.errors.DistutilsExecError: command '/opt/ohpc/pub/compiler/gcc/8.3.0/bin/gcc' failed with exit code 1
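
(Not an official fix, but a commonly reported workaround: mujoco-py's Cython extension does not build with Cython 3.x, so pinning an older Cython before reinstalling often resolves compile errors like the ones above. The version below is an assumption; adjust to what requirements.txt specifies.)

pip install "cython<3"
pip uninstall -y mujoco-py
pip install mujoco-py==2.1.2.14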

Can the MuJoCo module be accelerated by CUDA?

Great job! I found that training is very slow because the physics forward pass runs entirely on the CPU. Can the forward pass of the MuJoCo modules be accelerated by CUDA?

Failed to exclude contact between shoulder and chest in IsaacGym

Hello, I am trying to train an imitation model with a self-collision check in Isaac Gym. I used the smpl_local_robot.py script you provided to create a humanoid with contact-exclusion tags. However, I found that the exclusion tags in the MJCF file have no effect, like the following:
[screenshot of the MJCF contact-exclusion tags]
Do you have any idea how to solve it? Thank you very much!

Questions about the used SMPL models

Hi, thanks for sharing your code! Great work!

BTW, I have some questions about the .xml SMPL models that you shared here. I noticed that the joint positions in humanoid_smpl_neutral_mesh.xml and humanoid_smpl_neutral.xml are different. After checking against SMPL's original model, it seems that the joint positions in humanoid_smpl_neutral_mesh.xml are aligned with the original SMPL model while those in humanoid_smpl_neutral.xml are not. I also noticed that in another repo, you use humanoid_smpl_neutral_mesh.xml for training. So if I understand correctly, to train on the AMASS dataset with the SMPL model, I need to use humanoid_smpl_neutral_mesh.xml instead of humanoid_smpl_neutral.xml, right? Besides, I'm curious how you got humanoid_smpl_neutral_mesh.xml and what humanoid_smpl_neutral.xml is for.

Hope to hear from you.

Generate SMPL-X robot for Isaac Gym

Hello, thanks for all your work!
When I run python uhc/smpllib/smpl_local_robot.py with

robot_cfg = {
    "model": "smplx",
    "mesh": False,
    "rel_joint_lm": False,
    "upright_start": False,
    "remove_toe": False,
    "real_weight": True,
    "real_weight_porpotion": True,
    "replace_feet": True,
    "masterfoot": False,
    "big_ankle": True,
    "master_range": 50,
    "body_params": {},
    "joint_params": {},
    "geom_params": {},
    "actuator_params": {},
}

I encountered a bug as follows:

Traceback (most recent call last):
  File "uhc/smpllib/smpl_local_robot.py", line 2385, in <module>
    smpl_robot = Robot(robot_cfg)
  File "uhc/smpllib/smpl_local_robot.py", line 1203, in __init__
    self.load_from_skeleton()
  File "uhc/smpllib/smpl_local_robot.py", line 1380, in load_from_skeleton
    verts, joints, skin_weights, joint_names, joint_offsets, parents_dict, channels, joint_range = smpl_parser.get_offsets(betas=self.beta, zero_pose=zero_pose)
  File "/home/ubuntu/disk1/Hand_object/UHC/uhc/smpllib/smpl_parser_local.py", line 624, in get_offsets
    verts, Jtr = self.get_joints_verts(zero_pose, th_betas=betas)
  File "/home/ubuntu/disk1/Hand_object/UHC/uhc/smpllib/smpl_parser_local.py", line 602, in get_joints_verts
    smpl_output = self.forward(
  File "/home/ubuntu/disk1/Hand_object/UHC/uhc/smpllib/smpl_parser_local.py", line 587, in forward
    smpl_output = super(SMPLX_Parser, self).forward(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/uhc/lib/python3.8/site-packages/smplx/body_models.py", line 1230, in forward
    vertices, joints = lbs(shape_components, full_pose, self.v_template,
  File "/home/ubuntu/anaconda3/envs/uhc/lib/python3.8/site-packages/smplx/lbs.py", line 205, in lbs
    v_shaped = v_template + blend_shapes(betas, shapedirs)
  File "/home/ubuntu/anaconda3/envs/uhc/lib/python3.8/site-packages/smplx/lbs.py", line 291, in blend_shapes
    blend_shape = torch.einsum('bl,mkl->bmk', [betas, shape_disps])
  File "/home/ubuntu/anaconda3/envs/uhc/lib/python3.8/site-packages/torch/functional.py", line 358, in einsum
    return einsum(equation, *_operands)
  File "/home/ubuntu/anaconda3/envs/uhc/lib/python3.8/site-packages/torch/functional.py", line 360, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): operands do not broadcast with remapped shapes [original->remapped]: [1, 30]->[1, 1, 1, 30] [10475, 3, 20]->[1, 10475, 3, 20]
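
(For readers hitting the same error: the shapes in the message suggest 30 beta components being fed to a shape basis with 20 components. A hypothetical workaround, not a verified fix, is to truncate the betas to the model's basis size:)

# Hypothetical workaround: match the beta vector to the model's shape basis.
num_shape = smpl_parser.shapedirs.shape[-1]  # 20 in the error above
betas = betas[:, :num_shape]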

about the obs dim

Hi Zhengyi,
I wonder if it is possible to remove the shape and gender dimensions (that is, 16 + 1) from the obs, so that the obs dim becomes 640.
Will this influence the model's performance?
Best,
Kangning

Get AMASS data and SMPL model

Hi Zhengyi,
Thanks for your wonderful work and code release. I want to know how to get the training data, which SMPL models to download, and which folder to place them in. Thanks!

Support different shape parameters

The beta is clipped via:

body_params: 
    beta: 
      lb: -3
      ub: 3

Is this necessary? How can I use SMPL betas to generate a character? Could you give an example of generating a character via uhc/smpllib/smpl_robot.py and the betas of SMPL/SMPL-H?
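
(A sketch of what shape-conditioned generation might look like, based on the load_from_skeleton(betas=...) call visible in the traceback above; this is hypothetical, so check smpl_robot.py for the exact signature:)

import torch
from uhc.smpllib.smpl_robot import Robot

# Hypothetical sketch: build a humanoid XML for a specific SMPL shape.
beta = torch.clamp(torch.randn(1, 10), -3, 3)  # clipping as in the config above
robot = Robot(robot_cfg)                       # cfg as in the SMPL Robot section
robot.load_from_skeleton(betas=beta)           # signature per the traceback above
robot.write_xml("humanoid_custom.xml")         # write_xml is assumed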

imitation fails

When I try to imitate some noisy motion, some frames change suddenly. How can I fix this?

Training speed

Training is too slow: I use an A100 and 192 CPU cores, and set num_threads to 192, but it still requires 28 days of training. Do you have any tricks to speed up training?

multiprocess error

Testing with full_eval = True raises "ConnectionResetError: [Errno 104] Connection reset by peer" at res = queue.get() in agent_copycat.py.

How to reduce output jitter of mocap sequences

Hi Zhengyi, thanks for your excellent work. I'm trying to use UHC to remove foot sliding and floating from mocap sequence data.
I noticed your answer at https://github.com/ZhengyiLuo/UniversalHumanoidControl/issues/6#issuecomment-1503432638, with some insightful analysis of a similar issue. In my opinion, it may not be appropriate to take the noisy mocap sequence as the reference motion, since it is not the motion we want the robot to imitate. Therefore, I used the GT poses of the H36M dataset as reference and the noisy mocap poses as input (the poses were aligned), and trained a policy based on 'uhc_implicit_shape' with the default settings. However, the movement is still jittery, particularly in walking sequences. Is this a reasonable approach, and do you have any ideas for making the robot move smoothly?
Thanks a lot.

Tried to modify the XML file, but it does not work

Hi @ZhengyiLuo, in your project the T-pose comes from SMPL, but I want to use my own T-pose. To avoid the retargeting problem, I tried modifying humanoid_smpl_neutral_mesh.xml by setting the joints' positions to values based on my own T-pose, and set the mujoco_model and vis_model paths in the yml accordingly, but the rendered results show no difference. Hope you can give some advice, thank you~

reward in PHC

[screenshot of the energy-penalty term]
How do I compute the energy penalty? I can only access get_actor_dof_forces and _rigid_body_ang_vel through Isaac Gym.

Hope you can support converting BVH to the pkl data format

Hi @ZhengyiLuo, thanks for sharing this wonderful work. I have added a PR to process AMASS; hope it helps.

I am also curious whether BVH can be converted to pkl. I have read some discussions about this, but the pose parameters in the pkl are in SMPL format, so the question becomes how to convert BVH rotation values to SMPL poses.
Hope you can give some advice~

Evaluation of motion imitation on the AMASS dataset

How can I reproduce the evaluation of motion imitation using target motions from the train and test splits of the AMASS dataset, as reported in "From Universal Humanoid Control to Automatic Physically Valid Character Creation"?

The simulated elbow looks very strange

I am using eval_uhc.py to imitate a sequence, but the elbow joint looks very strange.
I am not sure whether the issue is with the creation of the robot (smpl_robot.py) or with the simulation itself.
[screenshot of the deformed elbow]

Where to get the final results after MuJoCo simulation?

Hi @ZhengyiLuo, sorry to bother you again.
I have obtained the predicted results from the trained model using:
eval_res = self.agent.eval_seq(take_key, loader)
pred_re = eval_res['pred']

But it seems these results have not gone through the physics simulation, and the imitated motions are not as good as the rendered results.
So, how do I get the final qpos-format results after the physics simulation?
Hope you can help, thank you~

Is the reward computed via dof (axis angle)? I added the code to the ASE codebase, but the reward becomes negative. I wonder if this is right?

dof_force_tensor = self.gym.acquire_dof_force_tensor(self.sim)
self.gym.refresh_dof_force_tensor(self.sim)
self._dof_force = gymtorch.wrap_tensor(dof_force_tensor).view(self.num_envs, dofs_per_env)

def get_energy(self):
    # Squared per-joint power (torque . joint velocity), summed over joints.
    dof_force, dof_vel, dof_offsets = self._dof_force, self._dof_vel, self._dof_offsets
    num_joints = len(dof_offsets) - 1
    num_env = dof_force.shape[0]
    energy = torch.zeros(num_env, device=dof_force.device)

    for j in range(num_joints):
        dof_offset = dof_offsets[j]
        dof_size = dof_offsets[j + 1] - dof_offsets[j]
        force = dof_force[:, dof_offset:(dof_offset + dof_size)]
        vel = dof_vel[:, dof_offset:(dof_offset + dof_size)]
        energy += torch.mul(force, vel).sum(dim=-1) ** 2
    energy = 0.00005 * energy
    return energy

# energy_buf is presumably set from get_energy() before computing the reward.
rew_buf += -1 * self.energy_buf

Originally posted by @interestingzhuo in #21 (comment)
