
calm's Introduction

Conditional Adversarial Latent Models

Code accompanying the paper: "CALM: Conditional Adversarial Latent Models for Directable Virtual Characters"
Skills

CALM builds upon, and borrows code from, Adversarial Skill Embeddings (Peng et al., 2022, ASE).

Installation

Download Isaac Gym from the website, then follow the installation instructions.

Once Isaac Gym is installed, install the external dependencies for this repo:

pip install -r requirements.txt

CALM

Pre-Training

First, a CALM model can be trained to imitate a dataset of motion clips using the following command:

python calm/run.py --task HumanoidAMPGetup --cfg_env calm/data/cfg/humanoid_calm_sword_shield_getup.yaml --cfg_train calm/data/cfg/train/rlg/calm_humanoid.yaml --motion_file calm/data/motions/reallusion_sword_shield/dataset_reallusion_sword_shield.yaml --headless  --track

--motion_file specifies the dataset of motion clips that the model should imitate. The task HumanoidAMPGetup will train a model to imitate a dataset of motion clips and get up after falling. Over the course of training, the latest checkpoint Humanoid.pth will be regularly saved to output/, along with a TensorBoard log. --headless disables visualization and --track enables experiment tracking with Weights & Biases. If you want to view the simulation, simply remove the --headless flag. To test a trained model, use the following command:

python calm/run.py --test --task HumanoidAMPGetup --num_envs 16 --cfg_env calm/data/cfg/humanoid_calm_sword_shield_getup.yaml --cfg_train calm/data/cfg/train/rlg/calm_humanoid.yaml --motion_file calm/data/motions/reallusion_sword_shield/dataset_reallusion_sword_shield.yaml --checkpoint [path_to_calm_checkpoint]

You can also test the robustness of the model with --task HumanoidPerturb, which will throw projectiles at the character.
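Since the latest Humanoid.pth is saved somewhere under output/, a small stdlib helper (hypothetical, not part of the repo) can locate the newest checkpoint to pass to --checkpoint:

```python
from pathlib import Path

def latest_checkpoint(out_dir: str = "output", name: str = "Humanoid.pth"):
    """Return the most recently modified checkpoint under out_dir, or None."""
    ckpts = sorted(Path(out_dir).rglob(name), key=lambda p: p.stat().st_mtime)
    return ckpts[-1] if ckpts else None
```

The returned path can then be supplied as [path_to_calm_checkpoint] in the test command above.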


Precision-Training

After the CALM low-level controller has been trained, it can be used to train style-constrained locomotion controllers. The following command will use a pre-trained CALM model to perform a target heading task:

python calm/run.py --task HumanoidHeadingConditioned --cfg_env calm/data/cfg/humanoid_sword_shield_heading_conditioned.yaml --cfg_train calm/data/cfg/train/rlg/hrl_humanoid_style_control.yaml --motion_file calm/data/motions/reallusion_sword_shield/dataset_reallusion_sword_shield_fsm_movements.yaml --llc_checkpoint [path_to_llc_checkpoint] --headless --track

--llc_checkpoint specifies the checkpoint to use for the low-level controller. A pre-trained CALM low-level controller is available in calm/data/models/calm_llc_reallusion_sword_shield.pth.

To test a trained model, use the following command:

python calm/run.py --test --task HumanoidHeadingConditioned --num_envs 16 --cfg_env calm/data/cfg/humanoid_sword_shield_heading_conditioned.yaml --cfg_train calm/data/cfg/train/rlg/hrl_humanoid.yaml --motion_file calm/data/motions/reallusion_sword_shield/dataset_reallusion_sword_shield_fsm_movements.yaml --llc_checkpoint [path_to_llc_checkpoint] --checkpoint [path_to_hlc_checkpoint]


Task-Solving (Inference -- no training!)

The CALM low-level controller and the high-level locomotion controller can be combined to solve tasks without further training. This phase is inference only.

python calm/run.py --test --task HumanoidStrikeFSM --num_envs 16 --cfg_env calm/data/cfg/humanoid_sword_shield_strike_fsm.yaml --cfg_train calm/data/cfg/train/rlg/hrl_humanoid_fsm.yaml --motion_file calm/data/motions/reallusion_sword_shield/dataset_reallusion_sword_shield_fsm_movements.yaml --llc_checkpoint [path_to_llc_checkpoint] --checkpoint [path_to_hlc_checkpoint]

--llc_checkpoint specifies the checkpoint to use for the low-level controller. A pre-trained CALM low-level controller is available in calm/data/models/calm_llc_reallusion_sword_shield.pth. --checkpoint specifies the checkpoint to use for the precision-trained high-level controller. A pre-trained precision-trained high-level controller is available in calm/data/models/calm_hlc_precision_trained_reallusion_sword_shield.pth.

The built-in tasks and their respective config files are:

HumanoidStrikeFSM: calm/data/cfg/humanoid_sword_shield_strike_fsm.yaml
HumanoidLocationFSM: calm/data/cfg/humanoid_sword_shield_location_fsm.yaml


Task-Training

In addition to precision training, a high-level controller can also be trained to directly solve tasks. The following command will use a pre-trained CALM model to perform a target heading task:

python calm/run.py --task HumanoidHeading --cfg_env calm/data/cfg/humanoid_sword_shield_heading.yaml --cfg_train calm/data/cfg/train/rlg/hrl_humanoid.yaml --motion_file calm/data/motions/reallusion_sword_shield/RL_Avatar_Idle_Ready_Motion.npy --llc_checkpoint [path_to_llc_checkpoint] --headless --track

--llc_checkpoint specifies the checkpoint to use for the low-level controller. A pre-trained CALM low-level controller is available in calm/data/models/calm_llc_reallusion_sword_shield.pth. --task specifies the task that the character should perform, and --cfg_env specifies the environment configuration for that task. The built-in tasks and their respective config files are:

HumanoidReach: calm/data/cfg/humanoid_sword_shield_reach.yaml
HumanoidHeading: calm/data/cfg/humanoid_sword_shield_heading.yaml
HumanoidLocation: calm/data/cfg/humanoid_sword_shield_location.yaml
HumanoidStrike: calm/data/cfg/humanoid_sword_shield_strike.yaml

To test a trained model, use the following command:

python calm/run.py --test --task HumanoidHeading --num_envs 16 --cfg_env calm/data/cfg/humanoid_sword_shield_heading.yaml --cfg_train calm/data/cfg/train/rlg/hrl_humanoid.yaml --motion_file calm/data/motions/reallusion_sword_shield/RL_Avatar_Idle_Ready_Motion.npy --llc_checkpoint [path_to_llc_checkpoint] --checkpoint [path_to_hlc_checkpoint]


AMP

We also provide an implementation of Adversarial Motion Priors (https://xbpeng.github.io/projects/AMP/index.html). A model can be trained to imitate a given reference motion using the following command:

python calm/run.py --task HumanoidAMP --cfg_env calm/data/cfg/humanoid_sword_shield.yaml --cfg_train calm/data/cfg/train/rlg/amp_humanoid.yaml --motion_file calm/data/motions/reallusion_sword_shield/sword_shield/RL_Avatar_Atk_2xCombo01_Motion.npy --headless  --track

The trained model can then be tested with:

python calm/run.py --test --task HumanoidAMP --num_envs 16 --cfg_env calm/data/cfg/humanoid_sword_shield.yaml --cfg_train calm/data/cfg/train/rlg/amp_humanoid.yaml --motion_file calm/data/motions/reallusion_sword_shield/sword_shield/RL_Avatar_Atk_2xCombo01_Motion.npy --checkpoint [path_to_amp_checkpoint]


Motion Data

Motion clips are located in calm/data/motions/. Individual motion clips are stored as .npy files. Motion datasets are specified by .yaml files, each of which contains a list of motion clips to include in the dataset. Motion clips can be visualized with the following command:

python calm/run.py --test --task HumanoidViewMotion --num_envs 2 --cfg_env calm/data/cfg/humanoid_sword_shield.yaml --cfg_train calm/data/cfg/train/rlg/amp_humanoid.yaml --motion_file calm/data/motions/reallusion_sword_shield/sword_shield/RL_Avatar_Atk_2xCombo01_Motion.npy

--motion_file can be used to visualize either a single motion clip (.npy) or a motion dataset (.yaml).
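For reference, a dataset .yaml is essentially a weighted list of clips. The sketch below is illustrative only; the field names follow the ASE-style datasets this repo borrows from, so check a shipped file such as dataset_reallusion_sword_shield.yaml for the exact schema:

```yaml
# Illustrative dataset file -- verify field names against the shipped datasets.
motions:
  - file: "RL_Avatar_Idle_Ready_Motion.npy"
    weight: 0.5
  - file: "RL_Avatar_Atk_2xCombo01_Motion.npy"
    weight: 1.0
```

Higher-weighted clips are typically sampled more often during imitation training.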

If you want to retarget new motion clips to the character, you can take a look at an example retargeting script in calm/poselib/retarget_motion.py.

calm's People

Contributors

mxsage, tesslerc


calm's Issues

IndexError: The shape of the mask [1] at index 0 does not match the shape of the indexed tensor [64] at index 0

When I run the command:
python calm/run.py --test --task HumanoidStrikeFSM --num_envs 16 --cfg_env calm/data/cfg/humanoid_sword_shield_strike_fsm.yaml --cfg_train calm/data/cfg/train/rlg/hrl_humanoid_fsm.yaml --motion_file calm/data/motions/reallusion_sword_shield/dataset_reallusion_sword_shield_fsm_movements.yaml --llc_checkpoint [path_to_llc_checkpoint] --checkpoint [path_to_hlc_checkpoint]
using the pre-trained models
calm/data/models/calm_llc_reallusion_sword_shield.pth
calm/data/models/calm_hlc_precision_trained_reallusion_sword_shield.pth
the error is:

=> loading checkpoint 'calm/data/models/calm_llc_reallusion_sword_shield.pth'
=> loading checkpoint 'calm/data/models/calm_llc_reallusion_sword_shield.pth'
Loaded LLC checkpoint from calm/data/models/calm_llc_reallusion_sword_shield.pth
=> loading checkpoint 'calm/data/models/calm_hlc_precision_trained_reallusion_sword_shield.pth'
Traceback (most recent call last):
File "calm/run.py", line 277, in
main()
File "calm/run.py", line 271, in main
runner.run(vargs)
File "/home/zhouyang/anaconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/torch_runner.py", line 144, in run
player.run()
File "/home/zhouyang/task_python/CALM/calm/learning/hrl_players.py", line 139, in run
action = self.get_action(obs_dict, is_determenistic)
File "/home/zhouyang/task_python/CALM/calm/learning/hrl_fsm_players.py", line 98, in get_action
clamped_actions[self.env.task._should_strike == 1] = self.env.task._possible_latents[self.env.task._strike_index]
IndexError: The shape of the mask [1] at index 0 does not match the shape of the indexed tensor [64] at index 0
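For context on the error above: a boolean mask must match the length of the axis it indexes, so a mask of shape [1] cannot index a tensor of shape [64]. One common cause is a mismatch between the number of environments the mask and the actions were built for. A minimal reproduction of the same rule in NumPy (illustrative only, not the repo's code):

```python
import numpy as np

actions = np.zeros(64)

good_mask = np.zeros(64, dtype=bool)  # same length as the indexed axis: OK
actions[good_mask] = 1.0

bad_mask = np.zeros(1, dtype=bool)    # length 1 vs. 64: raises IndexError
try:
    actions[bad_mask] = 1.0
except IndexError as exc:
    print("shape mismatch:", exc)
```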

fbx python binding install error

I want to load my FBX file, but I can't install Autodesk's FBX Python bindings.
I tried FBX 202032 and 202034 (wheels for Python 3.10 only) but failed.
I also tried building the SIP 4.19.3 package manually, but still got stuck.
Has anyone successfully installed the FBX bindings and loaded an FBX file?

thanks

Segmentation fault

I was running ASE and CALM on WSL2, and the output is:

Importing module 'gym_37' (/home/zhouyang/task_python/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_37.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/zhouyang/task_python/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
PyTorch version 1.13.1+cu117
Device count 1
/home/zhouyang/task_python/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/zhouyang/.cache/torch_extensions/py37_cu117 as PyTorch extensions root...
Emitting ninja build file /home/zhouyang/.cache/torch_extensions/py37_cu117/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
2023-09-11 22:38:29,256 - INFO - logger - logger initialized
Error: FBX library failed to load - importing FBX data will not succeed. Message: No module named 'fbx'
FBX tools must be installed from https://help.autodesk.com/view/FBX/2020/ENU/?guid=FBX_Developer_Help_scripting_with_python_fbx_installing_python_fbx_html
MOVING MOTION DATA TO GPU, USING CACHE: True
Importing module 'rlgpu_37' (/home/zhouyang/task_python/isaacgym/python/isaacgym/_bindings/linux-x86_64/rlgpu_37.so)
Setting seed: 9815
Started to train
Not connected to PVD
+++ Using GPU PhysX
/buildAgent/work/99bede84aa0a52c2/source/gpucommon/src/PxgCudaMemoryAllocator.cpp (59) : warning : Failed to allocate pinned memory.

/buildAgent/work/99bede84aa0a52c2/source/gpucommon/src/PxgCudaMemoryAllocator.cpp (59) : warning : Failed to allocate pinned memory.

/buildAgent/work/99bede84aa0a52c2/source/gpucommon/src/PxgCudaMemoryAllocator.cpp (59) : warning : Failed to allocate pinned memory.

/buildAgent/work/99bede84aa0a52c2/source/gpucommon/src/PxgCudaMemoryAllocator.cpp (59) : warning : Failed to allocate pinned memory.

Physics Engine: PhysX
Physics Device: cuda:0
GPU Pipeline: enabled
Cylinder is not natively supported, tesellated mesh will be used
/buildAgent/work/99bede84aa0a52c2/source/gpucommon/include/PxgCudaUtils.h (54) : internal error : SynchronizeStreams cuEventRecord failed

/buildAgent/work/99bede84aa0a52c2/source/gpucommon/include/PxgCudaUtils.h (60) : internal error : SynchronizeStreams cuStreamWaitEvent failed

/buildAgent/work/99bede84aa0a52c2/source/gpunarrowphase/src/PxgNarrowphaseCore.cpp (2408) : internal error : memcpy failed fail!
700

/buildAgent/work/99bede84aa0a52c2/source/gpucommon/src/PxgCudaMemoryAllocator.cpp (59) : warning : Failed to allocate pinned memory.

Segmentation fault

Does anyone know anything about this bug? Any good suggestions? I would appreciate it!

Error in pre-training

When running:

python calm/run.py --task HumanoidAMPGetup --cfg_env calm/data/cfg/humanoid_calm_sword_shield_getup.yaml --cfg_train calm/data/cfg/train/rlg/calm_humanoid.yaml --motion_file calm/data/motions/reallusion_sword_shield/dataset_reallusion_sword_shield.yaml --headless

The motion files load successfully, but then I get an error:

RunningMeanStd:  (253,)
RunningMeanStd:  (140,)
Traceback (most recent call last):
  File "calm/run.py", line 274, in <module>
    main()
  File "calm/run.py", line 268, in main
    runner.run(vargs)
  File "/home/ubuibm/miniconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/torch_runner.py", line 139, in run
    self.run_train()
  File "/home/ubuibm/miniconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/torch_runner.py", line 125, in run_train
    agent.train()
  File "/home/ubuibm/dev/CALM/calm/learning/common_agent.py", line 108, in train
    self.obs = self.env_reset()
  File "/home/ubuibm/dev/CALM/calm/learning/calm_agent.py", line 541, in env_reset
    self._reset_latents(env_ids)
  File "/home/ubuibm/dev/CALM/calm/learning/calm_agent.py", line 585, in _reset_latents
    z, enc_amp_obs_demo = self._sample_latents(n)
  File "/home/ubuibm/dev/CALM/calm/learning/calm_agent.py", line 598, in _sample_latents
    latents = self.model.a2c_network.eval_enc(proc_enc_amp_obs_demo)
  File "/home/ubuibm/dev/CALM/calm/learning/calm_network_builder.py", line 257, in eval_enc
    if self._enc_arch_type == 'mlp':
  File "/home/ubuibm/miniconda3/envs/rlgpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 948, in __getattr__
    type(self).__name__, name))
AttributeError: 'Network' object has no attribute '_enc_arch_type'

please advise if possible! Thanks.

Full log:
log.txt
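For context on the AttributeError above: PyTorch modules raise AttributeError for any attribute that was never set in __init__, which usually means the network was built from a config missing the corresponding field. A minimal stdlib illustration of the failure mode and a common defensive pattern (hypothetical, not the repo's actual fix):

```python
class Network:  # stand-in for the repo's network class
    pass

net = Network()

try:
    net._enc_arch_type          # attribute never set -> AttributeError
except AttributeError:
    print("missing attribute")

# Defensive read with a fallback value:
arch = getattr(net, "_enc_arch_type", "mlp")
print(arch)
```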

Motion Retarget

Hi, thanks for sharing this great work!
I want to retarget my new motion clips to the amp_humanoid_sword_shield character.
I am just curious about the rotation and scaling parameters in the example retargeting json, such as retarget_cmu_to_amp.json.
(screenshot of the rotation and scaling parameters in retarget_cmu_to_amp.json)

Could you tell me how to set these parameters, or provide the JSON file you used?
Also, could I obtain the T-pose of the amp_humanoid_sword_shield character by following generate_amp_humanoid_tpose.py and simply rotating left_upper_arm and right_upper_arm by 90 degrees?

Trying to run with 24 GB VRAM

Hi @tesslerc , thanks for the quick help on that other issue.
I'm trying to train on a 3090 Ti and wondering which parameters in the config file (minibatch size, maybe?) are good to tweak to train within this constraint. Any advice welcome, including whether it is even possible without a major difference in results. Thanks.

Exception invalid load key, 'v'. when trying to execute

Thanks for posting the ASE and CALM codebases; they have been very helpful for understanding the papers.
Unfortunately, when trying the examples, there seems to be a problem loading the pre-trained LLC file.

python calm/run.py --test --task HumanoidStrikeFSM --num_envs 16 --cfg_env calm/data/cfg/humanoid_sword_shield_strike_fsm.yaml --cfg_train calm/data/cfg/train/rlg/hrl_humanoid_fsm.yaml --motion_file calm/data/motions/reallusion_sword_shield/dataset_reallusion_sword_shield_fsm_movements.yaml --llc_checkpoint calm/data/models/calm_llc_reallusion_sword_shield.pth --checkpoint calm/data/cfg/humanoid_sword_shield_strike_fsm.yaml

=> loading checkpoint 'calm/data/models/calm_llc_reallusion_sword_shield.pth'
Exception invalid load key, 'v'. when trying to execute <function load at 0x7f653a9b8940> with args:('calm/data/models/calm_llc_reallusion_sword_shield.pth',) and kwargs:{}...
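For what it's worth, "invalid load key, 'v'" from torch.load typically means the .pth on disk is not a binary checkpoint but a Git LFS pointer text file, which begins with the text "version https://git-lfs...". A quick stdlib check (hypothetical helper, not part of the repo):

```python
def looks_like_lfs_pointer(path: str) -> bool:
    """True if the file starts like a Git LFS pointer instead of binary data."""
    with open(path, "rb") as f:
        return f.read(7) == b"version"
```

If this returns True, re-fetch the real binary (e.g. with `git lfs pull`, if the repository uses LFS, or by downloading the checkpoint directly).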

No pre-trained model

Hello,
I am trying to use the pre-trained model for the sword-and-shield character. However, I was unable to locate it at the expected location "calm/data/models/calm_llc_reallusion_sword_shield.pth".
Could someone please upload the pre-trained model?
Thank you.

Error in precision training

Hi,

I was running the precision-training test command and it returned an error:

=> loading checkpoint 'Base'
Exception [Errno 2] No such file or directory: 'Base' when trying to execute <function load at 0x7ff0fcb94790> with args:('Base',) and kwargs:{}...

Any idea of how to fix it?

Thank you so much!
