
TEMOS: TExt to MOtionS

Generating diverse human motions from textual descriptions

Description

Official PyTorch implementation of the paper "TEMOS: Generating diverse human motions from textual descriptions", ECCV 2022 (Oral).

Please visit our webpage for more details.

(Teaser figure.)

Bibtex

If you find this code useful in your research, please cite:

@inproceedings{petrovich22temos,
  title     = {{TEMOS}: Generating diverse human motions from textual descriptions},
  author    = {Petrovich, Mathis and Black, Michael J. and Varol, G{\"u}l},
  booktitle = {European Conference on Computer Vision ({ECCV})},
  year      = {2022}
}

You can also give the repository a star ⭐ if the code is useful to you.

Installation 👷


1. Create conda environment

conda create python=3.9 --name temos
conda activate temos

Install PyTorch 1.10 inside the conda environment, and install the following packages:

pip install pytorch_lightning --upgrade
pip install torchmetrics==0.7
pip install hydra-core --upgrade
pip install hydra_colorlog --upgrade
pip install shortuuid
pip install rich
pip install pandas
pip install transformers
pip install psutil
pip install einops

The code was tested on Python 3.9.7 and PyTorch 1.10.0.
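
To quickly check that the main dependencies import correctly, you can optionally run the following one-liner (a minimal sanity check, not part of the original instructions):

python -c "import torch, pytorch_lightning, transformers, hydra; print(torch.__version__, pytorch_lightning.__version__)"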

2. Download the datasets


KIT Motion-Language dataset

Be sure to read and follow their license agreements, and cite accordingly.

Use the code from Ghosh et al. to download and prepare the KIT dataset (extraction of xyz joint coordinates from the axis-angle Master Motor Map data). Move or copy all the files ending with "_meta.json", "_annotations.json" and "_fke.csv" into the datasets/kit folder. These motions are processed by the Master Motor Map (MMM) framework. To generate motions with the SMPL body model, please look at the next section.
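
As an optional sanity check (a minimal sketch, not part of the original instructions), you can verify that each sequence in datasets/kit has its three files:

from pathlib import Path

kit = Path("datasets/kit")
ids = sorted(f.name[:-len("_meta.json")] for f in kit.glob("*_meta.json"))
missing = [i for i in ids
           if not (kit / f"{i}_annotations.json").exists()
           or not (kit / f"{i}_fke.csv").exists()]
print(f"{len(ids)} sequences found, {len(missing)} with missing files")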

(Optional) Motion processed with MoSh++ (in AMASS)

Be sure to read and follow their license agreements, and cite accordingly.

Create this folder:

mkdir datasets/AMASS/

Go to the AMASS website, register and go to the Download tab. Then download the "SMPL+H G" files corresponding to the datasets [KIT, CMU, EKUT] into the datasets/AMASS directory and uncompress the archives:

cd datasets/AMASS/
tar xfv CMU.tar.bz2
tar xfv KIT.tar.bz2
tar xfv EKUT.tar.bz2
cd ../../
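
As an optional check, each extracted sequence is stored as a .npz archive that can be inspected with numpy (a sketch; the file path below is only an example):

import numpy as np

data = np.load("datasets/AMASS/KIT/3/kick_high_left02_poses.npz")  # example path
print(list(data.keys()))    # typically 'trans', 'gender', 'mocap_framerate', 'betas', 'dmpls', 'poses'
print(data["poses"].shape)  # (num_frames, 156): SMPL+H axis-angle pose parameters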

3. Download text model dependencies


Download distilbert from Hugging Face

cd deps/
git lfs install
git clone https://huggingface.co/distilbert-base-uncased
cd ..
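
To check that the clone is complete, you can try loading the local copy with transformers (a quick sanity check, assuming the repository was cloned into deps/distilbert-base-uncased and the command is run from the project root):

python -c "from transformers import AutoTokenizer, AutoModel; AutoTokenizer.from_pretrained('deps/distilbert-base-uncased'); AutoModel.from_pretrained('deps/distilbert-base-uncased'); print('ok')"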

4. (Optional) SMPL body model


This is only useful if you want to generate 3D human meshes as in the teaser. In this case, you also need a subset of the AMASS dataset (see the AMASS instructions in Section 2 above).

Go to the MANO website, register and go to the Download tab.

  • Click on "Models & Code" to download mano_v1_2.zip and place it in the folder deps/smplh/.
  • Click on "Extended SMPL+H model" to download smplh.tar.xz and place it in the folder deps/smplh/.

The next step is to extract the archives, merge the hands from mano_v1_2 into the Extended SMPL+H model, and remove any chumpy dependency. All of this can be done with the following commands. (I forked both scripts from the SMPL-X repo, updated them to Python 3, merged them, and made them compatible with .npz files.)

pip install scipy chumpy
bash prepare/smplh.sh

This will create SMPLH_FEMALE.npz, SMPLH_MALE.npz, SMPLH_NEUTRAL.npz inside the deps/smplh folder.

5. (Optional) Download pre-trained models


Make sure to have gdown installed

pip install --user gdown

Then, please run this command line:

bash prepare/download_pretrained_models.sh

Inside the pretrained_models folder, you will find one model for each type of data (see the Datasets section below for more information).

pretrained_models
├── kit-amass-rot
│   └── 1cp6dwpa
├── kit-amass-xyz
│   └── 5xp9647f
└── kit-mmm-xyz
    └── 3l49g7hv

How to train TEMOS 🚀


The command to launch a training experiment is the following:

python train.py [OPTIONS]

The parsing is done by using the powerful Hydra library. You can override anything in the configuration by passing arguments like foo=value or foo.bar=value.

Experiment path

Each training run creates a unique output directory (referred to as FOLDER below), where logs, configurations and checkpoints are stored.

By default it is defined as outputs/${data.dataname}/${experiment}/${run_id}, with data.dataname the name of the dataset (see examples below), experiment=baseline, and run_id a unique random 8-character alphanumeric identifier for the run (everything can be overridden if needed).

This folder is printed during logging; it should look like outputs/kit-mmm-xyz/baseline/3gn7h7v6/.
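
For example, overriding the experiment name and the run identifier (a hypothetical invocation) changes the output directory accordingly:

python train.py experiment=my_experiment run_id=test01
# -> logs, configurations and checkpoints go to outputs/kit-mmm-xyz/my_experiment/test01/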

Some optional parameters

Datasets

  • data=kit-mmm-xyz: KIT-ML motions processed by the MMM framework (as in the original data) loaded as xyz joint coordinates (after axis-angle transformation → xyz) (by default)
  • data=kit-amass-rot: KIT-ML motions loaded as SMPL rotations and translations, from AMASS (processed with MoSh++)
  • data=kit-amass-xyz: KIT-ML motions loaded as xyz joint coordinates, from AMASS (processed with MoSh++) after passing through a SMPL layer and regressing the correct joints.

Training

  • trainer=gpu: training with CUDA, on an automatically selected GPU (default)
  • trainer=cpu: training on the CPU (not recommended)
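
For example, to train on the SMPL rotations data from AMASS on GPU (a usage sketch combining the options above):

python train.py data=kit-amass-rot trainer=gpu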

How to generate motions with TEMOS 🚶


Dataset splits

To get results comparable to previous work, we use the same splits as in Language2Pose and Ghosh et al. To be explicit, and to avoid relying on random seeds, the lists of sequence IDs are provided in datasets/kit-splits/ (train/val/test).

When sampling Ghosh et al.'s motions with their code, I noticed that their dataloader is missing some sequences (see the discussion here). In order to compare all methods on the same test set, we use the 520 sequences produced by Ghosh et al.'s code for the test set (instead of the 587 sequences). This split is referred to as gtest (for "Ghosh test"). It is used by default in the sampling/evaluation/rendering code. You can change this set by specifying split=SPLIT in each command line.

You can also find in datasets/kit-splits/ the split used for the human study (human-study) and the split used for the visuals of the paper (visu).

Sampling/generating motions

The command line to sample one motion per sequence is the following:

python sample.py folder=FOLDER [OPTIONS]

This command will create the folder FOLDER/samples/SPLIT and save the motions in the npy format.

Some optional parameters

  • mean=true: Take the mean value of the latent distribution instead of sampling (default is false)
  • number_of_samples=X: Generate X motions (by default it generates only one)
  • fact=X: Multiplies sigma by X during sampling (1.0 by default, diversity can be increased when fact>1)
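
For example, to generate 10 diverse motions per test sequence with a slightly larger sigma (a usage sketch):

python sample.py folder=FOLDER number_of_samples=10 fact=1.3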

Model trained on SMPL rotations

If your model has been trained with data=kit-amass-rot, it produces SMPL rotations and translations. In this case, you can specify the type of data you want to save after passing through the SMPL layer.

  • jointstype=mmm: Generate xyz joints compatible with the MMM bodies (by default). This gives skeletons comparable to data=kit-mmm-xyz (needed for evaluation).
  • jointstype=vertices: Generate human body meshes (needed for rendering).
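
For example, to produce meshes for later rendering (a usage sketch, assuming FOLDER points to a model trained with data=kit-amass-rot):

python sample.py folder=FOLDER jointstype=vertices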

Evaluating TEMOS (and prior works) 📊


To evaluate TEMOS on the metrics defined in the paper, you must generate motions first (see above), and then run:

python evaluate.py folder=FOLDER [OPTIONS]

This will compute and store the metrics in the file FOLDER/samples/metrics_SPLIT in a yaml format.

Some optional parameters

The same parameters as in sample.py are accepted; the script will choose the right directories for you. When evaluating with number_of_samples>1, the script computes two sets of metrics: metrics_gtest_multi_avg (the average of the single-sample metrics) and metrics_gtest_multi_best (choosing the best output for each motion). Please check the paper for more details.
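
For example, to evaluate multiple samples per sequence (a usage sketch; the samples must have been generated beforehand with the same number_of_samples):

python evaluate.py folder=FOLDER number_of_samples=10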

Model trained on SMPL rotations

Currently, evaluation is only implemented on skeletons with MMM format. You must therefore use jointstype=mmm during sampling.

Evaluating prior works

Please use this command line to download the motions generated from previous work:

bash prepare/download_previous_works.sh

Then, to evaluate a method, you can do for example:

python evaluate.py folder=previous_work/ghosh

or replace "ghosh" with "jl2p" or "lin".

To give an overview of how their motions were extracted:

  1. Generate motions with their code (still in the rifke feature space).
  2. Save them in xyz format (I "hack" their render script to save xyz npy files instead of rendering; a sketch of this step is shown below).
  3. Load them into the evaluation code, as shown above.
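
A minimal sketch of step 2, assuming joints holds the decoded xyz coordinates of one test sequence inside their render script (the array contents and the file name below are purely illustrative):

import numpy as np

# Hypothetical example: joints stands for one decoded test sequence
# of xyz joint coordinates with shape (num_frames, num_joints, 3).
joints = np.zeros((60, 21, 3))
np.save("previous_work/ghosh/00001.npy", joints)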

Rendering motions 🔆


To get the visuals of the paper, I use Blender 2.93. The setup is not trivial (installation + running); I do my best to explain the process, but don't hesitate to reach out if you have a problem.

Installation

The goal is to install Blender so that it can be used from Python scripts (so that we can use "import bpy"). There seem to be many ways to do this; I will explain the one I use and understand (feel free to use another method or to suggest an easier way). Blender will be installed as a standalone package. To use my scripts, we run Blender in the background, and the Python executable bundled with Blender runs the script.

In any case, after the installation, please do steps 5 and 6 to install the dependencies in Blender's Python environment.

  1. Please follow the instructions to install blender 2.93 on your operating system. Please install exactly this version.
  2. Locate the blender executable if it is not in your path. For the following commands, please replace blender with the path to your executable (or create a symbolic link or use an alias).
    • On Linux, it could be in /usr/bin/blender or /snap/bin/blender (already in your path).
    • On macOS, it could be in /Applications/Blender.app/Contents/MacOS/Blender (not in your path)
  3. Check that the correct version is installed:
    • blender --background --version should return "Blender 2.93.X".
    • blender --background --python-expr "import sys; print('\nThe version of python is '+sys.version.split(' ')[0])" should return "3.9.X".
  4. Locate the Python installation used by Blender with the following line. I will refer to this path as /path/to/blender/python.
blender --background --python-expr "import sys; import os; print('\nThe path to the installation of python of blender can be:'); print('\n'.join(['- '+x.replace('/lib/python', '/bin/python') for x in sys.path if 'python' in (file:=os.path.split(x)[-1]) and not file.endswith('.zip')]))"
  5. Install pip
/path/to/blender/python -m ensurepip --upgrade
  6. Install these packages in the Python environment of Blender:
/path/to/blender/python -m pip install --user numpy
/path/to/blender/python -m pip install --user matplotlib
/path/to/blender/python -m pip install --user hydra-core --upgrade
/path/to/blender/python -m pip install --user hydra_colorlog --upgrade
/path/to/blender/python -m pip install --user moviepy
/path/to/blender/python -m pip install --user shortuuid

Launch a python script (with arguments) with blender

Now that blender is installed, if we want to run the script script.py with the blender API (the bpy module), we can use:

blender --background --python script.py

If you need to add additional arguments, this will probably fail (as blender will interpret the arguments). Please use the double dash -- to tell blender to ignore the rest of the command. I then only parse the last part of the command (check temos/launch/blender.py if you are interested).
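
The pattern therefore looks like this (a sketch; the key=value overrides use the same Hydra syntax as elsewhere):

blender --background --python script.py -- key=value other.key=value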

Rendering one sample

To render only one motion, please use this command line:

blender --background --python render.py -- npy=PATH_TO_DATA.npy [OPTIONS]

Rendering all the npy of a folder

Please use this command line to render all the npy inside a specific folder.

blender --background --python render.py -- folder=FOLDER_WITH_NPYS [OPTIONS]

SMPL bodies

Don't forget to generate the data with the option jointstype=vertices before. The renderer will automatically detect whether the motion is a sequence of joints or meshes.

Some optional parameters

  • downsample=true: Render only 1 frame every 8 frames, to speed up rendering (by default)
  • canonicalize=true: Make sure the first pose is oriented canonically (by translating and rotating the entire sequence) (by default)
  • mode=XXX: Choose the rendering mode (default is mode=sequence)
    • video: Render all the frames and generate a video (as in the supplementary video)
    • sequence: Render a single frame, with num=8 bodies (sampled equally, as in the figures of the paper)
    • frame: Render a single frame, at a specific point in time (exact_frame=0.5, generates the frame at about 50% of the video)
  • quality=true: Render at a higher resolution and denoise the output (defaults to false to speed up rendering)
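
For example, to render a higher-quality video from a single sample without frame downsampling (a usage sketch combining the options above):

blender --background --python render.py -- npy=PATH_TO_DATA.npy mode=video quality=true downsample=false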

License 📚

This code is distributed under an MIT LICENSE.

Note that our code depends on other libraries, including SMPL, SMPL-X, PyTorch3D, Hugging Face, Hydra, and uses datasets which each have their own respective licenses that must also be followed.


temos's Issues

Normalize Values Calculation

Hi,
Congrats and thanks for releasing your work!
May I double-check how you calculated the mean and std in the normalization step? (e.g. the file under ${path.deps}/transforms/rots2rfeats/smplvelp/${.pose_rep}/${data.dataname}).

Thanks!

About non-rescaled version

Hello, I would like to use the non-scaled version suggested in your paper.

I'm reading the code and using it, but I want to ask whether it's right to do it the way described below.

  1. in temos/model/metrics/compute.py,
    force_in_meter: bool = True,

    I changed the parameter force_in_meter to False
  2. in /temos/transforms/rots2joints/smplh.py,
    if jointstype == "mmm":
    from temos.info.joints import smplh_to_mmm_scaling_factor
    data *= smplh_to_mmm_scaling_factor

    I removed these codes when I use smpl data

Is this the correct way to obtain the non-rescaled version with SMPL data?

Thanks!

Custom Prompts

Hello, thanks for releasing the code for this awesome work! Does this repo currently support custom prompts? If not, could you highlight the necessary changes?

something wrong in the paper

In the code, the num_heads of the transformer encoder in the config cannot be injected into the .py file because of a spelling error, so it is always 4, which is not equal to num_layers as in the paper.

Evaluation with past research

Hi dear authors,
I would like to start by saying thank you for your amazing work.
Did you re-implement past research (Lin et al. / JL2P / Ghosh et al.)?
How can I evaluate them with your code?

About rendering on a Windows system

Hi dear authors,
I would like to start by saying thank you for your amazing work.

I tried rendering on my Windows system the way you described, but it didn't work.
I cannot install those packages in Blender's Python environment.

It fails with messages like "defaulting to user installation because normal site-packages is not writeable".

About the python in blender.

Hi, firstly, thank you very much for the tutorial on Blender's visualization. I have almost completed the steps, but I still encountered some problems.
I have installed Blender 2.93, and the path of Blender's Python is "/snap/blender/3688/2.93/python/bin/python3.9". I followed the instructions to install the needed packages, and if I use '/snap/blender/3688/2.93/python/bin/python3.9' to start Python, I can successfully import these packages. But if I use 'blender --background --python-expr "import xxx"', the packages cannot be imported, causing a "No module" error. It seems that the Python interpreters launched in these two ways are not consistent. May I ask what is causing this situation and how I can resolve it?
Hope to get your help, thanks a lot.

About evaluation result in Table 1

Hello, thank you for such a great project. I have a question when reading Table 1 of the paper:
Are the APE and AVE numbers shown in Table 1 the average of multiple evaluation runs? If so, how many experiments did you run?
I ask because I changed the random seed and trained the model multiple times, and the results obtained each time were not as good as those shown in the paper. Even when I averaged multiple experiments, I did not reach the results reported in the paper.

interact.py example gives errors, import fails

Hi, I am really interested in trying your method. I tried installing and following your advice to use the interact.py script, but I get errors saying that temos.data.kit cannot be found. Is there a special instruction for installing TEMOS, such as an install script in addition to the git clone? Or a proper way to add it to sys.path in Python?

Thanks a lot for your help

 python interact.py folder=pretrained_models/kit-mmm-xyz/3l49g7hv/ saving=kick text="A person kicks with the right foot." length=60
[10/25/22 15:07:48] INFO     Interaction script. The result will be saved there: kick                   interact.py:52
                    INFO     The sentence is: A person kicks with the right foot.                       interact.py:53
[10/25/22 15:07:50] INFO     Created a temporary directory at /tmp/tmp4gjoeeh5                      instantiator.py:21
                    INFO     Writing /tmp/tmp4gjoeeh5/_remote_module_non_scriptable.py              instantiator.py:76
                    INFO     Global seed set to 1234                                                        seed.py:71
                    INFO     Loading model                                                              interact.py:71
                    INFO     Loading data module                                                        interact.py:77
Error executing job with overrides: ['folder=pretrained_models/kit-mmm-xyz/3l49g7hv/', 'saving=kick', 'text=A person kicks with the right foot.', 'length=60']
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 639, in _locate
    obj = getattr(obj, part)
AttributeError: module 'temos.data' has no attribute 'kit'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 645, in _locate
    obj = import_module(mod)
  File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/content/TEMOS/temos/data/kit.py", line 15, in <module>
    from temos.transforms import Transform
  File "/content/TEMOS/temos/transforms/__init__.py", line 2, in <module>
    from .smpl import SMPLTransform
  File "/content/TEMOS/temos/transforms/smpl.py", line 8, in <module>
    from .joints2jfeats import Joints2Jfeats
  File "/content/TEMOS/temos/transforms/joints2jfeats/__init__.py", line 2, in <module>
    from .rifke import Rifke
  File "/content/TEMOS/temos/transforms/joints2jfeats/rifke.py", line 11, in <module>
    class Rifke(Joints2Jfeats):
  File "/content/TEMOS/temos/transforms/joints2jfeats/rifke.py", line 122, in Rifke
    def extract(self, features: Tensor) -> tuple[Tensor]:
TypeError: 'type' object is not subscriptable

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/instantiate/_instantiate2.py", line 134, in _resolve_target
    target = _locate(target)
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 655, in _locate
    ) from exc_import
ImportError: Error loading 'temos.data.kit.KITDataModule':
TypeError("'type' object is not subscriptable")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "interact.py", line 146, in <module>
    _interact()
  File "/usr/local/lib/python3.7/dist-packages/hydra/main.py", line 95, in decorated_main
    config_name=config_name,
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 396, in _run_hydra
    overrides=overrides,
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 453, in _run_app
    lambda: hydra.run(
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 216, in run_and_report
    raise ex
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 213, in run_and_report
    return func()
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 456, in <lambda>
    overrides=overrides,
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/hydra.py", line 132, in run
    _ = ret.return_value
  File "/usr/local/lib/python3.7/dist-packages/hydra/core/utils.py", line 260, in return_value
    raise self._return_value
  File "/usr/local/lib/python3.7/dist-packages/hydra/core/utils.py", line 186, in run_job
    ret.return_value = task_function(task_cfg)
  File "interact.py", line 14, in _interact
    return interact(cfg)
  File "interact.py", line 78, in interact
    data_module = instantiate(cfg.data)
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/instantiate/_instantiate2.py", line 223, in instantiate
    config, *args, recursive=_recursive_, convert=_convert_, partial=_partial_
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/instantiate/_instantiate2.py", line 325, in instantiate_node
    _target_ = _resolve_target(node.get(_Keys.TARGET), full_key)
  File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/instantiate/_instantiate2.py", line 139, in _resolve_target
    raise InstantiationException(msg) from e
hydra.errors.InstantiationException: Error locating target 'temos.data.kit.KITDataModule', see chained exception above.
full_key: data

auto_select_gpus is deprecated in Pytorch Lightning 2.0.2

First off, thanks for the great contribution to the community!

I am getting the following error when running the training code:

Error executing job with overrides: []
Traceback (most recent call last):
  File "D:\User\TEMOS\train.py", line 12, in _train
    return train(cfg)
  File "D:\User\TEMOS\train.py", line 66, in train
    trainer = pl.Trainer(
  File "D:\User\Anaconda37\envs\temos\lib\site-packages\pytorch_lightning\utilities\argparse.py", line 69, in insert_env_defaults
    return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'auto_select_gpus'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

It seems like the auto_select_gpus argument is now deprecated in the latest version of Pytorch Lightning. See discussion here: Lightning-AI/pytorch-lightning#16147

Therefore it shouldn't be needed on this line for the latest PyTorch Lightning:

auto_select_gpus: true

EDIT: I verified that this isn't an issue in version 1.9.5

I am happy to make a PR if need be; otherwise, a frozen requirements.txt that is compatible with the current state of the code would also be really helpful.

Thanks for your help!

Some errors related to render and blender

Firstly, I would like to thank the author for providing a very valuable project. I encountered some issues during the render and blender:

  1. Locating the Python installation: /path/to/blender/python is D:\fyy\TEMOS\blender-2.93.14-windows-x64\blender-2.93.14-windows-x64\2.93\python. But when I run:

    D:\fyy\TEMOS\blender-2.93.14-windows-x64\blender-2.93.14-windows-x64\2.93\python -m ensurepip --upgrade
    

    or

    blender-2.93.14-windows-x64\blender-2.93.14-windows-x64\2.93\python -m ensurepip --upgrade
    

    cmd prompts:'blender-2.93.14-windows-x64\blender-2.93.14-windows-x64\2.93\python' is not an internal or external command, nor is it a runnable program or batch file.

  2. When I change /path/to/blender/python directly to python, I can complete the installation command:

    (temos) D:\fyy\TEMOS>python -m ensurepip --upgrade
    Looking in links: c:\Users\ADMINI~1.DES\AppData\Local\Temp\tmpcdpms3ic
    Requirement already satisfied: setuptools in c:\users\administrator.desktop-nkif354\.conda\envs\temos\lib\site-packages (65.6.3)
    Requirement already satisfied: pip in c:\users\administrator.desktop-nkif354\.conda\envs\temos\lib\site-packages (23.0.1)
    

    But there is an error when running render.py: No module named 'omegaconf'.

    (temos) D:\fyy\TEMOS>blender-2.93.14-windows-x64\blender-2.93.14-windows-x64\blender --background --python render.py -- folder=outputs\kit-mmm-xyz\baseline\38yco6ib
    Blender 2.93.14 (hash dcf0f452818e built 2023-01-17 08:39:13)
    Read prefs: C:\Users\Administrator.DESKTOP-NKIF354\AppData\Roaming\Blender Foundation\Blender\2.93\config\userpref.blend
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "D:\fyy\TEMOS\render.py", line 11, in <module>
        import temos.launch.prepare  # noqa
      File "D:\fyy\TEMOS\temos\launch\prepare.py", line 4, in <module>
        from omegaconf import OmegaConf
    ModuleNotFoundError: No module named 'omegaconf'
    

How to solve these problems? Thank you in advance!

How to train TEMOS on other datasets?

Hello, thank you very much for providing such an excellent project.
TEMOS was trained on the KIT-ML dataset, but this dataset is too small. I would like to ask how to train TEMOS on other datasets, for example the BABEL dataset or the HumanML3D dataset.
Looking forward to your reply, thank you!

how to accelerate render speed when using long sequences with blender

Hi, thanks for your great work. @Mathux

I tried to visualize AMASS annotations using Blender. I wonder if it is possible to render a long sequence of SMPL-X annotations in parallel to accelerate rendering.

I am not very familiar with Blender. Could you give me some hints on how to rewrite the code to accelerate the process?

Problems related to blender installation

Thanks for your tutorial of the blender installation.
There are some problems. The command:

blender --background --python-expr "import sys; import os; print('\nThe path to the installation of python of blender can be:'); print('\n'.join(['- '+x for x in sys.path if 'python' in (file:=os.path.split(x)[-1]) and not file.endswith('.zip')]))"

does not work correctly on my macOS.

It returns the path /Applications/Blender.app/Contents/Resources/3.2/python/lib/python3.10. However, this is not an executable Python path. I changed it to /Applications/Blender.app/Contents/Resources/3.2/python/bin/python3.10, which works well on my macOS. Is this a bug in the provided command?

This is the problem I ran into; others with the same problem can solve it similarly. I hope the authors can update this command. Thanks~

interact.py Python error

The code cannot be run:

python .\interact.py folder='res'

FileNotFoundError: [Errno 2] No such file or directory: 'E:\codes\MOS\res\.hydra\config.yaml'

System requirements

Thanks for this great contribution. I'd love to explore its capabilities. Can you add some system requirements, or state which OS and hardware you have run this on?

How to get the demo presented as pictures in the README

Hi dear authors,

I would like to start by saying thank you for your amazing work. I managed to train the model and run the sample.py file, but the results I got were only .npy files. Am I doing something wrong, or are there additional steps I need to take to produce results similar to those presented in the teaser figures?

Skeleton structure and indexes

Hi,

I noticed that the skeleton used in the KIT dataset has 21 joints. I would like to know the detailed skeleton structure and the body joint indices of KIT.

Many Thanks for your reply and help.

APE and AVE_pose metric calculation

Hey, Thank you so much for sharing this project.
I was trying to run the training code and saw what I think is a little bug here

dico.update({f"Metrics/{metric}": value for metric, value in metrics_dict.items()})

Should it have a .mean()?
dico.update({f"Metrics/{metric}": value.mean() for metric, value in metrics_dict.items()})
Otherwise, I get this error
(error screenshot)

Please let me know if I'm right or not :)

Thanks!

Blender installation

Hi! Thank you for your great work.

Following your kind explanation, I was going to install the blender package.
I've tried several ways but I finally failed to install it.

  1. Install via snap: use command sudo snap install blender --channel=2.93lts/stable --classic.
    In this way, I got /snap/bin/blender and /snap/blender/3688, but when I run the command blender --background --version, the system says: The program 'blender' is currently not installed. You can install it by typing: sudo apt install blender.

  2. Install via binary file: download blender-2.93.18-linux-x64.tar.xz from https://www.blender.org/download/lts/2-93/ and uncompress it, and copy the binary file into /usr/bin/ directory.
    After that, blender --background --version returns Blender 2.93.18 (hash cb886aba06d5 built 2023-05-22 23:33:27) correctly, but blender --background --python-expr "import sys; print('\nThe version of python is '+sys.version.split(' ')[0])" produces the following error:

Color management: using fallback mode for management
Color management: Error could not find role data role.
Blender 2.93.18 (hash cb886aba06d5 built 2023-05-22 23:33:27)
BLT_lang_init: 'locale' data path for translations not found, continuing
Color management: scene view "Filmic" not found, setting default "Standard".
blf_load_font_default: 'fonts' data path not found for 'droidsans.ttf', will not be able to display text
blf_load_font_default: 'fonts' data path not found for 'bmonofont-i18n.ttf', will not be able to display text
blf_load_font_default: 'fonts' data path not found for 'bmonofont-i18n.ttf', will not be able to display text
/run/user/1000/gvfs/ non-existent directory
bpy: couldn't find 'scripts/modules', blender probably wont start.
Freestyle: couldn't find 'scripts/freestyle/modules', Freestyle won't work properly.
ModuleNotFoundError: No module named 'bpy_types'
ModuleNotFoundError: No module named 'bpy_types'
ERROR (bpy.rna): source/blender/python/intern/bpy_rna.c:7305 pyrna_srna_ExternalType: failed to find 'bpy_types' module
ModuleNotFoundError: No module named 'bpy_types'
ModuleNotFoundError: No module named 'bpy_types'
ERROR (bpy.rna): source/blender/python/intern/bpy_rna.c:7305 pyrna_srna_ExternalType: failed to find 'bpy_types' module
ModuleNotFoundError: No module named 'bpy_types'
ERROR (bpy.rna): source/blender/python/intern/bpy_rna.c:7305 pyrna_srna_ExternalType: failed to find 'bpy_types' module
ModuleNotFoundError: No module named 'bpy_types'
ERROR (bpy.rna): source/blender/python/intern/bpy_rna.c:7305 pyrna_srna_ExternalType: failed to find 'bpy_types' module
ModuleNotFoundError: No module named 'bpy_types'
ERROR (bpy.rna): source/blender/python/intern/bpy_rna.c:7305 pyrna_srna_ExternalType: failed to find 'bpy_types' module
ModuleNotFoundError: No module named 'bpy_types'
ERROR (bpy.rna): source/blender/python/intern/bpy_rna.c:7305 pyrna_srna_ExternalType: failed to find 'bpy_types' module
ModuleNotFoundError: No module named 'bpy_types'
ERROR (bpy.rna): source/blender/python/intern/bpy_rna.c:7305 pyrna_srna_ExternalType: failed to find 'bpy_types' module
ModuleNotFoundError: No module named 'bpy'
Error: dropbox with unknown operator: WM_OT_drop_blend_file
terminate called after throwing an instance of 'std::logic_error'
  what():  basic_string::_S_construct null not valid
Aborted (core dumped)

I've been struggling with this problem for several days.... I couldn't find any solution on google.
I would so appreciate it if you (or anyone else) could help me with this issue.

Thank you.

File Not Found Error

Hi,
Thank you for sharing the official code. I followed the instructions in the README.md file. However, when I tried to run train.py, I got a FileNotFoundError, and I would like to know what causes this error and how to fix it.

Many thanks for your kind help.

(screenshot of the error)

Rendering SMPLX motion sequence of AMASS

Hi, thanks for your excellent work. I followed your instructions and successfully visualized SMPL motion sequences from AMASS. I tried to visualize an SMPL-X motion sequence and saved the meshes in the format (T, 10475, 3). I then rendered the meshes with render.py using Blender. However, the rendering result is as follows:
Do you have any idea about this issue?
(rendering screenshot)

"_fke.csv"

Did the KIT dataset originally include the "_fke.csv" files, or were they generated by data preprocessing? If they were generated by preprocessing, could you please send me the code?

When I just set 'vae=False' in temos.yaml, I got the errors below

Hi authors, I want to try the situation where the VAE is disabled and save its weights, but I ran into some problems (sorry, I just started).

  File "/Users/eanson/opt/miniconda3/envs/temos/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 370, in validation_step
    return self.model.validation_step(*args, **kwargs)
  File "/Users/eanson/Documents/dl/TEMOS/temos/model/base.py", line 33, in validation_step
    return self.allsplit_step("val", batch, batch_idx)
  File "/Users/eanson/Documents/dl/TEMOS/temos/model/temos.py", line 144, in allsplit_step
    loss = self.losses[split].update(ds_text=datastruct_from_text,
  File "/Users/eanson/Documents/dl/TEMOS/temos/model/losses/compute.py", line 83, in update
    total += self._update_loss("kl_text2motion", dis_text, dis_motion)
  File "/Users/eanson/Documents/dl/TEMOS/temos/model/losses/compute.py", line 105, in _update_loss
    val = self._losses_func[loss](outputs, inputs)
  File "/Users/eanson/Documents/dl/TEMOS/temos/model/losses/kl.py", line 9, in __call__
    div = torch.distributions.kl_divergence(q, p)
  File "/Users/eanson/opt/miniconda3/envs/temos/lib/python3.9/site-packages/torch/distributions/kl.py", line 170, in kl_divergence
    raise NotImplementedError("No KL(p || q) is implemented for p type {} and q type {}"
NotImplementedError: No KL(p || q) is implemented for p type NoneType and q type NoneType

pip list

Package                 Version
----------------------- -----------
absl-py                 1.3.0
aiohttp                 3.8.3
aiosignal               1.2.0
antlr4-python3-runtime  4.9.3
astroid                 2.12.12
async-timeout           4.0.2
attrs                   22.1.0
beautifulsoup4          4.11.1
cachetools              5.2.0
certifi                 2022.9.24
charset-normalizer      2.1.1
colorlog                6.7.0
commonmark              0.9.1
contourpy               1.0.6
cycler                  0.11.0
decorator               4.4.2
dill                    0.3.6
einops                  0.5.0
filelock                3.8.0
fonttools               4.38.0
frozenlist              1.3.1
fsspec                  2022.10.0
google-auth             2.13.0
google-auth-oauthlib    0.4.6
grpcio                  1.50.0
huggingface-hub         0.10.1
hydra-colorlog          1.2.0
hydra-core              1.2.0
idna                    3.4
imageio                 2.22.2
imageio-ffmpeg          0.4.7
importlib-metadata      5.0.0
isort                   5.10.1
kiwisolver              1.4.4
lazy-object-proxy       1.7.1
Markdown                3.4.1
MarkupSafe              2.1.1
matplotlib              3.6.1
mccabe                  0.7.0
moviepy                 1.0.3
multidict               6.0.2
numpy                   1.23.4
oauthlib                3.2.2
omegaconf               2.2.3
packaging               21.3
pandas                  1.5.1
Pillow                  9.3.0
pip                     22.2.2
platformdirs            2.5.2
proglog                 0.1.10
protobuf                3.19.6
psutil                  5.9.3
pyasn1                  0.4.8
pyasn1-modules          0.2.8
pyDeprecate             0.3.2
Pygments                2.13.0
pylint                  2.15.5
pyparsing               3.0.9
PySocks                 1.7.1
python-dateutil         2.8.2
pytorch-lightning       1.7.7
pytz                    2022.5
PyYAML                  6.0
regex                   2022.9.13
requests                2.28.1
requests-oauthlib       1.3.1
rich                    12.6.0
rsa                     4.9
setuptools              59.5.0
shortuuid               1.0.9
six                     1.16.0
soupsieve               2.3.2.post1
tensorboard             2.10.1
tensorboard-data-server 0.6.1
tensorboard-plugin-wit  1.8.1
tokenizers              0.13.1
tomli                   2.0.1
tomlkit                 0.11.5
torch                   1.13.0
torchmetrics            0.7.0
torchvision             0.14.0
tqdm                    4.64.1
transformers            4.23.1
typing_extensions       4.4.0
urllib3                 1.26.12
Werkzeug                2.2.2
wheel                   0.37.1
wrapt                   1.14.1
yarl                    1.8.1
zipp                    3.9.0

About Evaluation

Hi dear author, Thanks for the amazing work.

I tried to test the model so I followed the instructions in README

  1. download the pre-trained models
  2. run sample.py to sample the motions with the pre-trained kit-mmm-xyz model
  3. run evaluate.py to calculate the metrics with motions above

However, I found that the results were not good enough. I am new to this area and don't know where I went wrong. I would be deeply grateful if you could help me find the mistakes. Thanks!

          APE_root  APE_traj  APE_mean_pose  APE_mean_joints  AVE_root  AVE_traj  AVE_mean_pose  AVE_mean_joints
Paper     0.963     0.955     0.104          0.976            0.445     0.445     0.005          0.448
My run    1.033     1.024     0.104          1.048            0.448     0.448     0.005          0.451

For visualization, ensurepip doesn't work

I tried to use snap to install the blender 2.93lts/stable. And I've used /path/to/blender/python -m ensurepip --upgrade.
It returned as follows:

Defaulting to user installation because normal site-packages is not writeable
Looking in links: /tmp/tmp30yem5vm
Requirement already up-to-date: setuptools in /home/vatis/.local/lib/python3.9/site-packages (49.2.1)
Requirement already up-to-date: pip in /home/vatis/.local/lib/python3.9/site-packages (20.2.3)

It seems that pip was installed in the system Python but not in Blender's built-in Python,
because when I used:
blender --background --python-expr "import pip"
It returned:

Blender 2.93.15 (hash 2888f351e535 built 2023-02-21 00:33:31)
Read prefs: /home/vatis/.config/blender/2.93/config/userpref.blend
Error: Python: Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'pip'

location: <unknown location>:-1

Python: Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'pip'

location: <unknown location>:-1

Blender quit

But all of this worked for me when I followed your instructions on my Mac. Do you know of a solution to this? Thanks!

Training on multi-GPU

Hi dear author,
I would like to train TEMOS on multiple GPUs. As shown in the first screenshot, it runs well when using one GPU by default. But when I try to run it on multiple GPUs and set gpus: 4 in the trainer config file, I get a TypeError (second screenshot). So, I would like to know how to train TEMOS on multiple GPUs.

One GPU Training:
(screenshot)

Four GPUs Training:
(screenshot)

There are some errors when I tried to run sample.py to generate motions

(error screenshot)
Error executing job with overrides: ['folder=D:\text-to-motion-main\TEMOS-master\TEMOS\pretrained_models\kit-amass-xyz\5xp9647f']
[01/15/23 16:45:45] INFO Global seed set to 1234 seed.py:54 INFO Loading data module sample.py:88Error executing job with overrides: ['folder=D:\text-to-motion-main\TEMOS-master\TEMOS\pretrained_models\kit-mmm-xyz\3l49g7hv']
Error in call to target 'temos.data.kit.KITDataModule':
FileNotFoundError(2, 'No such file or directory')
full_key: data

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.


Besides, the KIT dataset files I have only include "xxxx_mmm.xml", "xxx_annotations.json" and "xxxx_meta.json"; where can I find the "_fke.csv" files? Is that the reason I got these errors when trying to generate motions?

I hope you can reply as soon as possible.
Thank you!

About checkpoints

Hi, thanks for your great work. I have a question about checkpoints.

Looking at the config files, I can see that you used mode=max in latest_checkpoint.yaml, but I can't find it in last_checkpoint.yaml.
So, if you use the same metric for both, I think it needs to be removed from latest_checkpoint.yaml (if the metric is an error or a loss).

How do you think about this?

Additionally, I want to know which checkpoint is the best model. Is last.ckpt the best model in terms of the metrics (validation error or loss)?

thanks.

The config in rendering

When rendering all the data (the folder=FOLDER setting), it seems that the infolder parameter in configs/render.yaml should be set to true.

start and end positions

Thanks for this interesting work. When creating video sequences, the code ties the start and end positions to be the same for every prompt. Is it easy to relax this so that the prompt 'a person is sitting' has them sitting throughout all frames, rather than standing, moving to sitting, and back again? I am not sure whether changing the 'canonicalize' setting would help with this. I do not need to evaluate metrics.

I am using this type of command (together with blender video creation).
python3 interact.py text="a person is sitting" folder=out saving=out length=50 jointstype=vertices

I have copied the config.yaml, hydra.yaml and overrides.yaml from pretrained model kit-amass-rot\1cp6dwpa to out/.hydra

Problem of foot skating.

Hello! First of all, thanks for your excellent work. I ran your interact code and tested it on all three of the pre-trained models provided, but found that the results had severe foot skating. It was different from the effects described in the paper.
These are the results of the text "A person takes two steps, raises his right hand, keeps walking" on the three models. Problems also arise with other text inputs.

a_person_takes_two_steps._raises_his_right_hand._keeps_walking_len_60.webm
a_person_takes_two_steps._raises_his_right_hand._keeps_walking_len_61.webm
a_person_takes_two_steps._raises_his_right_hand._keeps_walking_len_62.webm
