
teach's Introduction

TEACH: Temporal Action Compositions for 3D Humans
ArXiv PDF · Project Page

Nikos Athanasiou · Mathis Petrovich · Michael J. Black · Gül Varol

3DV 2022

Check our upcoming YouTube video for a quick overview and our paper for more details.

Features

This implementation provides:

  • Instructions on how to prepare the datasets used in the experiments.
  • The training code:
    • for both baselines,
    • for the TEACH method.
  • A simple interactive demo that, given text prompts and durations, returns:
    • an .npy file containing the vertices of the bodies generated by TEACH (see the loading sketch after this list),
    • a video that visualizes the result.
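Once the demo has run, you can inspect the .npy output with a few lines of NumPy. This is a minimal sketch rather than part of the repo: the filename and the (frames, vertices, 3) array layout are assumptions, so check the shape of your own output first.

import numpy as np

# Hypothetical output name: whatever you passed as output= to the demo.
vertices = np.load('yourfname.npy', allow_pickle=True)

# Assumed layout: (num_frames, num_vertices, 3); SMPL-H meshes have 6890 vertices.
print(vertices.dtype, vertices.shape)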

Updates

To be uploaded:

  • Instructions about the baselines and how to run them.
  • Instructions for sampling and evaluating with the code all of the models.
  • The rendering code for the blender renderings used in the paper.

Getting Started

TEACH has been implemented and tested on Ubuntu 20.04 with Python >= 3.9.

Clone the repo:

git clone https://github.com/athn-nik/teach.git

Then install DistilBERT:

cd deps/
git lfs install
git clone https://huggingface.co/distilbert-base-uncased
cd ..

Install the requirements using virtualenv:

# pip
source scripts/install.sh

You can do something equivalent with conda as well, e.g.:
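A rough conda sketch (this assumes scripts/install.sh essentially pip-installs the dependencies; adapt it to the script's actual contents):

# conda
conda create -n teach python=3.9 -y
conda activate teach
source scripts/install.sh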

Running the Demo

We have prepared demo code to run TEACH on arbitrary text prompts. First, you need to download the required data (i.e., our trained model) from our website. The path/to/experiment directory should look like:

experiment
│
├── .hydra
│   ├── config.yaml
│   ├── overrides.yaml
│   └── hydra.yaml
│
└── checkpoints
    └── last.ckpt

Then, running the demo is as simple as:

python interact_teach.py folder=/path/to/experiment output=/path/to/yourfname texts='[text prompt1, text prompt2, text prompt3, <more prompts comma divided>]' durs='[dur1, dur2, dur3, ...]'
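For example, a hypothetical three-action prompt (the texts and durations are illustrative only):

python interact_teach.py folder=/path/to/experiment output=/path/to/out/walk_sit texts='[walk forward, sit down, wave with the right hand]' durs='[4.0, 3.0, 2.5]'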

Data

Download the data from the AMASS website. Then run this command to extract the AMASS sequences that are annotated in BABEL:

python scripts/process_amass.py --input-path /path/to/data --output-path path/of/choice/default_is_/babel/babel-smplh-30fps-male --use-betas --gender male

Download the data from the TEACH website after signing in. TEACH was trained on a processed version of BABEL, so we provide this data directly via our website, where you will also find more relevant details. Finally, download the male SMPLH body model from the SMPLX website, specifically the AMASS version of the SMPLH model. Then follow the instructions here to extract the SMPLH model in pickle format.

Then run this script (adjusting the paths inside it accordingly) to extract the different BABEL splits from AMASS:

python scripts/amass_splits_babel.py

Then create a directory named data and put the BABEL data and the processed AMASS data inside it. You should end up with a data folder structured like this:

data
|-- amass
|   `-- your-processed-amass-data
|
|-- babel
|   |-- babel-teach
|   |   `-- ...
|   `-- babel-smplh-30fps-male
|       `-- ...
|
`-- smpl_models
    `-- smplh
        `-- SMPLH_MALE.pkl

Be careful not to push any data! You should softlink it inside this repo instead. To softlink your data, do:

ln -s /path/to/data
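Before training, it can help to sanity-check that everything is in place. A minimal sketch; the paths simply mirror the expected tree above:

import os

# Paths mirror the data tree shown above.
expected = [
    'data/amass',
    'data/babel/babel-teach',
    'data/babel/babel-smplh-30fps-male',
    'data/smpl_models/smplh/SMPLH_MALE.pkl',
]
for p in expected:
    status = 'ok' if os.path.exists(p) else 'MISSING'
    print(f'{status:>7}  {p}')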

Training

To start training, activate your environment and run:

python train.py experiment=baseline logger=none

Explore configs/train.yaml to change basics such as where your output is stored, or which data to use if you want to run a small experiment on a subset of the data. [TODO]: More on this coming soon.

Sampling & Evaluation

Here are some commands for sampling from the validation set and evaluating on the metrics reported in the paper:

python sample_seq.py folder=/path/to/experiment align=full slerp_ws=8

In general the folder is: folder_our/<project>/<dataname_config>/<experiment>/<run_id>. This folder should contain a checkpoints directory with a last.ckpt file inside, and a .hydra directory from which the configuration and the relevant checkpoint will be pulled. It is created during training in the output directory, and is provided on our website for the experiments in the paper.

  • align=trans: aligns only the translation; align=full also aligns the global orientation.
  • slerp_ws: decides whether slerp is applied (=null disables it) and the size of its window; see the example below.
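For instance, to sample with translation-only alignment and slerp disabled (both values come straight from the options above):

python sample_seq.py folder=/path/to/experiment align=trans slerp_ws=null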

Then for the evaluation you should do:

python eval.py folder=/path/to/experiment align=true slerp=true

The two extra parameters select the samples on which the evaluation will be performed.

Transition distance

  • Without-alignment column: python compute_td.py folder=/path/to/experiment align_full_bodies=false align_only_trans=true

  • With-alignment column: python compute_td.py folder=/path/to/experiment align_full_bodies=true align_only_trans=false

[TODO]: More on this coming soon.

Citation

@inproceedings{TEACH:3DV:2022,
  title={TEACH: Temporal Action Compositions for 3D Humans},
  author={Athanasiou, Nikos and Petrovich, Mathis and Black, Michael J. and Varol, G\"{u}l },
  booktitle = {International Conference on 3D Vision (3DV)},
  month = {September},
  year = {2022}
}

License

This code is available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using this code you agree to the terms in the LICENSE. Third-party datasets and software are subject to their respective licenses.

Acknowledgments

We thank Benjamin Pellkofer for his IT support.

References

Many parts of this code are based on the official implementation of TEMOS, and we benefited from several other great resources.

Contact

This code repository was implemented mainly by Nikos Athanasiou with the help of Mathis Petrovich.

Give a ⭐ if you like it.

For commercial licensing (and all related questions for business applications), please contact [email protected].


teach's Issues

Body model

Hi, I've noticed you're using the "mmm" body model instead of smplh.

jointstype: str = "mmm",

I just want to make sure I understand correctly and you're indeed using it and its kinematic chain.

I'd also be happy for a short explanation of why you chose this instead of smplh.

Thanks.

'Struct' object has no attribute 'hands_componentsl'

After downloading the data from https://mano.is.tue.mpg.de/download.php, getting SMPLH_MALE.pkl from mano_v1_2.zip, and putting it in the smpl_models dir,
when I run the demo interact_teach.py I get the following error:

Traceback (most recent call last):
File "/root/miniconda3/envs/teach-env/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 62, in _call_target
return _target_(*args, **kwargs)
File "/root/autodl-tmp/github/teach/teach/transforms/rots2joints/smplh.py", line 58, in __init__
self.smplh = SMPLHLayer(path, ext="pkl", gender=gender).eval()
File "/root/miniconda3/envs/teach-env/lib/python3.9/site-packages/smplx/body_models.py", line 761, in __init__
super(SMPLHLayer, self).__init__(
File "/root/miniconda3/envs/teach-env/lib/python3.9/site-packages/smplx/body_models.py", line 601, in __init__
left_hand_components = data_struct.hands_componentsl[:num_pca_comps]
AttributeError: 'Struct' object has no attribute 'hands_componentsl'

The rendering code

Hi, I wonder if you could provide the rendering code, or a link where I can learn how to render?

missing file 'deps/inference/labels.json'

Hi, while trying to use your sample_seq process I encountered the following issue:
there is no 'deps/inference/labels.json' file, and there is no documentation regarding it.

labels = read_json('deps/inference/labels.json')

Update: I've noticed that this is not needed and can be removed from the code.

Issue on executing evaluation

I downloaded the provided pretrained TEACH model and tried executing "python eval.py folder=/path/to/experiment align=true slerp=true", but got the error below:
Traceback (most recent call last):
File "/home/miniconda3/envs/teach/lib/python3.9/logging/config.py", line 385, in resolve
found = self.importer(used)
ModuleNotFoundError: No module named 'temos'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/miniconda3/envs/teach/lib/python3.9/logging/config.py", line 552, in configure
filters[name] = self.configure_filter(filters[name])
File "/home/miniconda3/envs/teach/lib/python3.9/logging/config.py", line 689, in configure_filter
result = self.configure_custom(config)
File "/home/miniconda3/envs/teach/lib/python3.9/logging/config.py", line 470, in configure_custom
c = self.resolve(c)
File "/home/miniconda3/envs/teach/lib/python3.9/logging/config.py", line 398, in resolve
raise v
File "/home/miniconda3/envs/teach/lib/python3.9/logging/config.py", line 385, in resolve
found = self.importer(used)
ValueError: Cannot resolve 'temos.tools.logging.LevelsFilter': No module named 'temos'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/miniconda3/envs/teach/lib/python3.9/site-packages/hydra/_internal/utils.py", line 211, in run_and_report
return func()
File "/home/miniconda3/envs/teach/lib/python3.9/site-packages/hydra/_internal/utils.py", line 378, in
lambda: hydra.run(
File "/home/miniconda3/envs/teach/lib/python3.9/site-packages/hydra/_internal/hydra.py", line 88, in run
cfg = self.compose_config(
File "/home/miniconda3/envs/teach/lib/python3.9/site-packages/hydra/_internal/hydra.py", line 566, in compose_config
configure_log(cfg.hydra.hydra_logging, cfg.hydra.verbose)
File "/home/miniconda3/envs/teach/lib/python3.9/site-packages/hydra/core/utils.py", line 48, in configure_log
logging.config.dictConfig(conf)
File "/home/miniconda3/envs/teach/lib/python3.9/logging/config.py", line 809, in dictConfig
dictConfigClass(config).configure()
File "/home/miniconda3/envs/teach/lib/python3.9/logging/config.py", line 554, in configure
raise ValueError('Unable to configure '
ValueError: Unable to configure filter 'onlyimportant'

Proc_label

Hi, I have tried this amazing project!

I am wondering if there is a list of proc_label on the website or on GitHub?
For example, BABEL provides its raw label list here.

Thanks!

AssertionError: Path /path/to/teach/data/smpl_models/smpl/SMPL_MALE.pkl does not exist!

I followed the instructions and placed SMPLH_MALE.pkl inside /teach/data/smpl_models/smplh/ as suggested.
But I still got the following error:

python interact_teach.py folder=/path/to/teach/experiment output=/path/to/teach/results texts='[sit down, walk to the left, stand still]' durs='[5, 5, 3]'

Global seed set to 1234
[10/07/23 10:19:44][__main__][INFO] - Loading model
[10/07/23 10:19:44][torch.distributed.nn.jit.instantiator][INFO] - Created a temporary directory at /tmp/tmpvf0vd9d2
[10/07/23 10:19:44][torch.distributed.nn.jit.instantiator][INFO] - Writing /tmp/tmpvf0vd9d2/_remote_module_non_sriptable.py
[10/07/23 10:19:46][__main__][INFO] - Model 'teach' loaded
[10/07/23 10:19:48][__main__][INFO] - Model weights restored
Global seed set to 0
Error executing job with overrides: ['folder=/path/to/teach/experiment', 'output=/path/to/teach/results', 'texts=[sit down, walk to the left, stand still]', 'durs=[5, 5, 3]']
Traceback (most recent call last):
File "/path/to/teach/interact_teach.py", line 35, in interact
return interact(cfg)
File "/path/to/teach/interact_teach.py", line 103, in interact
vid
= visualize_meshes(motion)
File "/path/to/teach/teach/render/mesh_viz.py", line 77, in visualize_meshes
smpl = get_body_model(path=f'{get_original_cwd()}/data/smpl_models',
File "/path/to/teach/teach/utils/smpl_body_utils.py", line 116, in get_body_model
body_model = smplx.create(body_model_path, model_type=type,
File "/path/to/.conda/envs/teach-env/lib/python3.9/site-packages/smplx/body_models.py", line 2400, in create
return SMPL(model_path, **kwargs)
File "/path/to/.conda/envs/teach-env/lib/python3.9/site-packages/smplx/body_models.py", line 133, in init
assert osp.exists(smpl_path), 'Path {} does not exist!'.format(
AssertionError: Path /path/to/data/smpl_models/smpl/SMPL_MALE.pkl does not exist!

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

However, if I copy SMPLH_MALE.pkl, rename it to SMPL_MALE.pkl, and put it under /teach/data/smpl_models/smpl, it seems to work. I just wonder if this is the correct way to handle it. Thank you.

Discrepancy in number of pairs in training set

The paper states that:
There are approximately 5.7k and 23.4k pairs in the validation and training sets respectively.
But when training is executed, we get the output below:
Loading BABEL train: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6601/6601 [00:53<00:00, 124.08it/s]
[teach.data.babel][INFO] - Processed 6601 sequences and found 3091 invalid cases based on the datatype.
[teach.data.babel][INFO] - 15863 sequences -- datatype:separate_pairs.
[teach.data.babel][INFO] - 14.13% of the sequences which are rejected by the sampler in total.
[teach.data.babel][INFO] - 0.0% of the sequence which are rejected by the sampler, because of the excluded actions.
[teach.data.babel][INFO] - 14.13% of the sequence which are rejected by the sampler, because they are too short(<0.5 secs) or too long(>25.0 secs).
[teach.data.babel][INFO] - Discard from BML: 0
[teach.data.babel][INFO] - Discard not KIT: 0
Loading BABEL val: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2189/2189 [00:17<00:00, 124.54it/s]
[teach.data.babel][INFO] - Processed 2189 sequences and found 983 invalid cases based on the datatype.
[teach.data.babel][INFO] - 5672 sequences -- datatype:separate_pairs.
[teach.data.babel][INFO] - 16.27% of the sequences which are rejected by the sampler in total.
[teach.data.babel][INFO] - 0.0% of the sequence which are rejected by the sampler, because of the excluded actions.
[teach.data.babel][INFO] - 16.27% of the sequence which are rejected by the sampler, because they are too short(<0.5 secs) or too long(>25.0 secs).
[teach.data.babel][INFO] - Discard from BML: 0
[teach.data.babel][INFO] - Discard not KIT: 0

which shows that the number of training pairs is 15.8k, which doesn't match the 23.4k reported in the paper.

Training on multiple GPUs

Hey, currently the training runs on a single GPU; what changes are needed to train on multiple GPUs (say 4)? (I am kind of new to hydra, so I am not sure how to adapt this code to multiple GPUs.)

Training is stopped because of CUDA OOM error after few epochs

I tried training the setup in order to replicate the provided results, but training stops after a few epochs (exactly after epoch 101; it had saved the epoch 99 checkpoint). The exact error I get is as follows:

Traceback (most recent call last):
File "/home/teach/train.py", line 48, in _train
return train(cfg, ckpt_ft)
File "/home/teach/train.py", line 131, in train
trainer.fit(model, datamodule=data_module, ckpt_path=ckpt_ft)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit
self._call_and_handle_interrupt(
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run
self._dispatch()
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1279, in _dispatch
self.training_type_plugin.start_training(self)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
self._results = trainer.run_stage()
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1289, in run_stage
return self._run_train()
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1319, in _run_train
self.fit_loop.run()
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 234, in advance
self.epoch_loop.run(data_fetcher)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 193, in advance
batch_output = self.batch_loop.run(batch, batch_idx)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 215, in advance
result = self._run_optimization(
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 266, in _run_optimization
self._optimizer_step(optimizer, opt_idx, batch_idx, closure)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 378, in _optimizer_step
lightning_module.optimizer_step(
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 1652, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 164, in step
trainer.accelerator.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 339, in optimizer_step
self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 163, in optimizer_step
optimizer.step(closure=closure, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/optim/optimizer.py", line 88, in wrapper
return func(*args, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/optim/adamw.py", line 100, in step
loss = closure()
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 148, in _wrap_closure
closure_result = closure()
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 160, in call
self._result = self.closure(*args, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 142, in closure
step_output = self._step_fn()
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 435, in _training_step
training_step_output = self.trainer.accelerator.training_step(step_kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 219, in training_step
return self.training_type_plugin.training_step(*step_kwargs.values())
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 213, in training_step
return self.model.training_step(*args, **kwargs)
File "/home/teach/teach/model/base.py", line 53, in training_step
return self.allsplit_step("train", batch, batch_idx)
File "/home/teach/teach/model/teach.py", line 459, in allsplit_step
output_features_1_M_with_transition = self.motiondecoder(latent_vector_1_M, lengths=length_1_with_transition)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/teach/teach/model/motiondecoder/actor.py", line 68, in forward
output = self.seqTransDecoder(tgt=time_queries, memory=z,
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/nn/modules/transformer.py", line 252, in forward
output = mod(output, memory, tgt_mask=tgt_mask,
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/nn/modules/transformer.py", line 459, in forward
x = self.norm3(x + self._ff_block(x))
File "/home/anaconda3/envs/teach/lib/python3.9/site-packages/torch/nn/modules/transformer.py", line 483, in _ff_block
x = self.linear2(self.dropout(self.activation(self.linear1(x))))
RuntimeError: CUDA out of memory. Tried to allocate 24.00 MiB (GPU 0; 10.92 GiB total capacity; 9.63 GiB already allocated; 9.44 MiB free; 10.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Dataset process

Hi, I don't know which kind of dataset (SMPL+H G, SMPL+X G, or Render) should be downloaded from AMASS, and how do I get it "in a pickle format"?

KeyError: 'test' when running python scripts/amass_splits_babel.py

When running python scripts/amass_splits_babel.py, this line

dataset_db_lists[split_of_seq].append(sample_babel)
reports KeyError: 'test'.

I found the problem is caused by dataset_db_lists, which does not have test as key.

dataset_db_lists = {'train': [],
'val': []}

Your paper mentioned that "we report our final results in the validation set, for easier reproduction, since BABEL test set is not publicly available". Does it mean all test sequences are discarded here?

run interact_teach.py error

cmd: python interact_teach.py folder=/teach output=/teach texts='jump with left root' durs='[2]'

Error executing job with overrides: ['folder=/teach', 'output=/teach', 'texts=jump with left root', 'durs=[2]']
Traceback (most recent call last):
File "/teach/interact_teach.py", line 35, in _interact
return interact(cfg)
File "/teach/interact_teach.py", line 58, in interact
model = instantiate(cfg.model,
omegaconf.errors.ConfigAttributeError: Key 'model' is not in struct
full_key: model
object_type=dict

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

It seems that key "model" is not includeed in the configure file configs/interact_teach.yaml

It seems that "deps/inference/labels.json" are not provided

I tried to train the model according to the instructions, but it reports a missing file:

File "/home/ajatar/TeachLocal/teach/teach/callback/__init__.py", line 1, in <module> from .render import RenderCallback File "/home/ajatar/TeachLocal/teach/teach/callback/render.py", line 27, in <module> labels_dict = read_json(f'{get_original_cwd()}/deps/inference/labels.json') File "/home/ajatar/TeachLocal/teach/teach/utils/file_io.py", line 79, in read_json with open(p, 'r') as fp: FileNotFoundError: [Errno 2] No such file or directory: '/home/ajatar/TeachLocal/teach/deps/inference/labels.json'

Where can I find the corresponding label.json?
BTW, I have also encountered this error (avoided by commenting out the corresponding code):
ModuleNotFoundError: No module named 'pytorch_lightning.utilities.logger'

Thanks a lot for your time.

how many pairs in dataset?

In your paper, you mentioned: [screenshot]
However, my train set and valid set are respectively: [screenshot] [screenshot]
which are far fewer than mentioned in the paper.
All the data preprocessing and splitting code uses the default settings. I wonder what causes this problem?

about visualization of result

Dear author,
The video obtained from interact_teach.py shows the character moving in place, and the trajectory cannot be seen. How can I get a video like the one shown in the paper (where the trajectory is visible)?

interact run error

No module named 'pyrender'
[05/05/23 15:49:50][HYDRA] C:\ProgramData\miniconda3\lib\site-packages\hydra\_internal\hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
  ret = run_job(

Global seed set to 1234
[05/05/23 15:49:50][__main__][INFO] - Loading model
Error executing job with overrides: ['folder=.\\pretrained_models\\teach\\', 'texts=[a man is walking and then jump]', 'durs=[123]', 'output=res']
Error in call to target 'teach.model.teach.TEACH':
InstantiationException('Error in call to target \'teach.model.textencoder.text_hist.TextHist\':\nValueError("\'deps\' is not in list")\nfull_key: textencoder')
full_key: model

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

'type' object is not subscriptable

Hello, when I tried to run interact_teach.py, this problem occurred:
ImportError: Encountered error: 'type' object is not subscriptable when loading module 'teach.model.teach.TEACH'

The command I used is:
python ./teach/interact_teach.py folder=./teach output=./teach/experiments texts='[walk, sit down, wave hands, drink water, stand up]' durs='[5,5,5,5,5]'

Thank you in advance!

Error occurred when trying to load the checkpoint provided in "independent-bsl"

When I try to run command:
python interact_teach.py folder=TrainedModels/independent-bsl output=RenderResults/dance texts='[dance, standing, turns left]' durs='[3.7, 3, 2]'
Error occurs, stating "no module named teach.model.teach.TEMOS".
After I change the config target to "teach.model.temos.TEMOS" manually, there is still an error:
ImportError: Error loading module 'temos.model.motiondecoder.ActorAgnosticDecoder'
I think there is something wrong with the ckpt file; maybe independent-bsl was recorded in a different file system.

Unable to load weights from pytorch checkpoint

When I run the demo interact_teach.py, I get the following error:
OSError: Error instantiating 'teach.model.temos.TEMOS' : Error instantiating 'teach.model.textencoder.distilbert_transformer.DistilbertEncoderTransformer' : Unable to load weights from pytorch checkpoint file for '/root/autodl-tmp/github/teach/deps/distilbert-base-uncased' at '/root/autodl-tmp/github/teach/deps/distilbert-base-uncased/pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.

ValueError: 'deps' is not in list

Hi, Thanks for your amazing work!

I am trying to use interact_teach.py. I have downloaded the model weights and set up the paths as in the README. However, this issue still happens.
Just wondering, does this issue relate to Windows, or to not using CUDA? My current hardware is just a Windows notebook without an Nvidia GPU.

python interact_teach.py folder=experiment output=output texts='[walk]' durs='[1]'
Global seed set to 1234
[20/07/23 10:49:48][__main__][INFO] - Loading model
Error executing job with overrides: ['folder=experiment', 'output=output', 'texts=[walk]', 'durs=[1]']

rel_p = rel_p[rel_p.index('deps'):]

ValueError: 'deps' is not in list

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\interact_teach.py", line 94, in <module>
_interact()
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\main.py", line 48, in decorated_main
_run_hydra(
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\utils.py", line 377, in _run_hydra
run_and_report(
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\utils.py", line 294, in run_and_report
raise ex
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\utils.py", line 211, in run_and_report
return func()
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\utils.py", line 378, in <lambda>
lambda: hydra.run(
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\hydra.py", line 111, in run
_ = ret.return_value
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\core\utils.py", line 233, in return_value
raise self._return_value
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\core\utils.py", line 160, in run_job
ret.return_value = task_function(task_cfg)
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\interact_teach.py", line 35, in _interact
return interact(cfg)
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\interact_teach.py", line 57, in interact
model = instantiate(cfg.model,
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 180, in instantiate
return instantiate_node(config, *args, recursive=_recursive_, convert=_convert_)
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 249, in instantiate_node
return _call_target(_target_, *args, **kwargs)
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 64, in _call_target
raise type(e)(
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 62, in _call_target
return _target_(*args, **kwargs)
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\teach\model\teach.py", line 55, in __init__
self.transforms = instantiate(transforms)
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 180, in instantiate
return instantiate_node(config, *args, recursive=_recursive_, convert=_convert_)
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 245, in instantiate_node
value = instantiate_node(
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 249, in instantiate_node
return _call_target(_target_, *args, **kwargs)
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 64, in _call_target
raise type(e)(
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\venv\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 62, in _call_target
return _target_(*args, **kwargs)
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\teach\transforms\rots2rfeats\globvelandy.py", line 36, in __init__
super().__init__(path=path, normalization=normalization)
File "C:\Users\Ryan_liu\platform-tools_r34.0.0-windows\platform-tools\teach_proj\teach\transforms\rots2rfeats\base.py", line 43, in __init__
rel_p = rel_p[rel_p.index('deps'):]

ValueError: Error instantiating 'teach.model.teach.TEACH' : Error instantiating 'teach.transforms.rots2rfeats.globvelandy.Globalvelandy' : 'deps' is not in list

Training Process

"You are using a SMPL+H model, with only 10 shape coefficients." the waring comes when I process the dataset. Is that warning normal?

And when I retrained model, the loss was always Nan. Is there something wrong with the dataset?

How to run the demo?

Did someone successfully run the demo? What's wrong with my commands?
[screenshot]
Can someone give an example of a correct demo run?

problem in interact_teach.py

Hello!
I tested your code separately on two different machines; the test succeeded on the first, but the following error appeared on the second, even though I made no modifications. The error message says "Path /data/zss/Virtual_patient/teach/data/zss/Virtual_patient/teach/data/smpl_models/smplh does not exist!",
but the path should be /data/zss/Virtual_patient/teach/data/smpl_models/smplh; I don't understand why the path is repeated. The second machine runs Ubuntu 20.04 with two 3090 GPUs.

===============================================================================================

(zss_teach) desc@xxx:/data/zss/Virtual_patient/teach$ python interact_teach.py folder=./checkpoints/teach output=./output4 texts='[walk, sit down, wave hands, drink water, stand up]' durs='[5,5,5,5,5]'
Global seed set to 1234
[03/11/22 11:12:06][__main__][INFO] - Loading model
[03/11/22 11:12:07][torch.distributed.nn.jit.instantiator][INFO] - Created a temporary directory at /tmp/tmpkmgjgrnl
[03/11/22 11:12:07][torch.distributed.nn.jit.instantiator][INFO] - Writing /tmp/tmpkmgjgrnl/_remote_module_non_sriptable.py
Error executing job with overrides: ['folder=./checkpoints/teach', 'output=./output4', 'texts=[walk, sit down, wave hands, drink water, stand up]', 'durs=[5,5,5,5,5]']
Traceback (most recent call last):
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 62, in _call_target
return _target_(*args, **kwargs)
File "/data/zss/Virtual_patient/teach/teach/transforms/rots2joints/smplh.py", line 58, in __init__
self.smplh = SMPLHLayer(path, ext="pkl", gender=gender).eval()
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/smplx/body_models.py", line 761, in __init__
super(SMPLHLayer, self).__init__(
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/smplx/body_models.py", line 575, in __init__
assert osp.exists(smplh_path), 'Path {} does not exist!'.format(
AssertionError: Path /data/zss/Virtual_patient/teach/data/zss/Virtual_patient/teach/data/smpl_models/smplh does not exist!

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 62, in _call_target
return _target_(*args, **kwargs)
File "/data/zss/Virtual_patient/teach/teach/model/teach.py", line 55, in __init__
self.transforms = instantiate(transforms)
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 180, in instantiate
return instantiate_node(config, *args, recursive=_recursive_, convert=_convert_)
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 245, in instantiate_node
value = instantiate_node(
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 249, in instantiate_node
return _call_target(_target_, *args, **kwargs)
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 64, in _call_target
raise type(e)(
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 62, in _call_target
return _target_(*args, **kwargs)
File "/data/zss/Virtual_patient/teach/teach/transforms/rots2joints/smplh.py", line 58, in __init__
self.smplh = SMPLHLayer(path, ext="pkl", gender=gender).eval()
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/smplx/body_models.py", line 761, in __init__
super(SMPLHLayer, self).__init__(
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/smplx/body_models.py", line 575, in __init__
assert osp.exists(smplh_path), 'Path {} does not exist!'.format(
AssertionError: Error instantiating 'teach.transforms.rots2joints.smplh.SMPLH' : Path /data/zss/Virtual_patient/teach/data/zss/Virtual_patient/teach/data/smpl_models/smplh does not exist!

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/data/zss/Virtual_patient/teach/interact_teach.py", line 35, in _interact
return interact(cfg)
File "/data/zss/Virtual_patient/teach/interact_teach.py", line 57, in interact
model = instantiate(cfg.model,
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 180, in instantiate
return instantiate_node(config, *args, recursive=_recursive_, convert=_convert_)
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 249, in instantiate_node
return _call_target(_target_, *args, **kwargs)
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 64, in _call_target
raise type(e)(
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 62, in _call_target
return _target_(*args, **kwargs)
File "/data/zss/Virtual_patient/teach/teach/model/teach.py", line 55, in __init__
self.transforms = instantiate(transforms)
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 180, in instantiate
return instantiate_node(config, *args, recursive=_recursive_, convert=_convert_)
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 245, in instantiate_node
value = instantiate_node(
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 249, in instantiate_node
return _call_target(_target_, *args, **kwargs)
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 64, in _call_target
raise type(e)(
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 62, in _call_target
return _target_(*args, **kwargs)
File "/data/zss/Virtual_patient/teach/teach/transforms/rots2joints/smplh.py", line 58, in __init__
self.smplh = SMPLHLayer(path, ext="pkl", gender=gender).eval()
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/smplx/body_models.py", line 761, in __init__
super(SMPLHLayer, self).__init__(
File "/data/miniconda3/envs/zss_teach/lib/python3.9/site-packages/smplx/body_models.py", line 575, in __init__
assert osp.exists(smplh_path), 'Path {} does not exist!'.format(
AssertionError: Error instantiating 'teach.model.teach.TEACH' : Error instantiating 'teach.transforms.rots2joints.smplh.SMPLH' : Path /data/zss/Virtual_patient/teach/data/zss/Virtual_patient/teach/data/smpl_models/smplh does not exist!

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

Quantitative results correspond to epoch #?

Can you please confirm the epoch number for which the quantitative results (APE and AVE) are reported in the paper? It is mentioned as 600 epochs in the paper, but max_epochs in the config.yaml of the provided pretrained model is 1001.

Interaction Teach

Hi, I noticed you have fixed some code, so I downloaded the latest version and found something wrong.
[screenshot: 2023-02-17 08-38-04]

Could you please have a look at this problem?

A problem when using sample_seq.py with pre-trained ckpts.

Hi author,
Thank you for your nice work! I encountered a problem when using the pre-trained model (downloaded from the project website): a module cannot be imported. The log is as follows:

(MMHuman3d) root:/teach-master# bash sample.sh
[08/11/22 19:42:41][__main__][INFO] - Sample script. The outputs will be stored in:
[08/11/22 19:42:41][__main__][INFO] - /home/baiye03/VirtualHuman/teach-master/exp/teach/samples_slerp_aligned_pairs/checkpoint-last/val
Global seed set to 1234
[08/11/22 19:42:41][__main__][INFO] - Loading data module
Error executing job with overrides: ['folder=exp/teach/', 'align=full', 'slerp_ws=8']
Traceback (most recent call last):
File "/opt/miniconda3/envs/MMHuman3d/lib/python3.8/site-packages/hydra/_internal/utils.py", line 570, in _locate
module = import_module(mod)
File "/opt/miniconda3/envs/MMHuman3d/lib/python3.8/importlib/init.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 1011, in _gcd_import
File "", line 950, in _sanity_check
ValueError: Empty module name

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "sample_seq.py", line 37, in _sample
return sample(cfg)
File "sample_seq.py", line 118, in sample
data_module = instantiate(cfg.data)
File "/opt/miniconda3/envs/MMHuman3d/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py",
line 180, in instantiate
return instantiate_node(config, *args, recursive=recursive, convert=convert)
File "/opt/miniconda3/envs/MMHuman3d/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py",
line 245, in instantiate_node
value = instantiate_node(
File "/opt/miniconda3/envs/MMHuman3d/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py",
line 240, in instantiate_node
target = _resolve_target(node.get(_Keys.TARGET))
File "/opt/miniconda3/envs/MMHuman3d/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py",
line 104, in _resolve_target
return _locate(target)
File "/opt/miniconda3/envs/MMHuman3d/lib/python3.8/site-packages/hydra/_internal/utils.py", line 573, in _locate
raise ImportError(f"Error loading module '{path}'") from e
ImportError: Error loading module 'teachg.data.sampling.FrameSampler'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

How to fix this error?

Regards,
by2101

Where to download the SMPLH_MALE.pkl file?

Thanks for the great work!
I want to know where to download the SMPLH_MALE.pkl file. There seems to be no corresponding file on the SMPL-X website (https://smpl-x.is.tue.mpg.de/download.php).

I tried to download the SMPLH_male.pkl file from https://mano.is.tue.mpg.de/download.php, according to vchoutas/smplx#10. But unfortunately, it caused an error:

Traceback (most recent call last):
File "E:\TeachLocal\teach\scripts\process_amass.py", line 262, in <module>
db = read_data(input_dir, model_type, output_dir, use_betas, gender)
final_seq_data = process_sequence(seq, use_betas, gender)
File "E:\TeachLocal\teach\scripts\process_amass.py", line 175, in process_sequence
bodymodel_seq = get_body_model(model_type, gender_of_seq if gender=='amass' else gender,
File "E:\TeachLocal\teach.\teach\transforms\smpl.py", line 171, in get_body_model
body_model = smplx.create(body_model_path, model_type=type,
File "C:\ProgramData\Anaconda3\envs\teach\lib\site-packages\smplx\body_models.py", line 2402, in create
return SMPLH(model_path, **kwargs)
File "C:\ProgramData\Anaconda3\envs\teach\lib\site-packages\smplx\body_models.py", line 601, in __init__
left_hand_components = data_struct.hands_componentsl[:num_pca_comps]
AttributeError: 'Struct' object has no attribute 'hands_componentsl'

Out file transfer to FBX

Hi.

I'd like to try the model in Unreal Engine; it seems an FBX file is the fastest way to do that. Is it possible to convert the output file to an FBX file?

Or is there any other better way to do that?
