
language2pose's Introduction

Language2Pose: Natural Language Grounded Pose Forecasting

There are 5 steps to running this code:

  • Python Virtual Environment and dependencies
  • Data download and preprocessing
  • Training
  • Sampling
  • Rendering

PS: The implementation of one of the baselines, proposed by Lin et al. [1], was not publicly available, so we use our own implementation of their model to generate all results and animations marked as Lin et al. Due to differences in training hyperparameters, dataset and experiments, the numbers reported for Lin et al. in our paper differ from those in the original paper [1].

PS: This repo, at the moment, is functional at best. Feel free to create issues/pull requests however you see fit.


Python Virtual Environment

Anaconda is recommended for creating the virtual environment:

conda env create -f env.yaml
source activate torch

To handle the logistics of saving/loading models, pycasper is used:

git clone https://github.com/chahuja/pycasper
cd src 
ln -s ../pycasper/pycasper .
cd ..
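
To verify the symlink resolves (an optional sanity check; pycasper.BookKeeper is the module the scripts in this repo import):

cd src
python -c "from pycasper.BookKeeper import *"
cd ..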

Data

Download

We use the KIT Motion-Language Dataset, which can be downloaded here:

wget https://motion-annotation.humanoids.kit.edu/downloads/4/2017-06-22.zip
mkdir -p dataset/kit-mocap
unzip 2017-06-22.zip -d dataset/kit-mocap
rm 2017-06-22.zip 

Download Word2Vec binaries

Download the binary file here and place it in src/s2v

Pre-trained Models

Download the pretrained models here and place them in src/save

Preprocessing

python data/data.py -dataset KITMocap -path2data ../dataset/kit-mocap

Rendering Ground Truths

python render.py -dataset KITMocap -path2data ../dataset/kit-mocap/new_fke -feats_kind fke

Calculating mean+variance for Z-Normalization

python dataProcessing/meanVariance.py -mask '[0]' -feats_kind rifke -dataset KITMocap -path2data ../dataset/kit-mocap -f_new 8

Training

We train the models using the script train_wordConditioned.py. (Pardon the misnomer; initially it was supposed to be word-conditioned pose forecasting, but I ended up adding sentence-conditioned pose forecasting as well and was too lazy to change the filename.)

All the arguments (and their corresponding help texts) used for training can be found in src/argsUtils.py. (PS: Some of them might be deprecated, but I have not removed them in case that breaks other code I wrote during the experimentation phase. Please raise an issue or send me an email if you have clarification questions about any of the arguments.) It is best to stick to the args used in the examples below if you want to play with the models in the paper.

  • JL2P
python train_wordConditioned.py -batch_size 100 -cpk jl2p -curriculum 1 -dataset KITMocap -early_stopping 1 -exp 1 -f_new 8 -feats_kind rifke -losses "['SmoothL1Loss']" -lr 0.001 -mask "[0]" -model Seq2SeqConditioned9 -modelKwargs "{'hidden_size':1024, 'use_tp':False, 's2v':'lstm'}" -num_epochs 1000 -path2data ../dataset/kit-mocap -render_list subsets/render_list -s2v 1 -save_dir save/model/ -tb 1 -time 16 -transforms "['zNorm']" 

-modelKwargs needs some explanation, as its contents vary by model:

hidden_size: size of the joint embedding
use_tp: use a trajectory predictor [1]; False for JL2P models
s2v: sentence-to-vector model ('lstm' or 'bert')
  • Our Implementation of Lin et al. [1]
python train_seq2seq.py -batch_size 100 -cpk lin -curriculum 0 -dataset KITMocap -early_stopping 1 -exp 1 -f_new 8 -feats_kind rifke -losses "['MSELoss']" -lr 0.001 -mask "[0]" -model Seq2Seq -modelKwargs "{'hidden_size':1024, 'use_tp':True, 's2v':'lstm'}" -num_epochs 1000 -path2data ../dataset/kit-mocap -render_list subsets/render_list -s2v 1 -save_dir save/model -tb 1 -time 16 -transforms "['zNorm']"

This model has two training steps. train_seq2seq.py uses a seq2seq model to first learn an embedding for pose sequences. Once that training is complete, train_wordConditioned.py is called, which learns a mapping from language embeddings to pose embeddings.


Sampling

Sampling from trained Models

The training scripts sample from the model once the stopping criterion is reached, but if you would like to sample manually, run the following script:

python sample_wordConditioned.py -load <path-to-weights.p>

<path-to-weights.p> ends in _weights.p

Using Pretrained Models

Make sure you have downloaded the pre-trained models as described here.

  • JL2P
python sample_wordConditioned.py -load save/jl2p/exp_726_cpk_jointSampleStart_model_Seq2SeqConditioned9_time_16_chunks_1_weights.p
  • Our Implementation of Lin et al. [1]
python sample_wordConditioned.py -load save/lin-et-al/exp_700_cpk_mooney_model_Seq2SeqConditioned10_time_16_chunks_1_weights.p 

Rendering

After sampling, it is nice to see what animations the model generates. We only use the test samples for rendering.

If possible, use a machine with many CPU cores, as rendering animations with matplotlib is painfully slow. render.py uses all available cores for parallel processing.

Using your trained model

python render.py -dataset KITMocap -load <path-to-weights.p> -feats_kind fke -render_list subsets/render_list

Using pre-trained Models

  • JL2P
python render.py -dataset KITMocap -load save/jl2p/exp_726_cpk_jointSampleStart_model_Seq2SeqConditioned9_time_16_chunks_1_weights.p -feats_kind fke -render_list subsets/render_list
  • Our Implementation of Lin et al. [1]
python render.py -dataset KITMocap -load save/lin-et-al/exp_700_cpk_mooney_model_Seq2SeqConditioned10_time_16_chunks_1_weights.p -feats_kind fke -render_list subsets/render_list

References

[1]: Lin, Angela S., et al. "Generating Animated Videos of Human Activities from Natural Language Descriptions." Visually Grounded Interaction and Language Workshop, NeurIPS 2018.


language2pose's Issues

ValueError: cannot convert float NaN to integer

This command
python render.py -dataset KITMocap -path2data ../dataset/kit-mocap/new_fke -feats_kind fke

Throws this error:
...
3422 ../dataset/kit-mocap/new_fke/03639_quat.fke
3423 ../dataset/kit-mocap/new_fke/03773_quat.fke
3424 ../dataset/kit-mocap/new_fke/01499_quat.fke
3322 ../dataset/kit-mocap/new_fke/03848_quat.fke
3323 ../dataset/kit-mocap/new_fke/03899_quat.fke
3324 ../dataset/kit-mocap/new_fke/01888_quat.fke
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/home/my/anaconda3/envs/l2p/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/my/anaconda3/envs/l2p/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/home/my/Jupyter/temp/language2pose/src/renderUtils.py", line 40, in readNrender
    render(xyz_data, skel, time, output, figsize, description)
  File "/home/my/Jupyter/temp/language2pose/src/data/kit_visualization.py", line 156, in render
    suptitle=description)
  File "/home/my/Jupyter/temp/language2pose/src/utils/visualization.py", line 242, in render_animation
    anim = FuncAnimation(fig, update, frames=np.arange(history*history_offset, num_frames), interval=1000/fps, repeat=False, fargs=init_func(fig, figures, skeleton, figsize, fps))
  File "/home/my/Jupyter/temp/language2pose/src/utils/visualization.py", line 122, in init_func
    draw_offset = int(25/avg_segment_length)
ValueError: cannot convert float NaN to integer
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "render.py", line 85, in <module>
    argparseNloop(loop)
  File "/home/my/Jupyter/temp/language2pose/src/argsUtils.py", line 137, in argparseNloop
    loop(args, i)
  File "render.py", line 82, in loop
    parallelRender(filenames, descriptions, outputs, skel, args.feats_kind)
  File "/home/my/Jupyter/temp/language2pose/src/renderUtils.py", line 77, in parallelRender
    parallel(readNrender, zip(filenums, filenames, descriptions, skels, times, outputs, figsizes, feats_kind))
  File "/home/my/Jupyter/temp/language2pose/src/common/parallel.py", line 6, in parallel
    p.map(fn, args)
  File "/home/my/anaconda3/envs/l2p/lib/python3.7/multiprocessing/pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/home/my/anaconda3/envs/l2p/lib/python3.7/multiprocessing/pool.py", line 657, in get
    raise self._value
ValueError: cannot convert float NaN to integer

AttributeError: 'Namespace' object has no attribute 'args'

Why do I get this error? :(

(torch) Seyeeet@Seyeeet:/media/Seyeeet/Office/language2pose/src$ python dataProcessing/meanVariance.py -mask '[0]' -feats_kind rifke -dataset KITMocap -path2data /media/Seyeeet/Office/language2pose/dataset/kit-mocap  -f_new 8
Namespace(angles=[[90]], batch_size=[100], chunks=[1], clean_render=[1], config=[None], cpk=['m'], cuda=[0], curriculum=[0], dataset=['KITMocap'], debug=[0], desc=[None], dev_frac=[0.2], early_stopping=[1], eps=[0], exp=[None], f_new=[8], feats_kind=['rifke'], greedy_save=[1], idx_dependent=[1], kl_anneal=[0], lmksSubset=[['all']], load=[None], lossKwargs=[[{'reduction': 'sum'}]], losses=[['MSELoss']], lr=[0.001], mask=[[0]], model=['Autoencoder'], modelKwargs=[{'num_channels_list': [40, 40, 20]}], num_epochs=[50], offset=[0], overfit=[0], path2data=['/media/Seyeeet/Office/language2pose/dataset/kit-mocap'], pose_mask=[0], render=['inf'], render_list=[None], s2v=[0], save_dir=['save/model'], save_model=[1], script=[None], seed=[11212], seedLength=[20], stop_thresh=[3], tb=[0], time=[32], train_frac=[0.6], transforms=[['zNorm']], view=['sentences.txt'])
[]
Namespace(angles=[90], batch_size=100, chunks=1, clean_render=1, config=None, cpk='m', cuda=0, curriculum=0, dataset='KITMocap', debug=0, desc=None, dev_frac=0.2, early_stopping=1, eps=0, exp=None, f_new=8, feats_kind='rifke', greedy_save=1, idx_dependent=1, kl_anneal=0, lmksSubset=['all'], load=None, lossKwargs=[{'reduction': 'sum'}], losses=['MSELoss'], lr=0.001, mask=[0], model='Autoencoder', modelKwargs={'num_channels_list': [40, 40, 20]}, num_epochs=50, offset=0, overfit=0, path2data='/media/Seyeeet/Office/language2pose/dataset/kit-mocap', pose_mask=0, render='inf', render_list=None, s2v=0, save_dir='save/model', save_model=1, script=None, seed=11212, seedLength=20, stop_thresh=3, tb=0, time=32, train_frac=0.6, transforms=['zNorm'], view='sentences.txt')
Traceback (most recent call last):
  File "dataProcessing/meanVariance.py", line 96, in <module>
    argparseNloop(loop)
  File "/media/Seyeeet/Office/language2pose/src/argsUtils.py", line 137, in argparseNloop
    loop(args, i)
  File "dataProcessing/meanVariance.py", line 27, in loop
    BookKeeper._set_seed(args)
  File "/media/Seyeeet/Office/language2pose/src/pycasper/BookKeeper.py", line 154, in _set_seed
    random.seed(self.args.seed)
AttributeError: 'Namespace' object has no attribute 'args'
(torch) Seyeeet@Seyeeet:/media/Seyeeet/Office/language2pose/src$ 
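
Judging from the traceback, BookKeeper._set_seed receives the argparse Namespace in place of self and then looks up self.args.seed, which suggests a version mismatch between this repo and pycasper. Pulling the latest pycasper is the clean fix; failing that, a hypothetical local workaround is to seed the RNGs directly inside loop() in dataProcessing/meanVariance.py instead of calling BookKeeper._set_seed(args):

import random
import numpy as np
import torch

# hypothetical replacement for BookKeeper._set_seed(args) inside loop(args, i);
# seeds the usual RNGs with the seed from the args dump above (seed=11212)
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)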

AssertionError: Rotation invariant Forward Kinematics have 3 parameters for root joint

When I run Calculating mean+variance for Z-Normalization:

python dataProcessing/meanVariance.py -mask '[0]' -feats_kind rifke -dataset KITMocap -path2data ../dataset/kit-mocap -f_new 8

Error:

Traceback (most recent call last):
  File "dataProcessing/meanVariance.py", line 96, in <module>
    argparseNloop(loop)
  File "F:\Speech2Gesture\language2pose\src\argsUtils.py", line 137, in argparseNloop
    loop(args, i)
  File "dataProcessing/meanVariance.py", line 47, in loop
    f_new=f_new)
  File "F:\Speech2Gesture\language2pose\src\dataUtils.py", line 37, in __init__
    assert len(mask) == 1, 'Rotation invariant Forward Kinematics have 3 parameters for root joint'
AssertionError: Rotation invariant Forward Kinematics have 3 parameters for root joint

So I printed len(mask):

len(mask) = 3

How to address it?
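
A plausible explanation, inferred from the F:\ path in the traceback: on Windows, cmd.exe does not strip single quotes, so -mask '[0]' reaches the argument parser as the literal text '[0]' (quotes included), which evaluates to the three-character string "[0]" rather than the list [0]. Re-running with double quotes should give len(mask) == 1:

python dataProcessing/meanVariance.py -mask "[0]" -feats_kind rifke -dataset KITMocap -path2data ../dataset/kit-mocap -f_new 8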

Word2Vec loading time

Hi :)
Loading word2vec takes forever (i.e., a few minutes).
Is it supposed to be like that? Is there a way to avoid this loading time?
Thanks!

self.model = gensim.models.KeyedVectors.load_word2vec_format(path2file, binary=True)
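
One way to cut this startup cost (a sketch, assuming gensim is installed and the Google News binary sits in src/s2v; the filenames below are assumptions): convert the binary to gensim's native KeyedVectors format once, then memory-map it on every later load.

import gensim

# one-time conversion: the slow binary parse happens only once
kv = gensim.models.KeyedVectors.load_word2vec_format('s2v/GoogleNews-vectors-negative300.bin', binary=True)
kv.save('s2v/word2vec.kv')

# subsequent loads are near-instant because the vectors are memory-mapped
kv = gensim.models.KeyedVectors.load('s2v/word2vec.kv', mmap='r')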

ValueError: cannot convert float NaN to integer

I am getting this error, any suggestions?
multiprocessing.pool.RemoteTraceback:
"""

Traceback (most recent call last):
  File "/home/seyeeet/anaconda3/envs/torch/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/seyeeet/anaconda3/envs/torch/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/media/seyeeet/Office/language2pose/src/renderUtils.py", line 40, in readNrender
    render(xyz_data, skel, time, output, figsize, description)
  File "/media/seyeeet/Office/language2pose/src/data/kit_visualization.py", line 156, in render
    suptitle=description)
  File "/media/seyeeet/Office/language2pose/src/utils/visualization.py", line 242, in render_animation
    anim = FuncAnimation(fig, update, frames=np.arange(history*history_offset, num_frames), interval=1000/fps, repeat=False, fargs=init_func(fig, figures, skeleton, figsize, fps))
  File "/media/seyeeet/Office/language2pose/src/utils/visualization.py", line 122, in init_func
    draw_offset = int(25/avg_segment_length)
ValueError: cannot convert float NaN to integer
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/media/seyeeet/Office/language2pose/src/render.py", line 85, in <module>
    argparseNloop(loop)
  File "/media/seyeeet/Office/language2pose/src/argsUtils.py", line 137, in argparseNloop
    loop(args, i)
  File "/media/seyeeet/Office/language2pose/src/render.py", line 82, in loop
    parallelRender(filenames, descriptions, outputs, skel, args.feats_kind)
  File "/media/seyeeet/Office/language2pose/src/renderUtils.py", line 77, in parallelRender
    parallel(readNrender, zip(filenums, filenames, descriptions, skels, times, outputs, figsizes, feats_kind))
  File "/media/seyeeet/Office/language2pose/src/common/parallel.py", line 6, in parallel
    p.map(fn, args)
  File "/home/seyeeet/anaconda3/envs/torch/lib/python3.7/multiprocessing/pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/home/seyeeet/anaconda3/envs/torch/lib/python3.7/multiprocessing/pool.py", line 657, in get
    raise self._value
ValueError: cannot convert float NaN to integer

When running render.py matplotlib showing errors

Traceback (most recent call last):
  File "render.py", line 4, in <module>
    from dataUtils import *
  File "/mnt/D4567C46567C2AFE/Work HDD/Research/test/language2pose/src/dataUtils.py", line 5, in <module>
    from data.data import *
  File "/mnt/D4567C46567C2AFE/Work HDD/Research/test/language2pose/src/data/data.py", line 18, in <module>
    from utils.visualization import *
  File "/mnt/D4567C46567C2AFE/Work HDD/Research/test/language2pose/src/utils/visualization.py", line 13, in <module>
    from matplotlib.animation import FuncAnimation, writers
  File "/home/hanSolo/anaconda3/envs/torch/lib/python3.7/site-packages/matplotlib/animation.py", line 737, in <module>
    class ImageMagickWriter(ImageMagickBase, MovieWriter):
  File "/home/hanSolo/anaconda3/envs/torch/lib/python3.7/site-packages/matplotlib/animation.py", line 120, in wrapper
    if writerClass.isAvailable():
  File "/home/hanSolo/anaconda3/envs/torch/lib/python3.7/site-packages/matplotlib/animation.py", line 730, in isAvailable
    return super().isAvailable()
  File "/home/hanSolo/anaconda3/envs/torch/lib/python3.7/site-packages/matplotlib/animation.py", line 427, in isAvailable
    return shutil.which(cls.bin_path()) is not None
  File "/home/hanSolo/anaconda3/envs/torch/lib/python3.7/site-packages/matplotlib/animation.py", line 724, in bin_path
    binpath = mpl._get_executable_info('magick').executable
  File "/home/hanSolo/anaconda3/envs/torch/lib/python3.7/site-packages/matplotlib/__init__.py", line 384, in _get_executable_info
    return impl([path, "--version"], r"^Version: ImageMagick (\S*)")
  File "/home/hanSolo/anaconda3/envs/torch/lib/python3.7/site-packages/matplotlib/__init__.py", line 324, in impl
    args, stderr=subprocess.STDOUT, universal_newlines=True)
  File "/home/hanSolo/anaconda3/envs/torch/lib/python3.7/subprocess.py", line 395, in check_output
    **kwargs).stdout
  File "/home/hanSolo/anaconda3/envs/torch/lib/python3.7/subprocess.py", line 472, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/home/hanSolo/anaconda3/envs/torch/lib/python3.7/subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "/home/hanSolo/anaconda3/envs/torch/lib/python3.7/subprocess.py", line 1522, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
NotADirectoryError: [Errno 20] Not a directory: 'convert'
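
For what it's worth, the failing frames show matplotlib probing for ImageMagick's convert binary at import time, and 'convert' on the PATH not being a usable executable. Installing (or repairing) ImageMagick should clear it; this is an inference from the traceback, not a confirmed fix:

sudo apt-get install imagemagick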

[BERT] AttributeError: 'NoneType' object has no attribute 'embeddings'

When I use the "bert" model in nlp/bert.py, the error says the BertModel was not found.

Model name 'bert-base-uncased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz' was a path or url but couldn't find any file associated to this path or url.

File "language2pose/src/model/model.py", line 246, in init
self.sentence_enc = BertForSequenceEmbedding(self.hidden_size)
File "language2pose/src/nlp/bert.py", line 27, in init
toggle_grad(self.bert.embeddings, False)
AttributeError: 'NoneType' object has no attribute 'embeddings'

I solved it by downloading the pretrained model manually and replacing the loading code. For more information, please check https://www.cnblogs.com/lian1995/p/11947522.html for details.
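
A sketch of that workaround (assuming the pytorch_pretrained_bert package; the local directory name is hypothetical): from_pretrained also accepts a local path, so the failing S3 download can be bypassed by fetching and extracting the archive once.

# one-time manual download (shell):
#   wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz
#   mkdir bert-base-uncased && tar -xzf bert-base-uncased.tar.gz -C bert-base-uncased

from pytorch_pretrained_bert import BertModel

# from_pretrained accepts a local directory containing bert_config.json and
# pytorch_model.bin, so no network access is needed afterwards
bert = BertModel.from_pretrained('./bert-base-uncased')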

FileNotFoundError: [Errno 2] No such file or directory: 'sentences.txt'

Hi,

Thanks for your great work and the public code!
After early stopping kicked in and I restarted training, an error occurred:

File "train_wordConditioned.py", line 285, in
argparseNloop(train)
File "language2pose/src/argsUtils.py", line 137, in argparseNloop
loop(args, i)
File "train_wordConditioned.py", line 279, in train
render_new_sentences(args, exp_num, data)
File "language2pose/src/sample_wordConditioned_newSentence.py", line 213, in sample
with open(args.view, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'sentences.txt'

Where can I find "sentences.txt"?

Thanks!
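
For what it's worth, the args dump in an earlier issue shows view='sentences.txt' as a default argument value, so the script appears to expect a plain-text file of sentences to render, placed in src/. The format below is an assumption (one description per line), not confirmed by the repo:

A person walks forward.
A person waves with the left hand.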

about Seq2Seq model

You only give the commands for the second step of the training process; the commands for training the seq2seq model in the first step are not given.

Error while running "sample_wordConditioned.py"

File "sample_wordConditioned_newSentence.py", line 219, in
argparseNloop(sample)
File "/mnt/D4567C46567C2AFE/Work HDD/Research/Workflows/Pose-Generation/Repos/language2pose/src/argsUtils.py", line 137, in argparseNloop
loop(args, i)
File "sample_wordConditioned_newSentence.py", line 112, in sample
f_new=f_new)
File "/mnt/D4567C46567C2AFE/Work HDD/Research/Workflows/Pose-Generation/Repos/language2pose/src/dataUtils.py", line 81, in init
self.datasets = self.tdt_split()
File "/mnt/D4567C46567C2AFE/Work HDD/Research/Workflows/Pose-Generation/Repos/language2pose/src/dataUtils.py", line 130, in tdt_split
**minidataKwargs) for i, row in tqdm(df_train.iterrows()) if row['descriptions']])
File "/mnt/D4567C46567C2AFE/Work HDD/Research/Workflows/Pose-Generation/Repos/language2pose/src/dataUtils.py", line 130, in
**minidataKwargs) for i, row in tqdm(df_train.iterrows()) if row['descriptions']])
File "/mnt/D4567C46567C2AFE/Work HDD/Research/Workflows/Pose-Generation/Repos/language2pose/src/dataUtils.py", line 258, in init
self.mat = self.mat_full[self.columns_subset].values.astype(np.float64)
AttributeError: 'MiniData' object has no attribute 'columns_subset'

How to generate poses from given sentences?

Hi, thank you for sharing the code. I would like to ask: what is the difference between sample_wordConditioned.py and sample_wordConditioned_newSentence.py? When I generate a pose using the sentences.txt file, it isn't the same pose as what I get from sample_wordConditioned.py, even though I give a sentence that is already in the dataset. Is there a difference in how the two scripts work?

Question on visualizing the data

Hello, and thanks for your work. I was wondering if you could tell me how you were able to animate the joint values from the XML files, given that some coordinates are sometimes missing? Take, for example, the joint LKx_joint, which only has the x-coordinate. How would you use a 3D matplotlib plot to visualize these joints? I tried examining your code, but I can't find the place where you read the XML file, parse the joint values, and visualize them.

I would be grateful for any tips.

evaluation

I was not able to achieve the same numbers as reported in the paper for the evaluation part. Is it possible to provide the code for the evaluation metric and instructions on how it should be run?

fke2rifke slicing in data.py

Hello,

First of all, thanks for releasing your work/code.
I might have found an issue in the code; tell me what you think.

In data/data.py line 327, you define rifke_dict, which seems to give correspondences between parts of the body and indices.

In lines 148-150, you use some of these indices to extract the feet positions:

fid_l, fid_r = self.rifke_dict['fid_l'], self.rifke_dict['fid_r']
foot_heights = np.minimum(positions[:,fid_l,1], positions[:,fid_r,1]).min(axis=1)
floor_height = softmin(foot_heights, softness=0.5, axis=0)

If I understand correctly, the shape of positions is [number_of_frames, number_of_joints, 3].

On line 158, you add the "reference joint" by concatenating it at the beginning of the "joints" axis:

positions = np.concatenate([reference[:,np.newaxis], positions], axis=1)

From then on, I think the indices are shifted (by 1). So when you use the correspondence table rifke_dict again, the indices don't match the parts of the body.

Lines 163-166:

 feet_l_x = (positions[1:,fid_l,0] - positions[:-1,fid_l,0])**2
 ...

Or lines 184-187:

sdr_l, sdr_r, hip_l, hip_r = self.rifke_dict['sdr_l'], self.rifke_dict['sdr_r'], self.rifke_dict['hip_l'], self.rifke_dict['hip_r']
    
across1 = positions[:,hip_l] - positions[:,hip_r]
across0 = positions[:,sdr_l] - positions[:,sdr_r]
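
In other words, every index cached in rifke_dict would need a +1 offset after the concatenation. A minimal sketch of the shift, with hypothetical shapes and indices (not repo code):

import numpy as np

positions = np.zeros((10, 21, 3))   # [number_of_frames, number_of_joints, 3]
reference = np.zeros((10, 3))       # the "reference joint" trajectory
fid_l = [4, 5]                      # indices from rifke_dict, valid before concatenation

positions = np.concatenate([reference[:, np.newaxis], positions], axis=1)
# positions.shape is now (10, 22, 3): every original joint j sits at index j + 1,
# so fid_l would have to become [5, 6] for positions[:, fid_l, 1] to hit the feet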

Thanks.

AttributeError: 'list' object has no attribute 'shape' when rendering the ground truths

multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/home/hanSolo/anaconda3/envs/test-torch/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/hanSolo/anaconda3/envs/test-torch/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/mnt/D4567C46567C2AFE/Work HDD/Research/test/language2pose/src/renderUtils.py", line 40, in readNrender
    render(xyz_data, skel, time, output, figsize, description)
  File "/mnt/D4567C46567C2AFE/Work HDD/Research/test/language2pose/src/data/kit_visualization.py", line 156, in render
    suptitle=description)
  File "/mnt/D4567C46567C2AFE/Work HDD/Research/test/language2pose/src/utils/visualization.py", line 288, in render_animation
    anim.save(output, writer=writer)
  File "/home/hanSolo/anaconda3/envs/test-torch/lib/python3.7/site-packages/matplotlib/animation.py", line 1141, in save
    anim._draw_next_frame(d, blit=False)
  File "/home/hanSolo/anaconda3/envs/test-torch/lib/python3.7/site-packages/matplotlib/animation.py", line 1176, in _draw_next_frame
    self._draw_frame(framedata)
  File "/home/hanSolo/anaconda3/envs/test-torch/lib/python3.7/site-packages/matplotlib/animation.py", line 1726, in _draw_frame
    self._drawn_artists = self._func(framedata, *self._args)
  File "/mnt/D4567C46567C2AFE/Work HDD/Research/test/language2pose/src/utils/visualization.py", line 207, in update
    liness[fig_num][count][hist][i-1][0].set_3d_properties([positions_world[count][hist][i, z], positions_world[count][hist][skeleton_parents[i], z]], zdir='y')
  File "/home/hanSolo/anaconda3/envs/test-torch/lib/python3.7/site-packages/mpl_toolkits/mplot3d/art3d.py", line 143, in set_3d_properties
    zs = np.broadcast_to(zs, xs.shape)
AttributeError: 'list' object has no attribute 'shape'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "render.py", line 85, in <module>
    argparseNloop(loop)
  File "/mnt/D4567C46567C2AFE/Work HDD/Research/test/language2pose/src/argsUtils.py", line 137, in argparseNloop
    loop(args, i)
  File "render.py", line 82, in loop
    parallelRender(filenames, descriptions, outputs, skel, args.feats_kind)
  File "/mnt/D4567C46567C2AFE/Work HDD/Research/test/language2pose/src/renderUtils.py", line 77, in parallelRender
    parallel(readNrender, zip(filenums, filenames, descriptions, skels, times, outputs, figsizes, feats_kind))
  File "/mnt/D4567C46567C2AFE/Work HDD/Research/test/language2pose/src/common/parallel.py", line 6, in parallel
    p.map(fn, args)
  File "/home/hanSolo/anaconda3/envs/test-torch/lib/python3.7/multiprocessing/pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/home/hanSolo/anaconda3/envs/test-torch/lib/python3.7/multiprocessing/pool.py", line 657, in get
    raise self._value
AttributeError: 'list' object has no attribute 'shape'
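
The bottom frame suggests a matplotlib version mismatch: mpl_toolkits broadcasts the z values against xs.shape, which breaks when the line data were set from plain Python lists. Converting the arguments of the 3D line setters to numpy arrays sidesteps this; a minimal, self-contained illustration of the pattern (not the repo's actual call site in utils/visualization.py):

import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend, no display needed
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
line, = ax.plot([0, 1], [0, 1], [0, 1])

# arrays instead of lists keep set_3d_properties' broadcast happy
line.set_data(np.asarray([0.0, 1.0]), np.asarray([0.0, 1.0]))
line.set_3d_properties(np.asarray([0.0, 1.0]), zdir='y')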

ModuleNotFoundError: No module named 'argunparse'

Any suggestions for this error?

(torch) seyeeet@seyeeet:/media/seyeeet/Office/language2pose/src$ python /media/seyeeet/Office/language2pose/src/render.py  -dataset KITMocap -path2data /media/seyeeet/Office/language2pose/dataset/kit-mocap/new_fke -feats_kind fke
Traceback (most recent call last):
  File "/media/seyeeet/Office/language2pose/src/render.py", line 4, in <module>
    from dataUtils import *
  File "/media/seyeeet/Office/language2pose/src/dataUtils.py", line 6, in <module>
    from dataProcessing.meanVariance import loadMeanVariance
  File "/media/seyeeet/Office/language2pose/src/dataProcessing/meanVariance.py", line 10, in <module>
    from pycasper.BookKeeper import *
  File "/home/seyeeet/anaconda3/envs/torch/lib/python3.7/site-packages/PyCasper-0.1.1.dev4-py3.7.egg/pycasper/BookKeeper.py", line 12, in <module>
ModuleNotFoundError: No module named 'argunparse'
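
argunparse is a regular PyPI package that pycasper depends on, so installing it into the active environment should resolve the import (an assumption based on the module name):

pip install argunparse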

No module named 'pytorch_pretrained_bert'

When I run:

python data/data.py -dataset KITMocap -path2data ../dataset/kit-mocap

Error:
No module named 'pytorch_pretrained_bert'

The reason is the missing 'pytorch_pretrained_bert' package, which the code in the "nlp" folder imports.
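
pytorch_pretrained_bert is the original PyPI package that later became Hugging Face's transformers; installing it should satisfy the import (assuming the repo targets that package, as the module name suggests):

pip install pytorch_pretrained_bert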

IndexError: single positional indexer is out-of-bounds

Hi, when I am trying to run the preprocessing step, I get an 'IndexError: single positional indexer is out-of-bounds' in data.py, line 302:

self.columns = pd.read_csv(self.df.iloc[0].quaternion, index_col=0).columns

I can't understand what is happening in this line exactly. Am I missing something?
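
For context, df.iloc[0] raises exactly this error whenever the DataFrame has no rows, so a likely (unconfirmed) cause is that the loader found no processed files under -path2data. A minimal reproduction:

import pandas as pd

# an empty frame, as if no mocap files were found during preprocessing
df = pd.DataFrame(columns=['quaternion'])
df.iloc[0]  # IndexError: single positional indexer is out-of-bounds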
