
siMLPe's Introduction

siMLPe

Back to MLP: A Simple Baseline for Human Motion Prediction (WACV 2023)

A simple yet effective network that achieves state-of-the-art (SOTA) performance.

In this paper, we propose a simple MLP-based network for human motion prediction. The network consists only of fully-connected (FC) layers, layer normalization, and transpose operations; there are no non-linear activations in the network.

paper link

Network Architecture


[Figure: siMLPe network architecture]

NOTE: In the paper, we assume that each FC is realized by nn.Conv1d, so there are two transpose operations before/after the spatial FC layers; see the pipeline figure. In this repo, however, we use nn.Linear to implement the FCs, so in the code there is only one transpose after the spatial FC layer. The two implementations are equivalent: for an input tensor of shape (b, n, d), nn.Conv1d operates on dimension n, while nn.Linear operates on dimension d.
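
To make the equivalence concrete, here is a minimal, self-contained sketch (the shapes b, n, d are arbitrary example values, not the repo's configuration) showing that nn.Conv1d with kernel size 1 over dimension n matches nn.Linear over dimension n applied after a transpose:

import torch
import torch.nn as nn

b, n, d = 2, 50, 66  # batch, frames, per-frame feature size (example values)
x = torch.randn(b, n, d)

# nn.Conv1d treats dimension n as channels and mixes across it.
conv = nn.Conv1d(n, n, kernel_size=1)
y_conv = conv(x)

# Equivalent nn.Linear: transpose so n is the last dimension, apply, transpose back.
linear = nn.Linear(n, n)
linear.weight.data = conv.weight.data.squeeze(-1)  # (n, n, 1) -> (n, n)
linear.bias.data = conv.bias.data
y_lin = linear(x.transpose(1, 2)).transpose(1, 2)

print(torch.allclose(y_conv, y_lin, atol=1e-6))  # True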

Requirements


  • PyTorch >= 1.5
  • NumPy
  • CUDA >= 10.1
  • easydict
  • pickle (part of the Python standard library)
  • einops
  • scipy
  • six
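
Assuming PyTorch with CUDA support is already installed, the remaining packages can typically be installed with pip (a sketch; the repo does not pin exact versions, and pickle ships with Python):

pip install numpy easydict einops scipy six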

Data Preparation


Download all the datasets and place them in the ./data directory.

H3.6M

The original Stanford link is no longer available; this link is a backup.

Directory structure:

data
|-- h36m
|   |-- S1
|   |-- S5
|   |-- S6
|   |-- ...
|   |-- S11

AMASS

Directory structure:

data
|-- amass
|   |-- ACCAD
|   |-- BioMotionLab_NTroje
|   |-- CMU
|   |-- ...
|   |-- Transitions_mocap

3DPW

Directory structure:

data
|-- 3dpw
|   |-- sequenceFiles
|   |   |-- test
|   |   |-- train
|   |   |-- validation

Training


H3.6M

cd exps/baseline_h36m/
sh run.sh

AMASS

cd exps/baseline_amass/
sh run.sh

Evaluation


H3.6M

cd exps/baseline_h36m/
python test.py --model-pth your/model/path

AMASS

cd exps/baseline_amass/
# Test on AMASS
python test.py --model-pth your/model/path
# Test on 3DPW
python test_3dpw.py --model-pth your/model/path

Citation

If you find this project useful in your research, please consider citing:

@article{guo2022back,
  title={Back to MLP: A Simple Baseline for Human Motion Prediction},
  author={Guo, Wen and Du, Yuming and Shen, Xi and Lepetit, Vincent and Alameda-Pineda, Xavier and Moreno-Noguer, Francesc},
  journal={arXiv preprint arXiv:2207.01567},
  year={2022}
}

Contact

Feel free to contact Wen or me, or open a new issue, if you have any questions.

siMLPe's People

Contributors

dulucas, guo-w


siMLPe's Issues

The usage of ang2joint

Hi, I noticed that there is a function named ang2joint which is imported only for the AMASS and 3DPW datasets.

Based on my understanding, siMLPe uses a loss on velocities as part of its total loss. Since I am not familiar with these datasets, I guess that they record joint angles over time (and perhaps angular velocities as well), so this function is needed to recover 3D joint positions before the velocity loss can be computed. Is that correct?
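
For context, a velocity loss of the kind described in the paper penalizes errors in frame-to-frame differences of the 3D joint positions. A minimal sketch (tensor shapes are assumptions, not the repo's actual code):

import torch

def velocity_loss(pred, target):
    # pred, target: (batch, frames, joints * 3) tensors of 3D joint positions
    pred_vel = pred[:, 1:] - pred[:, :-1]        # finite-difference velocities
    target_vel = target[:, 1:] - target[:, :-1]
    return torch.mean(torch.norm(pred_vel - target_vel, dim=-1))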

Reproduction on the AMASS Dataset

I have been attempting to replicate the results on AMASS reported in your paper, but I have encountered some discrepancies. The reported numbers are (10.8, 19.6, 34.3, 40.5, 50.5, 57.3, 62.4, 65.7); I obtained (10.9, 19.8, 34.9, 41.1, 51.2, 58.1, 63.1, 66.8).
I would greatly appreciate your guidance on this matter. Could you kindly provide any insights or suggestions as to why I might be experiencing these discrepancies?

Auto-regressive motion prediction for long periods

Thank you for sharing this interesting work.

I want to run inference with the network over long horizons (target_length > 25).
The WACV paper states: "During testing, we apply our model in an auto-regressive manner to generate motion for longer periods."
However, I couldn't find the auto-regressive code in this repository.
Is there a way to generate long-term motions (target_length > 25)?
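
For reference, auto-regressive inference of the kind the paper describes can be sketched as follows (the window length and the model's output length are assumptions; use the values from the training config):

import torch

@torch.no_grad()
def autoregressive_predict(model, history, total_frames, input_len=50):
    # model:   maps (batch, input_len, dim) -> (batch, chunk_len, dim)
    # history: (batch, input_len, dim) observed motion
    chunks = []
    window = history
    generated = 0
    while generated < total_frames:
        pred = model(window)                                       # next chunk
        chunks.append(pred)
        generated += pred.shape[1]
        window = torch.cat([window, pred], dim=1)[:, -input_len:]  # slide window
    return torch.cat(chunks, dim=1)[:, :total_frames]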

AMASS dataset download

@dulucas
Dear author,
Could you describe the expected format and attach a link to download the AMASS dataset?
Thank you very much!

[Screenshot: AMASS download options]
I don't know which one to download.

animating the predicted motion

I was wondering if you could briefly explain how to obtain and use the predicted results to reproduce the animations included in the paper.

AMASS train & eval

I found that the AMASS test data contains 145,945 motion sequences, but the training data contains only 7,787.
Am I running it incorrectly?

ValueError: num_samples should be a positive integer value, but got num_samples=0

Hello authors,
Good work, and thanks for uploading it.
I am getting an error; please let me know what I am missing.

File "train.py", line 132, in <module>
    sampler=sampler, shuffle=shuffle, pin_memory=True)
File "/home/gaurav/anaconda3/envs/hmp1/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 268, in __init__
    sampler = RandomSampler(dataset, generator=generator)
File "/home/gaurav/anaconda3/envs/hmp1/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 103, in __init__
    "value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integer value, but got num_samples=0
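
A likely cause (an assumption, since it depends on the local setup): the dataset files were not found under ./data, so the Dataset has length 0 and RandomSampler refuses to sample from it. A quick check:

from pathlib import Path

data_root = Path("data/h36m")  # path taken from the Data Preparation section
subjects = sorted(p.name for p in data_root.iterdir()) if data_root.is_dir() else []
print(data_root.resolve(), "->", subjects or "NOT FOUND / EMPTY")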

Pose Landmark Model

Can you please tell me where I can find a description of your pose landmark model? I intend to feed your pretrained H3.6M model with poses obtained from MediaPipe.

So, if I just ignore the landmarks [0, 1, 6, 11, 16, 20, 23, 24, 28, 31, 32] (as you did here), will I be able to feed your model with MediaPipe poses, or should I do further pose manipulation? Thanks in advance.

mediapipe landmarks model
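
For reference, dropping a fixed set of joints before feeding the model can be sketched as below (whether this alone makes MediaPipe's 33 landmarks compatible with the H3.6M skeleton is exactly what the question asks, so treat the mapping as an assumption):

import numpy as np

IGNORED = [0, 1, 6, 11, 16, 20, 23, 24, 28, 31, 32]  # indices quoted in the issue

def select_joints(pose, num_joints):
    # pose: (frames, num_joints, 3) array of 3D landmarks
    keep = np.setdiff1d(np.arange(num_joints), IGNORED)
    return pose[:, keep, :]

# e.g. for MediaPipe's 33 pose landmarks (an assumed input format):
# trimmed = select_joints(mediapipe_pose, num_joints=33)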

MPI_LIMITS

Hello authors, I would like to ask: what is the dataset referred to as MPI_LIMITS called on the official AMASS website?

Reproduction on the H36M dataset

Hi! I have been attempting to reproduce your paper's results on the H3.6M dataset. I followed the instructions in the README and the discussion in this issue to evaluate the checkpoint you provided. However, with reference to Table 1 in your paper, I am not getting the same results:

  • My evaluation results: [13.4, 24.0, 47.7, 58.4, 78.6, 92.1, 105.2, 112.0]
  • Paper evaluation results: [9.6, 21.7, 46.3, 57.3, 75.7, 90.1, 101.8, 109.4]

Moreover, I implemented the repeating-last-frame baseline with simple changes to the same test.py script, to verify that the dataset I am loading is correct; for that baseline I got exactly the same results as in your paper.

Is there something I'm missing? Perhaps the checkpoint provided in the repo is not the one that produces the paper's results. Should I train the model from scratch?

Thanks in advance for your help!

Missing key(s) in state_dict while loading the pre-trained h36m model

Following the README instructions for Evaluation:
python test.py --model-pth /home/user/git/siMLPe/checkpoints/h36m_model.pth
When loading the pre-trained model I get a "Missing key(s) in state_dict" error:

model.load_state_dict(state_dict, strict=True)
  File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for siMLPe:
        Missing key(s) in state_dict: "motion_mlp.mlps.0.fc0.fc.weight", "motion_mlp.mlps.0.fc0.fc.bias", "motion_mlp.mlps.0.norm0.alpha", "motion_mlp.mlps.0.norm0.beta", ... (the same four keys repeated for motion_mlp.mlps.1 through motion_mlp.mlps.47).
        Unexpected key(s) in state_dict: "motion_transformer.transformer.0.fc0.fc.weight", "motion_transformer.transformer.0.fc0.fc.bias", "motion_transformer.transformer.0.norm0.alpha", "motion_transformer.transformer.0.norm0.beta", ... (the same four keys repeated for motion_transformer.transformer.1 through motion_transformer.transformer.47).

Can you please tell me how I can solve this? Thanks in advance.
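
The missing and unexpected key lists differ only by prefix (motion_mlp.mlps vs. motion_transformer.transformer), which suggests the checkpoint was saved from an older version of the model class. One possible workaround, as a sketch rather than an official fix, is to rename the keys before loading (model is assumed to be a siMLPe instance built as in test.py):

import torch

state_dict = torch.load("checkpoints/h36m_model.pth", map_location="cpu")
# Rename old-style keys to the current module names (prefix mapping inferred
# from the key lists above).
renamed = {
    k.replace("motion_transformer.transformer", "motion_mlp.mlps"): v
    for k, v in state_dict.items()
}
model.load_state_dict(renamed, strict=True)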

How many epochs, and which parameters, are needed to reproduce the results?

How many epochs are needed to reproduce the results of the paper, and where do I set the number of epochs? My current experiments do not reach the paper's numbers, so could you please share the exact parameters? Thank you very much!
