
tpn's Introduction

Temporal Pyramid Network for Action Recognition

[Paper] [Project Page]

License

The project is released under the Apache 2.0 license.

Model Zoo

Results and reference models are available in the model zoo.

Installation and Data Preparation

Please refer to INSTALL for installation and DATA for data preparation.

Get Started

Please refer to GETTING_STARTED for detailed usage.

Quick Demo

We provide test_video.py to run inference on a single video. Download the checkpoints, put them under ckpt/, and run:

python ./test_video.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --video_file ${VIDEO_NAME} --label_file ${LABEL_FILE} --rendered_output ${RENDERED_NAME}

Arguments:

  • --video_file: Path to the demo video; default is ./demo/demo.mp4
  • --label_file: The label file for the pretrained model; default is demo/category.txt
  • --rendered_output: The output file name. If specified, the script renders an output video with the predicted label name; default is demo/demo_pred.webm.

For example, we can get predictions for the demo video (download it here and put it under demo/) by running:

python ./test_video.py config_files/sthv2/tsm_tpn.py ckpt/sthv2_tpn.pth

The rendered output video is saved to the path given by --rendered_output (demo/demo_pred.webm by default).

Acknowledgement

We really appreciate the developers of MMAction for such a wonderful codebase. We also thank Yue Zhao for the insightful discussion.

Contact

This repo is currently maintained by Ceyuan Yang (@limbo0000) and Yinghao Xu (@justimyhxu).

Bibtex

@inproceedings{yang2020tpn,
  title={Temporal Pyramid Network for Action Recognition},
  author={Yang, Ceyuan and Xu, Yinghao and Shi, Jianping and Dai, Bo and Zhou, Bolei},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
}

tpn's People

Contributors

doubledaibo, justimyhxu, limbo0000


tpn's Issues

About FLOPs of TPN

Hi,

I'm very impressed by your work.

I have a question.

Could you provide us with the computational cost (FLOPs) of TPN, e.g., for the I3D + TPN or TSM + TPN models?

I wonder how much computational overhead is needed when TPN is adopted.

How to solve this problem?

error: XDG_RUNTIME_DIR not set in the environment.
ALSA lib confmisc.c:855:(parse_card) cannot find card '0'
ALSA lib conf.c:5178:(_snd_config_evaluate) function snd_func_card_inum returned error: No such file or directory
ALSA lib confmisc.c:422:(snd_func_concat) error evaluating strings
ALSA lib conf.c:5178:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
ALSA lib confmisc.c:1334:(snd_func_refer) error evaluating name
ALSA lib conf.c:5178:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5701:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2664:(snd_pcm_open_noupdate) Unknown PCM default
ALSA lib confmisc.c:855:(parse_card) cannot find card '0'
ALSA lib conf.c:5178:(_snd_config_evaluate) function snd_func_card_inum returned error: No such file or directory
ALSA lib confmisc.c:422:(snd_func_concat) error evaluating strings
ALSA lib conf.c:5178:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
ALSA lib confmisc.c:1334:(snd_func_refer) error evaluating name
ALSA lib conf.c:5178:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5701:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2664:(snd_pcm_open_noupdate) Unknown PCM default
Traceback (most recent call last):
File "/content/TPN/./test_video.py", line 14, in
from mmcv.runner import load_checkpoint
ModuleNotFoundError: No module named 'mmcv.runner'

TypeError: save_for_backward can only save variables, but argument 1 is of type int

When I use the code to train, I encounter the following problem.

Traceback (most recent call last):
File "/home/liuyh/pycharm-community-2021.2.2/plugins/python-ce/helpers/pydev/pydevd.py", line 1483, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/liuyh/pycharm-community-2021.2.2/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/liuyh/demo/TPN-master/tools/train_recognizer.py", line 90, in
main()
File "/home/liuyh/demo/TPN-master/tools/train_recognizer.py", line 86, in main
logger=logger)
File "/home/liuyh/demo/TPN-master/mmaction/apis/train.py", line 60, in train_network
_non_dist_train(model, dataset, cfg, validate=validate)
File "/home/liuyh/demo/TPN-master/mmaction/apis/train.py", line 194, in _non_dist_train
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/liuyh/miniconda3/envs/tpn/lib/python3.6/site-packages/mmcv/runner/runner.py", line 358, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/liuyh/miniconda3/envs/tpn/lib/python3.6/site-packages/mmcv/runner/runner.py", line 264, in train
self.model, data_batch, train_mode=True, **kwargs)
File "/home/liuyh/demo/TPN-master/mmaction/apis/train.py", line 37, in batch_processor
losses = model(**data)
File "/home/liuyh/miniconda3/envs/tpn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/liuyh/miniconda3/envs/tpn/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/liuyh/miniconda3/envs/tpn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/liuyh/demo/TPN-master/mmaction/models/recognizers/base.py", line 39, in forward
return self.forward_train(num_modalities, img_meta, **kwargs)
File "/home/liuyh/demo/TPN-master/mmaction/models/recognizers/TSN2D.py", line 115, in forward_train
x = self.segmental_consensus(x)
File "/home/liuyh/miniconda3/envs/tpn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/liuyh/demo/TPN-master/mmaction/models/tenons/segmental_consensuses/simple_consensus.py", line 62, in forward
self.consensus_type)
TypeError: save_for_backward can only save variables, but argument 1 is of type int
python-BaseException

Process finished with exit code 1

Can you give me some suggestions?

Model Zoo for TPN 2D

Hello,

Thanks for your codebase. I like your idea very much, so I re-implemented your TPN module and plugged it into my own training project. In the end I obtained a TPN-2D-R50 model with Prec@1 = 71.6%, while the result reported in your paper is 73.5%.

Could you share your model zoo and configuration files for TPN 2D? I want to check which hyperparameters differ from yours, to understand why I got such a precision gap.

Thank you very much!

Error when running test on demo video with kinetic models

I was trying to run the model "kinetics400_tpn_r101f16s4.pth" on the demo video via the "test_video.py" script and I met the following error. Could you please help me? Is it because demo/category.txt contains the categories of the Something-Something dataset?
python test_video.py config_files/kinetics400/tpn/r101f16s4.py ckpt/ft_local/kinetics400_tpn_r101f16s4.pth
[screenshot of the error]

Could you explain some parameters in code?

Thanks for your contribution! I have read code but there are still some questions I can't understand.

The main question concerns the parameters used to sample frames in mmaction/datasets/rawframes_datasets.py, rows 65 to 72:

# parameters for frame fetching
# number of segments
self.num_segments = num_segments
# number of consecutive frames
self.old_length = new_length * new_step
self.new_length = new_length
# number of steps (sparse sampling for efficiency of io)
self.new_step = new_step

Could I understand it as follows?

  • The chosen video is divided into num_segments clips, each containing record.num_frames // num_segments frames
  • new_length frame(s) are sampled from each clip, so new_length * num_segments frames are sampled per video in total
  • Within a clip, the new_length frames are sampled with an interval of new_step
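
For what it's worth, here is a minimal standalone sketch of that interpretation (my own illustration, not the repository's implementation):

import random

def sample_indices(num_frames, num_segments=8, new_length=1, new_step=1):
    # One clip per segment; each clip contains `new_length` frames spaced by `new_step`.
    old_length = new_length * new_step              # span covered by a single clip
    seg_len = num_frames // num_segments            # frames available in each segment
    indices = []
    for seg in range(num_segments):
        seg_start = seg * seg_len
        max_offset = max(seg_len - old_length, 0)   # random start so the clip stays inside its segment
        start = seg_start + random.randint(0, max_offset)
        indices.extend(start + i * new_step for i in range(new_length))
    return indices

# e.g. a 120-frame video, 8 segments, 1 frame per clip
print(sample_indices(120, num_segments=8, new_length=1, new_step=1))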

As a freshman, this really bothers me a lot. I would really appreciate it if you could help me!

Thanks again for your contribution!

optimizer must be a dict of torch.optim.Optimizers, but optimizer["type"] is a <class 'str'>

File "tools/train_recognizer.py", line 90, in
main()
File "tools/train_recognizer.py", line 80, in main
train_network(
File "/home/TPN-master/mmaction/apis/train.py", line 59, in train_network
_non_dist_train(model, dataset, cfg, validate=validate)
File "/home/TPN-master/mmaction/apis/train.py", line 182, in _non_dist_train
runner = Runner(model, batch_processor, cfg.optimizer, cfg.work_dir,
File "/home/site-packages/mmcv/runner/epoch_based_runner.py", line 184, in init
super().init(*args, **kwargs)
File "/home/site-packages/mmcv/runner/base_runner.py", line 83, in init
raise TypeError(
TypeError: optimizer must be a dict of torch.optim.Optimizers, but optimizer["type"] is a <class 'str'>

How to solve it?

How to install mmaction correctly?

I have installed mmaction from source, but when I run the code, the following error is raised:

*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
Traceback (most recent call last):
  File "./tools/train_recognizer.py", line 90, in <module>
    main()
  File "./tools/train_recognizer.py", line 77, in main
    cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
  File "/mnt/shared_40t/cyh/data/action_recognization/mmaction/mmaction/models/builder.py", line 59, in build_recognizer
    dict(train_cfg=train_cfg, test_cfg=test_cfg))
  File "/mnt/shared_40t/cyh/data/action_recognization/mmaction/mmaction/models/builder.py", line 34, in build
    return _build_module(cfg, registry, default_args)
  File "/mnt/shared_40t/cyh/data/action_recognization/mmaction/mmaction/models/builder.py", line 26, in _build_module
    return obj_type(**args)
TypeError: __init__() got an unexpected keyword argument 'necks'
(the same traceback is printed by each of the other distributed worker processes)
2020-04-30 15:38:53,772 - INFO - Distributed training: True
Traceback (most recent call last):
  File "/DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/torch/distributed/launch.py", line 246, in <module>
    main()
  File "/DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/torch/distributed/launch.py", line 242, in main
    cmd=cmd)
subprocess.CalledProcessError: Command '['/DATA/disk1/cyh/miniconda3/envs/py36-pytorch/bin/python', '-u', './tools/train_recognizer.py', '--local_rank=3', 'config_files/sthv1/tsm_tpn_ntu.py', '--launcher'
, 'pytorch', '--validate', '--work_dir', 'model']' returned non-zero exit status 1.

It seems that the "necks" submodule was not installed. Please tell me how to solve it. Looking forward to your reply.

Use RWF-2000 to train TPN

How can I deal with RWF-2000 when following the instructions in Data Preparation? My dataset is RWF-2000, which contains train and val folders; the labels are only Fight and NonFight.

The speed of tsm_tpn

Hello,
I want to use the tsm_tpn model for an online video classification problem, so the speed of tsm_tpn is very important to me. Could you tell me the inference speed of tsm_tpn? Thanks.

Error when using demo code

Hello! I am trying to run the pre-trained kinetics version on a custom video. My command is the following:
python ./test_video.py config_files/kinetics400/tpn/r101f32s2.py kinetics.pth --video_file video.mp4
where kinetics.pth is the downloaded: kinetics400_tpn_r101f32s2.pth file.

I have a few problems. First, I don't know exactly what the label file should look like; the category.txt file in the demo folder does not match the Kinetics annotations.
On the other hand, the program crashes. At first I get this warning:
[warning screenshot]
and after that it crashes with the following message:
[error screenshot]

Can you help me with this problem?

Thank you,
Petru

unexpected key in source state_dict: necks.aux_head.convs.conv.weight, necks.aux_head.convs.bn.weight

I have trained a model with r50f16s4.py, but when I test the model, there is an error.

unexpected key in source state_dict: fc.weight, fc.bias
missing keys in source state_dict: layer3.4.bn2.num_batches_tracked, layer3.5.bn1.num_batches_tracked, layer4.1.bn3.num_batches_tracked, layer3.0.downsample.1.num_batches_tracked, layer1.2.bn3.num_batches_tracked, layer4.1.bn2.num_batches_tracked, layer1.0.bn3.num_batches_tracked, layer2.1.bn3.num_batches_tracked, layer2.2.bn2.num_batches_tracked, layer3.0.bn2.num_batches_tracked, layer3.0.bn3.num_batches_tracked, layer2.3.bn1.num_batches_tracked, layer3.4.bn3.num_batches_tracked, layer3.5.bn2.num_batches_tracked, layer4.1.bn1.num_batches_tracked, layer1.1.bn3.num_batches_tracked, bn1.num_batches_tracked, layer4.2.bn1.num_batches_tracked, layer2.3.bn2.num_batches_tracked, layer2.3.bn3.num_batches_tracked, layer4.0.downsample.1.num_batches_tracked, layer4.0.bn1.num_batches_tracked, layer2.2.bn3.num_batches_tracked, layer1.1.bn1.num_batches_tracked, layer2.0.bn2.num_batches_tracked, layer3.3.bn3.num_batches_tracked, layer3.0.bn1.num_batches_tracked, layer2.0.downsample.1.num_batches_tracked, layer4.2.bn2.num_batches_tracked, layer1.2.bn2.num_batches_tracked, layer3.2.bn2.num_batches_tracked, layer1.0.bn1.num_batches_tracked, layer1.1.bn2.num_batches_tracked, layer2.2.bn1.num_batches_tracked, layer1.0.downsample.1.num_batches_tracked, layer3.1.bn3.num_batches_tracked, layer4.2.bn3.num_batches_tracked, layer3.3.bn2.num_batches_tracked, layer4.0.bn3.num_batches_tracked, layer3.5.bn3.num_batches_tracked, layer3.4.bn1.num_batches_tracked, layer3.3.bn1.num_batches_tracked, layer1.2.bn1.num_batches_tracked, layer3.2.bn3.num_batches_tracked, layer3.1.bn2.num_batches_tracked, layer3.2.bn1.num_batches_tracked, layer2.0.bn3.num_batches_tracked, layer2.1.bn1.num_batches_tracked, layer3.1.bn1.num_batches_tracked, layer4.0.bn2.num_batches_tracked, layer2.1.bn2.num_batches_tracked, layer1.0.bn2.num_batches_tracked, layer2.0.bn1.num_batches_tracked

Traceback (most recent call last):
File "tools/test_recognizer.py", line 259, in
main()
File "tools/test_recognizer.py", line 176, in main
load_checkpoint(model, args.checkpoint, strict=True, map_location='cpu')
File "/home/majian/anaconda3/envs/torch12/lib/python3.6/site-packages/mmcv/runner/checkpoint.py", line 162, in load_checkpoint
load_state_dict(model, state_dict, strict, logger)
File "/home/majian/anaconda3/envs/torch12/lib/python3.6/site-packages/mmcv/runner/checkpoint.py", line 86, in load_state_dict
raise RuntimeError(err_msg)
RuntimeError: unexpected key in source state_dict: necks.aux_head.convs.conv.weight, necks.aux_head.convs.bn.weight, necks.aux_head.convs.bn.bias, necks.aux_head.convs.bn.running_mean, necks.aux_head.convs.bn.running_var, necks.aux_head.convs.bn.num_batches_tracked, necks.aux_head.fc.weight, necks.aux_head.fc.bias

I didn't change any of the parameters and I haven't found the cause of this bug. Please help me solve this problem.

About data augmentation.

In the paper, I find you mentioned "Each frame is randomly cropped so that its short side ranges in [256, 320] pixels, as in [32, 5, 25]." But in the code, frames are randomly cropped with an area ratio in [0.08, 1]. Why the difference? Are such possibly very small crops meaningful for training the model? And is color jittering necessary for the generalization ability on this task?
Thanks!!
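
For illustration, a minimal sketch of the two cropping strategies being contrasted, written with torchvision transforms (this is not the repository's actual data pipeline):

import random
from torchvision import transforms

# (a) Paper-style: resize so the short side lands in [256, 320], then take a fixed-size crop.
short_side = random.randint(256, 320)
paper_style = transforms.Compose([
    transforms.Resize(short_side),          # an int argument resizes the shorter edge
    transforms.RandomCrop(224),
])

# (b) Code-style: Inception-style area crop covering 8%-100% of the original image.
code_style = transforms.RandomResizedCrop(224, scale=(0.08, 1.0))

# Both yield a 224x224 crop when applied to a PIL image, e.g. code_style(img).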

Expected 5-dimensional input for 5-dimensional weight 64 3 1 7 7, but got 4-dimensional input of size [8, 3, 256, 256] instead

I downloaded the provided kinetics400_tpn_r50f32s2.pth from Google Drive and ran this command:
python ./test_video.py config_files/kinetics400/tpn/r50f32s2.py ckpt/kinetics400_tpn_r50f32s2.pth
it showed this:
Expected 5-dimensional input for 5-dimensional weight 64 3 1 7 7, but got 4-dimensional input of size [8, 3, 256, 256] instead
But when I tried to run:
python ./test_video.py config_files/sthv2/tsm_tpn.py ckpt/sthv2_tpn.pth
it worked successfully.
So I want to know whether different models need different test videos?
In addition, both runs also showed:
The model and loaded state dict do not match exactly

How long have the models been trained?

Thanks for your contribution. As a freshman, I have no idea how long it should take to train a model. I have read the paper, but I still don't know how long the training time of the model is.

Thanks again!

multiple people

Can TPN recognize the actions of multiple people in a frame?

Feature extraction

Hi, thanks for the great codebase.

Could you kindly provide the code to extract features from custom videos using pre-trained models?

--validate not working for non-distributed mode

I'm trying to train a model and want to use validation, but in non-distributed mode validation does not work, since in /mmaction/apis/train.py the validate argument is never used inside _non_dist_train(model, dataset, cfg, validate=validate). Could you update the code so that validation runs in non-distributed mode?

Also, I could not run the code in distributed mode (with --launcher=pytorch, 2 GPUs). Can you explain the difference between the modes and how to use them to run training in distributed mode?

Thanks,

About tsm101_tpn pretrained model on sth-sth-v2 dataset

Hello, may I ask for your tsm101_tpn pretrained model on the Something-Something v2 dataset?
By the way, I have now changed config_files/sthv2/tsm_tpn.py as follows to train a tsm101_tpn model on Something-Something v2:
model = dict(
type='TSN2D',
backbone=dict(
type='ResNet',
pretrained='modelzoo://resnet101',
depth=101,
nsegments=8,
out_indices=(2, 3),
tsm=True,
bn_eval=False,
partial_bn=False),
As I use 2 GPUs, I set my learning rate to 0.0025. However, the top-1 precision stays around 56 after 45 epochs of training. Should I decay my learning rate?
Looking forward to your reply and many thanks.

Why is the training code different from the test code?

Thanks for your contribution!
But I still have a question:
Why is the training code different from the test code in cls_head.py's forward method?

   def forward(self, x):
        if not self.fcn_testing:
            if x.ndimension() == 4:
                x = x.unsqueeze(2)
            assert x.shape[1] == self.in_channels
            assert x.shape[2] == self.temporal_feature_size
            assert x.shape[3] == self.spatial_feature_size
            assert x.shape[4] == self.spatial_feature_size
            if self.with_avg_pool:
                x = self.avg_pool(x)
            if self.dropout is not None:
                x = self.dropout(x)
            x = x.view(x.size(0), -1)
            cls_score = self.fc_cls(x)
            return cls_score
        else:
            if self.with_avg_pool:
                x = self.avg_pool(x)
            if self.new_cls is None:
                self.new_cls = nn.Conv3d(self.in_channels, self.num_classes, 1, 1, 0).cuda()
                self.new_cls.weight.copy_(self.fc_cls.weight.unsqueeze(-1).unsqueeze(-1).unsqueeze(-1))
                self.new_cls.bias.copy_(self.fc_cls.bias)
                self.fc_cls = None
            class_map = self.new_cls(x)
            # return class_map.mean([2,3,4])
            return class_map
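
If it helps, the fcn_testing branch appears to rely on the equivalence between a fully-connected layer and a 1x1x1 convolution carrying the same weights, which lets the classifier be applied to spatially and temporally larger test-time feature maps. A quick standalone check of that equivalence (my own snippet, not the repository's code; the channel and class counts are just example values):

import torch
import torch.nn as nn

in_channels, num_classes = 2048, 174
fc = nn.Linear(in_channels, num_classes)
conv = nn.Conv3d(in_channels, num_classes, kernel_size=1)
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(num_classes, in_channels, 1, 1, 1))
    conv.bias.copy_(fc.bias)

x = torch.randn(2, in_channels, 1, 1, 1)              # a pooled feature map
out_fc = fc(x.flatten(1))                             # training-style FC path
out_conv = conv(x).flatten(1)                         # fcn_testing-style conv path
print(torch.allclose(out_fc, out_conv, atol=1e-5))    # True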

Is the number of frames used in training and testing the same?

Thanks for your contribution.

In your paper, 32 frames are used when evaluating on Kinetics-400, for both R-50 and R-101. Could you please tell me whether you also used 32 frames during training?

A very important question is, does your method use the same number of frames during training and testing?

Low score on Sth v2 using reported checkpoint

Hi, I tested the Something-Something v2 TPN checkpoint from https://github.com/decisionforce/TPN/blob/master/MODELZOO.md.

However, the top-1 accuracy is much lower than the reported score. I guess there is some mismatch between the saved checkpoint and the model (fc); could you tell me how I can get the reported score?

I tested on
python 3.5.6
pytorch 1.4.0 (the missing num_batches_tracked keys may be due to the PyTorch version)

Here is the code that I ran and the log.

CONFIG_FILE='config_files/sthv2/tsm_tpn.py'
GPU_NUM=8
CHECKPOINT_FILE='pretrained/sthv2_tpn.pth'
python tools/test_recognizer.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --out 'test.pkl'  --ignore_cache --fcn_testing
args==>> Namespace(checkpoint='pretrained/sthv2_tpn.pth', config='config_files/sthv2/tsm_tpn.py', fcn_testing=True, flip=False, gpus=8, ignore_cache=True, launcher='none', local_rank=0, log=None, out='test.pkl', proc_per_gpu=1, tmpdir=None)
unexpected key in source state_dict: fc.weight, fc.bias

missing keys in source state_dict: layer4.2.bn2.num_batches_tracked, layer1.1.bn1.num_batches_tracked, layer2.0.bn2.num_batches_tracked, layer3.0.bn2.num_batches_tracked, layer2.3.bn1.num_batches_tracked, layer1.1.bn2.num_batches_tracked, layer4.2.bn3.num_batches_tracked, layer4.2.bn1.num_batches_tracked, layer4.0.bn1.num_batches_tracked, layer4.0.downsample.1.num_batches_tracked, layer3.5.bn2.num_batches_tracked, layer1.2.bn1.num_batches_tracked, layer1.1.bn3.num_batches_tracked, layer2.2.bn1.num_batches_tracked, layer4.1.bn3.num_batches_tracked, layer3.1.bn1.num_batches_tracked, layer2.1.bn3.num_batches_tracked, layer2.0.bn1.num_batches_tracked, layer2.1.bn1.num_batches_tracked, layer3.4.bn2.num_batches_tracked, layer1.2.bn2.num_batches_tracked, layer1.2.bn3.num_batches_tracked, layer2.0.downsample.1.num_batches_tracked, layer3.5.bn3.num_batches_tracked, layer3.4.bn3.num_batches_tracked, layer3.5.bn1.num_batches_tracked, layer2.2.bn2.num_batches_tracked, layer3.2.bn2.num_batches_tracked, layer3.2.bn3.num_batches_tracked, layer1.0.bn3.num_batches_tracked, bn1.num_batches_tracked, layer4.1.bn2.num_batches_tracked, layer4.1.bn1.num_batches_tracked, layer2.0.bn3.num_batches_tracked, layer3.0.bn1.num_batches_tracked, layer4.0.bn2.num_batches_tracked, layer2.3.bn3.num_batches_tracked, layer3.3.bn2.num_batches_tracked, layer1.0.bn1.num_batches_tracked, layer3.2.bn1.num_batches_tracked, layer2.2.bn3.num_batches_tracked, layer1.0.downsample.1.num_batches_tracked, layer3.3.bn1.num_batches_tracked, layer3.1.bn2.num_batches_tracked, layer3.1.bn3.num_batches_tracked, layer2.1.bn2.num_batches_tracked, layer3.0.bn3.num_batches_tracked, layer3.3.bn3.num_batches_tracked, layer3.0.downsample.1.num_batches_tracked, layer3.4.bn1.num_batches_tracked, layer4.0.bn3.num_batches_tracked, layer1.0.bn2.num_batches_tracked, layer2.3.bn2.num_batches_tracked

(the same "unexpected key" / "missing keys" warnings are repeated by each of the 8 GPU processes)

[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 24777/24777, 15.9 task/s, elapsed: 1558s, ETA:     0s

writing results to test.pkl
results_length 24777
Mean Class Accuracy = 44.67
Top-1 Accuracy = 50.03
Top-5 Accuracy = 77.49
Non mean Class Accuracy [0.88793103 0.29605263 0.15686275 0.48543689 0.23076923 0.77631579
 0.74100719 0.54545455 0.51968504 0.54966887 0.57303371 0.68613139
 0.47008547 0.3        0.62105263 0.49361702 0.37055838 0.26973684
 0.30962343 0.33175355 0.40944882 0.31578947 0.40697674 0.53191489
 0.5        0.73684211 0.475      0.78217822 0.8        0.37931034
 0.67567568 0.6036036  0.86363636 0.0776699  0.03614458 0.015625
 0.77419355 0.7148289  0.12962963 0.58823529 0.5136612  0.73214286
 0.54929577 0.56913183 0.74324324 0.53481894 0.60542169 0.09045226
 0.20496894 0.68172043 0.60759494 0.         0.21875    0.6
 0.29166667 0.83333333 0.12328767 0.70754717 0.         0.6294964
 0.78       0.35294118 0.34177215 0.14953271 0.02702703 0.25
 0.38679245 0.36318408 0.79824561 0.12       0.32142857 0.47826087
 0.44117647 0.41025641 0.75471698 0.13157895 0.22222222 0.26923077
 0.22222222 0.62068966 0.09375    0.056      0.25531915 0.19148936
 0.69333333 0.55555556 0.88235294 0.816      0.16666667 0.15662651
 0.20289855 0.66666667 0.26315789 0.28       0.79237288 0.125
 0.08823529 0.11570248 0.6122449  0.6959707  0.24528302 0.44505495
 0.18103448 0.66071429 0.68503937 0.76296296 0.32191781 0.64039409
 0.08108108 0.72       0.32394366 0.         0.34532374 0.47222222
 0.65972222 0.39130435 0.40740741 0.53174603 0.25423729 0.45544554
 0.79527559 0.67045455 0.22167488 0.46153846 0.73076923 0.34862385
 0.39805825 0.28695652 0.23754789 0.3125     0.13571429 0.16438356
 0.49425287 0.28070175 0.7173913  0.29447853 0.05882353 0.41666667
 0.13761468 0.76076555 0.45348837 0.15625    0.53125    0.50324675
 0.67515924 0.48024316 0.57222222 0.03125    0.31380753 0.79245283
 0.53532609 0.20652174 0.2755102  0.89908257 0.64566929 0.52439024
 0.30769231 0.35869565 0.26146789 0.29090909 0.10091743 0.32894737
 0.51401869 0.275      0.83887468 0.90322581 0.92857143 0.91975309
 0.86666667 0.58666667 0.29901961 0.73717949 0.73023256 0.5620438 ]
saving non_mean acc

Details of Information Flow of TPN

Thanks for your awesome work.
I found that the code in these lines corresponds to Section 3.2 of the paper, which implements the information flow of TPN. I guess it is a Parallel Flow, since in L316 and L325 out is intentionally assigned from temporal_modulation_outs. However, since = is a reference assignment rather than a copy, temporal_modulation_outs changes before the second =, so it should be equal to out, resulting in a Cascade Flow.
Could you help me figure out which flow it should be?
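
For reference, a tiny generic Python example of the aliasing behaviour the question describes (plain assignment binds a new name to the same list object rather than copying it; this is not the repository's code):

a = [1, 2, 3]
b = a                  # `b` is another name for the same list, not a copy
b[0] = 99              # an in-place change made through `b` ...
print(a)               # ... is visible through `a` as well: [99, 2, 3]

import copy
c = copy.copy(a)       # an actual (shallow) copy creates a new outer list
c[1] = -1
print(a)               # unchanged: [99, 2, 3]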

TypeError: forward() missing 1 required positional argument: 'img_meta'

Hi,
First of all, thank you for sharing the code! I am excited to try it on my dataset.

Just to make sure my code works, I am trying to pass a random input to the TPN model and I get this error: 'TypeError: forward() missing 1 required positional argument: 'img_meta''. (Link to the code shared below.)

I am using this config file: TPN/config_files/kinetics400/tpn/r101f16s4.py

Please find my code at:
https://knightsucfedu39751-my.sharepoint.com/:u:/g/personal/ishandave_knights_ucf_edu/EdLeO1o7QFlLizGP7KBg0WwBgIOOICwL8PEjy4yYZjLyxg?e=M9IdyZ

output of the code:
https://knightsucfedu39751-my.sharepoint.com/:t:/g/personal/ishandave_knights_ucf_edu/EVtXye2qsFxCvcBqkddxiwABbN2EXjYoGLBUEcSTcoJVHA?e=VwiypD

Hope to hear from you soon.

-Ishan

Pre-trained model for Epic-kitchens

The paper reports results on the EPIC-Kitchens dataset as well; however, I cannot find the pre-trained models for EPIC-Kitchens. Are pre-trained models available for that dataset too?

Train-val split details for Epic-Kitchen dataset

Hi, the paper mentions a random split of videos for train and val. Can you point me to the split details if they have been released, or give me more information about the split so I can compare results with it?

Thanks,
Nirat

The ckpt for `tsm+tpn` cannot be used for inference.

I used this config to train and it works well, but an error occurs when using it to test.
The error info is here:

  File "/mnt/lustre/linjintao/other/TPN/mmaction/models/recognizers/TSN2D.py", line 151, in forward_test
    x = self.spatial_temporal_module(x)
  File "/mnt/lustre/linjintao/other/TPN/mmaction/models/tenons/spatial_temporal_modules/simple_spatial_module.py", line 24, in forward
    return self.op(input)
  File "/mnt/lustre/linjintao/anaconda3/envs/other/lib/python3.7/site-packages/torch/nn/modules/pooling.py", line 563, in forward
    result = self.forward(*input, **kwargs)
RuntimeError    self.padding, self.ceil_mode, self.count_include_pad)
: invalid argument 2: non-empty 3D or 4D input tensor expected but got: [1 x 2048 x 6 x 8 x 8] at /opt/conda/conda-bld/pytorch_1556653215914/work/aten/src/THCUNN/generic/SpatialAveragePooling.cu:30

After some debugging, I found that the tensor has 5 dimensions ([1 x 1024 x 6 x 8 x 8]); it is not squeezed by this line because the 3rd dim is not 1. But during training it can be squeezed, since the size is [1 x 1024 x 1 x 7 x 7].
What is the correct testing setting?
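
For context, torch squeeze with an explicit dimension only removes that dimension when its size is 1, which is consistent with the behaviour described above (an illustrative snippet, not the repository's code):

import torch

x_train = torch.randn(1, 1024, 1, 7, 7)   # training-time shape: temporal dim is 1
x_test  = torch.randn(1, 1024, 6, 8, 8)   # test-time shape: temporal dim is 6
print(x_train.squeeze(2).shape)           # torch.Size([1, 1024, 7, 7])    -> 4D, 2D pooling works
print(x_test.squeeze(2).shape)            # torch.Size([1, 1024, 6, 8, 8]) -> still 5D, the size-6 dim is kept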

The model and loaded state dict do not match exactly

Hello, I am executing:
python tools/train_recognizer.py config_files/sthv2/tsm_tpn.py --work_dir chekpoint/sthv2_tpn.pth
Warning: The model and loaded state dict do not match exactly
Error: TypeError: optimizer must be a dict of torch.optim.Optimizers, but optimizer["type"] is a <class 'str'>
Please give me some suggestions. Thank you!

An error when training with TSN3D

Thanks for your great work!
When I run the following command to train the TSN2D, it's OK:
python tools/train_recognizer.py config_files/sthv2/tsm_tpn.py

But after I change the config file to the 3D network, as follows:
python tools/train_recognizer.py config_files/kinetics400/tpn/r101f32s2.py

I get the following error:

Traceback (most recent call last):
File "tools/train_recognizer.py", line 92, in
main()
File "tools/train_recognizer.py", line 88, in main
logger=logger)
File "/data/download/projects/clover_tpn_train/mmaction/apis/train.py", line 59, in train_network
_non_dist_train(model, dataset, cfg, validate=validate)
File "/data/download/projects/clover_tpn_train/mmaction/apis/train.py", line 191, in _non_dist_train
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/root/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/mmcv/runner/runner.py", line 358, in run
epoch_runner(data_loaders[i], **kwargs)
File "/root/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/mmcv/runner/runner.py", line 264, in train
self.model, data_batch, train_mode=True, **kwargs)
File "/data/download/projects/clover_tpn_train/mmaction/apis/train.py", line 36, in batch_processor
losses = model(**data)
File "/root/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/root/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/root/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/data/download/projects/clover_tpn_train/mmaction/models/recognizers/base.py", line 39, in forward
return self.forward_train(num_modalities, img_meta, **kwargs)
File "/data/download/projects/clover_tpn_train/mmaction/models/recognizers/TSN3D.py", line 100, in forward_train
x, aux_losses = self.necks(x, gt_label.squeeze())
File "/root/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/data/download/projects/clover_tpn_train/mmaction/models/tenons/necks/tpn.py", line 324, in forward
topdownouts = self.level_fusion_op2(outs)
File "/root/anaconda3/envs/pytorch1.4/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/data/download/projects/clover_tpn_train/mmaction/models/tenons/necks/tpn.py", line 170, in forward
out = torch.cat(out, 1)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 7 and 4 in dimension 3 at /pytorch/aten/src/THC/generic/THCTensorMath.cu:71

Under PyTorch 1.7, running test_video.py raises: RuntimeError: Legacy autograd function with non-static forward method is deprecated.

Environment:

  • CPU: Intel i7-10700K
  • GPU: RTX3090
  • RAM: 64GB
  • OS: Ubuntu 20.04.1 LTS
  • Python 3.6
  • Pytorch 1.7.0 with CUDA 11.0

Background and symptoms:

Since PyTorch versions below 1.3 do not support CUDA 11 and therefore cannot work with 30-series GPUs, I used PyTorch 1.7. When running the demo, the following error is printed:

/home/lmz/anaconda3/envs/py36/bin/python /home/lmz/TPN/test_video.py config_files/sthv2/tsm_tpn.py ckpt/sthv2_tpn.pth
Extracting frames using ffmpeg...
The model and loaded state dict do not match exactly

missing keys in source state_dict: necks.aux_head.convs.conv.weight, necks.aux_head.convs.bn.weight, necks.aux_head.convs.bn.bias, necks.aux_head.convs.bn.running_mean, necks.aux_head.convs.bn.running_var, necks.aux_head.fc.weight, necks.aux_head.fc.bias

Traceback (most recent call last):
File "/home/lmz/TPN/test_video.py", line 148, in
results = inference_recognizer(model, seg_frames)
File "/home/lmz/TPN/test_video.py", line 75, in inference_recognizer
result = model(return_loss=False, rescale=True, **data)
File "/home/lmz/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/lmz/TPN/mmaction/models/recognizers/base.py", line 41, in forward
return self.forward_test(num_modalities, img_meta, **kwargs)
File "/home/lmz/TPN/mmaction/models/recognizers/TSN2D.py", line 154, in forward_test
x = self.segmental_consensus(x)
File "/home/lmz/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/lmz/TPN/mmaction/models/tenons/segmental_consensuses/simple_consensus.py", line 50, in forward
return _SimpleConsensus(self.consensus_type, self.dim)(input)
File "/home/lmz/anaconda3/envs/py36/lib/python3.6/site-packages/torch/autograd/function.py", line 160, in call
"Legacy autograd function with non-static forward method is deprecated. "
RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)

Process finished with exit code 1

I learned that PyTorch 1.3 and later require the forward method to be a static method, so I added the @staticmethod decorator to the forward method in TPN/mmaction/models/tenons/segmental_consensuses/simple_consensus.py, and then got this error:

/home/lmz/anaconda3/envs/py36/bin/python /home/lmz/TPN/test_video.py config_files/sthv2/tsm_tpn.py ckpt/sthv2_tpn.pth
Extracting frames using ffmpeg...
The model and loaded state dict do not match exactly

missing keys in source state_dict: necks.aux_head.convs.conv.weight, necks.aux_head.convs.bn.weight, necks.aux_head.convs.bn.bias, necks.aux_head.convs.bn.running_mean, necks.aux_head.convs.bn.running_var, necks.aux_head.fc.weight, necks.aux_head.fc.bias

Traceback (most recent call last):
File "/home/lmz/TPN/test_video.py", line 148, in
results = inference_recognizer(model, seg_frames)
File "/home/lmz/TPN/test_video.py", line 75, in inference_recognizer
result = model(return_loss=False, rescale=True, **data)
File "/home/lmz/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/lmz/TPN/mmaction/models/recognizers/base.py", line 41, in forward
return self.forward_test(num_modalities, img_meta, **kwargs)
File "/home/lmz/TPN/mmaction/models/recognizers/TSN2D.py", line 154, in forward_test
x = self.segmental_consensus(x)
File "/home/lmz/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'input'

Process finished with exit code 1
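
For reference, PyTorch 1.3+ expects the "new-style" torch.autograd.Function pattern: static forward/backward methods that store state on ctx, invoked via .apply() rather than by instantiating the function. A generic sketch of a mean consensus written in that style (my own illustration, not a drop-in patch for simple_consensus.py):

import torch

class SimpleMeanConsensus(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, dim=1):
        ctx.dim = dim
        ctx.input_shape = x.shape
        return x.mean(dim=dim, keepdim=True)

    @staticmethod
    def backward(ctx, grad_output):
        # spread the incoming gradient evenly over the averaged dimension
        grad = grad_output.expand(ctx.input_shape) / ctx.input_shape[ctx.dim]
        return grad, None                          # one gradient per forward() input

x = torch.randn(2, 8, 174, requires_grad=True)
y = SimpleMeanConsensus.apply(x)                   # new-style Functions are called via .apply()
y.sum().backward()
print(x.grad.shape)                                # torch.Size([2, 8, 174])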

mmcv seems to be for python 3.7

Hi,

I ran python setup.py develop following INSTALL and then ran tools/train_recognizer.py.

I found that there are several syntax errors, which suggests that mmcv is written for Python 3.7. Should I use Python 3.7+? Or could you specify a version of mmcv that runs on Python 3.5.6?

Thank you.

Traceback (most recent call last):
  File "tools/train_recognizer.py", line 4, in <module>
    from mmcv import Config
  File "/home/jdhwang/anaconda3/envs/something/lib/python3.5/site-packages/mmcv-0.5.0-py3.5-linux-x86_64.egg/mmcv/__init__.py", line 3, in <module>
    from .arraymisc import *
  File "/home/jdhwang/anaconda3/envs/something/lib/python3.5/site-packages/mmcv-0.5.0-py3.5-linux-x86_64.egg/mmcv/arraymisc/__init__.py", line 2, in <module>
    from .quantization import dequantize, quantize
  File "/home/jdhwang/anaconda3/envs/something/lib/python3.5/site-packages/mmcv-0.5.0-py3.5-linux-x86_64.egg/mmcv/arraymisc/quantization.py", line 20
    f'levels must be a positive integer, but got {levels}')
                                                         ^
SyntaxError: invalid syntax
