Comments (82)

shuangshuangguo commented on July 21, 2024

I have finished a script for converting the Caffe models to PyTorch. When I use the converted PyTorch model with the tsn-pytorch code, I get the same results as the paper. If you need the PyTorch models right away, please see https://github.com/gss-ucas/caffe2pytorch-tsn

yjxiong commented on July 21, 2024

We don't have all pretrained UCF101/HMDB51 models in the PyTorch format for download. I can convert the Caffe pretrained models in the original TSN codebase to the PyTorch format in the next few weeks.

SmartPorridge commented on July 21, 2024

@utsavgarg extract_gpu uses TVL1 optical flow, which leads to better accuracy. I ran an experiment to confirm that.

SinghGauravKumar commented on July 21, 2024

@Ivorra Installing the right version of Caffe has been a pain. Wondering if @yjxiong or @gss-ucas can provide the Kinetics pretrained models for PyTorch?

yjxiong commented on July 21, 2024

Good point. I will provide the models on UCF101 split 1 for your reference.

nationalflag commented on July 21, 2024

Thanks! I'm really looking forward to it!

shuangshuangguo commented on July 21, 2024

@yjxiong Hi, thank you for your wonderful work!
Could you tell me where to download the pretrained models in PyTorch?

ntuyt commented on July 21, 2024

@yjxiong Hi, Xiong.
I also need the pretrained models in PyTorch.
Thanks so much!

ntuyt commented on July 21, 2024

@gss-ucas Thanks

scenarios commented on July 21, 2024

@yjxiong Thanks for your efforts.
I recently parsed your pretrained caffemodel (UCF101 split 1) into TensorFlow with Google protobuf, and I constructed the RGB stream in TensorFlow with every layer identical to the one in your Caffe train-val prototxt. However, the accuracy is still 10% lower than the Caffe version. (The padding strategy differs between Caffe and TF, but that is properly handled by manual padding in TF.)
I'm wondering if there are any other details I should take care of. Thanks!

BTW, @gss-ucas it seems that max pooling with floor mode is used in your implementation, which is not consistent with the Caffe version. Strange that you can still reproduce the results lol
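
For context, Caffe pools with ceil mode by default while PyTorch's nn.MaxPool2d defaults to floor mode, so a converted network can silently end up with smaller feature maps after each pooling layer. A minimal sketch of the mismatch (the tensor size is illustrative, not taken from the real BN-Inception definition):

import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)  # even spatial size exposes the difference

floor_pool = nn.MaxPool2d(kernel_size=3, stride=2)                 # PyTorch default (floor mode)
ceil_pool = nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True)  # Caffe-style (ceil mode)

print(floor_pool(x).shape)  # torch.Size([1, 64, 27, 27])
print(ceil_pool(x).shape)   # torch.Size([1, 64, 28, 28])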

scenarios commented on July 21, 2024

@victorhcm Actually I'm not fine-tuning. I simply initialize my BN-Inception model in TensorFlow with the parameters released by TSN (it is a caffemodel, which I parsed using protobuf) and do online testing without any training. At the very beginning, the accuracy was only around 0.6 with 3 segments. Then I realized there is a slight difference between Caffe and TensorFlow in padding (both for convolution and pooling). After modifying the padding, the accuracy increased to 71%, which is still 10% lower than the Caffe results. (I tested the same model with TSN's home-brewed Caffe and got 0.81 accuracy with 3 segments.)
I have double-checked every layer and they are for sure identical to the model definition in the train_val prototxt in TSN (otherwise there would be errors when loading the parameters).
Still confused why..

scenarios commented on July 21, 2024

@victorhcm And for testing, I resize the frame to 256x340 and center-crop to 224x224. The mean is subtracted, and I also convert the RGB image to BGR format for consistency.
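
For reference, a minimal sketch of that test-time preprocessing with OpenCV (frames decoded with cv2 are already BGR; the 104/117/123 channel means are the ones mentioned later in this thread, and the exact values should be treated as an assumption):

import cv2
import numpy as np

def preprocess(frame_bgr):
    # resize to 256x340 (cv2.resize takes width, height), then center-crop 224x224
    frame = cv2.resize(frame_bgr, (340, 256)).astype(np.float32)
    top = (256 - 224) // 2
    left = (340 - 224) // 2
    crop = frame[top:top + 224, left:left + 224]
    mean = np.array([104.0, 117.0, 123.0], dtype=np.float32)  # assumed BGR means
    return crop - mean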

SmartPorridge commented on July 21, 2024

@yjxiong I used the default commands you provided to train my RGBDiff, RGB, and Flow models with tsn-pytorch. Could you please tell me whether they were initialized on ImageNet?

yjxiong commented on July 21, 2024

@JiqiangZhou Yes.

SmartPorridge commented on July 21, 2024

@yjxiong Thank you! I will read the code carefully.

3DMM-ICME2023 commented on July 21, 2024

I ran the default command and obtained a slightly lower performance than the paper (85.12 vs. 85.7) on the RGB stream. The model is available at http://pan.baidu.com/s/1eSvo8BS. Hope it helps.

utsavgarg commented on July 21, 2024

@yjxiong I also trained the models using the default settings for split 1 of the UCF-101 dataset and am getting lower performance than reported. Below are the numbers I got:

RGB - 85.57%
Flow - 84.26%
RGB + Flow - 90.44%

The main difference seems to come from the Flow stream; what could be the reason for this?

yjxiong commented on July 21, 2024

@utsavgarg
Please follow the instructions in Caffe TSN to extract the optical flow.

#30

utsavgarg commented on July 21, 2024

@yjxiong I had done that. I extracted optical flow using the extract_optical_flow.sh script included with Caffe TSN, but I used extract_cpu instead of extract_gpu in build_of.py. Would that cause this difference in performance?

yjxiong commented on July 21, 2024

Yes, that's the problem. Always use extract_gpu. The optical flow algorithm you need is not available in extract_cpu.
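
For anyone who cannot run extract_gpu, the algorithm it implements is TVL1 optical flow, as noted earlier in this thread. A rough CPU sketch with opencv-contrib (cv2.optflow only exists in the contrib build, and the clipping bound of 20 follows the TSN tools' convention, which should be treated as an assumption):

import cv2
import numpy as np

tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()

def flow_to_images(prev_gray, next_gray, bound=20.0):
    # dense TVL1 flow between two grayscale frames, shape (H, W, 2)
    flow = tvl1.calc(prev_gray, next_gray, None)
    # clip to [-bound, bound] and quantize each component to an 8-bit image
    flow = np.clip(flow, -bound, bound)
    return ((flow + bound) * (255.0 / (2.0 * bound))).astype(np.uint8)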

utsavgarg commented on July 21, 2024

Okay, thanks.

Tord-Zhang commented on July 21, 2024

Did anyone get the PyTorch pretrained models on ActivityNet or Kinetics?
@yjxiong @nationalflag @gss-ucas @ntuyt @scenarios

Fairasname commented on July 21, 2024

Hello,
are there any available models trained on UCF101?

Thanks!

shuangshuangguo commented on July 21, 2024

@Ivorra
Please see https://github.com/gss-ucas/caffe2pytorch-tsn

Fairasname commented on July 21, 2024

@gss-ucas Thanks for kindly answering. I am having trouble with "test_models.py": after loading your model "ucf101_rgb.pth" it seems that some keys do not exist. With the example command:

python test_models.py ucf101 RGB <ucf101_rgb_val_list> ucf101_rgb.pth --arch BNInception --save_scores <score_file_name>

I get the error:

/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py:514: UserWarning: src is not broadcastable to dst, but they have the same number of elements. Falling back to deprecated pointwise behavior.
own_state[name].copy_(param)
Traceback (most recent call last):
File "test_models.py", line 54, in <module>
print("model epoch {} best prec@1: {}".format(checkpoint['epoch'], checkpoint['best_prec1']))
KeyError: 'epoch'

And if I comment out that line, another KeyError appears:

/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py:514: UserWarning: src is not broadcastable to dst, but they have the same number of elements. Falling back to deprecated pointwise behavior.
own_state[name].copy_(param)
Traceback (most recent call last):
File "test_models.py", line 56, in <module>
base_dict = {'.'.join(k.split('.')[1:]): v for k,v in list(checkpoint['state_dict'].items())}
KeyError: 'state_dict'

Thanks!
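
For what it's worth, the converted .pth files appear to be bare state_dicts, while test_models.py expects a full training checkpoint containing 'epoch', 'best_prec1', and 'state_dict'. A hedged sketch of loading code that accepts either layout (the function and variable names are illustrative, not the repository's API):

import torch

def load_weights(model, path):
    checkpoint = torch.load(path, map_location='cpu')
    if isinstance(checkpoint, dict) and 'state_dict' in checkpoint:
        # full training checkpoint saved by main.py
        print("model epoch {} best prec@1: {}".format(
            checkpoint.get('epoch', '?'), checkpoint.get('best_prec1', '?')))
        state_dict = checkpoint['state_dict']
    else:
        # bare state_dict, e.g. a converted Caffe model
        state_dict = checkpoint
    # drop a leading "module." prefix left behind by nn.DataParallel, if any
    state_dict = {(k[len('module.'):] if k.startswith('module.') else k): v
                  for k, v in state_dict.items()}
    model.load_state_dict(state_dict)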

poppingcode commented on July 21, 2024

I wonder how to fuse the two streams. Can anyone help me? Thanks

Fairasname commented on July 21, 2024

@gss-ucas Alright, I had not noticed the test_models.py file in your repository. I just substituted it with the one from this repository.
Moreover, the accuracy for UCF101 - Fold 1 works as it should, given your provided converted models and the reference at the official project page:

  • RGB accuracy: 86.013%
  • Flow accuracy: 87.698%
  • Final accuracy: 93.798%

Thanks!

Fairasname commented on July 21, 2024

@poppingcode I followed the instructions at the original TSN repository. I just copied the eval_scores.py file provided there, together with the pyActionRecog folder, which is needed as a dependency.

Hope it works fine for you!
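
For reference, the fusion in eval_scores.py boils down to a weighted average of the per-stream scores. A minimal sketch with NumPy, assuming each saved file holds a (num_videos, num_classes) 'scores' array and a 'labels' array (the exact layout test_models.py writes may differ; the 1:1.5 RGB:Flow weighting follows the TSN paper, and the file names are placeholders):

import numpy as np

rgb = np.load('rgb_scores.npz', allow_pickle=True)
flow = np.load('flow_scores.npz', allow_pickle=True)

# weighted sum of class scores, then top-1 accuracy against the labels
fused = 1.0 * np.asarray(rgb['scores']) + 1.5 * np.asarray(flow['scores'])
accuracy = float(np.mean(fused.argmax(axis=1) == np.asarray(rgb['labels'])))
print('fused accuracy: {:.2%}'.format(accuracy))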

SinghGauravKumar commented on July 21, 2024

@gss-ucas Hi, did you manage to convert the Kinetics pretrained models (as shown at http://yjxiong.me/others/kinetics_action/) to PyTorch too?

Fairasname commented on July 21, 2024

Maybe this Caffe --> PyTorch model converter is worth looking at:

https://github.com/marvis/pytorch-caffe

shuangshuangguo commented on July 21, 2024

@SinghGauravKumar You can follow the instructions to convert, as in https://github.com/gss-ucas/caffe2pytorch-tsn

Modifying some code should make it work.

sipan17 commented on July 21, 2024

@yjxiong Is there any pretrained model on the HMDB51 dataset?

shuangshuangguo commented on July 21, 2024

@sipan17 Please see https://github.com/shuangshuangguo/caffe2pytorch-tsn

sipan17 commented on July 21, 2024

@shuangshuangguo Thank you, will try that.

TiJoy commented on July 21, 2024

> I ran the default command and obtained a slightly lower performance than the paper (85.12 vs. 85.7) on the RGB stream. The model is available at http://pan.baidu.com/s/1eSvo8BS. Hope it helps.

I have a question: there is an error when I decompress my ucf101_bninception__rgb_checkpoint.pth.tar:
tar: This does not look like a tar archive tar: Skipping to next header tar: Exiting with failure status due to previous errors
What should I do?

yjxiong commented on July 21, 2024

@TiJoy

You don't need to uncompress it. ".pth.tar" is just the extension PyTorch uses for model files.

TiJoy commented on July 21, 2024

> You don't need to uncompress it. ".pth.tar" is just the extension PyTorch uses for model files.

Thank you, I see that I just need to delete the ".tar" in its file name.

linshuheng6 commented on July 21, 2024

@Ivorra Hello, I have tried the converted models from Caffe, but I only got 75% accuracy on ucf101-split1 with the RGB model alone. Could you share your args with me? And how did you extract frames from the videos? I just used the ffmpeg command.

Thank you!

linshuheng6 commented on July 21, 2024

@liu666666 Hello, I have tried your model but I only get 76.13 for RGB. I think there is something wrong with my code. Could you please give me your args?

cbasemaster commented on July 21, 2024

Hello, I used the Kinetics pretrained model shared above to fine-tune on UCF101 split 1, but the accuracy is still low:

Testing Results: Prec@1 56.732 Prec@5 84.967 Loss 2.49515

while training reaches

Loss 0.2026 (0.3246) Prec@1 93.750 (90.549) Prec@5 98.438 (97.904)

after 80 epochs with minibatches of size 64.

Do you know what has gone wrong here?

(screenshot of the training log)

imnotk commented on July 21, 2024

> @Ivorra Hello, I have tried the converted models from Caffe, but I only got 75% accuracy on ucf101-split1 with the RGB model alone. Could you share your args with me? And how did you extract frames from the videos? I just used the ffmpeg command.
> Thank you!

I can only get 78% RGB accuracy using split 1 on UCF101. How do you solve it? It's very weird.

linshuheng6 commented on July 21, 2024

> I can only get 78% RGB accuracy using split 1 on UCF101. How do you solve it? It's very weird.

I have not changed the data sampling in the original code, where the input is shaped [b*crops, c, h, w]. What do you mean by [b, t, c, h, w]?
I only found input_mean = [104, 117, 123]; I could not find input_std. Could you please provide it to me? Thank you!

linshuheng6 commented on July 21, 2024

@imnotk By the way, the trained model gives 97.8% accuracy on the training set of ucf101-split1, so the model and the basic parameters in the code do work. I still have not found the reason why it performs badly on the test set...

nanhui69 commented on July 21, 2024

@liu666666 Did you extract the UCF101 RGB frames with 'bash scripts/extract_optical_flow.sh SRC_FOLDER OUT_FOLDER NUM_WORKER'? When I run it, I encounter lots of problems. Could you share the RGB-frame dataset of UCF101?

nanhui69 commented on July 21, 2024

@linshuheng6 Did you reproduce the TSN project successfully? Could I get in touch with you via QQ or some other way?

linshuheng6 commented on July 21, 2024

@nanhui69 I used the ffmpeg command to extract the frames from the videos, not the script provided by the author. I have not reproduced TSN training; I just used the source code and ran the test script. You can contact me by sending an email to [email protected].

Shumpei-Kikuta commented on July 21, 2024

> I can only get 78% RGB accuracy using split 1 on UCF101. How do you solve it? It's very weird.

@imnotk Have you solved this problem?
I got the same 78% accuracy for RGB using split 1 on UCF101.

linshuheng6 commented on July 21, 2024

@Shumpei-Kikuta I have solved this problem by extracting frames from the videos with OpenCV instead of ffmpeg.
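
For anyone in the same situation, a minimal OpenCV frame-extraction sketch along those lines (the img_00001.jpg naming is an assumption about what the data loader expects):

import os
import cv2

def extract_frames(video_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        # write frames as img_00001.jpg, img_00002.jpg, ...
        cv2.imwrite(os.path.join(out_dir, 'img_{:05d}.jpg'.format(idx)), frame)
    cap.release()
    return idx  # total frame count, useful later for the file lists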

Shumpei-Kikuta commented on July 21, 2024

@linshuheng6 Thank you for sharing.
I got stuck extracting frames as that repository shows, so I've used this one: http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data
Anyway, thank you!

imnotk commented on July 21, 2024

> @imnotk Have you solved this problem?
> I got the same 78% accuracy for RGB using split 1 on UCF101.

No, I always get stuck at 78% with a ResNet in PyTorch.

uname0x96 commented on July 21, 2024

Hi @yjxiong, I tried to convert the PyTorch model to an ONNX model, but I got this error: RuntimeError: Attempted to trace SegmentConsensus, but tracing of legacy functions is not supported. Is there any solution for this?
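
The error happens because SegmentConsensus is written as an old-style autograd function, which the ONNX tracer cannot handle. One possible workaround (an assumption, not an official fix) is to swap in a plain module before exporting, since average consensus is just a mean over the segment dimension:

import torch
import torch.nn as nn

class AvgConsensus(nn.Module):
    # traceable stand-in for the legacy average SegmentConsensus
    def __init__(self, dim=1):
        super(AvgConsensus, self).__init__()
        self.dim = dim

    def forward(self, x):
        return x.mean(dim=self.dim, keepdim=True)

# hypothetical usage before torch.onnx.export:
# model.consensus = AvgConsensus(dim=1)
# torch.onnx.export(model, dummy_input, 'tsn.onnx')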

YuLengChuanJiang commented on July 21, 2024

@imnotk @Shumpei-Kikuta @linshuheng6 Hello, I want to run main.py, but I don't know how it loads the frames that were extracted before. Does <ucf101_flow_train_list> refer to the path of trainlist01.txt? Thanks for any help!

shijubushiju commented on July 21, 2024

@YuLengChuanJiang Hello, I am also working on this; can we get in touch and discuss?

Shumpei-Kikuta commented on July 21, 2024

@YuLengChuanJiang @shijubushiju Have you extracted optical flow from the videos and made the lists?
If not, you should follow this repository to do so.
That repo is the original version of TSN, written in Caffe.

shijubushiju commented on July 21, 2024

@Shumpei-Kikuta If I want to make my own dataset for action recognition, what kind of list should I generate?

Shumpei-Kikuta commented on July 21, 2024

@shijubushiju You should follow the instructions here.
Specifically, each line of the trainlist file should contain the frame path, the video frame count, and the video's ground-truth class.

shijubushiju commented on July 21, 2024

@Shumpei-Kikuta Thank you. I have carefully read the relevant documents and am ready to give it a try. Thank you very much for your advice.

shijubushiju commented on July 21, 2024

@YuLengChuanJiang My email is [email protected]; we can discuss these issues there.

Shumpei-Kikuta commented on July 21, 2024

@YuLengChuanJiang The list you get after extracting optical flow is somewhat different from the list you mentioned.
Its columns look like: frame path, video frame count, and the video's ground-truth class.
If you can't make the lists, I recommend using the Docker image the repo owner created.

shijubushiju commented on July 21, 2024

@Shumpei-Kikuta I ran the PyTorch code, and I have now computed the RGB frames and the optical flow images using other methods. For Caffe alone I ran the list-generation script command bash scripts/build_file_list.sh ucf101 FRAME_PATH. Can I generate a file list suitable for PyTorch this way?

Shumpei-Kikuta commented on July 21, 2024

@shijubushiju You need to build Caffe to use scripts/build_file_list.sh.
Otherwise, you need to write the appropriate list file yourself, whose columns are the frame path, the video frame count, and the video's ground-truth class.
It is not hard.

shijubushiju commented on July 21, 2024

@Shumpei-Kikuta
My frame path is '/home/dl123/data/RGB/action/kick/',
the video path is '/home/dl123/data/video/action/kick.avi',
and the optical flow image paths are '/home/dl123/data/flow/action/kick_x/' and '/home/dl123/data/flow/action/kick_y/'.
I have 6 types of action, and kick is one of them; its frame count is 123. If I write my own list, is this right:
/home/dl123/data/RGB/action/kick/ 123 6

Shumpei-Kikuta commented on July 21, 2024

@shijubushiju It seems right to me.
Have you run the program?
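
For reference, a minimal sketch that writes such a list file (the directory layout and the zero-based class numbering are assumptions; adjust them to how the dataset is actually organized):

import os

def build_file_list(frame_root, classes, out_path):
    with open(out_path, 'w') as f:
        for label, name in enumerate(classes):  # zero-based class indices
            class_dir = os.path.join(frame_root, name)
            for video in sorted(os.listdir(class_dir)):
                video_dir = os.path.join(class_dir, video)
                n_frames = len(os.listdir(video_dir))
                # each line: frame directory, frame count, ground-truth class
                f.write('{} {} {}\n'.format(video_dir, n_frames, label))

# hypothetical usage for the 6-class setup discussed above:
# build_file_list('/home/dl123/data/RGB/action', ['kick', ...], 'train_list.txt')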

shijubushiju commented on July 21, 2024

@Shumpei-Kikuta
Not yet. Since I failed to install mmaction and denseflow when extracting the optical flow, I tried other methods to get the optical flow images, but I ran into difficulties when generating the train list in the next step. I am trying to solve this problem. Do you have a good way to help me?

Shumpei-Kikuta commented on July 21, 2024

@shijubushiju I understand.
Just try to use the Docker image the repo owner provides.
You should extract the optical flow and make the lists as the README says.
If you don't follow the optical flow extraction of this repo, you won't get comparable accuracy.

shijubushiju commented on July 21, 2024

@Shumpei-Kikuta
Do you mean that I should use the Docker image provided by the author to generate my list? I noticed that there is a build_file_list.py file in the tools folder of the Caffe code. Can I use it to generate my training list?

Shumpei-Kikuta commented on July 21, 2024

@shijubushiju Sorry, I mean you should use the Docker image.

shijubushiju commented on July 21, 2024

@Shumpei-Kikuta
I have successfully installed Docker and nvidia-docker, and pulled the image with the command docker pull bitxiong/tsn.
How can I use it for my own dataset?

Shumpei-Kikuta commented on July 21, 2024

@shijubushiju Congratulations!
Just run the Docker container and enter it by executing docker exec -it [container id] /bin/bash.
I think you should mount your local files into the container with the -v option when running it.
Once inside, you can follow the README.

vivva commented on July 21, 2024

> @poppingcode I followed the instructions at the original TSN repository. I just copied the eval_scores.py file provided there, together with the pyActionRecog folder, which is needed as a dependency.
> Hope it works fine for you!

Have you changed anything? I did as you said, but it reports an error.
This is my print:

score_npz_files: [119547037146038801333356, 119547037146038801333356]

The error:

Traceback (most recent call last):
File "tools/eval_scores.py", line 33, in <module>
score_list = [x['scores'][:, 0] for x in score_npz_files] #init
File "tools/eval_scores.py", line 33, in <listcomp>
score_list = [x['scores'][:, 0] for x in score_npz_files] #init
TypeError: 'int' object is not subscriptable

Thank you very much!!

poppingcode commented on July 21, 2024

@vivva Can you print x['scores'] and check whether it can be indexed?
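
A quick way to sanity-check one of the saved score files (the path is a placeholder); if the elements of score_npz_files are plain integers, as the print above suggests, the wrong objects were collected instead of loaded .npz files:

import numpy as np

npz = np.load('score_file.npz', allow_pickle=True)
print(npz.files)            # expect something like ['scores', 'labels']
print(type(npz['scores']))  # should be an ndarray, not an int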

nishanthrachakonda commented on July 21, 2024

> KeyError

@Fairasname How did you fix this error?

zhudongwork commented on July 21, 2024

> Hi @yjxiong, I tried to convert the PyTorch model to an ONNX model, but I got this error: RuntimeError: Attempted to trace SegmentConsensus, but tracing of legacy functions is not supported. Is there any solution for this?

I had the same problem.
