
Semantics-Assisted Video Captioning Model Trained with Scheduled Sampling Strategy


Table of Contents

  1. Description
  2. Dependencies
  3. Manual
  4. Results
    1. Comparison on Youtube2Text
    2. Comparison on MSR-VTT
  5. Data
  6. Results and Data for the Final Version of the Paper
  7. Citation

Description

This repo contains the code of the Semantics-Assisted Video Captioning Model described in the paper "A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling", published in Frontiers in Robotics and AI.

We propose three ways to improve the video captioning model. First, we use both spatial features and dynamic spatio-temporal features as inputs to the semantic detection network in order to generate meaningful semantic features for videos. Second, we propose a scheduled sampling strategy that gradually shifts training from a teacher-guided manner toward a more self-teaching manner. Finally, the ordinary logarithm probability loss function is leveraged by sentence length so that the inclination to generate short sentences is alleviated. Our model achieves state-of-the-art results on the Youtube2Text dataset and is competitive with the state-of-the-art models on the MSR-VTT dataset.
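
As a rough illustration of the scheduled sampling idea (a minimal sketch, not the code in this repository; the function names and the linear decay schedule are assumptions), the decoder is fed the ground-truth previous word with a probability that decays over training, and its own sampled word otherwise:

```python
# Minimal scheduled-sampling sketch: the chance of teacher forcing decays with
# the epoch, so the decoder gradually learns from its own predictions.
import numpy as np

def teacher_forcing_prob(epoch, total_epochs, floor=0.75):
    """Probability of feeding the ground-truth previous word (illustrative linear decay)."""
    return max(floor, 1.0 - epoch / float(total_epochs))

def next_decoder_input(gt_word, sampled_word, epoch, total_epochs, rng=np.random):
    """Choose the word fed to the decoder at the next time step."""
    if rng.rand() < teacher_forcing_prob(epoch, total_epochs):
        return gt_word       # teacher-guided
    return sampled_word      # self-teaching: the model's own argmax/multinomial sample
```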

The "overall structure" figure shows the overall architecture of our model, and the "captions" figure shows some example captions generated by the model.


If you need a newer and more powerful model, please refer to Delving-Deeper-into-the-Decoder-for-Video-Captioning.


Dependencies

  • Python 3.6
  • TensorFlow 1.13
  • NumPy
  • sklearn
  • pycocoevalcap (Python 3)

Manual

  1. Make sure you have installed all the required packages.
  2. Download pycocoevalcap and place it alongside the msrvtt, msvd and tagging folders.
  3. Download files in the Data section.
  4. cd path_to_directory_of_model; mkdir saves
  5. run_model.sh is used for training models and test_model.sh for testing them. Specify the GPU to use by modifying the CUDA_VISIBLE_DEVICES value, and specify the needed data paths by modifying the corpus, ecores, tag and ref values. Words are sampled with the argmax strategy if argmax is 1 and with the multinomial strategy if argmax is 0. name is the name you give to the model. test is the path of the saved model to be tested; do not set test if you want to train a model.
  6. After configuring the bash file, run bash run_model.sh for training or bash test_model.sh for testing (an illustrative configuration is sketched after this list).
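
For reference, the configurable values in run_model.sh might look roughly like the following; every path and the model name below are placeholders, and the actual script may organize these settings differently:

```bash
# Illustrative values only -- point the paths at your own copies of the data.
export CUDA_VISIBLE_DEVICES=0          # index of the GPU to use
corpus=path/to/corpus_file             # caption corpus (placeholder path)
ecores=path/to/resnext_eco_features    # video features (placeholder path)
tag=path/to/semantic_tags              # semantic features (placeholder path)
ref=path/to/references                 # ground-truth references (placeholder path)
argmax=1                               # 1: argmax sampling, 0: multinomial sampling
name=my_model                          # name given to this training run
# leave `test` unset for training; set it to a saved checkpoint path for testing
```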

Results

Comparison on Youtube2Text

In the tables below, B-4, C, M and R denote BLEU-4, CIDEr, METEOR and ROUGE-L, respectively; a dash marks entries that were not listed.

| Model | B-4 | C | M | R | Overall |
|---|---|---|---|---|---|
| LSTM-E | 45.3 | - | 31.0 | - | - |
| h-RNN | 49.9 | 65.8 | 32.6 | - | - |
| aLSTMs | 50.8 | 74.8 | 33.3 | - | - |
| SCN | 51.1 | 77.7 | 33.5 | - | - |
| MTVC | 54.5 | 92.4 | 36.0 | 72.8 | 0.9198 |
| ECO | 53.5 | 85.8 | 35.0 | - | - |
| SibNet | 54.2 | 88.2 | 34.8 | 71.7 | 0.8969 |
| Our Model | 61.8 | 103.0 | 37.8 | 76.8 | 1.0000 |

Comparison on MSR-VTT

| Model | B-4 | C | M | R | Overall |
|---|---|---|---|---|---|
| v2t_navigator | 40.8 | 44.8 | 28.2 | 60.9 | 0.9325 |
| Aalto | 39.8 | 45.7 | 26.9 | 59.8 | 0.9157 |
| VideoLAB | 39.1 | 44.1 | 27.7 | 60.6 | 0.9140 |
| MTVC | 40.8 | 47.1 | 28.8 | 60.2 | 0.9459 |
| CIDEnt-RL | 40.5 | 51.7 | 28.4 | 61.4 | 0.9678 |
| SibNet | 40.9 | 47.5 | 27.5 | 60.2 | 0.9374 |
| HACA | 43.4 | 49.7 | 29.5 | 61.8 | 0.9856 |
| TAMoE | 42.2 | 48.9 | 29.4 | 62.0 | 0.9749 |
| Our Model | 43.8 | 51.4 | 28.9 | 62.4 | 0.9935 |

Data

MSVD

  • MSVD tag index2word and word2index mappings (ExternalRepo)
    • We use the same word-index mapping for semantic tags as the code in this link.

MSRVTT

  • MSR-VTT Dataset:

    • train_val_test_annotation.zip (GoogleDrive)
      • SHA-256: ce2d97dd82d03e018c6f9ee69c96eb784397d1c83f734fdb8c17aafa5e27da31
    • msr-vtt-v1.part1.rar (GoogleDrive)
      • SHA-256: 3445e0d1bffda3739110dfcf14182b63222731af8a4d7153f0ac09dbec39a0d3
    • msr-vtt-v1.part2.rar (GoogleDrive)
      • SHA-256: b550997526272ab68a42f1bd93315aa2bbb521c71f33d0cb922fbbfb86f15aae
    • msr-vtt-v1.part3.rar (GoogleDrive)
      • SHA-256: debbd0e535e77d9927ffb375299c08990519e22ba7dac542b464b70d440ef515
  • Data and Models for both MSVD and MSR-VTT

    • data.zip
    • SHA-256: fadd721eaa0f13aff7c3505e4784a003514c33ffa5a934a9dcf13955285df11f

ECO

  • Source Code: GitHub.
  • ECO_full_kinetics.caffemodel (GoogleDrive)
    • MD5 31ed18d5eadfd59cb65b7dcdadc310b4
    • SHA-1 b749384d2dac102b8035965566e3030fce465c20
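
To check that the downloads are intact, the digests listed above can be compared with locally computed ones, for instance with this small Python helper (the file name in the example is a placeholder):

```python
# Compute a file's digest and compare it with the value listed in the README.
import hashlib

def file_digest(path, algo="sha256", chunk_size=1 << 20):
    """Return the hex digest of a file; algo can be "sha256", "sha1" or "md5"."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

# Example (placeholder file name):
# print(file_digest("data.zip") ==
#       "fadd721eaa0f13aff7c3505e4784a003514c33ffa5a934a9dcf13955285df11f")
```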

Results and Data for the Final Version of the Paper (Updating)

Results

  1. MSVD Results (figure)

  2. MSR-VTT Results (figure)

Data and Models

GoogleDrive

  • SHA256: d2a731794ef1bc90c9ccd6c7fe5e92fa7ad104f9e9188ac751c984b23d3a939b

Citation

@ARTICLE{10.3389/frobt.2020.475767,
    AUTHOR={Chen, Haoran and Lin, Ke and Maye, Alexander and Li, Jianmin and Hu, Xiaolin},   
    TITLE={A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling},      
    JOURNAL={Frontiers in Robotics and AI},      
    VOLUME={7},      
    PAGES={129},     
    YEAR={2020},      
    URL={https://www.frontiersin.org/article/10.3389/frobt.2020.475767},       
    DOI={10.3389/frobt.2020.475767},      
    ISSN={2296-9144},   
    ABSTRACT={Given the features of a video, recurrent neural networks can be used to automatically generate a caption for the video. Existing methods for video captioning have at least three limitations. First, semantic information has been widely applied to boost the performance of video captioning models, but existing networks often fail to provide meaningful semantic features. Second, the Teacher Forcing algorithm is often utilized to optimize video captioning models, but during training and inference, different strategies are applied to guide word generation, leading to poor performance. Third, current video captioning models are prone to generate relatively short captions that express video contents inappropriately. Toward resolving these three problems, we suggest three corresponding improvements. First of all, we propose a metric to compare the quality of semantic features, and utilize appropriate features as input for a semantic detection network (SDN) with adequate complexity in order to generate meaningful semantic features for videos. Then, we apply a scheduled sampling strategy that gradually transfers the training phase from a teacher-guided manner toward a more self-teaching manner. Finally, the ordinary logarithm probability loss function is leveraged by sentence length so that the inclination of generating short sentences is alleviated. Our model achieves better results than previous models on the YouTube2Text dataset and is competitive with the previous best model on the MSR-VTT dataset.}
}


semantics-assistedvideocaptioning's Issues

ECO features

Thank you for sharing your amazing work.

I need to extract ECO features only. Could you tell me how to do that? Specifically, I just need to extract the ECO features of videos. Do I have to run all the models of ECO in its GitHub repository, or what?

Tagging checkpoint

Hello,

How can I find the MSVD equivalent of the checkpoint referred to at line 15 of tagging/test.py
(saver.restore(sess, './saves/msrvtt_tag_model_1000_resnext_eco.ckpt'))?

How to generate the pkl files in the Data section?

Thank you for kindly sharing your code on GitHub. Could you tell me how to generate the pkl files in the Data section, that is, how to preprocess the two datasets MSVD and MSR-VTT? Thanks for your attention!

Questions about extracting features with ResNeXt101 and ECO, and about the two npy files used to train the semantic detection network

First of all, many thanks to the author; your work and open-source code have been very helpful to me. While studying them I still have some questions:
1. In train_tag_net.py under the Semantics-AssistedVideoCaptioning-master/tagging folder, the command-line arguments include
(1) msvd_resnext_eco.npy: video features of the MSVD dataset --> 1970x3584
(2) msvd_tag_gt_4_msvd.npy: ground-truth semantic annotations of MSVD (1970x300), obtained by selecting 300 words from MSVD and tagging MSVD with them
(3) msrvtt_resnext_eco.npy: video features of the MSR-VTT dataset --> 10000x3584
(4) msrvtt_tag_gt_4_msrvtt.npy: ground-truth semantic annotations of MSR-VTT (10000x300), obtained by selecting 300 words from MSR-VTT and tagging MSR-VTT with them
Is my understanding above correct?
In addition, in the Data --> MSVD section, besides msvd_tag_gt_4_msvd.npy there is also an msrvtt_tag_gt_4_msvd.npy (see the screenshot), whose shape is 10000x300. Is this file the ground-truth semantic annotation of MSR-VTT made with the 300 MSVD words?
Below it there is the sentence "The previous two files are used to train the tagging network." Does that mean these two files are used to build a semantic detection network for the MSVD dataset? But in train_tag_net.py, msvd_tag_gt_4_msvd.npy and msrvtt_tag_gt_4_msrvtt.npy are used to train a single, unified semantic detection network. What is going on here?

2. When extracting ECO features, a caffemodel is used. In net = caffe.Net(model_file, model_def_file, caffe.TEST), model_def_file is the ECO_full_kinetics.caffemodel you provided, but what about model_file? I read online that it should be a deploy.prototxt file, yet it is not among the files you provided. Since I am not familiar with Caffe, I am not sure what is going on; please advise.

3. I noticed that generate_res_feat.py, the file that produces the ResNeXt features, finally produces a 1970x32x2048 tensor (for MSVD) and writes it to an npy file, whereas in your paper these features are average-pooled along the spatial dimension (i.e., the dimension of size 32), finally giving 1970x2048 features. Is the average pooling done with tf.layers.average_pooling3d?

4. In the paper, the video feature is formed by stacking E_i (the dynamic feature of video i) onto R_i (the static feature of video i), giving a 3584-dimensional feature. In your code, is the 3584-dimensional feature of each video eco+resnext (1536+2048) or resnext+eco (2048+1536)? (A small pooling and concatenation sketch follows this issue.)

5. Sorry for raising an issue in Chinese on GitHub; since there are quite a few questions, I would appreciate any guidance. Thank you.
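
For items 3 and 4 above, here is a minimal numpy sketch of the kind of pooling and concatenation the question describes; it is not taken from the repository, and the array contents, the use of a plain mean, and the concatenation order are all assumptions:

```python
# Average-pool per-segment ResNeXt features and concatenate them with ECO
# features; shapes follow the numbers quoted in the question (MSVD).
import numpy as np

resnext = np.random.rand(1970, 32, 2048).astype(np.float32)  # placeholder ResNeXt features
eco = np.random.rand(1970, 1536).astype(np.float32)          # placeholder ECO features

resnext_pooled = resnext.mean(axis=1)                         # (1970, 2048): mean over the 32 segments
video_feats = np.concatenate([resnext_pooled, eco], axis=1)   # (1970, 3584); the order is an assumption
assert video_feats.shape == (1970, 3584)
```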

About test

Hello dear author:
When I test the model using the method you gave, why does it still start retraining?

Where is the sentence length loss reflected in the code?

Thank you very much for open-sourcing the code. I read your paper carefully, reproduced the code, and learned a lot. However, I still have some questions:
300 words are selected as the training labels of the semantic branch, and the extracted semantic features are mapped to the range 0-1.
Question 1: Are the semantic features element-wise multiplied with the LSTM's visual input, the previous word embedding, and the previous hidden state h(t-1), respectively, to achieve semantic fusion?
Question 2: The paper mentions manually selecting k words; what is the selection criterion? Why 300 words, and how are these 300 words found? (In your code, I removed scheduled sampling and used only the semantic branch, and found a large improvement.)
Question 3 (the most puzzling one): Where exactly is the sentence-length-related loss function mentioned in the paper reflected in the code? I read the source code carefully but could not find it. Please advise; many thanks! (A generic sketch of such a loss follows this issue.)

Below are the experiments I ran on top of your source code (I could not find the sentence loss code, so I could not ablate it for comparison):
Base (LSTM only), Base + scheduled sampling, and Base + semantic, with unchanged hyperparameters; the 50-epoch results are as follows:

| MSR-VTT | B-4 | C | M | R | Overall | Time |
|---|---|---|---|---|---|---|
| BASE | 40.3 | 41.3 | 26.1 | 60.1 | 0.896 | 19epo/3900s/10478s |
| BASE+Samp | 39.1 | 41.8 | 25.9 | 60.1 | 0.915 | 22epo/4500s/10210s |
| BASE+Semantic | 44.0 | 50.2 | 28.4 | 62.4 | 0.948 | 50epoch/11113.7s |

The semantic branch makes a very large difference.
I hope to get your help with the three questions above, especially Question 3.
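
For readers with the same question, here is a minimal sketch, in TF 1.x style with illustrative tensor names, of one common way to write a log-probability loss leveraged by sentence length; it is not taken from, and not claimed to match, this repository's code:

```python
# Cross-entropy summed over each caption and divided by its length, so longer
# captions are not penalized simply for containing more words.
import tensorflow as tf  # TensorFlow 1.x

def length_leveraged_loss(logits, targets, lengths):
    """logits: [batch, time, vocab]; targets: [batch, time]; lengths: [batch]."""
    xent = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=targets, logits=logits)                           # [batch, time], -log p(w_t)
    mask = tf.sequence_mask(lengths, maxlen=tf.shape(targets)[1], dtype=tf.float32)
    per_sentence = tf.reduce_sum(xent * mask, axis=1)            # sum over the valid words
    per_sentence = per_sentence / tf.cast(lengths, tf.float32)   # leverage by sentence length
    return tf.reduce_mean(per_sentence)
```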

msrvtt_resnext_eco feats

Thank you very much for your code! It has been very useful to me!
But when I attempt to load the features in the msrvtt_resnext_eco.npy file from the link included in the README, I get ValueError: cannot reshape array of size 13238145 into shape (10000,3584). Is it the correct file?

Question about MSR-VTT.

Hey man, thanks for your excellent work! Could you please tell me what resolution you use for MSR-VTT?

Nan Values Generated by the tagging network

Hi, I tried reproducing your results, but the files generated by test.py in the tagging module contain ndarrays with NaN values.

I guess that where the tagging network passes keep_prob, you actually meant rate.
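
For context, the two TF 1.x dropout APIs parameterize dropout in opposite ways, which is the kind of mix-up the issue suspects; a minimal illustration with a placeholder tensor:

```python
# keep_prob is the probability of KEEPING a unit, while rate is the probability
# of DROPPING it, so keep_prob=0.8 corresponds to rate=0.2.
import tensorflow as tf  # TensorFlow 1.x

x = tf.placeholder(tf.float32, [None, 512])           # placeholder activations
y1 = tf.nn.dropout(x, keep_prob=0.8)                   # keeps ~80% of units
y2 = tf.layers.dropout(x, rate=0.2, training=True)     # drops ~20% of units (same effect)
```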

implementation detail

Hey @WingsBrokenAngel, nice work; thanks for making the code public for further research.

I have a query regarding the implementation: since MSR-VTT has 20 captions for each video, how do you deal with them during training?
Did you take a random caption per video in each epoch, or did you just repeat the features for each caption?
Looking at the implementation, I think you took all the captions with repeated features. Am I right? (A small sketch of that option follows.)
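
As a rough illustration of the second option described above (repeating each video's features once per caption so that every feature-caption pair is a training example), a small numpy sketch with placeholder arrays:

```python
# Repeat each video's feature row once per caption.
import numpy as np

video_feats = np.random.rand(10000, 3584).astype(np.float32)  # one row per video (placeholder)
captions_per_video = 20

example_feats = np.repeat(video_feats, captions_per_video, axis=0)      # (200000, 3584)
video_ids = np.repeat(np.arange(len(video_feats)), captions_per_video)  # caption -> video index
```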

Dataset Split

Do msr-vtt-v1.part1, part2 and part3 correspond to the train/val/test splits, respectively?
