Feature Re-Learning with Data Augmentation for Content-based Video Recommendation

We release source code of our winning entry for the Hulu Content-based Video Relevance Prediction Challenge at the ACM Multimedia 2018 conference. The solution is mainly a feature re-learning model enhanced by data augmentation that works for both frame-level and video-level features.
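For orientation, the following is a minimal sketch of the general idea: the off-the-shelf feature is projected into a new space by an affine transformation and trained with a triplet ranking loss, so that relevant videos score higher than irrelevant ones. The class name, dimensions, and margin below are illustrative assumptions rather than the exact configuration used in this repository; see the paper and the training code for the actual model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureReLearning(nn.Module):
    # Illustrative sketch: project an off-the-shelf video feature into a new space.
    def __init__(self, in_dim, out_dim):
        super(FeatureReLearning, self).__init__()
        self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        # L2-normalize so that cosine similarity reduces to a dot product
        return F.normalize(self.fc(x), p=2, dim=-1)

def triplet_ranking_loss(anchor, positive, negative, margin=0.2):
    # Encourage relevant (positive) videos to score higher than irrelevant (negative) ones
    pos_sim = (anchor * positive).sum(dim=-1)
    neg_sim = (anchor * negative).sum(dim=-1)
    return F.relu(margin - pos_sim + neg_sim).mean()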

Requirements

Required Packages

  • python 2.7
  • PyTorch 0.3.1
  • tensorboard_logger for tensorboard visualization

We used virtualenv to set up a deep learning workspace that supports PyTorch. Run the following script to install the required packages.

virtualenv --system-site-packages ~/cbvr
source ~/cbvr/bin/activate
pip install -r requirements.txt
deactivate

Required Data

  1. Download the track_1_shows (6G) and track_2_movies (9.0G) datasets from Google Drive or Baidu Pan or here. If you have already downloaded the datasets provided by the Hulu organizers, use the script do_feature_convert.sh to convert them into the format expected by our code.
  2. Run the following script to extract the downloaded data. The extracted data is placed in $HOME/VisualSearch/.
ROOTPATH=$HOME/VisualSearch
mkdir -p $ROOTPATH
# extract track_1_shows and track_2_movies datasets
tar zxf track_1_shows.tar.gz -C $ROOTPATH
tar zxf track_2_movies.tar.gz -C $ROOTPATH

Getting started

Augmentation for frame-level features

(Figure: data augmentation for frame-level features)

Run the following script to train and evaluate the model with augmentation for frame-level features.

source ~/cbvr/bin/activate
# on track_1_shows with stride=4
./do_all_frame_level.sh track_1_shows inception-pool3 4
# on track_2_movies with stride=4
./do_all_frame_level.sh track_2_movies inception-pool3 4
deactivate

Running the script will do the following things:

  1. Generate augmented frame-level features and apply mean pooling to obtain video-level features in advance (see the sketch after this list).
  2. Train the feature re-learning model with augmentation for frame-level features and select the checkpoint that performs best on the validation set as the final model.
  3. Evaluate the final model on the validation set and generate predictions on the test set. Note that we, as participants, have no access to the ground truth of the test set. Please contact the task organizers if you want to evaluate our model or your own model on the test set.
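As a rough illustration of step 1, the snippet below sketches one plausible skip-frame scheme: every stride-th frame is sampled starting from each possible offset, and each resulting subsequence is mean-pooled into a video-level feature. The function name and details are illustrative assumptions; the augmentation actually used is implemented by the scripts in this repository.

import numpy as np

def augment_frame_features(frames, stride):
    # frames: (num_frames, dim) array of frame-level features
    # Sample every stride-th frame from each offset, then mean-pool each subsequence.
    augmented = []
    for offset in range(stride):
        subset = frames[offset::stride]
        if len(subset) > 0:
            augmented.append(subset.mean(axis=0))
    return np.stack(augmented)  # (stride, dim): one video-level feature per offset

# e.g. 10 frames of 2048-d inception-pool3 features with stride=4, as in the script above
features = np.random.rand(10, 2048).astype(np.float32)
print(augment_frame_features(features, 4).shape)  # (4, 2048)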

Augmentation for video-level features

(Figure: data augmentation for video-level features)

Run the following script to train and evaluate the model with augmentation for video-level features.

source ~/cbvr/bin/activate
# on track_1_shows
./do_all_video_level.sh track_1_shows c3d-pool5
# on track_2_movies
./do_all_video_level.sh track_2_movies c3d-pool5
deactivate

Running the script will do the following things:

  1. Train the feature re-learning model with augmentation for video-level features and select the checkpoint that performs best on the validation set as the final model. (The augmented video-level features are generated on the fly; see the sketch after this list.)
  2. Evaluate the final model on the validation set and generate predictions on the test set.
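Since the augmented video-level features are generated on the fly, the perturbation can be applied per mini-batch during training. The sketch below is a generic illustration only, under the assumption that each feature vector is perturbed with small random noise and random dimension zeroing (written for a recent PyTorch version); the exact augmentation used here is defined in the repository code and described in the paper.

import torch

def augment_video_feature(feat, sigma=0.01, zero_prob=0.1):
    # feat: (batch, dim) tensor of video-level features
    # sigma and zero_prob are hypothetical knobs, not parameters of this repository
    noise = torch.randn_like(feat) * sigma               # small additive Gaussian noise
    mask = (torch.rand_like(feat) > zero_prob).float()   # randomly zero a few dimensions
    return (feat + noise) * mask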

How to perform the proposed augmentation for other video-related tasks?

The proposed augmentation can essentially be applied to other video-related tasks as well. This note shows:

  • How to perform data augmentation over frame-level features?
  • How to perform data augmentation over video-level features?

Citation

If you find the package useful, please consider citing our MM'18 paper:

@inproceedings{mm2018-cbvrp-dong,
  title = {Feature Re-Learning with Data Augmentation for Content-based Video Recommendation},
  author = {Jianfeng Dong and Xirong Li and Chaoxi Xu and Gang Yang and Xun Wang},
  doi = {10.1145/3240508.3266441},
  year = {2018},
  booktitle = {ACM Multimedia},
}

Acknowledgements

We are grateful to the Hulu organizers for their effort in organizing the challenge.

@article{liu2018content,
  title={Content-based Video Relevance Prediction Challenge: Data, Protocol, and Baseline},
  author={Liu, Mengyi and Xie, Xiaohui and Zhou, Hanning},
  journal={arXiv preprint arXiv:1806.00737},
  year={2018}
}
