strcf's Introduction

Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking (CVPR 2018)

Matlab implementation of our Spatial-Temporal Regularized Correlation Filters (STRCF) tracker.

Abstract

Discriminative Correlation Filters (DCF) are efficient in visual tracking but suffer from unwanted boundary effects. Spatially Regularized DCF (SRDCF) has been suggested to resolve this issue by enforcing a spatial penalty on the DCF coefficients, which inevitably improves the tracking performance at the price of increased complexity. To handle online updating, SRDCF formulates its model on multiple training images, further adding to the difficulty of improving efficiency. In this work, motivated by the online Passive-Aggressive (PA) algorithm, we introduce temporal regularization to SRDCF with a single sample, resulting in our Spatial-Temporal Regularized Correlation Filters (STRCF). The STRCF formulation not only serves as a reasonable approximation to SRDCF with multiple training samples, but also provides a more robust appearance model than SRDCF in the case of large appearance variations. Besides, it can be efficiently solved via the alternating direction method of multipliers (ADMM). By incorporating both temporal and spatial regularization, our STRCF handles boundary effects without much loss in efficiency and achieves superior performance over SRDCF in terms of accuracy and speed. Experiments are conducted on three benchmark datasets: OTB-2015, Temple-Color, and VOT-2016. Compared with SRDCF, STRCF with hand-crafted features provides an approximately 5x speedup and achieves gains of 5.4% and 3.6% in AUC score on OTB-2015 and Temple-Color, respectively. Moreover, STRCF combined with CNN features also performs favorably against state-of-the-art CNN-based trackers and achieves an AUC score of 68.3% on OTB-2015.
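For quick reference, the objective minimized by STRCF can be stated schematically as follows (a restatement based on the description above, not a quotation from the paper; see the paper for the exact formulation). Here x_t denotes the multi-channel features of frame t, f the correlation filter with channels f^l, y the desired response, w the spatial regularization weights, f_{t-1} the filter from the previous frame, and mu the temporal regularization parameter:

    \arg\min_{\mathbf{f}} \; \frac{1}{2}\Big\| \sum_{l=1}^{D} \mathbf{x}_{t}^{l} \star \mathbf{f}^{l} - \mathbf{y} \Big\|^{2}
    + \frac{1}{2} \sum_{l=1}^{D} \big\| \mathbf{w} \cdot \mathbf{f}^{l} \big\|^{2}
    + \frac{\mu}{2} \big\| \mathbf{f} - \mathbf{f}_{t-1} \big\|^{2}

The first term is the standard correlation-filter data term, the second is the SRDCF-style spatial penalty, and the third is the temporal penalty that keeps the filter close to its previous estimate; the whole problem is solved with ADMM as noted above.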

Publication

Details about the STRCF tracker can be found in our CVPR 2018 paper:

Feng Li, Cheng Tian, Wangmeng Zuo, Lei Zhang and Ming-Hsuan Yang.
Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

The paper link is: https://arxiv.org/abs/1803.08679

Please cite the above publication if you find STRCF useful in your research. The BibTeX entry is:

@Inproceedings{Li2018STRCF,
title={Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking},
author={Li, Feng and Tian, Cheng and Zuo, Wangmeng and Zhang, Lei and Yang, Ming-Hsuan},
booktitle={CVPR},
year={2018},
}

Contact

Feng Li

Email: [email protected]

Installation

Using git clone

  1. Clone the Git repository:

    $ git clone https://github.com/lifeng9472/STRCF.git

  2. Clone the submodules.
    In the repository directory, run the commands:

    $ git submodule init
    $ git submodule update

  3. Start Matlab and navigate to the repository.
    Run the install script:

    >> install

  4. Run the demo script to test the tracker:

    >> demo_STRCF

Note: This package requires MatConvNet [1] if you want to use CNN features, and the PDollar Toolbox [2] if you want to use HOG features. Both of these externals are included as git submodules and are installed by following step 2 above.
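If the install script fails to build MatConvNet on your platform, a minimal manual build can be attempted as sketched below. This sketch only uses MatConvNet's own vl_compilenn and vl_setupnn entry points and assumes the default submodule location external_libs/matconvnet/ used by this repository; adjust the path and GPU option to your setup.

    >> cd external_libs/matconvnet/matlab
    >> vl_compilenn('enableGpu', false)    % CPU-only build; set to true if your CUDA toolchain is configured
    >> vl_setupnn                          % add MatConvNet to the MATLAB path
    >> cd ../../..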

Description and Instructions

How to run

The files in the root directory are used to run the tracker on the OTB and Temple-Color datasets.

These files are included:

  • run_STRCF.m - runfile for the STRCF tracker with hand-crafted features (i.e., HOG+CN); see the usage sketch after this list.

  • run_DeepSTRCF.m - runfile for the DeepSTRCF tracker with CNN features.
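These runfiles are meant to be called from the OTB toolkit. As a rough usage sketch, a direct call might look like the snippet below; this assumes an OTB-style interface, and the exact function signature, the fields expected in seq, and the results struct should all be verified in run_STRCF.m. The sequence folder and bounding box are hypothetical.

    % Minimal sketch of running STRCF on a custom sequence (OTB-style interface assumed).
    img_dir = 'sequences/MySeq/img/';                  % hypothetical image folder
    files   = dir(fullfile(img_dir, '*.jpg'));
    seq.s_frames  = cellfun(@(n) fullfile(img_dir, n), {files.name}, 'UniformOutput', false);
    seq.len       = numel(seq.s_frames);
    seq.init_rect = [100, 80, 60, 90];                 % [x, y, width, height] of the target in frame 1
    results = run_STRCF(seq, '', false);               % res_path and bSaveImage arguments assumed
    disp(results.res(end, :));                         % last predicted bounding box (if results.res is returned)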

Tracking performance on OTB-2015 and Temple-Color is summarized in the result plots below:

[Result plots: hand-crafted features on OTB-2015 | CNN features on OTB-2015 | all features on Temple-Color]

Results on the VOT-2016 dataset are also provided.

  • tracker_DeepSTRCF.m - this file integrates the tracker into the VOT-2016 toolkit.

Note:

To run the tracker on the VOT-2016 dataset, two changes are needed:

  1. In feature_extraction/load_CNN.m, change the location of the pre-trained CNN to an absolute path rather than a relative path (see the sketch after this list).

  2. Change the location of the STRCF tracker in tracker_DeepSTRCF.m.
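A hedged sketch of these two edits follows; the variable names and paths are illustrative, not the exact lines in the repository, so adapt them to the actual code in load_CNN.m and tracker_DeepSTRCF.m.

    % 1) feature_extraction/load_CNN.m: load the pre-trained network via an absolute path
    %    (illustrative; the actual variable name and surrounding code may differ).
    net = load('/home/user/STRCF/feature_extraction/networks/imagenet-vgg-m-2048.mat');

    % 2) tracker_DeepSTRCF.m: point the VOT-2016 toolkit to the absolute location of STRCF
    %    (illustrative path).
    tracker_path = '/home/user/STRCF';
    addpath(genpath(tracker_path));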

VOT-2016 results (EAO, Accuracy, Robustness):

              ECO     SRDCF   SRDCFDecon   BACF    DeepSRDCF   ECO-HC   STRCF   DeepSTRCF
  EAO         0.375   0.247   0.262        0.223   0.276       0.322    0.279   0.313
  Accuracy    0.53    0.52    0.53         0.56    0.51        0.54     0.53    0.55
  Robustness  0.73    1.50    1.42         1.88    1.17        1.08     1.32    0.92

Features

  1. Deep CNN features. These use MatConvNet [1], which is included as a git submodule in external_libs/matconvnet/. The imagenet-vgg-m-2048 network, available at http://www.vlfeat.org/matconvnet/pretrained/, was used. You can try other networks by placing them in the feature_extraction/networks/ folder.

  2. HOG features. These use the PDollar Toolbox [2], which is included as a git submodule in external_libs/pdollar_toolbox/.

  3. Lookup table features. These are implemented as a lookup table that directly maps an RGB or grayscale value to a feature vector.

  4. Colorspace features. Currently grayscale and RGB are implemented. (A configuration sketch follows this list.)
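Because the feature extraction modules are borrowed from the ECO tracker (see Acknowledgements), features are typically selected through a parameter struct built in the runfiles. The snippet below is a hedged, ECO-style sketch of how the hand-crafted features might be configured; the exact struct layout, field names, and default values used by run_STRCF.m should be checked in that file.

    % Hedged, ECO-style feature configuration sketch (field names and values are illustrative).
    hog_params.cell_size = 4;                         % spatial cell size for HOG
    cn_params.tablename  = 'CNnorm';                  % Color Names lookup table
    params.t_features = { ...
        struct('getFeature', @get_fhog,          'fparams', hog_params), ...  % HOG via the PDollar Toolbox [2]
        struct('getFeature', @get_table_feature, 'fparams', cn_params) ...    % lookup-table (Color Names) features
    };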

Acknowledgements

We thank Dr. Martin Danelljan and Hamed Kiani for their valuable help with our work. In this work, we have borrowed the feature extraction modules from the ECO tracker (https://github.com/martin-danelljan/ECO) and the parameter settings from BACF (www.hamedkiani.com/bacf.html).

References

[1] MatConvNet: Convolutional Neural Networks for MATLAB.
Webpage: http://www.vlfeat.org/matconvnet/
GitHub: https://github.com/vlfeat/matconvnet

[2] Piotr Dollár.
"Piotr’s Image and Video Matlab Toolbox (PMT)."
Webpage: https://pdollar.github.io/toolbox/
GitHub: https://github.com/pdollar/toolbox

strcf's Issues

deep OTB

The deep-feature results on OTB-2015 are noticeably lower than those reported in the paper.

parameter setting?

If I set the maximum of gamma to 100 as in the paper, why does the tracker stop working?

Problem testing on the VOT dataset

Besides the two steps mentioned above (1. changing the relative path to an absolute path; 2. changing the STRCF path), is there anything else that needs to be done? Copying tracker_DeepSTRCF.m into the vot-toolkit still does not run correctly. What do I need to do to run the tracker on the VOT dataset? Many thanks :)

STRCF tracking results

Hi,
Could you please provide your STRCF tracking results (on the OTB, TC128, and VOT datasets) at a download link and reference them on your GitHub page?
With appreciation,
S. M. Marvasti-Zadeh

Error when running demo_STRCF

install runs successfully, but when running demo_STRCF, get_fhog reports an undefined variable fhog, followed by a series of other missing-variable errors. Where did I go wrong?

Question about cur_AUC

After my modification, cur_AUC on the video 'Human3' becomes smaller; does that mean the modification failed?
Thanks in advance.

Interpolation Function

Hi,
After adding the interpolation function from C-COT, while keeping the Gaussian label function unchanged, why does the performance drop?
Thanks in advance.

subproblem f

In the paper the expression is gamma*g - gamma*h, but the code uses -(1/(gamma + mu)) * (bsxfun(@times, model_xf, Shx_f)) + (gamma/(gamma + mu)) * (bsxfun(@times, model_xf, Sgx_f)), where the h term is not multiplied by gamma. Why is that?

HOG features

Hello, when I test with HOG features only, the results do not reach those reported in the paper. What are the parameter settings for HOG-only features?

Improved this algorithm

I improved this algorithm with an engineering trick and the scores went up. Would you be interested in discussing it?

How to change the opencv version that STRCF uses?

Hello! I need to run your STRCF to test a new dataset, but I am encountering problems. It is hard to install OpenCV 2.4 because it only supports CUDA 8.0 or lower, while current GPUs such as the RTX 3090 need CUDA 11.1 or higher. So I think the only way to run it is to change the version of OpenCV that STRCF uses. Honestly, I don't know how to change it, so I would like to ask the author this question.

FPS does not match the paper

On my machine, ECO-HC runs at 39 FPS while the STRCF demo only reaches 10 FPS.

The paper reports ECO-HC at 42 FPS and STRCF at 24 FPS.

Is something configured differently? Are the reported FPS numbers too high? When I test on OTB, only a small fraction of sequences exceed 20 FPS.

Code for solving subproblem g in ADMM

In tracker.m:

    % subproblem g
    g_f = fft2(argmin_g(reg_window{k}, gamma, real(ifft2(gamma * f_f + h_f)), g_f));

Why is it not gamma * (f_f + h_f) here? Does this differ from the paper?

T

In B = S_xx + T * (gamma + mu); I do not understand why T is introduced here. Could you please explain?
