
transductive-vos.pytorch's Introduction

A Transductive Approach for Video Object Segmentation

This repo contains the pytorch implementation for the CVPR 2020 paper A Transductive Approach for Video Object Segmentation.

Pretrained Models and Results

We provide three pretrained ResNet50 models, trained on the DAVIS 17 training set, on the combined DAVIS 17 training and validation sets, and on the YouTube-VOS training set.

Our pre-computed results can be downloaded here.

Our results on DAVIS17 and YouTube-VOS:

Dataset                 J      F
DAVIS17 validation      69.9   74.7
DAVIS17 test-dev        58.8   67.4
YouTube-VOS (seen)      67.1   69.4
YouTube-VOS (unseen)    63.0   71.6

Usage

  • Install Python 3, PyTorch >= 0.4, and the PIL package.

  • Clone this repo:

    git clone https://github.com/microsoft/transductive-vos.pytorch
  • Prepare DAVIS 17 train-val dataset:

    # first download the dataset
    cd /path-to-data-directory/
    wget https://data.vision.ee.ethz.ch/csergi/share/davis/DAVIS-2017-trainval-480p.zip
    # unzip
    unzip DAVIS-2017-trainval-480p.zip
    # split train-val dataset
    python /VOS-Baseline/dataset/split_trainval.py -i ./DAVIS
    # clean up
    rm -rf ./DAVIS

    Now, your data directory should be structured like this:

    .
    |-- DAVIS_train
        |-- JPEGImages/480p/
            |-- bear
            |-- ...
        |-- Annotations/480p/
    |-- DAVIS_val
        |-- JPEGImages/480p/
            |-- bike-packing
            |-- ...
        |-- Annotations/480p/ 
    
  • Training on DAVIS training set:

    python -m torch.distributed.launch --master_port 12347 --nproc_per_node=4 main.py --data /path-to-your-davis-directory/

    By default, all the training parameters are set to our best setting for reproducing the ResNet50 model. This setting requires 4 GPUs with 16 GB of CUDA memory each. Feel free to contact the authors about parameter settings if you want to train on a different number of GPUs.

    If you want to change some parameters, see the comments in main.py or run

    python main.py -h
  • Inference on the DAVIS validation set (1 GPU with 12 GB CUDA memory is needed):

    python inference.py -r /path-to-pretrained-model -s /path-to-save-predictions

    As above, all the inference parameters default to our best setting on the DAVIS validation set, which reproduces our result with a J-mean of 0.699. The saved predictions can be evaluated directly by the DAVIS evaluation code, as sketched below.
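
    For reference, one way to score the saved predictions is the official toolkit at https://github.com/davisvideochallenge/davis2017-evaluation. The invocation below is a sketch; the flag names are assumptions and should be checked against that toolkit's README:

    # Assumed invocation of the official DAVIS 2017 evaluation toolkit;
    # verify the argument names against its README before running.
    git clone https://github.com/davisvideochallenge/davis2017-evaluation
    cd davis2017-evaluation
    python evaluation_method.py --task semi-supervised \
        --davis_path /path-to-your-davis-directory/ \
        --results_path /path-to-save-predictions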

Further Improvements

This approach is simple, with a clean implementation; if you add a few small tricks, the performance can be further improved. For example,

  • If you perform an epoch test, i.e., select the best-performing epoch, you can gain roughly 1.5 additional points of absolute performance on the DAVIS17 dataset (a minimal sketch of such a sweep is given after this list).
  • Pretraining the model on other image datasets with mask annotation, such as semantic segmentation and salient object detection, may bring further improvements.
  • ... ...
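
A minimal sketch of such an epoch test, reusing the documented inference.py flags; the checkpoint location, file extension, and the final scoring step are assumptions about your local setup:

    # Hypothetical checkpoint sweep; adjust the glob to match how your run saves checkpoints.
    for ckpt in /path-to-checkpoints/*.pth.tar; do
        name=$(basename "$ckpt" .pth.tar)
        python inference.py -r "$ckpt" -s "/path-to-save-predictions/$name"
    done
    # Score each prediction folder with the DAVIS evaluation code and keep the
    # checkpoint with the best J&F-Mean.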

Contact

For any questions, please feel free to reach out to

Yizhuo Zhang: [email protected]
Zhirong Wu: [email protected]

Citations

@inproceedings{zhang2020a,
  title={A Transductive Approach for Video Object Segmentation},
  author={Zhang, Yizhuo and Wu, Zhirong and Peng, Houwen and Lin, Stephen},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020}
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

transductive-vos.pytorch's People

Contributors

eastoffice, microsoft-github-policy-service[bot], penghouwen, zhirongw


transductive-vos.pytorch's Issues

About Youtube-VOS dataset

Thanks for sharing your excellent code!
Do you have plans to release the training and inference code for the YouTube-VOS dataset?

Oh, someone has asked the same question. Sorry to bother.

AssertionError

Hi, thanks for your excellent work.

I encountered a problem when following your instruction "python -m torch.distributed.launch --master_port 12347 --nproc_per_node=4 main.py --data /path-to-your-davis-directory/" during training.

Here is my problem:

  • AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [3], output_device None, and module parameters {device(type='cuda', index=3), device(type='cpu')}.

My device information is shown below:

  • GeForce RTX 2080 Ti
  • Ubuntu 18.04
  • pytorch 1.5.0 & torchvision 0.6.0
  • 4 GPUs with 11GB Memory

How should I solve this problem?
Thank you again.
Best wishes.
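
As a general note, this assertion fires when some of the module's parameters or buffers are still on the CPU (here, both cuda:3 and cpu appear) at the moment the model is wrapped with device_ids. A minimal generic PyTorch sketch of a setup that avoids it, not taken from this repo's main.py (build_model is a placeholder):

    # Generic pattern: move every parameter/buffer to the one target GPU *before*
    # wrapping in DistributedDataParallel. Assumes launch via torch.distributed.launch,
    # which sets the env:// variables that init_process_group reads.
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel

    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = build_model()                   # placeholder for the actual model constructor
    model = model.to(f"cuda:{local_rank}")  # all parameters/buffers now on a single GPU
    model = DistributedDataParallel(model, device_ids=[local_rank], output_device=local_rank)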

train on a custom dataset

Hi, thank you for producing this repository. It is very impressive that you are able to achieve these results with such a small sample size.
I'd like to train a similar model on my dataset, which includes only three classes, but the object I want to detect is very difficult to see (it is very small and moves quickly and sporadically). I need to use multiple annotated videos for each class, but in the DAVIS dataset there is only one video for each class. How could I use multiple videos per class instead?
Also, since the object I'm looking for is very small and hard to see, could I provide some videos where there is no object at all (so the segmentation mask would be blank)? I'm not clear on how I would do this given the current configuration.

Problems with the pretrained models

I've downloaded the davis_train.tar and youtube_train.tar, but I can't successfully extract the tar packages; it seems some data is corrupted. I don't know where the problem comes from...

Problems with high-resolution input

How can I run inference on high-resolution (1080p) input? I often run into memory overflow problems. Do you have any suggestions? Thank you!

Unable to reproduce on Davis2017 validation

Hi,

I've trained and run inference using the given commands in this repo with 4 GPUs. I trained on the DAVIS17 training set and evaluated on the DAVIS17 validation set, using https://github.com/davisvideochallenge/davis2017-evaluation for evaluation.

But I was only able to get the following result:

--------------------------- Global results for val ---------------------------                                                                                                                                     
 J&F-Mean    J-Mean  J-Recall   J-Decay    F-Mean  F-Recall   F-Decay                                                                                                                                              
  0.66228  0.642125  0.758698  0.198392  0.682435  0.807384  0.242924   

I trained for 240 epochs as mentioned and evaluated the 240th-epoch checkpoint. I also evaluated a few checkpoints around the 240th, but the results were very similar.

Could you please help me point out what could be the issues?

CUDA version

What is the CUDA version of your project?
This is my error:
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at torch/csrc/cuda/Module.cpp:32

How to calculate fps?

Hello! In the log.txt file produced during training, the times on the training set and validation set are displayed. How is the fps obtained? Is it the time divided by 25? Thank you very much!

No annotation_centroids.npy file

Thanks for sharing your code. I tried to train a model on the DAVIS2017 dataset, but I found there's no file named "annotation_centroids.npy", which should be loaded here. Do you plan to share this file, and what is its meaning?

centroids = np.load("./dataset/annotation_centroids.npy")

Can anyone reproduce the reported results? (Please do not close my issue, if it is addressed, I will close it by myself)

The previous discussion is here: #19

I reran with a smaller learning rate and used the latest code. However, the results still do not match.

Per-sequence results saved in outputs//per-sequence_results-DAVIS17val.csv
--------------------------- Global results for DAVIS17val ---------------------------
 J&F-Mean    J-Mean  J-Recall   J-Decay    F-Mean  F-Recall   F-Decay
 0.663763  0.644109  0.754703  0.202716  0.683417   0.80778  0.248114

The paper reports 72 J&F-Mean.

If anyone reproduces it, please leave your results as a reference. I am also very curious whether Sync-BN alone can boost performance by 6 points in total. Thanks in advance.

By the way, I am not here for provocation's sake; rather, this is a problem worth discussing. I will not use TVOS as my baseline.

What does "centroids" mean?

Hi, Thank you for your excellent work!

I want to know what the 'annotation_centroids.npy' file and the centroids mean.
To be specific, I can't find anything in the paper related to torch.argmin(torch.sqrt(torch.sum((img.unsqueeze(1) - centroids) ** 2, 2)), 1) in the function 'rgb2class'.

Best wishes.

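Read on its own, the quoted expression performs nearest-centroid color quantization: each annotation pixel is assigned the index of the closest row of centroids in Euclidean (RGB) distance. A minimal sketch under that reading; the shapes and the helper name are assumptions, not taken from this repo's dataset code:

    # Sketch of the nearest-centroid mapping implied by the quoted line (shapes assumed).
    import numpy as np
    import torch

    centroids = torch.from_numpy(np.load("./dataset/annotation_centroids.npy")).float()  # (K, 3)

    def rgb_to_class(img):
        # img: (N, 3) annotation pixel values in the same color space as the centroids
        dists = torch.sqrt(torch.sum((img.unsqueeze(1) - centroids) ** 2, 2))  # (N, K)
        return torch.argmin(dists, 1)  # (N,) index of the nearest centroid per pixel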

Inference without the motion prior

Thanks for sharing such a neat implementation.

When trying to understand the effect of the motion prior, I ran inference without it by commenting out the following lines:

if frame_idx > 15:
    continuous_frame = 4
    # interval frames
    global_similarity[:-continuous_frame] *= weight_sparse
    # continuous frames
    global_similarity[-continuous_frame:] *= weight_dense
else:
    global_similarity = global_similarity.mul(weight_dense)

However, the final score was not the same as what was reported in the paper (J=69.0 in Table 2). Here is what I got:
+---------------+--------+----------+---------+--------+----------+---------+
| Method        | J_mean | J_recall | J_decay | F_mean | F_recall | F_decay |
+---------------+--------+----------+---------+--------+----------+---------+
| val-no-motion | 0.618  | 0.714    | 0.224   | 0.675  | 0.772    | 0.253   |
+---------------+--------+----------+---------+--------+----------+---------+

Could you please help me understand what went wrong? What was the setting for the ablation study in the paper? Thank you!

I cannot reproduce the results

Hi, thanks for releasing the code. I ran your code completely, but the results cannot be reproduced at all.

I used DAVIS 2017 as my training set and evaluated the checkpoints on the DAVIS 2017 validation set.

However the results are:

G/J/F: 67.2/65.2/69.2

But in your paper:

G/J/F: 72.3/69.9/74.7

I notice you use 4 GPUs with 16 GB memory each, whereas I only have 4 GPUs with 11 GB memory each. I think the hardware difference should NOT make such a significant difference. Could you please explain this, since several other people have reported the same problem in the issues below?

Thanks in advance.

What's more, your paper says it is a transductive method; however, your code is TOTALLY different from the equations in your paper, and the masks are not semi-supervised: they are fully supervised by a cross-entropy loss.

Please explain this issue, which I think is an essential problem in your paper.
