
etc-real-time-per-frame-semantic-video-segmentation's People

Contributors

irfanICMLL


etc-real-time-per-frame-semantic-video-segmentation's Issues

R.I.P.

So sad to know you this way.
Hope the other world is nicer and warmer.
R.I.P.

lower mIoU score than reported

Thanks for your great work and code release. I have tested the mIoU scores with demo.py on the two released weights (eval_para_base.pth and eval_para_ETC.pth). The mIoU scores are 68.21 (base) vs. 71.66 (ETC), which are lower than the reported scores of 69.79 (base) vs. 73.06 (ETC). Please tell me if I have missed something. Thanks a lot!
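For reference, a minimal sketch of how mIoU is commonly computed from a per-class confusion matrix (function names here are illustrative, not from the repo); gaps of this size often come from evaluation settings such as single-scale vs. multi-scale testing, crop size, or the ignore label rather than from the weights themselves.

import numpy as np

def update_confusion(conf, pred, target, num_classes, ignore_label=255):
    """Accumulate a (num_classes x num_classes) confusion matrix, skipping ignored pixels."""
    mask = target != ignore_label
    idx = num_classes * target[mask].astype(int) + pred[mask].astype(int)
    conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    return conf

def mean_iou(conf):
    """Mean IoU over classes that actually appear (non-zero union), in percent."""
    tp = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    valid = union > 0
    return float((tp[valid] / union[valid]).mean()) * 100.0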

Details about the evaluation of temporal consistency

Thanks for the excellent work.

(1) The loaded FlowNet is in "train" mode; why not "eval"?
https://github.com/irfanICMLL/ETC-Real-time-Per-frame-Semantic-video-segmentation/blob/master/tool/eval_tc.py#L202

(2) Why not ignore the occlusion region?
https://github.com/irfanICMLL/ETC-Real-time-Per-frame-Semantic-video-segmentation/blob/master/tool/eval_tc.py#L218

(3) I am interested in the point that "the pre-trained optical flow model appears in both training and evaluation". In that case, the evaluation depends on a learned model, and training with ground-truth optical flow (if it were available) could under-perform training with the same pre-trained flow model that is used for scoring. Still, this evaluation approach seems to be the best available. What was the reviewers' opinion on this?
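Regarding (1) and (2), a minimal sketch (not the repo's code, and assuming the usual convention that flow channel 0 is horizontal displacement): the flow network would typically be switched to eval mode with flow_net.eval() and wrapped in torch.no_grad() during evaluation, and occluded pixels could be excluded with a forward-backward consistency mask like the one below.

import torch
import torch.nn.functional as F

def warp(x, flow):
    """Backward-warp x (N, C, H, W) by flow (N, 2, H, W) using grid_sample."""
    n, _, h, w = x.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=x.device),
                            torch.arange(w, device=x.device), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W), (x, y) order
    coords = base + flow                                      # absolute sampling positions
    # Normalise to [-1, 1] as required by grid_sample.
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=3)                       # (N, H, W, 2)
    return F.grid_sample(x, grid, align_corners=True)

def occlusion_mask(flow_fw, flow_bw, thresh=1.0):
    """Forward-backward consistency check: 1 = visible, 0 = likely occluded."""
    flow_bw_warped = warp(flow_bw, flow_fw)
    diff = (flow_fw + flow_bw_warped).norm(dim=1, keepdim=True)  # ~0 where flows agree
    return (diff < thresh).float()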

How to get the TC scores

I noticed that no TC scores are reported by eval_tc.py. Could you please provide a demo showing how to calculate the TC metric from your paper?
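For what it's worth, a common way to compute a TC-style score (a sketch only, not necessarily the exact protocol used in the paper) is to warp the previous frame's prediction to the current frame with optical flow and take the mean IoU between the warped and current label maps:

import numpy as np

def temporal_consistency(pred_prev_warped, pred_curr, num_classes, valid_mask=None):
    """TC as mean IoU between the flow-warped previous prediction and the current one.

    pred_prev_warped, pred_curr: (H, W) integer label maps.
    valid_mask: optional boolean map, e.g. to drop occluded pixels.
    """
    if valid_mask is None:
        valid_mask = np.ones_like(pred_curr, dtype=bool)
    ious = []
    for c in range(num_classes):
        a = (pred_prev_warped == c) & valid_mask
        b = (pred_curr == c) & valid_mask
        union = np.logical_or(a, b).sum()
        if union == 0:
            continue  # class absent in both frames; skip it
        ious.append(np.logical_and(a, b).sum() / union)
    return float(np.mean(ious)) if ious else float("nan")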

Got an error when running python tool/demo.py

Any idea why I get the error below?

Traceback (most recent call last):
  File "tool/demo.py", line 278, in <module>
    main()
  File "tool/demo.py", line 160, in main
    a, b = model.load_state_dict(student_ckpt, strict=False)
TypeError: cannot unpack non-iterable NoneType object

Thanks,
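Not a confirmed fix, but this TypeError usually means load_state_dict returned None, which older PyTorch releases do; newer releases return a named tuple of (missing_keys, unexpected_keys) that line 160 tries to unpack. A version-tolerant rewrite of that line could look like this (variable names taken from the traceback):

result = model.load_state_dict(student_ckpt, strict=False)
if result is None:
    # Older PyTorch: load_state_dict returns nothing, so report the keys manually.
    missing = [k for k in model.state_dict() if k not in student_ckpt]
    unexpected = [k for k in student_ckpt if k not in model.state_dict()]
else:
    # Newer PyTorch: a named tuple (missing_keys, unexpected_keys) is returned.
    missing, unexpected = result
print("missing keys:", missing)
print("unexpected keys:", unexpected)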

video temporal consistency

Hello, has the evaluation code for the temporal consistency metric not been released? Where can I find this part of the code? Thanks.

Questions about data for flow loss training

Hi,
Thank you for sharing the great work.
I have two questions related to the data.

  1. As mentioned in the paper, a training triplet is used for training, and I found this implemented in dataset.py as “VideoLongDatA”. But the training code provided uses “VideoData”, which only yields paired frames. I am curious about the influence of these two implementations on the training results. Is this simply a trade-off between training time and accuracy? (A minimal triplet-sampling sketch is given after this message.)
  2. I am also curious about the effect of the “frame gap” parameter on the results. In the provided config, 3 is the default setting. Is there any study of this parameter?

I’m looking forward to your reply.
Thank you.
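To make question 1 and the frame-gap question concrete, here is a minimal triplet-sampling sketch (hypothetical names, not the repo's VideoLongDatA implementation) in which frame_gap controls the temporal spacing of the sampled frames:

from torch.utils.data import Dataset

class VideoTripletSketch(Dataset):
    """Illustrative triplet loader; the repo's VideoLongDatA / VideoData differ in detail."""

    def __init__(self, frame_lists, frame_gap=3, transform=None):
        # frame_lists: one temporally ordered list of frame paths per video clip.
        self.frame_lists = frame_lists
        self.frame_gap = frame_gap
        self.transform = transform
        self.index = [(v, t) for v, frames in enumerate(frame_lists)
                      for t in range(2 * frame_gap, len(frames))]

    def __len__(self):
        return len(self.index)

    def __getitem__(self, i):
        v, t = self.index[i]
        frames = self.frame_lists[v]
        # Triplet (t - 2*gap, t - gap, t); a pair loader would keep only the last two.
        paths = [frames[t - 2 * self.frame_gap],
                 frames[t - self.frame_gap],
                 frames[t]]
        return [self.transform(p) if self.transform else p for p in paths]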

Install problem

Can you tell me the exact PyTorch version? FlowNet2.0 uses pytorch==0.4.0, while your code requires pytorch>=1.0.0. Didn't you run into problems compiling FlowNet2.0? My version is pytorch==1.5.0.

Pretrain about the backbone ResNet

Hi,

In model/resnet.py, I found that the pretrained backbone ResNet is loaded from ./initmodel/resnet18.pth instead of the URL provided by PyTorch's model_zoo.
Would you be willing to release the weights of this specific pretrained ResNet-18 backbone?
Besides, I also noticed that for ResNet-50/101/152, you use the ResNet-v1 code to load the weights of ResNet-v2.
Is there any specific reason for this?
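For reference, a sketch of what loading from the model zoo would look like (assuming torchvision's ImageNet weights are an acceptable substitute for the authors' ./initmodel/resnet18.pth, which may not hold):

from torch.hub import load_state_dict_from_url

# Torchvision's published ImageNet ResNet-18 checkpoint (the model_zoo path mentioned above).
RESNET18_URL = "https://download.pytorch.org/models/resnet18-5c106cde.pth"

def load_zoo_resnet18(model):
    """Load torchvision ImageNet weights into the repo's ResNet-18 instance."""
    state_dict = load_state_dict_from_url(RESNET18_URL, progress=True)
    # strict=False tolerates any layers the repo's variant renames or adds.
    model.load_state_dict(state_dict, strict=False)
    return model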

CamVid config file

Thanks for your code. Would you please provide the training code for the CamVid dataset? For instance, the config files?

Could you please provide more configs like 'psp' and the other 'initmodel' files?

I noticed that there are only config files for psp18. When I run python tool/train_with_flow.py, I get this:

-- Process 1 terminated with the following error:
Traceback (most recent call last):
  File "/opt/conda/envs/open-mmlab/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
    fn(i, *args)
  File "/home/dancer/ETC-Real-time-Per-frame-Semantic-video-segmentation/tool/train_with_flow.py", line 146, in main_worker
    BatchNorm=BatchNorm, flow=True)
  File "/home/dancer/ETC-Real-time-Per-frame-Semantic-video-segmentation/model/pspnet_18.py", line 48, in __init__
    resnet = models.resnet18(deep_base=False, pretrained=pretrained)
  File "/home/dancer/ETC-Real-time-Per-frame-Semantic-video-segmentation/model/resnet.py", line 176, in resnet18
    model.load_state_dict(torch.load(model_path), strict=False)
  File "/opt/conda/envs/open-mmlab/lib/python3.7/site-packages/torch/serialization.py", line 571, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/opt/conda/envs/open-mmlab/lib/python3.7/site-packages/torch/serialization.py", line 229, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/opt/conda/envs/open-mmlab/lib/python3.7/site-packages/torch/serialization.py", line 210, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: './initmodel/resnet18.pth'

So, could you please provide more corresponding initmodels and config files? Thanks in advance!
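As a possible stopgap until the proper initmodel weights are released (results will likely differ from the authors'), torchvision's ImageNet checkpoint could be written to the path the code expects; per the traceback, resnet.py already loads it with strict=False, so mismatched keys would simply be skipped:

import os
import torch
import torchvision

# Workaround: save torchvision's ImageNet ResNet-18 weights where train_with_flow.py looks for them.
os.makedirs("./initmodel", exist_ok=True)
torch.save(torchvision.models.resnet18(pretrained=True).state_dict(),
           "./initmodel/resnet18.pth")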

License

Great work, and a really thorough and interesting paper.
Very applicable :)
Is it under the MIT license? I didn't see a license file.
