
lsd-seg's People

Contributors

swamiviv


lsd-seg's Issues

Array size mismatch when calculating cross_entropy2d

This happens when executing nll_loss in code/torchfcn/utils.py in sourceonly mode. The training data is from GTA5.

The error occurs because the snippet inside cross_entropy2d() first uses a mask to exclude elements of target (that is, the labels) whose values are less than 0. In other words, pixels with invalid labels are excluded from the cross-entropy calculation.

However, the corresponding prediction values for those removed pixels still exist in log_p, which leads to the array size mismatch.
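To illustrate the fix I have in mind, here is a minimal sketch of a masked 2D cross entropy that applies the same target >= 0 mask to both log_p and target before calling nll_loss (the function and argument names here are mine for illustration; the actual code in code/torchfcn/utils.py may differ):

import torch
import torch.nn.functional as F

def cross_entropy2d(score, target, weight=None):
    # score: (n, c, h, w) network output, target: (n, h, w) labels; labels < 0 mark ignored pixels
    n, c, h, w = score.size()
    log_p = F.log_softmax(score, dim=1)
    # flatten predictions to (n*h*w, c) and labels to (n*h*w,) so they stay aligned per pixel
    log_p = log_p.permute(0, 2, 3, 1).contiguous().view(-1, c)
    target = target.view(-1)
    # apply the SAME mask to both tensors; this keeps the sizes seen by nll_loss consistent
    valid = target >= 0
    loss = F.nll_loss(log_p[valid], target[valid], weight=weight, reduction='sum')
    return loss / valid.sum().clamp(min=1)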

Validate the results and test on other datasets

Hi

Has anyone validated the results on the three given datasets and then tried testing on other datasets?

I have tried the implementation with some datasets I have available, but there is no improvement in the F network's performance on the target dataset after training for a few epochs. The generator and discriminator do learn to produce realistic images, though. Does changing adv_weight in the F network's loss improve the chances of domain adaptation in practice?

Thanks in advance.

results of FCN8s-vgg source only

Hi @swamiviv, thanks for your code. Your source-only result for VGG16-FCN8s on GTA to Cityscapes is 29.6, which is much higher than the 22.0 reported by Curriculum Domain Adaptation. What accounts for the difference?

The evaluation results on GTA2cityscapes

Hi, I ran your code on GTA2Cityscapes but failed to reproduce the evaluation results claimed in your paper. During training, I also noticed that F's loss is not optimized at all, which suggests the domain shift is not being addressed well. Here is what I got, and it is worse than sourceonly. Any ideas? Did I do something wrong?
===>road: 83.28
===>sidewalk: 28.78
===>building: 61.49
===>wall: 6.52
===>fence: 0.19
===>pole: 8.44
===>light: 2.17
===>sign: 0.48
===>vegetation: 58.63
===>terrain: 14.29
===>sky: 55.88
===>person: 18.45
===>rider: 0.11
===>car: 53.92
===>truck: 2.9
===>bus: 1.52
===>train: 0.0
===>motocycle: 0.12
===>bicycle: 0.03
===> mIoU: 20.91

SYNTHIA dataset version

Hi, could you please tell us which exact version of the SYNTHIA dataset you used for the experiments?
Is it SYNTHIA-RAND-CVPR16 or SYNTHIA-RAND-CITYSCAPES?
Thank you.

Out of memory when setting image size to 1024x512

Hi,

When I set the image size to 1024x512, I get an out-of-memory error. I was using a TITAN X GPU. Did you run into the same problem? How did you train with image size 1024x512?
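One common workaround is to train on random crops of the full frames and run the full-resolution images only at evaluation time, with the forward pass wrapped in torch.no_grad() so no activations are stored. A minimal sketch of the cropping step (the crop size and names are illustrative, not the authors' setup):

import random
import torch

def random_crop(img, lbl, crop_h=512, crop_w=512):
    # img: (C, H, W) float tensor, lbl: (H, W) long tensor; cut the same window from both
    _, H, W = img.shape
    top = random.randint(0, H - crop_h)
    left = random.randint(0, W - crop_w)
    return (img[:, top:top + crop_h, left:left + crop_w],
            lbl[top:top + crop_h, left:left + crop_w])

# quick check with a dummy 512x1024 frame and 19-class labels
img = torch.randn(3, 512, 1024)
lbl = torch.randint(0, 19, (512, 1024))
img_c, lbl_c = random_crop(img, lbl)
print(img_c.shape, lbl_c.shape)  # torch.Size([3, 512, 512]) torch.Size([512, 512])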

Request for comments on the IoU code in eval_cityscapes.py

Please add comments to the code related to IoU.
The following code is sort of hard to understand:

import numpy as np

def fast_hist(a, b, n):
    # a: flattened ground-truth labels, b: flattened predictions, n: number of classes
    # count each (gt, pred) pair as index n*a + b and reshape into an n x n confusion matrix
    k = (a >= 0) & (a < n)
    return np.bincount(n * a[k].astype(int) + b[k], minlength=n ** 2).reshape(n, n)

def per_class_iu(hist):
    # per-class IoU = diag / (row sum + column sum - diag) = TP / (TP + FP + FN)
    return np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist))
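In case it helps others, here is how I currently read those two functions, with a small self-contained usage sketch (the random label maps are just placeholders standing in for real ground truth and predictions):

import numpy as np

# placeholder label maps standing in for ground truth and predictions on one image
n_classes = 19
rng = np.random.default_rng(0)
gt = rng.integers(0, n_classes, size=(512, 1024))
pred = rng.integers(0, n_classes, size=(512, 1024))

# hist[i, j] counts pixels whose ground-truth class is i and predicted class is j;
# for a full evaluation the histogram would be accumulated over all validation images
hist = fast_hist(gt.flatten(), pred.flatten(), n_classes)

# per-class IoU = TP / (TP + FP + FN); np.nanmean skips classes absent from both gt and pred
iou = per_class_iu(hist)
print('mIoU: {:.2f}'.format(100 * np.nanmean(iou)))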

By the way, I got bad evaluation results on the Cityscapes dataset after training on the GTA5 dataset.
While the final mean accuracy for validation on GTA5 is 54.8536%, I got the following evaluation results on Cityscapes:
(screenshot of evaluation results attached in the original issue)

These are quite different from those listed in your paper:
(screenshot of the paper's reported numbers attached in the original issue)
