
Comments (6)

marcellacornia avatar marcellacornia commented on July 26, 2024

Hi @MartaCollPol,
thanks for downloading our code.

Which SALICON version are you using? In 2017, a new version of this dataset was released but we did not use it in our experiments. If you want to replicate our results, you have to use the 2015 version of the SALICON dataset.

The weights we provide were obtained by training our ML-Net on the SALICON dataset only.

from mlnet.

MartaCollPol avatar MartaCollPol commented on July 26, 2024

Oh, I see. I've been using the 2017 version, so I'm going to switch to the 2015 one.
I'm interested in training the model to obtain results similar to yours on the different metrics, so I don't need the ML-Net weights you provide.
Your paper says that you fine-tuned with MIT300, which is why I was wondering whether the code, as published right now, is prepared for this fine-tuning or for training on SALICON.


marcellacornia avatar marcellacornia commented on July 26, 2024

For the results on the MIT300 dataset, we fine-tuned the network (trained on SALICON) on 900 randomly selected images from MIT1003, as suggested by the MIT Saliency Benchmark.

The code is the same as that used for training on the SALICON dataset. You just have to change the image paths and the number of images used for training and validation in the config.py file.
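For reference, the edit described above might look like the following sketch of a config.py. The variable names and paths here are illustrative assumptions, not necessarily the exact names used in the repository; check the actual config.py before editing.

```python
# config.py -- sketch of the fields to change when switching from
# SALICON training to MIT1003 fine-tuning.
# NOTE: variable names and paths below are illustrative assumptions;
# match them to the actual config.py in the repository.

# paths to training/validation images and ground-truth saliency maps
imgs_train_path = '/path/to/mit1003/train/images/'
maps_train_path = '/path/to/mit1003/train/maps/'
imgs_val_path = '/path/to/mit1003/val/images/'
maps_val_path = '/path/to/mit1003/val/maps/'

# 900 of the 1003 MIT1003 images for fine-tuning, the rest for validation
nb_imgs_train = 900
nb_imgs_val = 103
```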


MartaCollPol avatar MartaCollPol commented on July 26, 2024

I've trained ML-Net using the 2015 dataset and I'm still getting a score of 0.813 on the AUC-Judd metric, which is lower than the score I get when using your weights. Can you confirm that the SALICON version you used is the "previous release" at http://salicon.net/challenge-2017/? Or do you have any idea what could have gone wrong? (I haven't changed any parameters.)


marcellacornia avatar marcellacornia commented on July 26, 2024

Hi @MartaCollPol,
sorry for the late reply.

Yes, our results were obtained using the previous release of the SALICON dataset. Which evaluation code are you using? For the SALICON dataset, we did not write our own evaluation code; we submitted the predicted maps to this CodaLab page.


MartaCollPol avatar MartaCollPol commented on July 26, 2024

Hi @marcellacornia,

I used the Python implementation of the evaluation metrics provided with the MIT Saliency Benchmark.
I'm no longer trying to reproduce the results, so I'm closing the issue. Thank you for your help!
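For anyone comparing scores across evaluation codebases: AUC-Judd uses each saliency value at a fixated pixel as a threshold, with the true-positive rate taken over fixations and the false-positive rate over all pixels. Below is a minimal NumPy sketch of that definition, assuming a grayscale saliency map and a binary fixation map; it is not the benchmark's exact implementation, and small differences (e.g. normalization or tie handling) can shift scores slightly.

```python
import numpy as np

def auc_judd(saliency_map, fixation_map):
    """Sketch of AUC-Judd: each saliency value at a fixated pixel is a
    threshold; TPR = fraction of fixations at or above the threshold,
    FPR = fraction of all pixels at or above it."""
    s = saliency_map.astype(np.float64).ravel()
    # normalize to [0, 1] so thresholds are comparable across maps
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    fix = fixation_map.ravel() > 0
    thresholds = np.sort(s[fix])[::-1]  # descending saliency at fixations
    n_fix, n_pix = fix.sum(), s.size

    tp, fp = [0.0], [0.0]
    for t in thresholds:
        above = s >= t
        tp.append((above & fix).sum() / n_fix)
        fp.append(above.sum() / n_pix)
    tp.append(1.0)
    fp.append(1.0)

    # trapezoidal area under the ROC curve
    auc = 0.0
    for i in range(1, len(fp)):
        auc += (fp[i] - fp[i - 1]) * (tp[i] + tp[i - 1]) / 2.0
    return auc
```

A perfect map (saliency equal to the fixation map) scores near 1, while a constant map scores 0.5, which is a quick sanity check when two implementations disagree.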

