
Comments (6)

AlbinSou commented on May 26, 2024

> Marc uses a policy that is not really implemented in the framework yet; I don't know whether it would be worth incorporating it. I guess we can decide after seeing the results. The way he does it, if I remember well, is to take a dataset that is the concatenation of the current task data and the memory data, and to build the train loader on top of it. This ensures you see all the task data and the memory; however, if the memory is small it does not ensure that every mini-batch contains both memory samples and current-task samples.

> I didn't remember this. Then I think it's better to provide independent baselines.

For now I will try to use hyperparameters similar to those in the survey and see if I get close results with the Avalanche replay plugin.


AlbinSou commented on May 26, 2024

I can add these if you want. The only thing I was worried about is the setting; maybe we can do two settings:

  • The online setting, using OnlineScenario, where data from the current task is also replayed while learning that task, since the buffer is updated at each mini-batch.
  • The non-online (batch) setting, where we can add more epochs and where some hyperparameters change that we have to specify.
  • Split-CIFAR10, Split-CIFAR100, and Split-MNIST should be a good start.
  • What memory sizes?

I guess the problem with replay compared to other methods is that it is a baseline, so there is no reference paper whose hyperparameters we are trying to reproduce; we have to specify them and choose them in a sensible manner. A rough configuration sketch for both settings is given below.
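To make the two configurations concrete, here is a rough sketch of the batch-setting baseline with Avalanche's ReplayPlugin. The hyperparameter values are placeholders, and module paths or exact signatures may differ between Avalanche versions.

```python
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.plugins import ReplayPlugin
from avalanche.training.supervised import Naive

benchmark = SplitMNIST(n_experiences=5)
model = SimpleMLP(num_classes=10)

# Batch setting: several epochs per experience, with the replay buffer
# updated by the plugin after each experience. mem_size is a placeholder.
strategy = Naive(
    model,
    torch.optim.SGD(model.parameters(), lr=0.01),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=64,
    train_epochs=10,
    plugins=[ReplayPlugin(mem_size=500)],
)

# The online setting would instead make a single pass (train_epochs=1)
# over an online stream with small mini-batches, so the buffer is updated
# at every mini-batch rather than once per experience.
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```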


AntonioCarta commented on May 26, 2024

I agree with having both batch and online settings. I would still try to follow a paper if possible; otherwise, just pick a reasonable memory size for the benchmark.


AndreaCossu commented on May 26, 2024

I agree with what you already said. In case there is no paper that we can use as an exact reference, we can still check that the performance makes sense and that it does not decrease in future releases.
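One hypothetical way to encode that check is a regression-style test that runs the baseline and asserts the final accuracy stays near a recorded target. The helper name and the target value below are placeholders, not the repository's actual test suite.

```python
# Hypothetical regression check: run_replay_baseline and the 0.80 target
# are placeholders; a real test would call the repository's experiment
# entry point and use the accuracy recorded for the current release.
def run_replay_baseline(seed: int = 0) -> float:
    """Train the replay baseline and return the final average test accuracy."""
    ...  # training and evaluation omitted in this sketch
    return 0.0


def test_replay_accuracy_does_not_regress():
    target = 0.80      # placeholder reference accuracy
    tolerance = 0.02   # allow small run-to-run variation
    accuracy = run_replay_baseline(seed=0)
    assert accuracy >= target - tolerance
```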


AlbinSou commented on May 26, 2024

> I agree with having both batch and online settings. I would still try to follow a paper if possible; otherwise, just pick a reasonable memory size for the benchmark.

Marc uses a policy that is not really implemented in the framework yet; I don't know whether it would be worth incorporating it. I guess we can decide after seeing the results. The way he does it, if I remember well, is to take a dataset that is the concatenation of the current task data and the memory data, and to build the train loader on top of it. This ensures you see all the task data and the memory; however, if the memory is small it does not ensure that every mini-batch contains both memory samples and current-task samples.
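For reference, here is a minimal sketch of that concatenation policy in plain PyTorch; the tensors and sizes are placeholder data, not the actual benchmark.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholder stand-ins for the current task data and the replay memory.
current_task_data = TensorDataset(
    torch.randn(1000, 3, 32, 32), torch.randint(0, 10, (1000,))
)
memory_data = TensorDataset(
    torch.randn(200, 3, 32, 32), torch.randint(0, 10, (200,))
)

# Concatenate current task and memory, then build one train loader on top.
# Every task and memory sample is seen once per epoch, but with a small
# memory a given mini-batch may contain few or no memory samples.
train_dataset = ConcatDataset([current_task_data, memory_data])
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

for x, y in train_loader:
    pass  # forward/backward pass on the mixed mini-batch
```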


AntonioCarta commented on May 26, 2024

> Marc uses a policy that is not really implemented in the framework yet; I don't know whether it would be worth incorporating it. I guess we can decide after seeing the results. The way he does it, if I remember well, is to take a dataset that is the concatenation of the current task data and the memory data, and to build the train loader on top of it. This ensures you see all the task data and the memory; however, if the memory is small it does not ensure that every mini-batch contains both memory samples and current-task samples.

I didn't remember this. Then I think it's better to provide independent baselines.
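For contrast with the quoted concatenation policy, here is a minimal plain-PyTorch sketch of a loader that does guarantee a fixed mix of current-task and memory samples in every mini-batch; data and batch sizes are placeholders.

```python
from itertools import cycle

import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder stand-ins for the current task data and the replay memory.
current_task_data = TensorDataset(
    torch.randn(1000, 3, 32, 32), torch.randint(0, 10, (1000,))
)
memory_data = TensorDataset(
    torch.randn(200, 3, 32, 32), torch.randint(0, 10, (200,))
)

task_loader = DataLoader(current_task_data, batch_size=48, shuffle=True)
# cycle() repeats the memory batches so every task mini-batch is paired
# with a memory mini-batch, at the cost of reusing memory samples.
memory_loader = cycle(DataLoader(memory_data, batch_size=16, shuffle=True))

for (x_task, y_task), (x_mem, y_mem) in zip(task_loader, memory_loader):
    x = torch.cat([x_task, x_mem])
    y = torch.cat([y_task, y_mem])
    # forward/backward pass on a 48 + 16 mixed mini-batch
    pass
```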
