Comments (6)
Marc uses a replay policy that is not really implemented in the framework yet; I don't know whether it is worth incorporating. I guess we can decide after seeing the results. If I remember correctly, he takes a dataset that is the concatenation of the current task data and the memory data, and builds the train loader on top of it. This ensures you see all the task data and all the memory; however, if the memory is small, it does not guarantee that every batch contains both memory samples and current-task samples.
I didn't remember that. In that case, I think it's better to provide independent baselines.
For now I will try to use hyperparameters similar to those in the survey and see if I get close results with the Avalanche replay plugin.
from continual-learning-baselines.
I can add these if you want. The only thing I was worried about is the setting; maybe we should cover two settings:
- The online setting, with OnlineScenario, where data from the current task is also replayed during the learning of that task, since the buffer is updated at each mini-batch.
- The non-online setting, where we can train for more epochs and where some hyperparameters change, which we have to specify.
- Split-CIFAR-10, Split-CIFAR-100, and Split-MNIST should be a good start.
- What memory sizes?
I guess the problem with replay compared to other methods is that it is a baseline, so there is no reference paper whose hyperparameters we are trying to reproduce; we have to specify them and choose them in a sensible manner.
I agree with having both batch and online settings. I would still try to follow a paper if possible:
- there is M. Masana's CIL review, which should be easy to reproduce; I don't remember which replay policy they use, though.
- replay with tiny memories could also be a good choice.
Otherwise, just pick a reasonable memory size for the benchmark.
I agree with what you already said. If there is no paper that we can use as an exact reference, we can still check that the performance makes sense and that it does not decrease in future releases.
Related Issues (20)
- disable deterministic runs HOT 1
- Close the performance gap for available strategy
- Experiments failed to be reproduced HOT 2
- Table notation for reproducibility HOT 10
- GEM results are shown in the table as class-incremental HOT 1
- update cope results HOT 3
- Something wrong about import... Can you provide some details about your environment configs?
- Question about Synaptic intelligence baseline on the SplitMNIST dataset HOT 2