Comments (10)
This should be enough: https://github.com/ContinualAI/avalanche/blob/master/examples/icarl.py
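For reference, here is a minimal sketch of how that example is driven (a train/eval loop over the benchmark streams). The import path and the ICaRL constructor arguments below are assumptions from memory and may differ between Avalanche versions, so check examples/icarl.py for the exact signatures:

```python
import torch

# Import paths are assumptions; they may differ across Avalanche versions.
from avalanche.benchmarks.classic import SplitCIFAR100
from avalanche.training.supervised import ICaRL

# 10 experiences of 10 classes each, as in the iCaRL CIFAR-100 protocol.
benchmark = SplitCIFAR100(n_experiences=10, seed=1234)

# Placeholder backbone/head; the real example splits a ResNet-32 into
# a feature extractor and a linear classifier.
feature_extractor = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64), torch.nn.ReLU()
)
classifier = torch.nn.Linear(64, 100)

optimizer = torch.optim.SGD(
    [*feature_extractor.parameters(), *classifier.parameters()], lr=2.0
)

# Constructor arguments are assumptions; see the linked example for the
# actual values used by the baseline.
strategy = ICaRL(
    feature_extractor, classifier, optimizer,
    memory_size=2000,      # exemplar budget used in the paper
    buffer_transform=None,
    fixed_memory=True,
    train_mb_size=128, train_epochs=70, eval_mb_size=128,
)

for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```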
Hi @qsunyuan, can you please open a new issue? @rudysemola, maybe you can help answer?
I'll gladly try it.
Hi! Sorry to interrupt you.
I reproduced the iCaRL results (default args), but I cannot reach the target result of 0.62 from this link, even though I tried several different random seeds.
I would like to know where the value of 0.62 average accuracy comes from.
I also checked the original paper. The top-1 test accuracy on the last experience is about 0.49.
I tried both my own implementation and the Avalanche library, and I can't reach such high results. Could you please give some insights? I'm quite confused right now.
Hope to get your early reply. Thanks.
Thanks for your quick reply. I will open a new issue.
Hi @qsunyuan.
If I remember correctly, the last value in the plot you are looking at is the accuracy achieved on the last experience (0.49).
The metric used here is the average incremental accuracy defined by the authors in Section 4 (Experiments, benchmark protocol part).
If you look at the image taken from the paper (Table 1a), the result reported there under this definition of the metric is 0.641 for 10 classes, not 0.49.
0.62 was achieved using the same definition as in the paper.
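To make the definition concrete, here is a minimal sketch of the average incremental accuracy computation; the per-step accuracies below are made-up placeholders, not real results:

```python
# Average incremental accuracy (iCaRL, Sec. 4): after each incremental
# step, evaluate on all classes seen so far, then average these
# per-step accuracies over all steps.
# The values below are hypothetical placeholders for 10 steps of 10 classes.
step_accuracies = [0.85, 0.78, 0.73, 0.69, 0.65, 0.62, 0.58, 0.55, 0.52, 0.49]

average_incremental_accuracy = sum(step_accuracies) / len(step_accuracies)
print(f"last-step accuracy: {step_accuracies[-1]:.2f}")
print(f"average incremental accuracy: {average_incremental_accuracy:.3f}")
```

This is why a last-experience accuracy around 0.49 is consistent with an average incremental accuracy above 0.6: the early steps, where fewer classes have been seen, pull the average up.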
If you want another, similar codebase to reproduce the results in the paper (close to 0.641), you can use this old code; it is basically the same, but you never know, maybe it can work and help you ;)
Thanks for your help. It really helps me a lot. I will try the code right now.
Thanks again. Best wishes. Have a good day.
I tried your link @rudysemola and this one: https://github.com/ContinualAI/avalanche/blob/master/examples/icarl.py
I achieved a result of about 0.62.
Unfortunately, I also did not reach the result of 0.64.
Hi :) Minor changes in performance with respect to the original paper are often due to slightly different training modalities (e.g. the learning rate scheduler), which are not always easy to investigate or are not disclosed in the paper. Therefore, as a policy, we allow a 2% slack in accuracy during tests. Our target result for iCaRL is 0.62. If you manage to close this gap, please let us know!
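In other words, a reproduced run passes if it lands within 2 percentage points of the target. A minimal sketch of such a tolerance check (the constant names and helper are illustrative, not the repository's actual test code):

```python
# Hypothetical tolerance check in the spirit of the policy above.
TARGET_ACC = 0.62  # target average incremental accuracy for iCaRL
SLACK = 0.02       # the 2% slack allowed during tests

def within_tolerance(measured_acc: float) -> bool:
    """Return True if the measured accuracy meets the target minus slack."""
    return measured_acc >= TARGET_ACC - SLACK

print(within_tolerance(0.615))  # True: inside the 2% slack
print(within_tolerance(0.59))   # False: the gap is too large
```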