continualai / continual-learning-baselines

Continual learning baselines and strategies from popular papers, using Avalanche. We include EWC, SI, GEM, AGEM, LwF, iCaRL, GDumb, and other strategies.

Home Page: https://avalanche.continualai.org/

License: MIT License

continual-learning lifelong-learning assessment experiments literature reproduction synaptic-intelligence elastic-weight-consolidation ewc gem

continual-learning-baselines's Introduction

Continual Learning Baselines

Avalanche Website | Avalanche Repository

This project provides a set of examples with popular continual learning strategies and baselines. You can easily run experiments to reproduce the results from the original papers, or tweak the hyperparameters to get your own results. The sky is the limit!

To guarantee fair implementations, we rely on the Avalanche library, developed and maintained by ContinualAI. Feel free to check it out and support the project!

Experiments

The tables below describe all the experiments currently implemented in the experiments folder, along with their results. The tables are not meant to compare different methods, but rather to serve as a reference for their performance. Different methods may use slightly different setups (e.g., starting from a pre-trained model or from scratch), so it does not always make sense to compare them.

If an experiment exactly reproduces the results of a paper in terms of Performance (even if with different hyperparameters), it is marked with ✅ in the Reproduced column. Otherwise, it is marked with ❌.
Avalanche means that we could not find a specific paper to use as a reference, so we report the performance obtained by Avalanche when the strategy was first added to the library.
If the Performance is much worse than the expected one, the bug tag is used in the Reproduced column.
Finally, the Reference column reports the expected performance, together with a link to the associated paper (if any). Note that the link does not always point to the paper which introduced the strategy, since that paper sometimes differs from the one we used to get the target performance.

ACC means the Average Accuracy on all experiences after training on the last experience.
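In symbols (notation ours), writing $a_{T,i}$ for the accuracy on experience $i$ after training on the last experience $T$:

$$\mathrm{ACC} = \frac{1}{T} \sum_{i=1}^{T} a_{T,i}$$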

First, we report the results for the non-online continual learning case (a.k.a. batch continual learning). Then, we report the results for the online continual learning case.

Batch Continual Learning (non-online)

| Benchmark | Strategy | Scenario | Performance | Reference | Reproduced |
|---|---|---|---|---|---|
| Permuted MNIST | Less-Forgetful Learning (LFL) | Domain-Incremental | ACC=0.88 | ACC=0.88 (Avalanche) | |
| Permuted MNIST | Elastic Weight Consolidation (EWC) | Domain-Incremental | ACC=0.83 | ACC=0.94 | |
| Permuted MNIST | Synaptic Intelligence (SI) | Domain-Incremental | ACC=0.83 | ACC=0.95 | |
| Split CIFAR-100 | LaMAML | Task-Incremental | ACC=0.70 | ACC=0.70 | |
| Split CIFAR-100 | iCaRL | Class-Incremental | ACC=0.48 | ACC=0.50 | |
| Split CIFAR-100 | Replay | Class-Incremental | ACC=0.32 | ACC=0.32 (Avalanche) | |
| Split MNIST | RWalk | Task-Incremental | ACC=0.99 | ACC=0.99 | |
| Split MNIST | Synaptic Intelligence (SI) | Task-Incremental | ACC=0.97 | ACC=0.97 | |
| Split MNIST | GDumb | Class-Incremental | ACC=0.97 | ACC=0.97 | |
| Split MNIST | GSS_greedy | Class-Incremental | ACC=0.82 | ACC=0.78 | |
| Split MNIST | Generative Replay (GR) | Class-Incremental | ACC=0.75 | ACC=0.75 | |
| Split MNIST | Learning without Forgetting (LwF) | Class-Incremental | ACC=0.23 | ACC=0.23 | |
| Split Tiny ImageNet | LaMAML | Task-Incremental | ACC=0.54 | ACC=0.66 | |
| Split Tiny ImageNet | Learning without Forgetting (LwF) | Task-Incremental | ACC=0.44 | ACC=0.44 | |
| Split Tiny ImageNet | Memory Aware Synapses (MAS) | Task-Incremental | ACC=0.40 | ACC=0.40 | |
| Split Tiny ImageNet | PackNet | Task-Incremental | ACC=0.46 | ACC=0.47 (Table 4 SMALL) | |

Online Continual Learning

| Benchmark | Strategy | Scenario | Performance | Reference | Reproduced |
|---|---|---|---|---|---|
| CORe50 | Deep Streaming LDA (DSLDA) | Class-Incremental | ACC=0.79 | ACC=0.79 | |
| Permuted MNIST | GEM | Domain-Incremental | ACC=0.80 | ACC=0.83 | |
| Split CIFAR-10 | Online Replay | Class-Incremental | ACC=0.50 | ACC=0.50 (Avalanche) | |
| Split CIFAR-10 | ER-AML | Class-Incremental | ACC=0.47 | ACC=0.47 | |
| Split CIFAR-10 | ER-ACE | Class-Incremental | ACC=0.45 | ACC=0.52 | |
| Split CIFAR-10 | Supervised Contrastive Replay (SCR) | Class-Incremental | ACC=0.36 | ACC=0.48 (Avalanche) | |
| Permuted MNIST | Average GEM (AGEM) | Domain-Incremental | ACC=0.81 | ACC=0.81 | |
| Split CIFAR-100 | GEM | Task-Incremental | ACC=0.63 | ACC=0.63 | |
| Split CIFAR-100 | Average GEM (AGEM) | Task-Incremental | ACC=0.62 | ACC=0.62 | |
| Split CIFAR-100 | ER-ACE | Class-Incremental | ACC=0.24 | ACC=0.25 | |
| Split CIFAR-100 | ER-AML | Class-Incremental | ACC=0.24 | ACC=0.24 | |
| Split CIFAR-100 | Online Replay | Class-Incremental | ACC=0.21 | ACC=0.21 (Avalanche) | |
| Split MNIST | CoPE | Class-Incremental | ACC=0.93 | ACC=0.93 | |
| Split MNIST | Online Replay | Class-Incremental | ACC=0.92 | ACC=0.92 (Avalanche) | |

Python dependencies for experiments

Outside the Python standard library, the main packages required to run the experiments are PyTorch, Avalanche, and Pandas.

  • Avalanche: the latest version of this repo requires the latest Avalanche version (from the master branch): pip install git+https://github.com/ContinualAI/avalanche.git. The CL baselines repo is tagged with the supported Avalanche version (you can browse the tags to check out all the versions). You can install the corresponding Avalanche version with pip install avalanche-lib==[version number], where [version number] is of the form 0.1.0. For some strategies (e.g., LaMAML) you may need to install Avalanche with extra packages, like pip install avalanche-lib[extra]. For more details on how to install Avalanche, please check out the complete guide here. A full setup example is given after this list.
  • PyTorch: we recommend following the official guide.
  • Pandas: pip install pandas. Official guide.
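For example, a typical setup might look like the following (the Avalanche version number below is only an example; use the one matching the repository tag you checked out):

pip install torch torchvision
pip install avalanche-lib==0.3.1
pip install pandas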

Run experiments with Python

Place yourself into the project root folder.

Experiments can be run from a Python script by simply importing the function from the experiments folder and executing it.
By default, experiments will run on the GPU, when available.

The input argument to each experiment is an optional dictionary of parameters to be used in the experiment. If None, the default parameters (taken from the original paper) will be used.

from experiments.split_mnist import synaptic_intelligence_smnist  # select the experiment

# can be None to use default parameters
custom_hyperparameters = {'si_lambda': 0.01, 'cuda': -1, 'seed': 3}

# run the experiment
result = synaptic_intelligence_smnist(custom_hyperparameters)

# print the dictionary of Avalanche metrics
print(result)

Command line experiments

Place yourself into the project root folder.
You should add the project root folder to your PYTHONPATH.

For example, on Linux you can set it up globally:

export PYTHONPATH=${PYTHONPATH}:/path/to/continual-learning-baselines

or just for the current command:

PYTHONPATH=${PYTHONPATH}:/path/to/continual-learning-baselines command to be executed

You can run experiments directly from the console with the default parameters.
Open the console and run the Python file you want by specifying its path.

For example, to run Synaptic Intelligence on Split MNIST:

python experiments/split_mnist/synaptic_intelligence.py

To execute an experiment with custom parameters, please refer to the previous section.

Run tests

Place yourself into the project root folder.

You can run all tests with

python -m unittest

or you can specify a test by providing the test name in the format tests.strategy_class_name.test_benchmarkname.

For example to run Synaptic Intelligence on Split MNIST you can run:

python -m unittest tests.SynapticIntelligence.test_smnist

Cite

If you used this repo, you automatically used Avalanche; please remember to cite our reference paper published at the CLVision @ CVPR2021 workshop: "Avalanche: an End-to-End Library for Continual Learning". This will help us make Avalanche better known in the machine learning community, ultimately making it a better tool for everyone:

@InProceedings{lomonaco2021avalanche,
    title={Avalanche: an End-to-End Library for Continual Learning},
    author={Vincenzo Lomonaco and Lorenzo Pellegrini and Andrea Cossu and Antonio Carta and Gabriele Graffieti and Tyler L. Hayes and Matthias De Lange and Marc Masana and Jary Pomponi and Gido van de Ven and Martin Mundt and Qi She and Keiland Cooper and Jeremy Forest and Eden Belouadah and Simone Calderara and German I. Parisi and Fabio Cuzzolin and Andreas Tolias and Simone Scardapane and Luca Antiga and Subutai Ahmad and Adrian Popescu and Christopher Kanan and Joost van de Weijer and Tinne Tuytelaars and Davide Bacciu and Davide Maltoni},
    booktitle={Proceedings of IEEE Conference on Computer Vision and Pattern Recognition},
    series={2nd Continual Learning in Computer Vision Workshop},
    year={2021}
}

Contribute to the project

We are always looking for new contributors willing to help us in the challenging mission of providing robust experiments to the community. Would you like to join us? The steps are easy!

  1. Take a look at the open issues and find one that suits you
  2. Fork this repo and write an experiment (see next section)
  3. Submit a PR and receive support from the maintainers
  4. Merge the PR, your contribution is now included in the project!

Write an experiment

  1. Create the appropriate script in experiments/benchmark_folder. If the benchmark folder is not present, you can add one.
  2. Fill the experiment.py file with your code, following the style of the other experiments (see the sketch after this list). The script should return the metrics used by the related test.
  3. Add to tests/target_results.csv the expected result for your experiment. You can add a number or a list of numbers.
  4. Write the unit test in tests/strategy_folder/experiment.py. Follow the very simple structure of the existing tests.
  5. Update the table in README.md.
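A minimal sketch of what such an experiment script could look like, assuming the set_seed and create_default_args helpers from experiments/utils.py behave as in the existing scripts; the function name naive_smnist_example and its default hyperparameters are hypothetical, not taken from the repository:

import torch
import avalanche as avl
from experiments.utils import set_seed, create_default_args


def naive_smnist_example(override_args=None):
    """Hypothetical example experiment: Naive finetuning on Split MNIST."""
    args = create_default_args({'cuda': 0, 'epochs': 1, 'learning_rate': 0.001,
                                'train_mb_size': 128, 'seed': 0}, override_args)
    set_seed(args.seed)
    device = torch.device(f"cuda:{args.cuda}"
                          if torch.cuda.is_available() and args.cuda >= 0 else "cpu")

    # benchmark, model and evaluation are standard Avalanche objects
    benchmark = avl.benchmarks.SplitMNIST(5, return_task_id=False)
    model = avl.models.SimpleMLP(num_classes=10)
    evaluator = avl.training.plugins.EvaluationPlugin(
        avl.evaluation.metrics.accuracy_metrics(experience=True, stream=True),
        loggers=[avl.logging.InteractiveLogger()])

    strategy = avl.training.Naive(
        model, torch.optim.SGD(model.parameters(), lr=args.learning_rate),
        torch.nn.CrossEntropyLoss(), train_mb_size=args.train_mb_size,
        train_epochs=args.epochs, device=device, evaluator=evaluator)

    for experience in benchmark.train_stream:
        strategy.train(experience)
    # return the metrics dictionary so that the related test can read the
    # final stream accuracy and compare it against tests/target_results.csv
    return strategy.eval(benchmark.test_stream)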

Find the Avalanche commit which produced a regression

  1. Place yourself into the avalanche folder and make sure you are using the Avalanche version from that repository in your Python environment (it is usually enough to add /path/to/avalanche to your PYTHONPATH).
  2. Use the gitbisect_test.sh script (provided in this repository) in combination with git bisect to retrieve the Avalanche commit introducing the regression:
    git bisect start HEAD v0.1.0 -- # HEAD (current version) is bad, v0.1.0 is good
    git bisect run /path/to/gitbisect_test.sh /path/to/continual-learning-baselines optional_test_name
    git bisect reset
  3. The gitbisect_test.sh script requires a mandatory parameter pointing to the continual-learning-baselines directory and an optional parameter specifying the path to a particular unit test (e.g., tests.EWC.test_pmnist). If the second parameter is not given, all the unit tests will be run.
  4. The terminal output will tell you which commit introduced the bug.
  5. You can change the HEAD and v0.1.0 refs to any Avalanche commits.

continual-learning-baselines's People

Contributors

albinsou, andreacossu, antoniocarta, geremiapompei, hamedhemati, rmassidda, rudysemola, tachyonicclock, travela


continual-learning-baselines's Issues

Reference papers

Papers which are used as baseline comparisons should be referenced in the code.

E.g., from https://github.com/ContinualAI/continual-learning-baselines/blob/main/experiments/permuted_mnist/synaptic_intelligence.py


def synaptic_intelligence_pmnist(override_args=None):
    """
    @article{zenke_continual_2017,
        title = {Continual {Learning} {Through} {Synaptic} {Intelligence}},
        url = {http://arxiv.org/abs/1703.04200},
        journal = {arXiv:1703.04200 [cs, q-bio, stat]},
        author = {Zenke, Friedemann and Poole, Ben and Ganguli, Surya},
        month = jun,
        year = {2017},
    }
    """
    args = create_default_args({'cuda': 0, 'si_lambda': 0.1, 'si_eps': 0.1, 'epochs': 20,
                                'learning_rate': 0.001, 'train_mb_size': 256, 'seed': 0}, override_args)

Something wrong about import... Can you provide some details about your environment configs?

avalanche-lib Version: 0.4.0a0

When I use:
python experiments/split_mnist/synaptic_intelligence.py
or
python -m unittest tests.SynapticIntelligence.test_smnist

I get something like:
Traceback (most recent call last):
  File "/home/avabaseline/continual-learning-baselines/experiments/split_mnist/synaptic_intelligence.py", line 7, in <module>
    from experiments.utils import set_seed, create_default_args
  File "/home/avabaseline/continual-learning-baselines/experiments/__init__.py", line 1, in <module>
    from . import split_mnist
  File "/home/avabaseline/continual-learning-baselines/experiments/split_mnist/__init__.py", line 1, in <module>
    from .synaptic_intelligence import synaptic_intelligence_smnist
  File "/home/avabaseline/continual-learning-baselines/experiments/split_mnist/synaptic_intelligence.py", line 7, in <module>
    from experiments.utils import set_seed, create_default_args
  File "/home/avabaseline/continual-learning-baselines/experiments/utils.py", line 9, in <module>
    from avalanche.benchmarks.utils import AvalancheSubset
ImportError: cannot import name 'AvalancheSubset' from 'avalanche.benchmarks.utils' (/home/avalanche/avalanche/benchmarks/utils/__init__.py)

avalanche-lib Version: 0.1.0

When I use:
python experiments/split_mnist/synaptic_intelligence.py
or
python -m unittest tests.SynapticIntelligence.test_smnist

I get something like:
Traceback (most recent call last):
  File "experiments/split_mnist/naive.py", line 1, in <module>
    import avalanche as avl
  File "/opt/conda/envs/avabase/lib/python3.7/site-packages/avalanche/__init__.py", line 1, in <module>
    from avalanche import benchmarks
  File "/opt/conda/envs/avabase/lib/python3.7/site-packages/avalanche/benchmarks/__init__.py", line 13, in <module>
    from .classic import *
  File "/opt/conda/envs/avabase/lib/python3.7/site-packages/avalanche/benchmarks/classic/__init__.py", line 1, in <module>
    from .ccifar10 import *
  File "/opt/conda/envs/avabase/lib/python3.7/site-packages/avalanche/benchmarks/classic/ccifar10.py", line 19, in <module>
    from avalanche.benchmarks.datasets import default_dataset_location
  File "/opt/conda/envs/avabase/lib/python3.7/site-packages/avalanche/benchmarks/datasets/__init__.py", line 11, in <module>
    from .torchvision_wrapper import *
  File "/opt/conda/envs/avabase/lib/python3.7/site-packages/avalanche/benchmarks/datasets/torchvision_wrapper.py", line 40, in <module>
    from torchvision.datasets import Kinetics400 as torchKinetics400
ImportError: cannot import name 'Kinetics400' from 'torchvision.datasets' (/opt/conda/envs/avabase/lib/python3.7/site-packages/torchvision/datasets/__init__.py)

Okay, something is wrong with the torch version:
pip install torch==1.9.0 torchvision==0.10.0

Then the new problem is:
Traceback (most recent call last):
  File "/home/avabaseline/continual-learning-baselines/experiments/split_mnist/naive.py", line 7, in <module>
    from experiments.utils import set_seed, create_default_args
  File "/home/avabaseline/continual-learning-baselines/experiments/__init__.py", line 1, in <module>
    from . import split_mnist
  File "/home/avabaseline/continual-learning-baselines/experiments/split_mnist/__init__.py", line 3, in <module>
    from .gss import gss_smnist
  File "/home/avabaseline/continual-learning-baselines/experiments/split_mnist/gss.py", line 3, in <module>
    from avalanche.benchmarks import CLExperience
ImportError: cannot import name 'CLExperience' from 'avalanche.benchmarks' (/opt/conda/envs/ab/lib/python3.9/site-packages/avalanche/benchmarks/__init__.py)

I also tried other versions of avalanche-lib, and different errors happened, such as: ImportError: cannot import name 'MIRPlugin' from 'avalanche.training.plugins' (/opt/conda/envs/ab/lib/python3.9/site-packages/avalanche/training/plugins/__init__.py)

OMG!
I don't know how to solve it and I'm upset. Maybe I made some silly mistakes. Please forgive me and help me.
Could you give me some details about the correct environment, please?
Thanks!!!

Close the performance gap for available strategies

Currently, we still face a performance gap for some of the existing strategies.
The expected performance can be found in the comments of the related experiments folder.

Any help in closing the gap is welcome. Just comment on this issue and I will assign you to that strategy.

List of strategies to "fix":

  • Elastic Weight Consolidation on Permuted MNIST
  • Synaptic Intelligence on Permuted MNIST
  • iCaRL on Split CIFAR-100
  • RWalk on Split MNIST
  • GSS on Split MNIST
  • CoPE on Split MNIST (most likely a bug in CoPE)
  • LaMAML on Split Tiny-ImageNet

Synaptic Intelligence vs Naive Finetuning Comparison

Hi, I am running some experiments to compare Synaptic Intelligence and naive fine-tuning on different benchmarks, including Split MNIST, Permuted MNIST, and a custom benchmark of non-IID datasets. I observed that the performance of Synaptic Intelligence exactly mirrors the performance of the naive fine-tuning strategy. Is this expected?

ADD replay baselines

We are missing baselines for Replay with Reservoir Sampling and Class-Balanced Reservoir Sampling.
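For context, a minimal sketch of the buffer such a baseline would maintain (textbook reservoir sampling, not code from this repository):

import random

class ReservoirBuffer:
    """Fixed-capacity buffer in which every example seen so far has the
    same probability of being stored (reservoir sampling)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0

    def update(self, example):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Replace a stored example with probability capacity / n_seen.
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = example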

disable deterministic runs

Right now we have determinism enabled. This results in slower experiments. I think it would be better to disable it by default and only use it for the unit tests.
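A minimal sketch of how determinism could be gated behind a flag (assuming the usual PyTorch and NumPy seeding knobs; this set_seed signature is hypothetical, not the one in experiments/utils.py):

import random
import numpy as np
import torch

def set_seed(seed, deterministic=False):
    # Seeding alone keeps runs repeatable enough for day-to-day experiments.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    if deterministic:
        # Deterministic cuDNN kernels are noticeably slower:
        # enable them only for the unit tests.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False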

Reproducing LwF experiments

I noticed something strange in my own experiments, related to a change I made a while ago to LwF in the Avalanche master branch. Basically, right now the distillation is applied only to the previously active units. Formally, this is the closest solution to the original paper (which only uses multi-heads). However, distilling on all the units (as we did previously) results in better accuracy, probably because of the additional penalization of the new units.
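To make the two variants concrete, here is a minimal sketch of the LwF distillation term (our notation, not Avalanche's implementation); active_units stands for the indices of the previously active output units:

import torch.nn.functional as F

def lwf_distillation(student_logits, teacher_logits, active_units=None, temperature=2.0):
    # Knowledge-distillation term of Learning without Forgetting (sketch).
    # active_units=None distills on all units (the previous behaviour);
    # passing the indices of previously active units matches the current one.
    if active_units is not None:
        student_logits = student_logits[:, active_units]
        teacher_logits = teacher_logits[:, active_units]
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    return -(p_teacher * log_p_student).sum(dim=1).mean()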

Question about the Synaptic Intelligence baseline on the Split MNIST dataset

First off, I would like to express my gratitude for creating this repository of continual learning baselines and the well put together Avalanche library. I am currently studying Synaptic Intelligence and trying to replicate the results, particularly on the Split MNIST dataset. The page reports 97% accuracy, described as the average accuracy across all experiences after training on the last experience. However, when I run the code, the final accuracy is only 19.27%, which is exactly the same as when I manually evaluate the trained model on the full MNIST test dataset. Is the 19.27% accuracy correct? Or am I missing something?

All the best,

Experiments failed to be reproduced

F

FAIL: test_smnist (tests.lwf.experiment.LwF)
Split MNIST benchmark

Traceback (most recent call last):
File "/home/acossu/continual-learning-baselines/tests/lwf/experiment.py", line 33, in test_smnist
self.assertAlmostEqual(target_acc, avg_stream_acc, delta=0.03)
AssertionError: 0.31 != 0.1944 within 0.03 delta (0.11560000000000001 difference)


Ran 1 test in 124.929s

FAILED (failures=1)

One Little Typo in experiments.split_mnist.naive

In the experiments.split_mnist.naive module, the 'task_incremental': False in line 18 does not correspond to the return_task_id=args.task_incremental in line 24, which causes AttributeError: 'types.SimpleNamespace' object has no attribute 'task_incremental' on my computer. I think it's a typo in line 18, and it has been there for quite a long time.
I am using Python 3.9, and I suppose it can be fixed in one second, literally.
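A minimal sketch of the consistent version (imports and surrounding keys are illustrative, not the actual contents of naive.py):

import avalanche as avl
from experiments.utils import create_default_args

def naive_smnist(override_args=None):
    # the dictionary key (line 18 in the issue) must match the attribute
    # accessed later (line 24); all other keys here are illustrative
    args = create_default_args({'cuda': 0, 'epochs': 5,
                                'task_incremental': False}, override_args)
    benchmark = avl.benchmarks.SplitMNIST(5, return_task_id=args.task_incremental)
    ...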

Experiments failed to be reproduced

/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.agem.AGEMPlugin object at 0x7f52f7dbe700> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.AGEM object at 0x7f52f7dbe6d0>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.agem.AGEMPlugin object at 0x7f52f77ff3a0> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.AGEM object at 0x7f52f7803610>. This may result in errors.
warnings.warn(
F/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.cope.CoPEPlugin object at 0x7f531d1eb3a0> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.Naive object at 0x7f531d1eb640>. This may result in errors.
warnings.warn(
F/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/plugins/evaluation.py:85: UserWarning: No benchmark provided to the evaluation plugin. Metrics may be computed on inconsistent portion of streams, use at your own risk.
warnings.warn(
/home/acossu/reproducible-continual-learning/strategies/dslda/experiment.py:59: UserWarning: The Deep SLDA example is not perfectly aligned with the paper implementation since it does not use a base initialization phase and instead starts streming from pre-trained weights. Performance should still match.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.ewc.EWCPlugin object at 0x7f5367f4e910> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.EWC object at 0x7f5367f4e610>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.gdumb.GDumbPlugin object at 0x7f5367b414f0> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.GDumb object at 0x7f5367b41d60>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.gem.GEMPlugin object at 0x7f5367af79a0> implements incompatible callbacks for template <strategies.gem.experiment.GEM_reduced object at 0x7f5367af7c10>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.gem.GEMPlugin object at 0x7f53675ab4c0> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.GEM object at 0x7f53675ab8b0>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/plugins/evaluation.py:85: UserWarning: No benchmark provided to the evaluation plugin. Metrics may be computed on inconsistent portion of streams, use at your own risk.
warnings.warn(
/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.gss_greedy.GSS_greedyPlugin object at 0x7f5367c33c10> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.GSS_greedy object at 0x7f5367c33c40>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.lwf.LwFPlugin object at 0x7f5367d1c6d0> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.LwF object at 0x7f5367d1c280>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.lwf.LwFPlugin object at 0x7f5367a539a0> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.LwF object at 0x7f5367a53220>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.lwf.LwFPlugin object at 0x7f5367b41040> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.LwF object at 0x7f5368474250>. This may result in errors.
warnings.warn(
F/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.mas.MASPlugin object at 0x7f536760a820> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.MAS object at 0x7f536760a3a0>. This may result in errors.
warnings.warn(

0%| | 0/50 [00:00<?, ?it/s]
4%|▍ | 2/50 [00:00<00:04, 10.09it/s]
8%|▊ | 4/50 [00:00<00:04, 9.99it/s]
10%|█ | 5/50 [00:00<00:04, 9.94it/s]
12%|█▏ | 6/50 [00:00<00:04, 9.60it/s]
16%|█▌ | 8/50 [00:00<00:04, 10.01it/s]
20%|██ | 10/50 [00:00<00:03, 10.20it/s]
24%|██▍ | 12/50 [00:01<00:03, 10.35it/s]
28%|██▊ | 14/50 [00:01<00:03, 10.48it/s]
32%|███▏ | 16/50 [00:01<00:03, 10.55it/s]
36%|███▌ | 18/50 [00:01<00:03, 10.58it/s]
40%|████ | 20/50 [00:01<00:02, 10.59it/s]
44%|████▍ | 22/50 [00:02<00:02, 10.50it/s]
48%|████▊ | 24/50 [00:02<00:02, 10.55it/s]
52%|█████▏ | 26/50 [00:02<00:02, 10.59it/s]
56%|█████▌ | 28/50 [00:02<00:02, 10.61it/s]
60%|██████ | 30/50 [00:02<00:01, 10.66it/s]
64%|██████▍ | 32/50 [00:03<00:01, 10.71it/s]
68%|██████▊ | 34/50 [00:03<00:01, 10.73it/s]
72%|███████▏ | 36/50 [00:03<00:01, 10.71it/s]
76%|███████▌ | 38/50 [00:03<00:01, 10.65it/s]
80%|████████ | 40/50 [00:03<00:00, 10.66it/s]
84%|████████▍ | 42/50 [00:03<00:00, 10.67it/s]
88%|████████▊ | 44/50 [00:04<00:00, 10.70it/s]
92%|█████████▏| 46/50 [00:04<00:00, 10.70it/s]
96%|█████████▌| 48/50 [00:04<00:00, 10.61it/s]
100%|██████████| 50/50 [00:04<00:00, 10.62it/s]
100%|██████████| 50/50 [00:04<00:00, 10.53it/s]

0%| | 0/50 [00:00<?, ?it/s]
4%|▍ | 2/50 [00:00<00:04, 10.33it/s]
8%|▊ | 4/50 [00:00<00:04, 10.32it/s]
12%|█▏ | 6/50 [00:00<00:04, 10.29it/s]
16%|█▌ | 8/50 [00:00<00:04, 10.26it/s]
20%|██ | 10/50 [00:00<00:03, 10.24it/s]
24%|██▍ | 12/50 [00:01<00:03, 10.29it/s]
28%|██▊ | 14/50 [00:01<00:03, 10.32it/s]
32%|███▏ | 16/50 [00:01<00:03, 10.32it/s]
36%|███▌ | 18/50 [00:01<00:03, 10.27it/s]
40%|████ | 20/50 [00:01<00:02, 10.28it/s]
44%|████▍ | 22/50 [00:02<00:02, 10.27it/s]
48%|████▊ | 24/50 [00:02<00:02, 10.31it/s]
52%|█████▏ | 26/50 [00:02<00:02, 10.31it/s]
56%|█████▌ | 28/50 [00:02<00:02, 10.28it/s]
60%|██████ | 30/50 [00:02<00:01, 10.32it/s]
64%|██████▍ | 32/50 [00:03<00:01, 10.29it/s]
68%|██████▊ | 34/50 [00:03<00:01, 10.28it/s]
72%|███████▏ | 36/50 [00:03<00:01, 10.29it/s]
76%|███████▌ | 38/50 [00:03<00:01, 10.29it/s]
80%|████████ | 40/50 [00:03<00:00, 10.31it/s]
84%|████████▍ | 42/50 [00:04<00:00, 10.34it/s]
88%|████████▊ | 44/50 [00:04<00:00, 10.39it/s]
92%|█████████▏| 46/50 [00:04<00:00, 10.39it/s]
96%|█████████▌| 48/50 [00:04<00:00, 10.38it/s]
100%|██████████| 50/50 [00:04<00:00, 10.38it/s]
100%|██████████| 50/50 [00:04<00:00, 10.32it/s]

0%| | 0/50 [00:00<?, ?it/s]
2%|▏ | 1/50 [00:00<00:04, 9.91it/s]
6%|▌ | 3/50 [00:00<00:04, 10.26it/s]
10%|█ | 5/50 [00:00<00:04, 10.37it/s]
14%|█▍ | 7/50 [00:00<00:04, 10.39it/s]
18%|█▊ | 9/50 [00:00<00:03, 10.40it/s]
22%|██▏ | 11/50 [00:01<00:03, 10.43it/s]
26%|██▌ | 13/50 [00:01<00:03, 10.47it/s]
30%|███ | 15/50 [00:01<00:03, 10.45it/s]
34%|███▍ | 17/50 [00:01<00:03, 10.46it/s]
38%|███▊ | 19/50 [00:01<00:02, 10.47it/s]
42%|████▏ | 21/50 [00:02<00:02, 10.44it/s]
46%|████▌ | 23/50 [00:02<00:02, 10.44it/s]
50%|█████ | 25/50 [00:02<00:02, 10.48it/s]
54%|█████▍ | 27/50 [00:02<00:02, 10.49it/s]
58%|█████▊ | 29/50 [00:02<00:02, 10.45it/s]
62%|██████▏ | 31/50 [00:02<00:01, 10.45it/s]
66%|██████▌ | 33/50 [00:03<00:01, 10.42it/s]
70%|███████ | 35/50 [00:03<00:01, 10.36it/s]
74%|███████▍ | 37/50 [00:03<00:01, 10.38it/s]
78%|███████▊ | 39/50 [00:03<00:01, 10.38it/s]
82%|████████▏ | 41/50 [00:03<00:00, 10.40it/s]
86%|████████▌ | 43/50 [00:04<00:00, 10.40it/s]
90%|█████████ | 45/50 [00:04<00:00, 10.39it/s]
94%|█████████▍| 47/50 [00:04<00:00, 10.40it/s]
98%|█████████▊| 49/50 [00:04<00:00, 10.38it/s]
100%|██████████| 50/50 [00:04<00:00, 10.41it/s]

0%| | 0/50 [00:00<?, ?it/s]
4%|▍ | 2/50 [00:00<00:04, 10.26it/s]
8%|▊ | 4/50 [00:00<00:04, 10.26it/s]
12%|█▏ | 6/50 [00:00<00:04, 10.31it/s]
16%|█▌ | 8/50 [00:00<00:04, 10.37it/s]
20%|██ | 10/50 [00:00<00:03, 10.35it/s]
24%|██▍ | 12/50 [00:01<00:03, 10.38it/s]
28%|██▊ | 14/50 [00:01<00:03, 10.38it/s]
32%|███▏ | 16/50 [00:01<00:03, 10.40it/s]
36%|███▌ | 18/50 [00:01<00:03, 10.42it/s]
40%|████ | 20/50 [00:01<00:02, 10.40it/s]
44%|████▍ | 22/50 [00:02<00:02, 10.40it/s]
48%|████▊ | 24/50 [00:02<00:02, 10.40it/s]
52%|█████▏ | 26/50 [00:02<00:02, 10.37it/s]
56%|█████▌ | 28/50 [00:02<00:02, 10.35it/s]
60%|██████ | 30/50 [00:02<00:01, 10.36it/s]
64%|██████▍ | 32/50 [00:03<00:01, 10.33it/s]
68%|██████▊ | 34/50 [00:03<00:01, 10.32it/s]
72%|███████▏ | 36/50 [00:03<00:01, 10.28it/s]
76%|███████▌ | 38/50 [00:03<00:01, 10.26it/s]
80%|████████ | 40/50 [00:03<00:00, 10.24it/s]
84%|████████▍ | 42/50 [00:04<00:00, 10.26it/s]
88%|████████▊ | 44/50 [00:04<00:00, 10.24it/s]
92%|█████████▏| 46/50 [00:04<00:00, 10.30it/s]
96%|█████████▌| 48/50 [00:04<00:00, 10.33it/s]
100%|██████████| 50/50 [00:04<00:00, 10.34it/s]
100%|██████████| 50/50 [00:04<00:00, 10.34it/s]

0%| | 0/50 [00:00<?, ?it/s]
2%|▏ | 1/50 [00:00<00:05, 9.55it/s]
4%|▍ | 2/50 [00:00<00:05, 9.51it/s]
6%|▌ | 3/50 [00:00<00:04, 9.49it/s]
8%|▊ | 4/50 [00:00<00:04, 9.44it/s]
10%|█ | 5/50 [00:00<00:04, 9.45it/s]
12%|█▏ | 6/50 [00:00<00:04, 9.38it/s]
14%|█▍ | 7/50 [00:00<00:04, 9.41it/s]
16%|█▌ | 8/50 [00:00<00:04, 9.37it/s]
18%|█▊ | 9/50 [00:00<00:04, 9.40it/s]
20%|██ | 10/50 [00:01<00:04, 9.36it/s]
22%|██▏ | 11/50 [00:01<00:04, 9.40it/s]
24%|██▍ | 12/50 [00:01<00:04, 9.24it/s]
26%|██▌ | 13/50 [00:01<00:04, 9.24it/s]
28%|██▊ | 14/50 [00:01<00:03, 9.22it/s]
30%|███ | 15/50 [00:01<00:03, 9.33it/s]
32%|███▏ | 16/50 [00:01<00:03, 9.38it/s]
34%|███▍ | 17/50 [00:01<00:03, 9.26it/s]
36%|███▌ | 18/50 [00:01<00:03, 9.35it/s]
40%|████ | 20/50 [00:02<00:03, 9.76it/s]
44%|████▍ | 22/50 [00:02<00:02, 9.96it/s]
48%|████▊ | 24/50 [00:02<00:02, 10.09it/s]
52%|█████▏ | 26/50 [00:02<00:02, 10.10it/s]
56%|█████▌ | 28/50 [00:02<00:02, 10.19it/s]
60%|██████ | 30/50 [00:03<00:01, 10.27it/s]
64%|██████▍ | 32/50 [00:03<00:01, 10.35it/s]
68%|██████▊ | 34/50 [00:03<00:01, 10.38it/s]
72%|███████▏ | 36/50 [00:03<00:01, 10.39it/s]
76%|███████▌ | 38/50 [00:03<00:01, 10.39it/s]
80%|████████ | 40/50 [00:04<00:00, 10.42it/s]
84%|████████▍ | 42/50 [00:04<00:00, 10.42it/s]
88%|████████▊ | 44/50 [00:04<00:00, 10.41it/s]
92%|█████████▏| 46/50 [00:04<00:00, 10.42it/s]
96%|█████████▌| 48/50 [00:04<00:00, 10.46it/s]
100%|██████████| 50/50 [00:05<00:00, 10.46it/s]
100%|██████████| 50/50 [00:05<00:00, 10.00it/s]

0%| | 0/50 [00:00<?, ?it/s]
4%|▍ | 2/50 [00:00<00:04, 10.14it/s]
8%|▊ | 4/50 [00:00<00:04, 9.98it/s]
10%|█ | 5/50 [00:00<00:04, 9.95it/s]
12%|█▏ | 6/50 [00:00<00:04, 9.92it/s]
14%|█▍ | 7/50 [00:00<00:04, 9.90it/s]
16%|█▌ | 8/50 [00:00<00:04, 9.88it/s]
18%|█▊ | 9/50 [00:00<00:04, 9.87it/s]
20%|██ | 10/50 [00:01<00:04, 9.87it/s]
22%|██▏ | 11/50 [00:01<00:03, 9.89it/s]
24%|██▍ | 12/50 [00:01<00:03, 9.86it/s]
26%|██▌ | 13/50 [00:01<00:03, 9.86it/s]
28%|██▊ | 14/50 [00:01<00:03, 9.87it/s]
30%|███ | 15/50 [00:01<00:03, 9.86it/s]
32%|███▏ | 16/50 [00:01<00:03, 9.89it/s]
34%|███▍ | 17/50 [00:01<00:03, 9.65it/s]
38%|███▊ | 19/50 [00:01<00:03, 9.89it/s]
42%|████▏ | 21/50 [00:02<00:02, 10.05it/s]
46%|████▌ | 23/50 [00:02<00:02, 10.20it/s]
50%|█████ | 25/50 [00:02<00:02, 10.30it/s]
54%|█████▍ | 27/50 [00:02<00:02, 10.32it/s]
58%|█████▊ | 29/50 [00:02<00:02, 10.36it/s]
62%|██████▏ | 31/50 [00:03<00:01, 10.37it/s]
66%|██████▌ | 33/50 [00:03<00:01, 10.38it/s]
70%|███████ | 35/50 [00:03<00:01, 10.38it/s]
74%|███████▍ | 37/50 [00:03<00:01, 10.37it/s]
78%|███████▊ | 39/50 [00:03<00:01, 10.38it/s]
82%|████████▏ | 41/50 [00:04<00:00, 10.39it/s]
86%|████████▌ | 43/50 [00:04<00:00, 10.37it/s]
90%|█████████ | 45/50 [00:04<00:00, 10.37it/s]
94%|█████████▍| 47/50 [00:04<00:00, 10.42it/s]
98%|█████████▊| 49/50 [00:04<00:00, 10.35it/s]
100%|██████████| 50/50 [00:04<00:00, 10.19it/s]

0%| | 0/50 [00:00<?, ?it/s]
4%|▍ | 2/50 [00:00<00:04, 10.34it/s]
8%|▊ | 4/50 [00:00<00:04, 10.39it/s]
12%|█▏ | 6/50 [00:00<00:04, 10.37it/s]
16%|█▌ | 8/50 [00:00<00:04, 10.42it/s]
20%|██ | 10/50 [00:00<00:03, 10.41it/s]
24%|██▍ | 12/50 [00:01<00:03, 10.42it/s]
28%|██▊ | 14/50 [00:01<00:03, 10.42it/s]
32%|███▏ | 16/50 [00:01<00:03, 10.39it/s]
36%|███▌ | 18/50 [00:01<00:03, 10.38it/s]
40%|████ | 20/50 [00:01<00:02, 10.42it/s]
44%|████▍ | 22/50 [00:02<00:02, 10.43it/s]
48%|████▊ | 24/50 [00:02<00:02, 10.41it/s]
52%|█████▏ | 26/50 [00:02<00:02, 10.41it/s]
56%|█████▌ | 28/50 [00:02<00:02, 10.44it/s]
60%|██████ | 30/50 [00:02<00:01, 10.47it/s]
64%|██████▍ | 32/50 [00:03<00:01, 10.47it/s]
68%|██████▊ | 34/50 [00:03<00:01, 10.49it/s]
72%|███████▏ | 36/50 [00:03<00:01, 10.46it/s]
76%|███████▌ | 38/50 [00:03<00:01, 10.44it/s]
80%|████████ | 40/50 [00:03<00:00, 10.44it/s]
84%|████████▍ | 42/50 [00:04<00:00, 10.42it/s]
88%|████████▊ | 44/50 [00:04<00:00, 10.41it/s]
92%|█████████▏| 46/50 [00:04<00:00, 10.44it/s]
96%|█████████▌| 48/50 [00:04<00:00, 10.47it/s]
100%|██████████| 50/50 [00:04<00:00, 10.47it/s]
100%|██████████| 50/50 [00:04<00:00, 10.43it/s]

0%| | 0/50 [00:00<?, ?it/s]
4%|▍ | 2/50 [00:00<00:04, 10.17it/s]
8%|▊ | 4/50 [00:00<00:04, 10.26it/s]
12%|█▏ | 6/50 [00:00<00:04, 10.31it/s]
16%|█▌ | 8/50 [00:00<00:04, 10.37it/s]
20%|██ | 10/50 [00:00<00:03, 10.43it/s]
24%|██▍ | 12/50 [00:01<00:03, 10.39it/s]
28%|██▊ | 14/50 [00:01<00:03, 10.38it/s]
32%|███▏ | 16/50 [00:01<00:03, 10.37it/s]
36%|███▌ | 18/50 [00:01<00:03, 10.37it/s]
40%|████ | 20/50 [00:01<00:02, 10.39it/s]
44%|████▍ | 22/50 [00:02<00:02, 10.39it/s]
48%|████▊ | 24/50 [00:02<00:02, 10.35it/s]
52%|█████▏ | 26/50 [00:02<00:02, 10.31it/s]
56%|█████▌ | 28/50 [00:02<00:02, 10.31it/s]
60%|██████ | 30/50 [00:02<00:01, 10.26it/s]
64%|██████▍ | 32/50 [00:03<00:01, 10.26it/s]
68%|██████▊ | 34/50 [00:03<00:01, 10.27it/s]
72%|███████▏ | 36/50 [00:03<00:01, 10.30it/s]
76%|███████▌ | 38/50 [00:03<00:01, 10.32it/s]
80%|████████ | 40/50 [00:03<00:00, 10.33it/s]
84%|████████▍ | 42/50 [00:04<00:00, 10.33it/s]
88%|████████▊ | 44/50 [00:04<00:00, 10.33it/s]
92%|█████████▏| 46/50 [00:04<00:00, 10.35it/s]
96%|█████████▌| 48/50 [00:04<00:00, 10.36it/s]
100%|██████████| 50/50 [00:04<00:00, 10.36it/s]
100%|██████████| 50/50 [00:04<00:00, 10.34it/s]

0%| | 0/50 [00:00<?, ?it/s]
4%|▍ | 2/50 [00:00<00:04, 10.33it/s]
8%|▊ | 4/50 [00:00<00:04, 10.26it/s]
12%|█▏ | 6/50 [00:00<00:04, 10.32it/s]
16%|█▌ | 8/50 [00:00<00:04, 10.32it/s]
20%|██ | 10/50 [00:00<00:03, 10.33it/s]
24%|██▍ | 12/50 [00:01<00:03, 10.35it/s]
28%|██▊ | 14/50 [00:01<00:03, 10.29it/s]
32%|███▏ | 16/50 [00:01<00:03, 10.28it/s]
36%|███▌ | 18/50 [00:01<00:03, 10.30it/s]
40%|████ | 20/50 [00:01<00:02, 10.32it/s]
44%|████▍ | 22/50 [00:02<00:02, 10.36it/s]
48%|████▊ | 24/50 [00:02<00:02, 10.38it/s]
52%|█████▏ | 26/50 [00:02<00:02, 10.39it/s]
56%|█████▌ | 28/50 [00:02<00:02, 10.40it/s]
60%|██████ | 30/50 [00:02<00:01, 10.42it/s]
64%|██████▍ | 32/50 [00:03<00:01, 10.43it/s]
68%|██████▊ | 34/50 [00:03<00:01, 10.44it/s]
72%|███████▏ | 36/50 [00:03<00:01, 10.43it/s]
76%|███████▌ | 38/50 [00:03<00:01, 10.44it/s]
80%|████████ | 40/50 [00:03<00:00, 10.48it/s]
84%|████████▍ | 42/50 [00:04<00:00, 10.47it/s]
88%|████████▊ | 44/50 [00:04<00:00, 10.48it/s]
92%|█████████▏| 46/50 [00:04<00:00, 10.49it/s]
96%|█████████▌| 48/50 [00:04<00:00, 10.41it/s]
100%|██████████| 50/50 [00:04<00:00, 10.40it/s]
100%|██████████| 50/50 [00:04<00:00, 10.39it/s]

0%| | 0/50 [00:00<?, ?it/s]
2%|▏ | 1/50 [00:00<00:04, 9.85it/s]
4%|▍ | 2/50 [00:00<00:04, 9.87it/s]
6%|▌ | 3/50 [00:00<00:04, 9.91it/s]
8%|▊ | 4/50 [00:00<00:04, 9.93it/s]
10%|█ | 5/50 [00:00<00:04, 9.85it/s]
14%|█▍ | 7/50 [00:00<00:04, 9.86it/s]
16%|█▌ | 8/50 [00:00<00:04, 9.88it/s]
18%|█▊ | 9/50 [00:00<00:04, 9.87it/s]
20%|██ | 10/50 [00:01<00:04, 9.89it/s]
22%|██▏ | 11/50 [00:01<00:03, 9.82it/s]
24%|██▍ | 12/50 [00:01<00:03, 9.86it/s]
26%|██▌ | 13/50 [00:01<00:03, 9.81it/s]
28%|██▊ | 14/50 [00:01<00:03, 9.84it/s]
30%|███ | 15/50 [00:01<00:03, 9.88it/s]
32%|███▏ | 16/50 [00:01<00:03, 9.88it/s]
34%|███▍ | 17/50 [00:01<00:03, 9.86it/s]
36%|███▌ | 18/50 [00:01<00:03, 9.83it/s]
38%|███▊ | 19/50 [00:01<00:03, 9.84it/s]
40%|████ | 20/50 [00:02<00:03, 9.88it/s]
42%|████▏ | 21/50 [00:02<00:02, 9.85it/s]
44%|████▍ | 22/50 [00:02<00:02, 9.81it/s]
46%|████▌ | 23/50 [00:02<00:02, 9.83it/s]
48%|████▊ | 24/50 [00:02<00:02, 9.79it/s]
50%|█████ | 25/50 [00:02<00:02, 9.83it/s]
52%|█████▏ | 26/50 [00:02<00:02, 9.86it/s]
56%|█████▌ | 28/50 [00:02<00:02, 9.85it/s]
58%|█████▊ | 29/50 [00:02<00:02, 9.61it/s]
62%|██████▏ | 31/50 [00:03<00:01, 9.89it/s]
66%|██████▌ | 33/50 [00:03<00:01, 10.11it/s]
70%|███████ | 35/50 [00:03<00:01, 10.28it/s]
74%|███████▍ | 37/50 [00:03<00:01, 10.28it/s]
78%|███████▊ | 39/50 [00:03<00:01, 10.36it/s]
82%|████████▏ | 41/50 [00:04<00:00, 10.43it/s]
86%|████████▌ | 43/50 [00:04<00:00, 10.46it/s]
90%|█████████ | 45/50 [00:04<00:00, 10.48it/s]
94%|█████████▍| 47/50 [00:04<00:00, 10.47it/s]
98%|█████████▊| 49/50 [00:04<00:00, 10.48it/s]
100%|██████████| 50/50 [00:04<00:00, 10.09it/s]
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/plugins/synaptic_intelligence.py:65: UserWarning: The Synaptic Intelligence plugin is in an alpha stage and is not perfectly aligned with the paper implementation. Please use at your own risk!
warnings.warn(
/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.synaptic_intelligence.SynapticIntelligencePlugin object at 0x7f5367af7b20> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.SynapticIntelligence object at 0x7f5367af7310>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/plugins/synaptic_intelligence.py:65: UserWarning: The Synaptic Intelligence plugin is in an alpha stage and is not perfectly aligned with the paper implementation. Please use at your own risk!
warnings.warn(
/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.synaptic_intelligence.SynapticIntelligencePlugin object at 0x7f536803c280> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.SynapticIntelligence object at 0x7f536803cbb0>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/plugins/evaluation.py:85: UserWarning: No benchmark provided to the evaluation plugin. Metrics may be computed on inconsistent portion of streams, use at your own risk.
warnings.warn(
/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.lr_scheduling.LRSchedulerPlugin object at 0x7f5367b41e50> implements incompatible callbacks for template <avalanche.training.supervised.icarl.ICaRL object at 0x7f5367b41730>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.agem.AGEMPlugin object at 0x7f5367a533a0> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.AGEM object at 0x7f5367a53460>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.agem.AGEMPlugin object at 0x7f5367d1c160> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.AGEM object at 0x7f5367d1c670>. This may result in errors.
warnings.warn(
F/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.cope.CoPEPlugin object at 0x7f52f80d3850> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.Naive object at 0x7f52f80d3e50>. This may result in errors.
warnings.warn(
F/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/plugins/evaluation.py:85: UserWarning: No benchmark provided to the evaluation plugin. Metrics may be computed on inconsistent portion of streams, use at your own risk.
warnings.warn(
/home/acossu/reproducible-continual-learning/strategies/dslda/experiment.py:59: UserWarning: The Deep SLDA example is not perfectly aligned with the paper implementation since it does not use a base initialization phase and instead starts streming from pre-trained weights. Performance should still match.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.ewc.EWCPlugin object at 0x7f52f77ff370> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.EWC object at 0x7f52f77ff2e0>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.gdumb.GDumbPlugin object at 0x7f530411c0a0> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.GDumb object at 0x7f530411cc40>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.gem.GEMPlugin object at 0x7f531d1e2850> implements incompatible callbacks for template <strategies.gem.experiment.GEM_reduced object at 0x7f531d1e20d0>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.gem.GEMPlugin object at 0x7f53040f18b0> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.GEM object at 0x7f53040f10d0>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/plugins/evaluation.py:85: UserWarning: No benchmark provided to the evaluation plugin. Metrics may be computed on inconsistent portion of streams, use at your own risk.
warnings.warn(
/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.gss_greedy.GSS_greedyPlugin object at 0x7f533a13eb20> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.GSS_greedy object at 0x7f533a13ea90>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/plugins/evaluation.py:85: UserWarning: No benchmark provided to the evaluation plugin. Metrics may be computed on inconsistent portion of streams, use at your own risk.
warnings.warn(
/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.lr_scheduling.LRSchedulerPlugin object at 0x7f531d1e2940> implements incompatible callbacks for template <avalanche.training.supervised.icarl.ICaRL object at 0x7f53666dffa0>. This may result in errors.
warnings.warn(
F/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.lwf.LwFPlugin object at 0x7f53040f18e0> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.LwF object at 0x7f53040f1100>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.lwf.LwFPlugin object at 0x7f5367b52850> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.LwF object at 0x7f5367b522e0>. This may result in errors.
warnings.warn(
./home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.lwf.LwFPlugin object at 0x7f52f84f6370> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.LwF object at 0x7f52f84f6f70>. This may result in errors.
warnings.warn(
F/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.mas.MASPlugin object at 0x7f530411c490> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.MAS object at 0x7f530411ceb0>. This may result in errors.
warnings.warn(

0%| | 0/50 [00:00<?, ?it/s]
4%|▍ | 2/50 [00:00<00:04, 10.35it/s]
8%|▊ | 4/50 [00:00<00:04, 10.48it/s]
12%|█▏ | 6/50 [00:00<00:04, 10.56it/s]
16%|█▌ | 8/50 [00:00<00:03, 10.62it/s]
20%|██ | 10/50 [00:00<00:03, 10.58it/s]
24%|██▍ | 12/50 [00:01<00:03, 10.59it/s]
28%|██▊ | 14/50 [00:01<00:03, 10.63it/s]
32%|███▏ | 16/50 [00:01<00:03, 10.65it/s]
36%|███▌ | 18/50 [00:01<00:03, 10.63it/s]
40%|████ | 20/50 [00:01<00:02, 10.63it/s]
44%|████▍ | 22/50 [00:02<00:02, 10.62it/s]
48%|████▊ | 24/50 [00:02<00:02, 10.62it/s]
52%|█████▏ | 26/50 [00:02<00:02, 10.61it/s]
56%|█████▌ | 28/50 [00:02<00:02, 10.63it/s]
60%|██████ | 30/50 [00:02<00:01, 10.64it/s]
64%|██████▍ | 32/50 [00:03<00:01, 10.69it/s]
68%|██████▊ | 34/50 [00:03<00:01, 10.69it/s]
72%|███████▏ | 36/50 [00:03<00:01, 10.66it/s]
76%|███████▌ | 38/50 [00:03<00:01, 10.61it/s]
80%|████████ | 40/50 [00:03<00:00, 10.60it/s]
84%|████████▍ | 42/50 [00:03<00:00, 10.63it/s]
88%|████████▊ | 44/50 [00:04<00:00, 10.65it/s]
92%|█████████▏| 46/50 [00:04<00:00, 10.63it/s]
96%|█████████▌| 48/50 [00:04<00:00, 10.62it/s]
100%|██████████| 50/50 [00:04<00:00, 10.60it/s]
100%|██████████| 50/50 [00:04<00:00, 10.62it/s]

0%| | 0/50 [00:00<?, ?it/s]
4%|▍ | 2/50 [00:00<00:04, 10.45it/s]
8%|▊ | 4/50 [00:00<00:04, 10.48it/s]
12%|█▏ | 6/50 [00:00<00:04, 10.47it/s]
16%|█▌ | 8/50 [00:00<00:04, 10.44it/s]
20%|██ | 10/50 [00:00<00:03, 10.42it/s]
24%|██▍ | 12/50 [00:01<00:03, 10.46it/s]
28%|██▊ | 14/50 [00:01<00:03, 10.49it/s]
32%|███▏ | 16/50 [00:01<00:03, 10.50it/s]
36%|███▌ | 18/50 [00:01<00:03, 10.50it/s]
40%|████ | 20/50 [00:01<00:02, 10.51it/s]
44%|████▍ | 22/50 [00:02<00:02, 10.49it/s]
48%|████▊ | 24/50 [00:02<00:02, 10.52it/s]
52%|█████▏ | 26/50 [00:02<00:02, 10.53it/s]
56%|█████▌ | 28/50 [00:02<00:02, 10.52it/s]
60%|██████ | 30/50 [00:02<00:01, 10.54it/s]
64%|██████▍ | 32/50 [00:03<00:01, 10.48it/s]
68%|██████▊ | 34/50 [00:03<00:01, 10.45it/s]
72%|███████▏ | 36/50 [00:03<00:01, 10.46it/s]
76%|███████▌ | 38/50 [00:03<00:01, 10.45it/s]
80%|████████ | 40/50 [00:03<00:00, 10.46it/s]
84%|████████▍ | 42/50 [00:04<00:00, 10.51it/s]
88%|████████▊ | 44/50 [00:04<00:00, 10.54it/s]
92%|█████████▏| 46/50 [00:04<00:00, 10.54it/s]
96%|█████████▌| 48/50 [00:04<00:00, 10.52it/s]
100%|██████████| 50/50 [00:04<00:00, 10.49it/s]
100%|██████████| 50/50 [00:04<00:00, 10.49it/s]

0%| | 0/50 [00:00<?, ?it/s]
4%|▍ | 2/50 [00:00<00:04, 10.52it/s]
8%|▊ | 4/50 [00:00<00:04, 10.56it/s]
12%|█▏ | 6/50 [00:00<00:04, 10.57it/s]
16%|█▌ | 8/50 [00:00<00:03, 10.58it/s]
20%|██ | 10/50 [00:00<00:03, 10.58it/s]
24%|██▍ | 12/50 [00:01<00:03, 10.60it/s]
28%|██▊ | 14/50 [00:01<00:03, 10.51it/s]
32%|███▏ | 16/50 [00:01<00:03, 10.53it/s]
36%|███▌ | 18/50 [00:01<00:03, 10.55it/s]
40%|████ | 20/50 [00:01<00:02, 10.54it/s]
44%|████▍ | 22/50 [00:02<00:02, 10.53it/s]
48%|████▊ | 24/50 [00:02<00:02, 10.57it/s]
52%|█████▏ | 26/50 [00:02<00:02, 10.59it/s]
56%|█████▌ | 28/50 [00:02<00:02, 10.54it/s]
60%|██████ | 30/50 [00:02<00:01, 10.58it/s]
64%|██████▍ | 32/50 [00:03<00:01, 10.57it/s]
68%|██████▊ | 34/50 [00:03<00:01, 10.53it/s]
72%|███████▏ | 36/50 [00:03<00:01, 10.50it/s]
76%|███████▌ | 38/50 [00:03<00:01, 10.50it/s]
80%|████████ | 40/50 [00:03<00:00, 10.51it/s]
84%|████████▍ | 42/50 [00:03<00:00, 10.52it/s]
88%|████████▊ | 44/50 [00:04<00:00, 10.52it/s]
92%|█████████▏| 46/50 [00:04<00:00, 10.53it/s]
96%|█████████▌| 48/50 [00:04<00:00, 10.52it/s]
100%|██████████| 50/50 [00:04<00:00, 10.50it/s]
100%|██████████| 50/50 [00:04<00:00, 10.54it/s]

0%| | 0/50 [00:00<?, ?it/s]
2%|▏ | 1/50 [00:00<00:05, 9.72it/s]
4%|▍ | 2/50 [00:00<00:04, 9.77it/s]
6%|▌ | 3/50 [00:00<00:04, 9.85it/s]
8%|▊ | 4/50 [00:00<00:04, 9.87it/s]
10%|█ | 5/50 [00:00<00:04, 9.87it/s]
14%|█▍ | 7/50 [00:00<00:04, 9.97it/s]
16%|█▌ | 8/50 [00:00<00:04, 9.90it/s]
18%|█▊ | 9/50 [00:00<00:04, 9.84it/s]
20%|██ | 10/50 [00:01<00:04, 9.83it/s]
24%|██▍ | 12/50 [00:01<00:03, 9.91it/s]
26%|██▌ | 13/50 [00:01<00:03, 9.83it/s]
28%|██▊ | 14/50 [00:01<00:03, 9.84it/s]
30%|███ | 15/50 [00:01<00:03, 9.82it/s]
32%|███▏ | 16/50 [00:01<00:03, 9.83it/s]
34%|███▍ | 17/50 [00:01<00:03, 9.82it/s]
36%|███▌ | 18/50 [00:01<00:03, 9.83it/s]
38%|███▊ | 19/50 [00:01<00:03, 9.80it/s]
40%|████ | 20/50 [00:02<00:03, 9.80it/s]
42%|████▏ | 21/50 [00:02<00:02, 9.81it/s]
44%|████▍ | 22/50 [00:02<00:02, 9.81it/s]
46%|████▌ | 23/50 [00:02<00:02, 9.83it/s]
48%|████▊ | 24/50 [00:02<00:02, 9.81it/s]
50%|█████ | 25/50 [00:02<00:02, 9.81it/s]
52%|█████▏ | 26/50 [00:02<00:02, 9.79it/s]
54%|█████▍ | 27/50 [00:02<00:02, 9.80it/s]
56%|█████▌ | 28/50 [00:02<00:02, 9.81it/s]
58%|█████▊ | 29/50 [00:02<00:02, 9.85it/s]
60%|██████ | 30/50 [00:03<00:02, 9.82it/s]
62%|██████▏ | 31/50 [00:03<00:01, 9.81it/s]
64%|██████▍ | 32/50 [00:03<00:01, 9.81it/s]
66%|██████▌ | 33/50 [00:03<00:01, 9.80it/s]
68%|██████▊ | 34/50 [00:03<00:01, 9.82it/s]
70%|███████ | 35/50 [00:03<00:01, 9.81it/s]
72%|███████▏ | 36/50 [00:03<00:01, 9.83it/s]
74%|███████▍ | 37/50 [00:03<00:01, 9.83it/s]
76%|███████▌ | 38/50 [00:03<00:01, 9.85it/s]
78%|███████▊ | 39/50 [00:03<00:01, 9.81it/s]
80%|████████ | 40/50 [00:04<00:01, 9.83it/s]
82%|████████▏ | 41/50 [00:04<00:00, 9.85it/s]
84%|████████▍ | 42/50 [00:04<00:00, 9.87it/s]
86%|████████▌ | 43/50 [00:04<00:00, 9.80it/s]
88%|████████▊ | 44/50 [00:04<00:00, 9.82it/s]
90%|█████████ | 45/50 [00:04<00:00, 9.84it/s]
92%|█████████▏| 46/50 [00:04<00:00, 9.88it/s]
94%|█████████▍| 47/50 [00:04<00:00, 9.85it/s]
96%|█████████▌| 48/50 [00:04<00:00, 9.85it/s]
98%|█████████▊| 49/50 [00:04<00:00, 9.82it/s]
100%|██████████| 50/50 [00:05<00:00, 9.85it/s]
100%|██████████| 50/50 [00:05<00:00, 9.83it/s]

0%| | 0/50 [00:00<?, ?it/s]
2%|▏ | 1/50 [00:00<00:04, 9.87it/s]
4%|▍ | 2/50 [00:00<00:05, 9.50it/s]
6%|▌ | 3/50 [00:00<00:04, 9.69it/s]
8%|▊ | 4/50 [00:00<00:04, 9.80it/s]
10%|█ | 5/50 [00:00<00:04, 9.84it/s]
12%|█▏ | 6/50 [00:00<00:04, 9.88it/s]
14%|█▍ | 7/50 [00:00<00:04, 9.89it/s]
18%|█▊ | 9/50 [00:00<00:04, 9.93it/s]
20%|██ | 10/50 [00:01<00:04, 9.95it/s]
22%|██▏ | 11/50 [00:01<00:03, 9.96it/s]
26%|██▌ | 13/50 [00:01<00:03, 9.98it/s]
28%|██▊ | 14/50 [00:01<00:03, 9.97it/s]
30%|███ | 15/50 [00:01<00:03, 9.96it/s]
34%|███▍ | 17/50 [00:01<00:03, 9.98it/s]
38%|███▊ | 19/50 [00:01<00:03, 10.00it/s]
42%|████▏ | 21/50 [00:02<00:02, 10.01it/s]
44%|████▍ | 22/50 [00:02<00:02, 10.00it/s]
46%|████▌ | 23/50 [00:02<00:02, 9.97it/s]
48%|████▊ | 24/50 [00:02<00:02, 9.97it/s]
50%|█████ | 25/50 [00:02<00:02, 9.93it/s]
54%|█████▍ | 27/50 [00:02<00:02, 9.97it/s]
58%|█████▊ | 29/50 [00:02<00:02, 10.00it/s]
62%|██████▏ | 31/50 [00:03<00:01, 10.04it/s]
66%|██████▌ | 33/50 [00:03<00:01, 9.98it/s]
70%|███████ | 35/50 [00:03<00:01, 10.02it/s]
74%|███████▍ | 37/50 [00:03<00:01, 9.99it/s]
76%|███████▌ | 38/50 [00:03<00:01, 9.98it/s]
80%|████████ | 40/50 [00:04<00:00, 10.00it/s]
84%|████████▍ | 42/50 [00:04<00:00, 10.01it/s]
88%|████████▊ | 44/50 [00:04<00:00, 10.00it/s]
90%|█████████ | 45/50 [00:04<00:00, 9.99it/s]
94%|█████████▍| 47/50 [00:04<00:00, 10.04it/s]
98%|█████████▊| 49/50 [00:04<00:00, 10.05it/s]
100%|██████████| 50/50 [00:05<00:00, 9.98it/s]

0%| | 0/50 [00:00<?, ?it/s]
4%|▍ | 2/50 [00:00<00:04, 10.07it/s]
8%|▊ | 4/50 [00:00<00:04, 10.03it/s]
12%|█▏ | 6/50 [00:00<00:04, 10.04it/s]
16%|█▌ | 8/50 [00:00<00:04, 10.06it/s]
20%|██ | 10/50 [00:00<00:03, 10.05it/s]
24%|██▍ | 12/50 [00:01<00:03, 10.07it/s]
[tqdm progress bars elided: repeated 50-iteration training epochs, each finishing in ~5 s at ~10 it/s]
/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/plugins/synaptic_intelligence.py:65: UserWarning: The Synaptic Intelligence plugin is in an alpha stage and is not perfectly aligned with the paper implementation. Please use at your own risk!
  warnings.warn(
/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.synaptic_intelligence.SynapticIntelligencePlugin object at 0x7f52f78036a0> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.SynapticIntelligence object at 0x7f52f7803fd0>. This may result in errors.
  warnings.warn(
/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/plugins/synaptic_intelligence.py:65: UserWarning: The Synaptic Intelligence plugin is in an alpha stage and is not perfectly aligned with the paper implementation. Please use at your own risk!
  warnings.warn(
/home/acossu/miniconda3/envs/repr/lib/python3.8/site-packages/avalanche/training/templates/base.py:205: UserWarning: Plugin <avalanche.training.plugins.synaptic_intelligence.SynapticIntelligencePlugin object at 0x7f536794f070> implements incompatible callbacks for template <avalanche.training.supervised.strategy_wrappers.SynapticIntelligence object at 0x7f52c010bca0>. This may result in errors.
  warnings.warn(

======================================================================
FAIL: test_scifar100 (strategies.agem.experiment.AGEM)
Split CIFAR-100 benchmark
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/acossu/reproducible-continual-learning/strategies/agem/experiment.py", line 97, in test_scifar100
    self.assertAlmostEqual(target_acc, avg_stream_acc, delta=0.03)
AssertionError: 0.57 != 0.5329411764705884 within 0.03 delta (0.03705882352941159 difference)

======================================================================
FAIL: test_smnist (strategies.cope.experiment.COPE)
Split MNIST benchmark
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/acossu/reproducible-continual-learning/strategies/cope/experiment.py", line 69, in test_smnist
    self.assertAlmostEqual(target_acc, avg_stream_acc, delta=0.03)
AssertionError: 0.93 != 0.2126 within 0.03 delta (0.7174 difference)

======================================================================
FAIL: test_stinyimagenet (strategies.lwf.experiment.LwF)
Split Tiny ImageNet benchmark
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/acossu/reproducible-continual-learning/strategies/lwf/experiment.py", line 140, in test_stinyimagenet
    self.assertAlmostEqual(target_acc, avg_stream_acc, delta=0.03)
AssertionError: 0.42 != 0.20620000000000002 within 0.03 delta (0.21379999999999996 difference)

======================================================================
FAIL: test_scifar100 (strategies.agem.experiment.AGEM)
Split CIFAR-100 benchmark
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/acossu/reproducible-continual-learning/strategies/agem/experiment.py", line 97, in test_scifar100
    self.assertAlmostEqual(target_acc, avg_stream_acc, delta=0.03)
AssertionError: 0.57 != 0.5329411764705884 within 0.03 delta (0.03705882352941159 difference)

======================================================================
FAIL: test_smnist (strategies.cope.experiment.COPE)
Split MNIST benchmark
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/acossu/reproducible-continual-learning/strategies/cope/experiment.py", line 69, in test_smnist
    self.assertAlmostEqual(target_acc, avg_stream_acc, delta=0.03)
AssertionError: 0.93 != 0.2126 within 0.03 delta (0.7174 difference)

======================================================================
FAIL: test_scifar100 (strategies.iCARL.experiment.iCARL)
scifar100 with 10 batches
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/acossu/reproducible-continual-learning/strategies/iCARL/experiment.py", line 114, in test_scifar100
    self.assertAlmostEqual(target_acc, avg_ia, delta=0.03)
AssertionError: 0.62 != 0.4885769444444444 within 0.03 delta (0.1314230555555556 difference)

======================================================================
FAIL: test_stinyimagenet (strategies.lwf.experiment.LwF)
Split Tiny ImageNet benchmark
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/acossu/reproducible-continual-learning/strategies/lwf/experiment.py", line 140, in test_stinyimagenet
    self.assertAlmostEqual(target_acc, avg_stream_acc, delta=0.03)
AssertionError: 0.42 != 0.20620000000000002 within 0.03 delta (0.21379999999999996 difference)

----------------------------------------------------------------------
Ran 32 tests in 89924.799s

FAILED (failures=7)
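For reference, every failure above comes from the same assertion pattern: the experiment returns an average accuracy over the test stream, and the test passes only if it lands within ±0.03 of the target. A minimal sketch of that pattern, where `run_agem_scifar100` is a hypothetical stand-in for the real runner in strategies/agem/experiment.py:

```python
import unittest

def run_agem_scifar100() -> float:
    """Hypothetical stand-in for the real experiment runner, which trains
    AGEM on Split CIFAR-100 and returns the average stream accuracy."""
    return 0.5329  # the value observed in the log above

class AGEM(unittest.TestCase):
    def test_scifar100(self):
        """Split CIFAR-100 benchmark."""
        target_acc = 0.57  # expected ACC from the reference
        avg_stream_acc = run_agem_scifar100()
        # Pass iff the measured accuracy is within +/-0.03 of the target;
        # this is exactly the check that produced the failures above.
        self.assertAlmostEqual(target_acc, avg_stream_acc, delta=0.03)

if __name__ == "__main__":
    unittest.main()
```

Running this reproduces the first failure message above: 0.57 vs. ~0.533, a 0.037 gap that falls just outside the 0.03 delta.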

RUN reproducibility on VPS

As Vincenzo noticed last week, we have access to a currently unused VPS. Maybe we should use it to periodically check that Avalanche master still reproduces these results?
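A minimal sketch of what such a periodic check could look like, assuming a cron entry on the VPS invokes it weekly; the script name, log directory, and schedule are all hypothetical, not a definitive setup:

```python
#!/usr/bin/env python3
"""Periodic reproducibility check (sketch). Hypothetical cron entry:
0 3 * * 0 python3 check_repro.py   # run from a clone of this repo."""
import datetime
import subprocess

LOG = "/var/log/repro/{:%Y-%m-%d}.log"  # hypothetical log location (must exist)

def main() -> int:
    # Upgrade Avalanche to current master before running the suite.
    subprocess.run(
        ["pip", "install", "--upgrade",
         "git+https://github.com/ContinualAI/avalanche.git"],
        check=True,
    )
    # Run the whole unittest suite; a non-zero return code means at least
    # one strategy no longer reproduces its target accuracy.
    result = subprocess.run(
        ["python", "-m", "unittest", "discover", "-v"],
        capture_output=True, text=True,
    )
    with open(LOG.format(datetime.datetime.now()), "w") as f:
        f.write(result.stdout + result.stderr)
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(main())
```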

Table notation for reproducibility

I propose to switch the notation. Right now we have:

  • ✅ Reproduced
  • ❌ Custom setup
  • bug for bugs

IMO, this is very confusing at first glance: if I see a big red cross, I immediately think there is a problem with the strategy, when in fact everything is correct and we just changed some hyperparameters or tested a new benchmark.

Instead, we could have two separate columns (sketched below):

  • Reproduced: ✅ if correct, ❌ if bugged.
  • Reference: a link to the paper, a link to Avalanche, or a custom tag if no paper is used.
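A sketch of the proposed layout, with hypothetical strategies and placeholder values:

| Benchmark | Strategy | Performance | Reference | Reproduced |
|-----------|----------|-------------|-----------|------------|
| Split MNIST | Strategy A | ACC=0.95 | link to paper (ACC=0.95) | ✅ |
| Split MNIST | Strategy B | ACC=0.60 | avalanche | ❌ (bug) |
| Permuted MNIST | Strategy C | ACC=0.80 | custom | ✅ |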

Reproduce Generative Replay results

The target results depend on the generator being used, as well as on other factors such as how much replay data is generated and whether the replay data is class-balanced.

This paper lists the results obtained with the different generative models.
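For concreteness, here is a conceptual sketch in plain PyTorch (not the Avalanche implementation; `generator` and `classifier` are hypothetical modules) of how the two knobs mentioned above, replay amount and class balance, enter the training batch:

```python
import torch

def build_replay_batch(generator, classifier, real_x, real_y,
                       n_replay: int, class_balanced: bool,
                       n_seen_classes: int, z_dim: int = 100):
    """Mix generated 'replay' samples with the current batch (sketch only).
    n_replay controls how much replay data is generated; class_balanced
    controls whether the generated data is balanced across seen classes."""
    with torch.no_grad():
        z = torch.randn(n_replay, z_dim)
        fake_x = generator(z)            # synthesize samples from past tasks
        logits = classifier(fake_x)      # pseudo-label them with the old model
        fake_y = logits.argmax(dim=1)
        if class_balanced:
            # Keep at most n_replay // n_seen_classes samples per class.
            quota = n_replay // n_seen_classes
            keep = []
            for c in range(n_seen_classes):
                idx = (fake_y == c).nonzero(as_tuple=True)[0][:quota]
                keep.append(idx)
            idx = torch.cat(keep)
            fake_x, fake_y = fake_x[idx], fake_y[idx]
    # Train on real and generated data jointly.
    return torch.cat([real_x, fake_x]), torch.cat([real_y, fake_y])
```

With a weak generator or an unbalanced replay set, the pseudo-labeled samples drift away from the true past distribution, which is why the achievable target accuracy varies so much across generative models.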
