
The CLEAR Benchmark: Continual LEArning on Real-World Imagery


Zhiqiu Lin1, Jia Shi1, Deepak Pathak*1, Deva Ramanan*1,2

1Carnegie Mellon University 2Argo AI

Link to paper: (NeurIPS 2021 Datasets and Benchmarks Track)

Link to project page: https://clear-benchmark.github.io/ (includes dataset download links)

Link to original avalanche repo: (Please also credit them accordingly if you are referring to the avalanche library in our codebase)

Continual learning (CL) is widely regarded as a crucial challenge for lifelong AI. However, existing CL benchmarks, e.g. Permuted-MNIST and Split-CIFAR, make use of artificial temporal variation and do not align with or generalize to the real world. In this paper, we introduce CLEAR, the first continual image classification benchmark dataset with a natural temporal evolution of visual concepts in the real world that spans a decade (2004-2014). We build CLEAR from existing large-scale image collections (YFCC100M) through a novel and scalable low-cost approach to visio-linguistic dataset curation. Our pipeline makes use of pre-trained vision-language models (e.g. CLIP) to interactively build labeled datasets, which are further validated with crowd-sourcing to remove errors and even inappropriate images (hidden in the original YFCC100M). The major strength of CLEAR over prior CL benchmarks is the smooth temporal evolution of visual concepts with real-world imagery, including both high-quality labeled data and abundant unlabeled samples per time period for continual semi-supervised learning. We find that a simple unsupervised pre-training step can already boost state-of-the-art CL algorithms that only utilize fully-supervised data. Our analysis also reveals that mainstream CL evaluation protocols that train and test on iid data artificially inflate the performance of CL systems. To address this, we propose novel "streaming" protocols for CL that always test on the (near) future. Interestingly, streaming protocols (a) can simplify dataset curation since today’s test-set can be repurposed for tomorrow’s train-set and (b) can produce more generalizable models with more accurate estimates of performance since all labeled data from each time-period is used for both training and testing (unlike classic iid train-test splits).
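The streaming protocol described above can be sketched in a few lines: train on the bucket for time period t, then always evaluate on the (near) future bucket t+1. The function below is illustrative only; `train_fn` and `eval_fn` are placeholders, not functions from this repo.

```python
# Minimal sketch of the "streaming" CL protocol: train on bucket t,
# evaluate on the (near) future bucket t+1. Names are illustrative.
def streaming_eval(buckets, train_fn, eval_fn):
    """buckets: list of per-time-period labeled datasets, in chronological order."""
    model, future_accs = None, []
    for t in range(len(buckets) - 1):
        model = train_fn(model, buckets[t])                 # today's labeled data
        future_accs.append(eval_fn(model, buckets[t + 1]))  # tomorrow's test set
    return future_accs
```

Note that every bucket's labeled data is used once for training and once for testing, which is what lets a streaming setup repurpose today's test-set as tomorrow's train-set.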

Codebase

This repo contains the codebase for all classification experiments in our paper. For an easier start, please refer to the avalanche integration of our dataset. Link to original avalanche example:

Configuration:

data_folder_path : path to all images (train + test)

data_train_path : path to all training images

data_test_path : path to all testing images

If both data_train_path and data_test_path are provided, they override data_folder_path and are used as the train/test input paths. Otherwise, data_folder_path is automatically split into train/test at the ratio given by test_split (default 0.3).
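The precedence rule can be sketched as follows. Key names are taken from this README; the directory handling is simplified to a caller-supplied listing function, so this is not the repo's actual implementation.

```python
import random

# Sketch of the documented precedence: explicit train/test paths override
# data_folder_path; otherwise auto-split at the test_split ratio (default 0.3).
def resolve_split(cfg, list_images=None):
    if cfg.get("data_train_path") and cfg.get("data_test_path"):
        return cfg["data_train_path"], cfg["data_test_path"]
    # `list_images` stands in for listing image files under data_folder_path
    images = list(list_images(cfg["data_folder_path"]))
    random.Random(cfg.get("random_seed", 0)).shuffle(images)
    n_test = int(len(images) * cfg.get("test_split", 0.3))
    return images[n_test:], images[:n_test]
```

Seeding the shuffle with random_seed keeps the auto-split reproducible across runs.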

feature_path : root path for pre-trained image features

split: experiment name; it does not affect program behavior

load_prev: whether to restore the experiment from the previous bucket

image_train and feature_train are used only when running experiments directly on images or on pre-trained image features, respectively. These two settings are mutually exclusive (specify only one of them).

max_memory_size: maximum number of instances stored in the buffer (for replay-based methods / reservoir / biased reservoir). The default buffer size is the number of instances in one timestamp bucket.

num_instance_each_class and num_instance_each_class_test: number of instances per class in each bucket; if the specified folder contains more, extra instances are randomly removed from training.

random_seed: random seed for the experiment (e.g. train/test split, random sampling). Useful for testing metric robustness by averaging results over runs with different random seeds.
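Putting the options above together, a configuration could look like the following. It is shown as a Python dict purely for illustration (the repo's actual configs are YAML files), and every value here is made up; only the key names follow the descriptions above.

```python
# Illustrative configuration; values are invented, key names are documented above.
config = {
    "data_folder_path": "/data/clear100/images",  # used if explicit paths absent
    "test_split": 0.3,                            # auto train/test split ratio
    "feature_path": "/data/clear100/features",    # root for pre-trained features
    "split": "clear100_feature_resnet50_moco",    # experiment name only
    "load_prev": False,                           # restore from previous bucket?
    "feature_train": True,                        # mutually exclusive with image_train
    "max_memory_size": 10000,                     # replay buffer capacity
    "num_instance_each_class": 100,
    "num_instance_each_class_test": 50,
    "random_seed": 0,
}
```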

Training:

Specifying pretrain_feature under the feature_train setting will automatically parse image features into feature_path, if they do not already exist.

pretrain_feature naming convention: prefix (to differentiate settings), pre-train model dataset, pre-train model architecture, dataset name, version of the CLEAR dataset, ending with 'feature'. For instance, moco_resnet50_clear_100_feature or test_moco_resnet50_clear_10_feature.
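The convention can be captured in a tiny helper. This function is hypothetical, written only to make the naming rule concrete; it is not part of the repo's code.

```python
# Hypothetical helper illustrating the pretrain_feature naming convention:
# [prefix_]<pretrain dataset>_<architecture>_clear_<version>_feature
def pretrain_feature_name(pretrain_dataset, arch, clear_version, prefix=""):
    parts = [prefix, pretrain_dataset, arch, "clear", str(clear_version), "feature"]
    return "_".join(p for p in parts if p)  # drop the prefix when empty
```

For example, `pretrain_feature_name("moco", "resnet50", 100)` reproduces the `moco_resnet50_clear_100_feature` name quoted above.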

For training experiments, run

  python train.sh --yaml

An example would be:

  python train.sh clear100/clear100_feature_resnet50_moco.yaml

For parsing metrics, run

python parse_log_to_result.py --split <experiment name> --verbose [also print the result matrix] --move [move results to the main server for plotting]

An example would be:

python parse_log_to_result.py --split clear100_feature_resnet50_moco --verbose 1 --move 1

For plotting the result matrix, like the one in our paper, first specify --move 1 when running parse_log_to_result.py, and then run

python get_metric_all.py --plot 1
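For reference, the result matrix has one row per training bucket and one column per test bucket. The sketch below computes two standard continual-learning summaries from such a matrix (final-model average accuracy and backward transfer); it is illustrative only and does not necessarily mirror what get_metric_all.py computes.

```python
# R[i][j]: accuracy on bucket j after training through bucket i.
# Standard CL summaries, shown for illustration (not this repo's exact metrics).
def summarize(R):
    n = len(R)
    avg_acc = sum(R[n - 1]) / n  # final model's average accuracy over all buckets
    # backward transfer: how much earlier buckets changed after later training
    bwt = sum(R[n - 1][j] - R[j][j] for j in range(n - 1)) / (n - 1)
    return avg_acc, bwt
```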

Contact

Please contact [email protected] with any questions. Please also follow our website https://clear-benchmark.github.io/ for the latest updates.
