
Uncertainty Baselines

The goal of Uncertainty Baselines is to provide a template for researchers to build on. The baselines can be a starting point for new ideas and applications, and a common reference when communicating with other uncertainty and robustness researchers. This is done in three ways:

  1. Provide high-quality implementations of standard and state-of-the-art methods on standard tasks.
  2. Have minimal dependencies on other files in the codebase. Baselines should be easily forkable without relying on other baselines and generic modules.
  3. Prescribe best practices for training and evaluating uncertainty models.

Motivation. There are many uncertainty implementations across GitHub. However, they are typically one-off experiments for a specific paper (many papers don't even have code). This raises three problems. First, there are no clear examples that uncertainty researchers can build on to quickly prototype their work: everyone must implement their own baseline. Second, even on standard tasks such as CIFAR-10, projects differ slightly in their experimental setup, whether in architectures, hyperparameters, or data preprocessing. This makes it difficult to compare methods properly. Third, there is no clear guidance on which ideas and tricks are necessary to achieve the best performance and/or are generally robust to hyperparameter choices.

All of our baselines are (so far) TF2 Keras models with tf.data pipelines. We welcome Jax and PyTorch users to use our datasets, for example via Python for loops:

import tensorflow_datasets as tfds

for batch in tfds.as_numpy(ds):  # ds is a tf.data.Dataset, e.g. from ub.datasets.
  train_step(batch)

Note, however, that tfds.as_numpy calls tensor.numpy(), which incurs an unnecessary copy compared to tensor._numpy():

import jax

for batch in iter(ds):
  train_step(jax.tree_map(lambda y: y._numpy(), batch))

Installation

To install the latest development version, run

pip install "git+https://github.com/google/uncertainty-baselines.git#egg=uncertainty_baselines"

There is not yet a stable version (nor an official release of this library). All APIs are subject to change.

Usage

Access Uncertainty Baselines' API via import uncertainty_baselines as ub. To run end-to-end examples with strong performance, see the baselines/ directory. For example, baselines/cifar/deterministic.py is a Wide ResNet 28-10 obtaining 96.0% test accuracy on CIFAR-10.

The experimental/ directory is for active research projects.

Below we outline modules in Uncertainty Baselines.

Datasets

The ub.datasets module consists of datasets following the tf.data.Dataset and TFDS APIs. Typically, they add minimal logic on top of TensorFlow Datasets, such as default data preprocessing. Access it as:

dataset_builder = ub.datasets.Cifar10Dataset(
    split='train', validation_percent=0.1)  # Use 5000 validation images.
train_dataset = dataset_builder.load(batch_size=FLAGS.batch_size)

Alternatively, use the getter command:

dataset_builder = ub.datasets.get(
    dataset_name,
    split=split,
    **dataset_kwargs)

Supported datasets include:

  • CIFAR-10
  • CIFAR-100
  • Civil Comments Toxicity Classification
  • CLINC Intent Detection
  • Criteo Ads
  • GLUE
  • ImageNet
  • MNIST
  • MNLI
  • Wikipedia Talk Toxicity Classification

Adding a new dataset.

  1. Add the bibtex reference to references.md.
  2. Add the dataset definition to the datasets/ dir. Every file should have a subclass of datasets.base.BaseDataset, which at a minimum requires implementing a constructor, a tfds.core.DatasetBuilder, and _create_process_example_fn (a sketch follows below).
  3. Add a test that at a minimum constructs the dataset and checks the shapes of elements.
  4. Add the dataset to datasets/datasets.py for easy access.
  5. Add the dataset class to datasets/__init__.py.

For an example of adding a dataset, see this pull request.
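
As a rough sketch of step 2, a new dataset file might look like the following. It is illustrative only: the class name, TFDS name, feature keys, and the exact base.BaseDataset constructor arguments are assumptions, not this library's verbatim API.

import tensorflow as tf
import tensorflow_datasets as tfds

from uncertainty_baselines.datasets import base


class MyDataset(base.BaseDataset):
  """Hypothetical dataset wrapping a TFDS builder."""

  def __init__(self, split, **kwargs):
    # Hypothetical underlying TFDS builder backing this dataset.
    dataset_builder = tfds.builder('my_tfds_name')
    super().__init__(
        name='my_dataset',
        dataset_builder=dataset_builder,
        split=split,
        **kwargs)

  def _create_process_example_fn(self):
    def _example_parser(example):
      # Default preprocessing: cast images to float and scale to [0, 1].
      features = tf.cast(example['image'], tf.float32) / 255.
      return {'features': features, 'labels': example['label']}
    return _example_parser

A matching test (step 3) would construct MyDataset, call load, and check the shapes of the returned elements.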

Models

The ub.models module consists of models following the tf.keras.Model API. Access it as:

model = ub.models.ResNet20Builder(batch_size=FLAGS.batch_size, l2_weight=None)

Alternatively, use the getter command:

model = ub.models.get(FLAGS.model_name, batch_size=FLAGS.batch_size)
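
Putting the datasets and models modules together, a minimal end-to-end sketch is below. The model name 'resnet20', the dict keys 'features' and 'labels', and the training configuration are illustrative assumptions rather than this library's verbatim API.

import tensorflow as tf

import uncertainty_baselines as ub

# Build the input pipeline; the batch size is illustrative.
dataset_builder = ub.datasets.Cifar10Dataset(split='train', validation_percent=0.1)
train_dataset = dataset_builder.load(batch_size=128)
# Assumption: each element is a dict with 'features' and 'labels' keys.
train_dataset = train_dataset.map(lambda d: (d['features'], d['labels']))

# Assumption: the getter accepts this name and returns a standard tf.keras.Model.
model = ub.models.get('resnet20', batch_size=128)
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])
model.fit(train_dataset, epochs=1)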

Supported models include:

  • ResNet-20 v1
  • ResNet-50 v1
  • Wide ResNet 28-10
  • Criteo MLP
  • Text CNN
  • BERT

Adding a new model.

  1. Add the bibtex reference to references.md.

  2. Add the model definition to the models/ dir. Every file should have a create_model function with the following signature (a full sketch follows this list):

    def create_model(
        batch_size: int,
        ...
        **unused_kwargs: Dict[str, Any]) -> tf.keras.models.Model:
  3. Add a test that at a minimum constructs the model and does a forward pass.

  4. Add the model to models/models.py for easy access.

  5. Add the create_model function to models/__init__.py.
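
For step 2 above, a minimal sketch following this signature is below. The architecture and the num_classes argument are illustrative, not one of this library's baselines.

from typing import Any, Dict

import tensorflow as tf


def create_model(
    batch_size: int,
    num_classes: int = 10,  # Illustrative extra argument.
    **unused_kwargs: Dict[str, Any]) -> tf.keras.models.Model:
  """Hypothetical small model following the create_model contract."""
  inputs = tf.keras.layers.Input(shape=(32, 32, 3), batch_size=batch_size)
  x = tf.keras.layers.Flatten()(inputs)
  x = tf.keras.layers.Dense(128, activation='relu')(x)
  logits = tf.keras.layers.Dense(num_classes)(x)
  return tf.keras.models.Model(inputs=inputs, outputs=logits)

The accompanying test (step 3) would call create_model and run a forward pass on a batch of zeros to check the output shapes.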

Methods

The end-to-end baseline training scripts can be found in baselines/.

Supported methods include:

  • Deterministic
  • BatchEnsemble
  • Ensemble
  • Hyper-batch Ensemble
  • Hyper-deep Ensemble (Quick Intro Notebook)
  • MIMO
  • Rank-1 BNN
  • SNGP
  • Monte Carlo Dropout
  • Variational Inference

Metrics

We define the metrics used across datasets below. All results are reported to roughly 3 significant digits and averaged over 10 runs.

  1. # Parameters. The number of parameters the model uses to make predictions after training.

  2. Train/Test Accuracy. Accuracy over the train and test sets respectively. For a dataset of N input-output pairs (x_n, y_n), where the label y_n takes on 1 of K values, the accuracy is

    \frac{1}{N} \sum_{n=1}^N \mathbf{1}\left[ \operatorname{argmax}_y \, p(y \mid x_n) = y_n \right],

    where \mathbf{1} is the indicator function, which is 1 when the model's predicted class equals the label and 0 otherwise. (A small computational sketch of the accuracy, ECE, and NLL metrics follows this list.)

  3. Train/Test Cal. Error. Expected calibration error (ECE) over the train and test sets respectively (Naeini et al., 2015). ECE discretizes the probability interval [0, 1] into equally spaced bins and assigns each predicted probability to the bin that encompasses it. The calibration error of a bin is the difference between the fraction of predictions in the bin that are correct (accuracy) and the mean of the probabilities in the bin (confidence). The expected calibration error averages across bins.

    For a dataset of N input-output pairs (x_n, y_n), where the label y_n takes on 1 of K values, ECE computes a weighted average

    \sum_{b=1}^B \frac{n_b}{N} \left| \text{acc}(b) - \text{conf}(b) \right|,

    where B is the number of bins, n_b is the number of predictions in bin b, and acc(b) and conf(b) are the accuracy and confidence of bin b respectively.

  4. Train/Test NLL. Negative log-likelihood over the train and test sets respectively (measured in nats). For a dataset of N input-output pairs (x_n, y_n), the negative log-likelihood is

    -\frac{1}{N} \sum_{n=1}^N \log p(y_n \mid x_n).

    It is equivalent, up to a constant, to the KL divergence from the true data distribution to the model, and therefore captures the overall goodness of fit to the true distribution (Murphy, 2012). It can also be interpreted as the number of nats needed to explain the data (Grunwald, 2004).

  5. Train/Test Runtime. Training runtime is the total wall-clock time to train the model, including any intermediate test set evaluations. Wall-clock Test Runtime refers to the wall time of testing a batch of inputs. Compute Test Runtime refers to the time it takes to run a forward pass on the GPU/TPU, i.e., the duration for which the device is not idle. Compute Test Runtime is lower than Wall-clock Test Runtime because it does not include the time it takes to schedule the job on the GPU/TPU and fetch the data.
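
To make these definitions concrete, below is a small NumPy sketch that computes accuracy, ECE (with equally spaced bins), and NLL from an array of predicted probabilities. It is illustrative only, not the metric code this library uses.

import numpy as np


def accuracy(probs, labels):
  # probs: [N, K] predicted probabilities; labels: [N] integer labels.
  return np.mean(np.argmax(probs, axis=-1) == labels)


def nll(probs, labels):
  # Mean negative log-likelihood in nats.
  return -np.mean(np.log(probs[np.arange(len(labels)), labels]))


def ece(probs, labels, num_bins=15):
  # Expected calibration error with equally spaced confidence bins.
  confidences = np.max(probs, axis=-1)
  correct = (np.argmax(probs, axis=-1) == labels).astype(np.float64)
  bin_edges = np.linspace(0., 1., num_bins + 1)
  total = 0.
  for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
    in_bin = (confidences > lo) & (confidences <= hi)
    n_b = in_bin.sum()
    if n_b > 0:
      # Weight |acc(b) - conf(b)| by the fraction of examples in bin b.
      total += (n_b / len(labels)) * abs(
          correct[in_bin].mean() - confidences[in_bin].mean())
  return total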

Viewing metrics. Uncertainty Baselines writes TensorFlow summaries to the model_dir, which can be consumed by TensorBoard. This includes the TensorBoard hyperparameters plugin, which can be used to analyze hyperparameter tuning sweeps.
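
To view the summaries locally, point TensorBoard at the model directory:

tensorboard --logdir MODEL_DIR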

If you wish to upload to the PUBLICLY READABLE tensorboard.dev, you can use the following command:

tensorboard dev upload --logdir MODEL_DIR --plugins "scalars,graphs,hparams" --name "My experiment" --description "My experiment details"

Contributors

Contributors (past and present):

  • Angelos Filos
  • Dustin Tran
  • Florian Wenzel
  • Ghassen Jerfel
  • Jeremiah Liu
  • Jeremy Nixon
  • Jie Ren
  • Josip Djolonga
  • Marton Havasi
  • Michael W. Dusenberry
  • Neil Band
  • Rodolphe Jenatton
  • Sebastian Farquhar
  • Shreyas Padhy
  • Tim G. J. Rudner
  • Yarin Gal
  • Yeming Wen
  • Zachary Nado
