
sustainbench's Introduction

Datasets | Website | Raw Data | OpenReview

SustainBench: Benchmarks for Monitoring the Sustainable Development Goals with Machine Learning

Christopher Yeh, Chenlin Meng, Sherrie Wang, Anne Driscoll, Erik Rozi, Patrick Liu, Jihyeon Lee, Marshall Burke, David B. Lobell, Stefano Ermon

California Institute of Technology, Stanford University, and UC Berkeley

SustainBench is a collection of 15 benchmark tasks across 7 SDGs, including tasks related to economic development, agriculture, health, education, water and sanitation, climate action, and life on land. Datasets for 11 of the 15 tasks are released publicly for the first time. Our goals for SustainBench are to

  1. lower the barriers to entry for the machine learning community to contribute to measuring and achieving the SDGs;
  2. provide standard benchmarks for evaluating machine learning models on tasks across a variety of SDGs; and
  3. encourage the development of novel machine learning methods where improved model performance facilitates progress towards the SDGs.

Table of Contents

  • Overview
  • Dataloaders
  • Running Baseline Models
  • Dataset Preprocessing
  • Computing Requirements
  • Code Formatting and Type Checking
  • Citation

Overview

SustainBench provides datasets and standardized benchmarks for 15 SDG-related tasks, listed below. Details for each dataset and task can be found in our paper and on our website. The raw data can be downloaded from Google Drive and is released under a CC-BY-SA 4.0 license.

  • SDG 1: No Poverty
    • Task 1A: Predicting poverty over space
    • Task 1B: Predicting change in poverty over time
  • SDG 2: Zero Hunger
  • SDG 3: Good Health and Well-being
  • SDG 4: Quality Education
    • Task 4A: Women educational attainment
  • SDG 6: Clean Water and Sanitation
  • SDG 13: Climate Action
  • SDG 15: Life on Land
    • Task 15A: Feature learning for land cover classification
    • Task 15B: Out-of-domain land cover classification

Dataloaders

For each dataset, we provide Python dataloaders that load the data as PyTorch tensors. Please see the sustainbench folder as well as our website for detailed documentation.
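A minimal sketch of the intended usage, assuming a WILDS-style interface: the get_dataset function and the 'poverty' dataset name appear in the issues further below, but the download flag, get_subset, and the (input, label, metadata) batch format are assumptions here — see the sustainbench folder and the website for the authoritative API.

# Hedged sketch of loading a SustainBench dataset as PyTorch tensors.
from torch.utils.data import DataLoader
from sustainbench import get_dataset

dataset = get_dataset(dataset='poverty', download=True)   # 'poverty' is one supported dataset name
train_data = dataset.get_subset('train')                   # assumed WILDS-style split accessor
train_loader = DataLoader(train_data, batch_size=64, shuffle=True)

for x, y, metadata in train_loader:                         # assumed (input, label, metadata) tuples
    pass  # training loop goes here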

Running Baseline Models

We provide baseline models for many of the benchmark tasks included in SustainBench. See the baseline_models folder for the code and detailed instructions to reproduce our results.

Dataset Preprocessing

11 of the 15 SustainBench benchmark tasks involve data that is being publicly released for the first time. We release the processed versions of our datasets on Google Drive. However, we also provide code and detailed instructions for how we preprocessed the datasets in the dataset_preprocessing folder. You do NOT need anything from the dataset_preprocessing folder for downloading the processed datasets or running our baseline models.

Computing Requirements

This code was tested on a system with the following specifications:

  • operating system: Ubuntu 16.04.7 LTS
  • CPU: Intel(R) Xeon(R) CPU E5-2620 v4
  • memory (RAM): 125 GB
  • disk storage: 5 TB
  • GPU: NVIDIA P100

The main software requirements are Python 3.7 with TensorFlow r1.15, PyTorch 1.9, and R 4.1. The complete list of required packages and libraries is given in the two conda environment YAML files (env_create.yml and env_bench.yml), which are meant to be used with conda (version 4.10). See the Miniconda documentation for instructions on installing conda. Once conda is installed, run one of the following commands to set up the desired conda environment:

conda env update -f env_create.yml --prune
conda env update -f env_bench.yml --prune

The conda environment files default to CPU-only packages. If you have a GPU, please comment/uncomment the appropriate lines in the environment files; you may also need to install CUDA 10 or 11 and cuDNN 7.

Code Formatting and Type Checking

This repo uses flake8 for Python linting and mypy for type-checking. Configuration files for each are included in this repo: .flake8 and mypy.ini.

To run either code linting or type checking, set the current directory to the repo root directory. Then run any of the following commands:

# LINTING
# =======

# entire repo
flake8

# all modules within utils directory
flake8 utils

# a single module
flake8 path/to/module.py

# a jupyter notebook - ignore these error codes, in addition to the ignored codes in .flake8:
# - E305: expected 2 blank lines after class or function definition
# - E402: Module level import not at top of file
# - F404: from __future__ imports must occur at the beginning of the file
# - W391: Blank line at end of file
jupyter nbconvert path/to/notebook.ipynb --stdout --to script | flake8 - --extend-ignore=E305,E402,F404,W391


# TYPE CHECKING
# =============

# entire repo
mypy .

# all modules within utils directory
mypy -p utils

# a single module
mypy path/to/module.py

# a jupyter notebook
mypy -c "$(jupyter nbconvert path/to/notebook.ipynb --stdout --to script)"

Citation

Please cite this article as follows, or use the BibTeX entry below.

C. Yeh, C. Meng, S. Wang, A. Driscoll, E. Rozi, P. Liu, J. Lee, M. Burke, D. B. Lobell, and S. Ermon, "SustainBench: Benchmarks for Monitoring the Sustainable Development Goals with Machine Learning," in Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), Dec. 2021. [Online]. Available: https://openreview.net/forum?id=5HR3vCylqD.

@inproceedings{
    yeh2021sustainbench,
    title = {{SustainBench: Benchmarks for Monitoring the Sustainable Development Goals with Machine Learning}},
    author = {Christopher Yeh and Chenlin Meng and Sherrie Wang and Anne Driscoll and Erik Rozi and Patrick Liu and Jihyeon Lee and Marshall Burke and David B. Lobell and Stefano Ermon},
    booktitle = {Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
    year = {2021},
    month = {12},
    url = {https://openreview.net/forum?id=5HR3vCylqD}
}

sustainbench's People

Contributors

chrisyeh96, jlee24


sustainbench's Issues

Bugs in download_datasets.py

Hey, thanks a lot for putting this dataset together, and congrats on the NeurIPS acceptance. I'm going to be poking around the repo, and I'll post issues as I see bugs. I'm sure you're busy getting everything together for the camera-ready version, so don't worry about debugging these in real time. Here's the first round that I found when trying to run download_datasets.py:

First, the PovertyMapDataset class has a typo in the declaration of the versions_dict variable (line 124): the download_urls entry should contain a list, but instead contains a dictionary without keys.

Second, the initialize_data_dir() function in the SustainBenchDataset class requires versions_dict to contain two keys that are both missing from the PovertyMapDataset: download_url and compressed_size.

Third, self._data_dir is assigned incorrectly in crop_seg_dataset.py.
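For reference, a hedged guess at the per-version structure that initialize_data_dir() seems to expect: only the download_url and compressed_size key names come from the report above (how they relate to the download_urls list is unclear), and the version string, URL, and size are made-up placeholders, not the repository's actual schema.

# Hypothetical versions_dict sketch; all values are illustrative placeholders.
versions_dict = {
    '1.0': {
        'download_url': 'https://example.com/poverty_v1.0.tar.gz',  # placeholder URL
        'compressed_size': 0,  # archive size in bytes; unknown here
    },
}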

pip install from github

Hi @chrisyeh96 and team,

Thanks for your work and this comprehensive Github repository. And congratulations on the NeurIPS data track acceptance.

I wanted to install the repository directly via pip install git+https://github.com/sustainlab-group/sustainbench.git but ran into the issue posted below. This is only a convenience issue, but being pip-installable would be a nice addition.

Thanks again,
Marc

❯ python -m pip install git+https://github.com/sustainlab-group/sustainbench.git
Collecting git+https://github.com/sustainlab-group/sustainbench.git
  Cloning https://github.com/sustainlab-group/sustainbench.git to /tmp/pip-req-build-mzsr5qc0
  Running command git clone -q https://github.com/sustainlab-group/sustainbench.git /tmp/pip-req-build-mzsr5qc0
    ERROR: Command errored out with exit status 1:
     command: /home/marc/anaconda3/envs/elects/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-mzsr5qc0/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-mzsr5qc0/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-qaax5_2s
         cwd: /tmp/pip-req-build-mzsr5qc0/
    Complete output (5 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-req-build-mzsr5qc0/setup.py", line 7, in <module>
        from version import __version__
    ModuleNotFoundError: No module named 'version'
    ----------------------------------------
WARNING: Discarding git+https://github.com/sustainlab-group/sustainbench.git. Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

Edit: I got the same error after cloning the repo and running pip install . locally. The following change in setup.py fixes it:

sys.path.insert(0, os.path.join(here, 'sustainbench')) # used to be SustainBench

Minor bug : LSMS Images are flipped

Hello. I noticed that loading LSMS image/NTL pairs with the function below returns a flipped version of the image. It does not affect any experiments, but it can be a problem when displaying images on a map.

import numpy as np

def load(country, year, hhid):
    # Each .npz file stores a single array x: the first 3 bands are the image,
    # and the last band is the nightlights (NTL) layer.
    data = np.load(f'{country}_{year}/{hhid}.npz',
                   allow_pickle=True)

    img = data.f.x[:3]   # image bands
    ntl = data.f.x[-1]   # nightlights band
    return img, ntl
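
If the flip matters for visualization, one workaround building on the load() function above is to flip the array back before plotting. This is illustrative only: the issue does not say whether the flip is vertical or horizontal, the channel-first (C, H, W) layout is an assumption, and the country/year/hhid arguments are placeholders.

# Hedged sketch: undo an assumed vertical flip for display purposes.
img, ntl = load('malawi', 2016, 'example_hhid')   # placeholder arguments
img_hwc = np.moveaxis(img, 0, -1)                 # (C, H, W) -> (H, W, C), assumed layout
img_display = np.flipud(img_hwc)                  # flip direction is an assumption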

Brick Kiln download doesn't work

Hi, the download link for the brick kiln dataset seems to be broken.

(Also, would you mind telling me what the license of this dataset is?)

Crop Yield Prediction USA results

Hi! Thanks for this collection of datasets and benchmark models.

The crop yield prediction dataset has very different yield values for the US and Brazil.

For example, the test yield values in Brazil have a mean value of 2.69375, while the test yield values in the US have a mean value of 44.39524599226092.

I have two questions:

  • Is this expected?
  • Are the US results reported for the You et al. model based on these yield values? I ask because the RMSE of 0.37 is much lower than what is reported in the referenced paper (e.g., the CNN+GP has an average RMSE of 5.55). If the leaderboard numbers are correct, what changed to improve the results so much?

Thanks again!

Gabi

Kenya not available in dataloader

Hi there,

Thanks for releasing this! I'd like to load the Kenya crop classification data.

Unfortunately, crop_type_kenya is not yet available in the sustainbench dataloader, although the documentation says that it is. Could you let me know when it will be available?

Thanks

Unable to load sustainbench dataloaders

Hi, thanks for sharing everything! I was wondering if you could provide the code for the dataloaders that load the datasets as PyTorch tensors. I could not find the details on the website, as instructed in the README.

I cloned the repo and imported sustainbench (from sustainbench import sustainbench). Then, I tried to run sustainbench.get_dataset('poverty') but am running into import issues such as module 'sustainbench' has no attribute 'supported_datasets'. I'm wondering if this process would be easier with some extra documentation or sample code for getting the dataloaders. Thanks so much!

@chrisyeh96

Running Baseline Models

Thanks a lot for putting this dataset together.
Can you please add the baseline models for the benchmark tasks and the code to reproduce the results (as mentioned in the README)?
