
pride's Introduction

(P)lan for (R)ap(I)d (DE)carbonization (PRIDE)

This repository contains analysis tools, models, and publications associated with planning for rapid decarbonization.

Publication: 2020-fairhurst-hydrogen-production

This repository holds:

  • data on the fuel consumed by the MTD and UI fleets.
  • analysis of the hydrogen required for those fleets to become carbon free.
  • information on different methods of producing hydrogen.

Publication: 2020-dotson-optimal-sizing

This repository holds the data analysis and figures that will lead to quantitative recommendations for the optimal reactor size.

Multiple scenarios will be addressed:

  1. The reactor itself is free (a significant reduction in capital cost).
  2. The reactor retains its price tag and higher capital cost.
  3. Increasing penetration of variable renewable energy sources.
  4. Added grid flexibility in the form of H2 and thermal storage.

Instructions to Run TEMOA

TEMOA is an open-source modeling tool available on GitHub. Follow the installation instructions here.

After creating a database in SQL, navigate to the directory containing your database and run:

sqlite3 [filename].sqlite < [filename].sql

If you don't have SQLite installed, run:

sudo apt-get install sqlite3
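The shell one-liner above can also be mirrored from Python's standard sqlite3 module, which is handy inside the Jupyter notebooks. This is a minimal sketch with a made-up table; the real scenario .sql files define TEMOA's schema.

```python
# Sketch of what `sqlite3 [filename].sqlite < [filename].sql` does, using
# Python's standard sqlite3 module. The table here is made up; the real
# scenario .sql files define TEMOA's schema.
import sqlite3

sql_script = """
CREATE TABLE demo (id INTEGER, name TEXT);
INSERT INTO demo VALUES (1, 'example');
"""

con = sqlite3.connect(":memory:")  # a real run would connect to 'scenario.sqlite'
con.executescript(sql_script)      # equivalent to piping the .sql file into sqlite3
count = con.execute("SELECT COUNT(*) FROM demo").fetchone()[0]
print(count)  # 1
```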

TEMOA models can be run from the command line; current iterations use the online model platform at model.temoacloud.com.

Instructions to Run TEMOA scenarios

To run a single TEMOA scenario, first add the path to Temoa to your ~/.bashrc:

echo "export TEMOA=/path/to/temoa" >> ~/.bashrc

For example:

echo "export TEMOA=/home/roberto/github/temoa" >> ~/.bashrc

Remember to either close and reopen the terminal or run source ~/.bashrc. Then run the following commands in the terminal:

cd temoa-uiuc
source activate temoa-py3
# Example scenario
sqlite3 data_files/bau_uiuc.sqlite < data_files/bau_uiuc.sql
yes | python $TEMOA/temoa_model/ --config=data_files/run_bau.txt

The data processing must be done separately. Figures can be produced using tools in data_parser.py. An example of how this is done can be found in mga_analysis.ipynb.

To run all scenarios (except for MGA, which must be run individually), snakemake must be installed:

cd temoa-uiuc
source activate temoa-py3
pip install snakemake
snakemake --cores=4
# if the build fails due to file system latency, try
# snakemake --cores=4 --latency-wait=10

This automatically generates figures in the /figures/ folder.

Instructions to Run the Jupyter Notebooks

Typical time histories were generated using RAVEN, an open-source tool from INL. This repository should sit in a folder adjacent to raven; see the directory map below for an example.

To install RAVEN follow the instructions from INL.

Instructions to Obtain the Data

Some of the data has not yet been cleared for publication, so shared links cannot yet be provided for it. Shared links for data that is already publicly available are given below.

To execute the Jupyter notebooks, download the data files to a folder called data so that your directories look like:

home
|
|--2020-dotson-optimal-sizing
|
|--raven
|
|--data

Data:

pride's People

Contributors

abachma2, datw0258, katyhuff, robfairh, samgdotson, yardasol


pride's Issues

Produce a synthetic history for wind power

This issue can be closed when a synthetic history is produced that

  • has the same statistics as a typical history
  • reflects physical reality

For wind data I am waiting on Mike Marquissee to fulfill a data request. One possibility is also to use wind speed data, produce a synthetic history for that variable, and calculate the power.
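The fallback idea above (produce a synthetic wind-speed history and calculate the power) can be sketched with an idealized turbine power curve. The cut-in, rated, and cut-out speeds and the rated power below are illustrative assumptions, not the specifications of any actual turbine.

```python
# Sketch of mapping a synthetic wind-speed series through an idealized
# turbine power curve. All curve parameters are illustrative assumptions.
def wind_power(speed, cut_in=3.0, rated_speed=12.0, cut_out=25.0, rated_kw=1500.0):
    """Idealized power curve: zero outside [cut_in, cut_out], rated power
    above rated_speed, cubic ramp in between."""
    if speed < cut_in or speed > cut_out:
        return 0.0
    if speed >= rated_speed:
        return rated_kw
    # Cubic interpolation between cut-in and rated speed
    frac = (speed**3 - cut_in**3) / (rated_speed**3 - cut_in**3)
    return rated_kw * frac

speeds = [2.0, 6.0, 12.0, 30.0]          # synthetic wind-speed samples [m/s]
powers = [wind_power(s) for s in speeds]  # [0.0, ~166.7, 1500.0, 0.0]
```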

Create a paper-dev folder

This folder will contain the abstract for the ANS Student Conference 2020.

This issue can be closed when a folder is created and populated with the ANS latex templates.

Reorganize publications

This issue can be closed when all publications are neatly organized into a publications folder.

Update the ANS abstract

This issue can be closed when the following items have been updated:

  • Figures (replace typical figures with typical/synthetic figures).
  • Introduction/Methodology (anywhere that says I'm not working on Steam...)
  • Update the simplifying assumption from MWe baseload power to MWth "baseload" (this way the reactor can supply steam OR electricity).
  • Add the assumption that steam be delivered at 325 PSI (this replaces the steam produced by gas turbines).

Add a .gitignore file

I'm not sure why this hasn't been done previously, but a .gitignore file should be added.
This issue can be closed when a .gitignore file has been added that ignores:

  • pycache
  • ipynb-checkpoints
  • latex built-files
  • sqlite files
  • temoa built files
  • pytest-cache
Possibly others, but for now this is the minimum.

Add hospital data to the EIA analysis

This issue can be closed when a PR is merged that does the following:

  • Adds hospital data to the EIA market analysis functions
  • Creates an ipynb that displays graphs of the data similar to the education_market_analysis.ipynb
  • Updates test functions to match

Fix plot labels in data_parser.py

The x-axis labels created by data_parser.py are squished when the dataframe is too long. This issue can be closed when a fix is added to handle dataframes of larger sizes.
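One possible fix is rotating the tick labels so they no longer overlap for long dataframes. Since data_parser.py's actual plotting functions are not shown here, this is a generic matplotlib sketch; the labels and values are made up.

```python
# Generic sketch of un-squishing x-axis labels by rotating them.
# The data and filenames are illustrative, not data_parser.py's actual API.
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted runs
import matplotlib.pyplot as plt

labels = [f"2025-{m:02d}" for m in range(1, 13)]  # a "long" categorical axis
values = list(range(12))

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(labels, values)
ax.tick_params(axis="x", labelrotation=45)  # rotate labels so they don't overlap
fig.tight_layout()
fig.savefig("example.png")
```

Widening the figure via figsize (or the bars themselves) is the other fix suggested below; both can be combined.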

Good First Issue | Write tests for data_parser.py

The functions in temoa-uiuc/data_parser.py need to have associated unit tests.

This is a good first issue for new students/researchers.

For guidance on writing unit tests, please see "Effective Computation in Physics by Huff, Scopatz."

This issue can be closed when:

  • test_data_parser.py has been added to the tests folder in pride/ (this folder may need to be created, depending on the status of Issue #103).
  • Each function in data_parser.py has an appropriate unit test, covering edge and corner cases if applicable.
  • All tests pass by running pytest in the top level of the repository.
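A minimal pytest-style sketch of the kind of test requested is shown below. The function under test is a stand-in defined locally, since data_parser.py's actual API is not reproduced here; real tests would import from data_parser instead.

```python
# Pytest-style sketch covering a typical case, an edge case, and a corner
# case. `total_activity` is a stand-in helper, not data_parser.py's real API.
def total_activity(values):
    """Sum activity values, ignoring None entries (stand-in function)."""
    return sum(v for v in values if v is not None)

def test_total_activity_typical():
    assert total_activity([1.0, 2.0, 3.0]) == 6.0

def test_total_activity_edge_empty():
    # Edge case: an empty input should not raise
    assert total_activity([]) == 0

def test_total_activity_corner_missing_values():
    # Corner case: missing (None) entries are skipped
    assert total_activity([1.0, None, 2.0]) == 3.0
```

Running pytest from the top level of the repository discovers any file named test_*.py automatically.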

Generate synthetic histories for steam production

This issue can be closed when a PR is made that contains synthetic steam production.

(I'm being careless; it is steam demand. At UIUC, demand and production are identical because APP fulfills all of the demand.)

Correct the iCAP carbon limits

The current carbon limits are actually shifted by a few years and should be corrected.
This issue can be closed when the 0x_uiuc.sql databases are updated to reflect the correct carbon limits based on the iCAP document.

UIUC Transportation Demand

This issue can be closed when a document or jupyter notebook has been created to answer the following questions (with detailed steps on how to replicate the answers):

  • Determine the total fuel demand in gallons of gasoline equivalent for each technology type.

  • What is the CO2 equivalent per gallon-gasoline-equivalent? Note: it's important that this is an equivalent because emissions might consist of more than just CO2. E.g., 1 ton of CH4 released might be equivalent to 3 tons of CO2, so report 3 tons of CO2eq.

  • Determine the cost of each type of fuel

  • Determine conversion rates for each type of fuel (e.g. 1 kWh(e) = 10 gallons-gasoline)

  • Document to replicate answers and sources cited.

Vehicles should be sorted by fuel type (E85, Diesel, Unleaded, Lithium-ion, H2 fuel-cell).

edit1: for clarity

edit 2: updates goal

Handle Huff Review Comments

dotson.pdf

This issue can be closed with a PR that handles the comments from the Huff review, including items from the writing checklist that should be reviewed again:

  • Other misused/overused words include: code, input, output, different, value, amount, model.
  • clunky nouns -> spunky verbs (progression, expression --> progress, express)
  • reduce vague words (important, methodological)
  • get rid of extraneous prepositions (the meeting happened on monday -> the meeting happened monday) (they agreed that it was true -> they agreed it was true)
  • get rid of passive voice (is/was/are/were/be/been/am + past tense verb), replace with active voice
  • use strong verbs (use sparingly: is, are, was, were, be, been, am)
  • avoid turning verbs into nouns ("obtain estimates of" -> "estimates"; "provides a description of" -> "describes") (management of power systems -> power systems management, to the task of net load prediction -> to predict net load, the task of net load forecasting -> to forecast net load)
  • don't bury the verb (keep the predicate close to the subject at the beginning of the sentence)
  • Always use isotopic notation like $^{239}Pu$. Never $Pu-239$ or plutonium-239.
  • A phrase of the form ion of is probably clearer as ion. (For example, convert "calculation of velocity" to "velocity calculation".)
  • Cite all software. Review the principles and try CiteAs if necessary.
  • Refer to software consistently by name.

Additional Table and Figure Checklist:

  • When referring to figures by their number, use Figure 1 and Table 1. They should be capitalized and not abbreviated (not fig. 1 or figure 1.)

Additional Math Comments

  • define all variables, with units. If unitless, indicate that this is the case $[-]$.
  • Variables should be defined in the align environment as well, not buried in paragraphs.
    Here’s an example of an equation:
The line is defined as
\begin{align}
y&=mx + b
\intertext{where}
y&= \mbox{ height of the line, also known as rise [m]}\nonumber\\
m&= \mbox{ slope [-]}\nonumber\\
x&=\mbox{ independent parameter, known as run [m]}\nonumber\\
b&= \mbox{ y intercept [m].}
\end{align}

Add carbon-pricing analysis

This issue can be closed when

  • a jupyter notebook containing preliminary carbon price analysis has been added
  • a folder containing a skeleton paper for carbon-pricing has been added

Script to parse temoa results

This issue can be closed when a script has been created that can

  • parse and graph activity by sector
  • parse and graph capacity by sector
  • parse and graph emissions by sector
  • parse and graph total emissions
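The parsing side of the requested script can be sketched with the standard sqlite3 module; the table and column names below are assumptions for illustration, not TEMOA's actual output schema.

```python
# Sketch of aggregating emissions by sector from a TEMOA-style results
# database. Table and column names are assumptions, not TEMOA's schema.
import sqlite3

con = sqlite3.connect(":memory:")  # a real run would open the results .sqlite
con.executescript("""
CREATE TABLE Output_Emissions (sector TEXT, year INTEGER, emissions REAL);
INSERT INTO Output_Emissions VALUES
  ('electric', 2025, 10.0), ('electric', 2030, 8.0),
  ('steam',    2025,  5.0), ('steam',    2030, 4.0);
""")

# Emissions by sector, and total emissions across sectors
by_sector = dict(con.execute(
    "SELECT sector, SUM(emissions) FROM Output_Emissions GROUP BY sector"
))
total = sum(by_sector.values())
print(by_sector, total)  # {'electric': 18.0, 'steam': 9.0} 27.0
```

The same GROUP BY pattern covers activity and capacity by swapping the table queried; graphing would then plot `by_sector`.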

Produce a synthetic history for solar power

This issue can be closed when a synthetic history is produced that

  • has the same statistics as a typical history
  • reflects physical reality

It is possible that

  1. This data should not be modeled using an ARMA
  2. More parameters are needed to capture the correct behavior
  3. Another method of generating these series should be devised
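One quick, dependency-free check on point 1 (whether an ARMA is even appropriate) is to estimate the lag-1 autoregressive coefficient and see whether the series shows the persistence an AR term assumes. This sketch uses toy series, not actual solar data.

```python
# Sketch: estimate the lag-1 AR coefficient phi of x_t = phi * x_{t-1} + e
# from a series, as a sanity check before committing to an ARMA model.
def ar1_coefficient(series):
    """Lag-1 autocorrelation estimate of the AR(1) coefficient."""
    mean = sum(series) / len(series)
    num = sum((series[t] - mean) * (series[t - 1] - mean)
              for t in range(1, len(series)))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

# A steadily rising series should give a positive phi (persistence);
# a strictly alternating series gives a negative phi (anti-persistence).
persistent = [1, 2, 3, 4, 5, 6, 7, 8]
alternating = [1, -1, 1, -1, 1, -1, 1, -1]
print(ar1_coefficient(persistent))   # positive
print(ar1_coefficient(alternating))  # negative
```

A phi near zero on detrended solar data would suggest the AR structure alone carries little information, supporting options 2 or 3.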

Add CI

There are tests in this repo, but they aren't run in CI for each PR. This issue can be closed with a merged PR (and adjusted repository settings) that runs the tests for all PRs.

@samgdotson can do this, or could perhaps guide @datw0258 , @nsryan2 , or @Dayvis7 as they figure out how to.

fix optimal-sizing presentation

At the moment, optimal-sizing-pres doesn't compile because it expects the file Methods.tex, but the file is called methods.tex.
This issue can be closed when the main .tex file calls methods.tex instead of Methods.tex.

Add instructions for EIA data

This issue can be closed when

  • the README.md file includes instructions for users to access EIA data
  • publicly shared links are included in README.md.
  • a description of the analysis is added (e.g. This notebook analyzes data from the EIA about electricity production from the education sector.)

Add Transportation Sector for Temoa Model

This issue can be closed when

  • a .sql database has been added to the repository that includes a transportation sector for UIUC, using data about gasoline/diesel demand
  • all transportation demand is expressed in a "gasoline energy equivalent"

The file should be called 0x_uiuc.sql and have an associated config file called run_scenario0x.txt, similar to the other scenarios in temoa-uiuc.

Transportation Demand

All of the data analysis for transportation demand has been done in fuel-analysis, hydro-requirement, and uiuc_transportation_demand. This should be enough information to update the TEMOA model.

Guidance:

  1. Add an experimental branch in your forked repository:

git checkout -b iss43

  2. Each subsequent TEMOA model should build on the previous one (in most cases). Thus, copy the most recent model, in this case scenario04:

cp data_files/04_uiuc.sql data_files/05_uiuc.sql
cp data_files/run_scenario04.txt data_files/run_scenario05.txt

  3. Edit the configuration file run_scenario05.txt and change the name of the input database to data_files/05_uiuc.sqlite.

    • Also change the scenario name and the output file name.

  4. In Snakefile, comment out the "All Scenarios" block and uncomment the "Experimental Scenario" block. Replace the name in the experimental scenario with the name of the model you're working on.

  5. Edit the .sql database to add a new sector, transportation. (You can visually check the model at model.temoacloud.com.)

  6. Run the model by executing:

source activate temoa-py3
snakemake --cores=4

  7. If the results look appropriate, add the experimental scenario to all scenarios in the Snakefile (maybe run it again with all files) and make a pull request.

Add Temoa config files

This issue can be closed when Temoa config files are added for

  • Business-as-usual
  • Scenario 1
  • Scenario 2
  • Scenario 3
  • Scenario 3, with uncertainty and sensitivity analysis

Handle ANS Reviewer comments

This issue can be closed with a PR that updates the reservoir computing paper in a manner that handles the review comments below.

Reviewer 1

  • In Fig. 3, it appears that there are two local minima (rho = 0.9, noise = 10^-4; rho = 1.3, noise = 10^-2) which are both close to the global minimum. How did you select between these two? Since the two input parameters vary quite a bit between these two, I suspect that the output of the model using the other minimum would be significantly different, too.
  • What more quantitative metrics could be used to evaluate the quality of forecasts? How many Lyapunov times are useful before some accuracy threshold is reached?

Reviewer 2

  • Interesting work. Might help to clearly state the objectives of this initial phase of research towards the beginning of the paper.

Reviewer 3

  • The summary is well written but needs some discussion on verification of the proposed methodology.
  • What is the basis for selection of hyperparameters and some sensitivity and parametric analysis needs to be done in order to ensure confidence in results.
  • How dependent are the methodology and results on the training length and network size?

Submit

  • A suitably revised summary must be uploaded no later than August 17 into the same system that you submitted your original paper (https://epsr.ans.org/meeting/?m=308). If your revised summary is not received by the deadline date, your original summary will be published as is.

Update optimal-sizing-pres

This issue can be closed when the presentation has

  • clearer assumptions
  • clear conclusions
  • backup slides for anticipated questions

Rename presentation and abstract folders

This issue can be closed when the presentation and abstract folders have been renamed to something more descriptive than ans-abs or pres, so that everyone can see what each contains.

fix labels in temoa_processing notebook

Currently, the figures in this notebook look like this:

[figure: screenshot of a notebook plot with crowded, overlapping x-axis labels]

This issue can be closed when a PR fixes the labels.
Some possibilities are making the bars wider or rotating the labels slightly.

Update the README

This issue can be closed when

  • the README.md file has been updated to reflect changes to the goals of the repository. The original scenarios are no longer accurate.

Add cases to make_increasing function

This issue can be closed when the make_increasing() function in data_funcs.py has added cases for

  • sort = True, strict = False
  • sort = False, strict = False
  • sort = True, strict = True

and appropriate tests have been added for each.
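One way the three flag combinations might behave is sketched below; the real make_increasing() semantics in data_funcs.py may differ, so treat this as an illustration of the cases, not the repository's implementation.

```python
# Sketch of assumed make_increasing() semantics; data_funcs.py may differ.
def make_increasing(values, sort=False, strict=False):
    """Return a monotonically increasing copy of `values`.

    sort=True   : sort first, then enforce monotonicity.
    strict=True : each element must exceed the previous one; bump ties.
    """
    vals = sorted(values) if sort else list(values)
    out = []
    for v in vals:
        if out:
            floor = out[-1] + 1 if strict else out[-1]
            v = max(v, floor)  # raise the value to keep the sequence increasing
        out.append(v)
    return out

print(make_increasing([3, 1, 2]))              # [3, 3, 3]
print(make_increasing([3, 1, 2], sort=True))   # [1, 2, 3]
print(make_increasing([1, 1, 2], strict=True)) # [1, 2, 3]
```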

Generate synthetic time histories

The next step for this methodology is to use RAVEN to generate synthetic time histories (training data for a reduced-order model).

Read the section in the user guide on trainARMA

This issue can be closed when a RAVEN input file for generating typical time histories has been created.

Analyze data from Rail Splitter

We recently received actual wind power data from Mike Marquissee.
This issue can be closed when that data has been analyzed and a typical history and a series of synthetic histories have been generated.

Convert rail splitter data from spreadsheet to csv

This issue can be closed when a file called railsplitter-data.csv has been uploaded to Box (or sent to Sam) with two columns, "Date and Time" and "Buyer's Share", where the latter is the actual MWh delivered to the UIUC campus for a given hour by Rail Splitter Wind Farm.
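The requested CSV layout can be produced with the standard csv module; the two rows below are made-up sample values, not Rail Splitter data.

```python
# Sketch of writing the requested railsplitter-data.csv layout.
# The hourly values are made-up samples, not actual Rail Splitter data.
import csv

rows = [
    ("2020-01-01 00:00", 12.5),  # hypothetical MWh delivered in that hour
    ("2020-01-01 01:00", 11.8),
]
with open("railsplitter-data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Date and Time", "Buyer's Share"])
    writer.writerows(rows)
```

If the source spreadsheet is Excel, exporting it to CSV first and then renaming the columns to match this header gives the same result.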
