
climate's People

Contributors

alyssaterrell, biancaglez, camerondow35, forestgeoadm, gonzalezeb, mcgregorian1, rhelcoski, rudeboybert, teixeirak, valentineherr


climate's Issues

fix (lots of) broken URLs

@forestgeoadm , @biancaglez , I've been revamping the organizational structure of this repo, and in the process breaking lots of URLs in the README files, this file, and scripts. I'm still working on the reorganization, so please hold off on fixing them just yet, but I wanted you to be aware.

long-term goal: integrate with PEcAn tools for gap-filling and standardization.

Info from Mike Dietze, Sept. 2020:

All the met tools are within pecan/modules/data.atmosphere.

The default gapfilling, metgapfill.R, was written by Ankur Desai and leverages the Ameriflux MDS approach with a few extra special cases. It is generally good for filling small gaps (e.g. QC flagged data), but not for filling large gaps (e.g. months) when systems are down.

There's an even simpler version based on splines and linear models -- again, it's really only good for small gaps.
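A minimal sketch of that spline/linear idea, using the zoo package (the met, time, and tair names are hypothetical illustrations, not PEcAn's actual interface):

library(zoo)

# Fill only short runs of NA (here, up to 4 consecutive steps); longer gaps stay NA.
met$tair_linear <- na.approx(met$tair, x = met$time, maxgap = 4, na.rm = FALSE)
met$tair_spline <- na.spline(met$tair, x = met$time, maxgap = 4, na.rm = FALSE)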

For larger gaps I'd use the tdm_* scripts, which is something that Christy Rollinson built with help from one of Ankur's grad students. This code is really for downscaling, not gapfilling, but for large gaps what I'd do is use the downscaling code to downscale a spatially-coarser reanalysis product (I'm particularly fond of ERA5, and there's code for downloading that in the data.atmosphere module). Christy's code needs a training data set at both scales (local and coarse) and builds a complex series of GAMs across variables and across a moving average through day-of-year. It can also produce ensembles of outputs to capture the uncertainty associated with downscaling/gapfilling. It's more computationally demanding, but more sophisticated than anything else I've got. Once you do the downscaling it should be pretty easy to just substitute these values for any NAs in your original time-series.
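A hedged sketch of that substitution step, assuming station and downscaled are hypothetical data frames with matching time columns (the downscaled values would come from the tdm_* workflow):

# Both series must be on the same time axis.
stopifnot(identical(station$time, downscaled$time))

gap <- is.na(station$tair)
station$tair[gap] <- downscaled$tair[gap]                       # fill gaps with downscaled values
station$tair_flag <- ifelse(gap, "gapfilled_ERA5", "observed")  # keep provenance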


From Christy, Sept. 2020:

Like Mike says, the TDM scripts weren’t built for gap filling, but what Mike suggests with downscaling a coarser product and inserting the values into your gaps should work. However, right now this could be a bit buggy since the code was developed with producing ensembles that propagate uncertainty in mind. In the work for a different project (MANDIFORE), I discovered that the single-use instance where you don’t produce ensembles appears to be buggy and needs more than the quick hack I had put in there.

One key thing that might be important, depending on what kind of weather station variables you need: I spent a lot of time playing with the equations in my scripts to at least try to preserve covariance among met variables. I suspect this matters most for variables/resolutions you may not care about, like sub-daily long/shortwave radiation, temperature, and wind, but if that is important, my workflow might be worth the trouble.

Desired weather station products

from Yao, 2018:

I am not familiar with the data structure; it would be good if SCBI could produce:

  • (i) a protocol that explains the data structure,
  • (ii) an algorithm for data exploration, including producing daily readings (a sketch follows this list), and
  • (iii) an FAQ for data logger troubleshooting.
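For item (ii), a hypothetical sketch of the daily-aggregation step, assuming sub-daily logger records in a data frame named logger with timestamp, tair, and prcp columns (not the actual SCBI logger format):

library(dplyr)

daily <- logger %>%
  mutate(date = as.Date(timestamp)) %>%
  group_by(date) %>%
  summarise(
    tair_mean = mean(tair, na.rm = TRUE),  # average temperature over the day
    prcp_sum  = sum(prcp, na.rm = TRUE),   # accumulate precipitation
    n_obs     = sum(!is.na(tair))          # completeness check per day
  )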

We can't trust Cedar Breaks climate data extracted at the site coordinates.

The UFDP is on the edge of a cliff, and the valley below is 750 m lower and much warmer/drier. For any gridded data set, it is important to use a pixel that is 100% above the cliff.

Here's an explanation from Jim Lutz:

The UFDP lies in part of a PRISM pixel that also includes the very much lower area below the cliff, which influences the local T/P regressions that PRISM uses (especially if you are using the 4 km data set). I use the PRISM pixel directly to the north of the one that actually contains the plot (attached). This interpolation 'issue' would apply to any gridded data set, so just check that the pixel you are using is 100% on the rim of 'the breaks' and does not include any of the valley (which is 750 m lower).

PRISM grid cells that I use for UFDP (800 m, top; 4 km, bottom). The plot is actually in the grid cell immediately south of the highlighted cell.

[two map images attached]
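A hedged sketch of Jim's workaround with the terra package (the file name and coordinates are placeholders; the point is the cell arithmetic: terra numbers cells row-wise from the top-left, so subtracting one row of cells moves directly north):

library(terra)

r     <- rast("prism_tmean.tif")               # a PRISM layer (800 m or 4 km), placeholder name
cell  <- cellFromXY(r, cbind(-112.8, 37.6))    # cell containing the plot (placeholder coordinates)
north <- cell - ncol(r)                        # one row up = the pixel directly to the north
r[north]                                       # value from the rim-only pixel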

Novice question about creating README files

Hi @teixeirak and @biancaglez

I know how to edit README files in GitHub, but am I able to create one in GitHub, or do I need to create a .md file somewhere else (in which case... I don't know how to do that)? This is a general question, but most immediately I thought I could create a README (at least in skeletal form) for SCBI's SPEI data.

Take care,
Caly

readme for so2 and nox data

@forestgeoadm , the SO2 and NOx data are ready and available here:
https://github.com/forestgeo/Climate/tree/master/Other_environmental_data/so2_nox_data

Here are the notes you need for a readme for this new data. Let me know if you have any questions and happy to provide answers :D

Bianca


resources for readme:

  • wiki instructions on running the gridding: https://github.com/JGCRI/CEDS/wiki/User_Guide#use-instructions
  • download the gridding proxies here: https://zenodo.org/record/3606753#.X1kTUmdTk6U
  • CEDS data citation: Hoesly, Rachel M., O'Rourke, Patrick R., Smith, Steven J., Feng, Leyang, Klimont, Zbigniew, Janssens-Maenhout, Greet, … Muwan, Presley. (2020). CEDS v_2019_12_23 Emission Data (Version v_2019_12_23) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.3606753

Special thanks to https://github.com/ssmithClimate for guidance in editing and using the G1.1.grid_bulk_emissions.R script

Steps:

Downloaded the following files from Zenodo, unzipped them, selected the relevant files, and stored them in "C:/Users/GonzalezB2/Desktop/Smithsonian/CEDS/final-emissions/":

  • CEDS_NOx_emissions_by_country_CEDS_sector_v_2019_12_23.csv
  • CEDS_SO2_emissions_by_country_CEDS_sector_v_2019_12_23.csv

Edited and debugged the G1 script to generate NC files from these, and stored those files in the intermediate-output directory of CEDS.

Edits and bugs found in the G1 script:

Change the relevant atmospheric metric on line 39 of the module G script to:
if ( is.na( em ) ) em <- "NOx"  # set to "NOx" or "SO2" as needed

Module G bug identified (the CEDS team is now fixing it): add meta = FALSE to all readData() calls.

Wrote the script below, which uses these files to grab data at ForestGEO sites:
https://github.com/forestgeo/Climate/blob/master/Other_environmental_data/so2_nox_data/forest_geo_nos_so2.R
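A hedged sketch of that extraction step, assuming the gridded NC files can be read with terra and a site table with lon/lat columns exists (file and column names are placeholders, not the actual forest_geo_nos_so2.R):

library(terra)

grids <- rast("CEDS_SO2_gridded.nc")               # gridded emissions, placeholder name
sites <- read.csv("forestgeo_sites.csv")           # placeholder table: site, lon, lat
vals  <- extract(grids, sites[, c("lon", "lat")])  # one row of grid values per site
out   <- cbind(sites["site"], vals)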

CRU annual data summaries

@biancaglez , could you please write a script to generate monthly and annual summaries for the CRU data? (I assume this should be easy.) In particular, I'd like January and July temperature and annual precipitation (for MEE paper Table 1), and mean annual temperature (which might be needed for another paper). We'll want this for all the ForestGEO sites.

Let's go with a time range of 1950-present, but keep it easily adjustable in the code.

Note that for the annual summaries, some variables should be summed across months and others averaged.

Average:
TMP, TMN, TMX, CLD

Sum:
PRE, WET, FRS

Special:
PET: convert the daily average to a monthly sum (mm/month), then sum across months for units of mm/yr.
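A hedged sketch of those rules, assuming a hypothetical long-format data frame cru with site, year, month, variable, value, and days_in_month columns (not the actual script):

library(dplyr)

annual <- cru %>%
  filter(year >= 1950) %>%                      # start year, easily adjustable
  mutate(value = ifelse(variable == "PET",
                        value * days_in_month,  # PET: mm/day -> mm/month
                        value)) %>%
  group_by(site, year, variable) %>%
  summarise(value = if (first(variable) %in% c("TMP", "TMN", "TMX", "CLD"))
                      mean(value)               # averaged across months
                    else
                      sum(value),               # PRE, WET, FRS, PET: summed
            .groups = "drop")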

Citing Met Station Data

Hi @teixeirak,

I'm making progress on adding data use and attribution sections to each of the climate data sources, and I wanted to ask you about Met Station data.

Should we copy a more generalized attribution, as is currently the case for HKK?

Or should I aim to stick with the more formal data citation? If so, my approach would be:

  • the data contacts, as listed in the directory metadata sheet, as authors
  • the data title/version as the file name?
  • ForestGEO Climate Data Portal as the repository name.
  • no DOI.

What's your preference?

Take care,
Caly

Test Issue

Hi Krista, thanks for your response to my questions - do you get this message even if I don't specifically tag you in it? Take care, Caly

CRU Data Visualizations

Hello, @teixeirak and @biancaglez,

I think that it would be helpful to add a section to the CRU README and/or to create a new README within the figures/ForestGEO_sites_TS.plots.by.month folder to highlight and contextualize the data visualizations from the CRU data.

I'm happy to work up a draft if you could confirm/correct the following information:

  • @biancaglez created these figures in summer 2020 using these scripts based on CRU v4.04 data. If data users want even more up-to-date figures, they can recreate them using the script, right?

  • I understand the 3-letter code between the plot name and "CRU" to indicate the variable illustrated, but what do the different plot designations within one field site mean?

  • How shall we frame the intended use of these illustrations: to quickly identify trends? to be used as supporting figures in a manuscript? something else?

  • Related to the above, what shall we say by way of data use and attribution? Citation for original CRU data + attribution to @biancaglez ? to ForestGEO Climate Data Portal?

Take care,
Caly

.RProj, etc.

Hi @biancaglez,

I'm taking a look at the folder/file structure of the Climate Data Portal and was wondering if you could help me understand what the .Rproj.user folder and the .Rhistory and Climate.Rproj files are, and whether they're connected.

Take care,
Caly

update directory

Our directory is out of date, and this file needs work. It's also too much work to maintain as currently set up.

The first step is to come up with a good plan. Ideas so far:

  • Most fundamentally, this needs to be easy to keep info up to date. This includes avoiding duplicate info. This file currently contains lots of info that duplicates what's in the README files for the gridded data products (or, at present, would often be an outdated version of those). So, I think that should go.
  • This file is out of date and a lot of work to maintain -- it should probably just be deleted. I don't think it's really needed.
  • This file needs some cleanup work (e.g., single header row).

@forestgeoadm or @biancaglez , this task is primarily for me, but if you have suggestions/ feel inspired to work on this, that would be much appreciated!

making .csv files easily accessible

@forestgeoadm , you asked about downloading single .csv files... Let's use ForestGEO climate data sources.csv as an example. The way to do this would be to go to Raw, then copy-paste into Excel or similar and convert text to columns using a comma delimiter.

For this particular file (and probably others), that doesn't work nicely. Thus, it isn't a great solution, but seems to be the best there is. (I consulted @rudeboybert on this.)

For files that we want to be super user friendly, like ForestGEO climate data sources.csv, I need to think of some other solution--or at least convert them to formats that would be cleaner with the copy-paste.
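One such option, if the file stays a plain CSV: read it straight from its raw URL in R, which skips the copy-paste entirely (the URL below is illustrative; spaces in the file name must be percent-encoded, and the exact path depends on where the file lives in the repo):

url <- "https://raw.githubusercontent.com/forestgeo/Climate/master/ForestGEO%20climate%20data%20sources.csv"
sources <- read.csv(url)  # read.csv() accepts URLs directly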

Download & extract climate data for ForestGEO sites.

Download data from the following sources for all ForestGEO sites:
