Climate data for ForestGEO sites
Home Page: https://forestgeo.github.io/Climate/
License: Creative Commons Attribution 4.0 International
@biancaglez loaded CO2 data + script and put a readme here: https://github.com/forestgeo/Climate/tree/master/Other_environmental_data. It's ready for your review.
@biancaglez, one small comment: it would be helpful to mention the time periods covered by each record.
@forestgeoadm, you asked about downloading single .csv files. Let's use ForestGEO climate data sources.csv as an example. The way to do this would be to open the file on GitHub, click Raw, copy-paste the text into Excel or similar, and then convert text to columns using a comma delimiter.
For this particular file (and probably others), that doesn't work nicely, so it isn't a great solution, but it seems to be the best there is. (I consulted @rudeboybert on this.)
For files that we want to be super user-friendly, like ForestGEO climate data sources.csv, I need to think of some other solution, or at least convert them to formats that copy-paste more cleanly.
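A route that avoids the copy-paste step entirely is to read the raw-file URL programmatically. A minimal sketch in Python/pandas (the raw URL pattern is GitHub's standard one, but the exact path shown is an assumption; the repo's own scripts are in R, where read.csv(url) works the same way):

```python
import io
import pandas as pd

def fetch_csv(source):
    """Read a comma-delimited file from a local path, URL, or file-like object."""
    return pd.read_csv(source)

# GitHub serves every file at a raw URL of the form
#   https://raw.githubusercontent.com/<org>/<repo>/<branch>/<path>
# so the file above could be read (path is an assumption) with:
#   df = fetch_csv("https://raw.githubusercontent.com/forestgeo/Climate/"
#                  "master/ForestGEO%20climate%20data%20sources.csv")
#   df.to_csv("local_copy.csv", index=False)  # clean copy that Excel opens directly
```

This skips the text-to-columns step entirely, since the parser handles the comma delimiting.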
Hi @teixeirak and @biancaglez
I know how to edit README files in GitHub, but am I able to create one in GitHub, or do I need to create a .md file somewhere else (in which case I don't know how to do that)? This is a general question, but most immediately I thought I could create a README (at least in skeletal form) for SCBI's SPEI data.
Take care,
Caly
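On the creation question: GitHub's web editor can do it (Add file, then Create new file, named README.md), and a README is nothing more than a Markdown-formatted text file, so it can also be made locally and pushed. A sketch, with the folder name an assumption:

```shell
# A README is just a Markdown text file; create a skeletal one locally.
# (folder name is an example -- adjust to the actual SPEI data directory)
mkdir -p SCBI_SPEI_data
printf '# SCBI SPEI data\n\nSkeletal README; details to follow.\n' > SCBI_SPEI_data/README.md
```

After that, the usual git add / git commit / git push publishes it to the repo.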
The UFDP is on the edge of cliff, and the valley below is 750 m lower and much warmer/drier. For any gridded data set, it is important to use a pixel that is 100% above the cliff.
Here's an explanation from Jim Lutz:
The UFDP lies in part of a PRISM pixel that also includes the very much lower area below the cliff, therefore influencing the local t/p regressions that PRISM uses (especially if you are using the 4k data set). I use the PRISM pixel directly to the north of the one that actually contains the plot (attached). This interpolation ‘issue’ would apply to any gridded data set, so just check that the pixel you are using is 100% on the rim of ’the breaks’ and does not include any of the valley (which is 750 m lower).
PRISM grid cells that I use for UFDP (800m, top and 4k, bottom). The plot is actually in the grid cell immediately south of the highlighted cell.
Download data from the following sources for all ForestGEO sites:
Hi @biancaglez,
I'm taking a look at the folder/file structure of the Climate Data Portal and was wondering if you could help me understand what the .Rproj.user folder is, what the .Rhistory and Climate.Rproj files are, and whether they're connected.
Take care,
Caly
@biancaglez , could you please write a script to generate monthly and annual summaries for the CRU data? (I assume this should be easy). In particular, I'd like Jan and July T, annual precip (for MEE paper Table 1), and mean annual temperature (might be needed for another paper). We'll want this for all the ForestGEO sites.
Let's go with a time range of 1950-present, but keep it easily adjustable in the code.
Note that for the annual summaries, some variables should be summed across months and others averaged.
Average:
TMP, TMN, TMX, CLD
Sum:
PRE, WET, FRS
Special:
PET - convert daily average to monthly sum (mm/mo), then sum across months for units of mm/yr.
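The rules above can be sketched as follows (a Python/pandas illustration of the logic only; the portal's actual scripts are in R, and the column layout is an assumption):

```python
import pandas as pd

AVG_VARS = ["TMP", "TMN", "TMX", "CLD"]   # average across months
SUM_VARS = ["PRE", "WET", "FRS"]          # sum across months
DAYS = {1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30,
        7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}

START_YEAR = 1950  # keep the time range easily adjustable

def annual_summary(monthly):
    """monthly: DataFrame with 'year' and 'month' columns plus CRU variables."""
    m = monthly[monthly["year"] >= START_YEAR].copy()
    if "PET" in m:
        # PET is a daily average (mm/day): convert to a monthly sum (mm/mo),
        # then sum across months below for units of mm/yr.
        m["PET"] = m["PET"] * m["month"].map(DAYS)
    agg = {v: "mean" for v in AVG_VARS if v in m}
    agg.update({v: "sum" for v in SUM_VARS + ["PET"] if v in m})
    return m.groupby("year").agg(agg)
```

Jan and July T for the MEE table would then just be a filter on month before grouping.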
Hi Krista, thanks for your response to my questions - do you get this message even if I don't specifically tag you in it? Take care, Caly
@gonzalezeb @ValentineHerr @beckybanbury,
I know we've extracted the CWD and E values for ForestGEO sites, but I'm not sure where we have them. (I've found a couple copies of E, but I'm looking for CWD at the moment.) Could one of you please point me to them and/or load them here (under gridded data products)?
Our directory is out of date, and this file needs work. It's also too much work to maintain as currently set up.
The first step is to come up with a good plan. Ideas so far:
@forestgeoadm or @biancaglez , this task is primarily for me, but if you have suggestions/ feel inspired to work on this, that would be much appreciated!
The 2019 csv has all of the data entered into a single row, so it is unusable. Is there a quick fix for this?
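Whether there is a quick fix depends on how the row got flattened. If the values are simply concatenated in record order, a reshape recovers the table; a sketch (the function and column names are hypothetical):

```python
import pandas as pd

def unflatten(source, colnames):
    """Reshape a CSV whose records all landed in one long row back into a table."""
    flat = pd.read_csv(source, header=None).values.ravel()
    # reshape assumes the row is whole records laid end to end, in order
    return pd.DataFrame(flat.reshape(-1, len(colnames)), columns=colnames)
```

If the flattening is messier than that (e.g. headers interleaved with values), it would need hand inspection first.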
wet_BCI is missing records for months with zero precipitation. It is produced with this script, which will need to be fixed.
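A fix could reindex the record against a complete year-by-month grid and fill the holes with zeros; a sketch, assuming year/month/wet columns (the names are assumptions about the file's layout, not the script's actual code):

```python
import pandas as pd

def fill_missing_months(df):
    """Insert wet = 0 rows for any year/month combination absent from the record."""
    full = pd.MultiIndex.from_product(
        [sorted(df["year"].unique()), range(1, 13)], names=["year", "month"])
    out = df.set_index(["year", "month"]).reindex(full)
    out["wet"] = out["wet"].fillna(0)  # absent month = zero wet days
    return out.reset_index()
```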
@forestgeoadm , @biancaglez , I've been revamping the organizational structure of this repo, and as a result breaking lots of URLs in the readme files, this file, and scripts. I'm still working on the reorganization, so please hold off on fixing them just yet, but I want you to be aware.
@forestgeoadm the SO2 and NOx data are ready and available here:
https://github.com/forestgeo/Climate/tree/master/Other_environmental_data/so2_nox_data
Here are the notes you need for a readme for this new data. Let me know if you have any questions and happy to provide answers :D
Bianca
resources for readme:
wiki instructions on running gridding https://github.com/JGCRI/CEDS/wiki/User_Guide#use-instructions
download the gridding proxies here https://zenodo.org/record/3606753#.X1kTUmdTk6U
CEDS data citation:
Hoesly, Rachel M., O'Rourke, Patrick R, Smith, Steven J., Feng, Leyang, Klimont, Zbigniew, Janssens-Maenhout, Greet, … Muwan, Presley. (2020). CEDS v_2019_12_23 Emission Data (Version v_2019_12_23) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.3606753
Special thanks to https://github.com/ssmithClimate for guidance in editing and using the G1.1.grid_bulk_emissions.R script
Steps:
Downloaded the following files from Zenodo, unzipped them, selected the relevant files, and stored them in "C:/Users/GonzalezB2/Desktop/Smithsonian/CEDS/final-emissions/":
- CEDS_NOx_emissions_by_country_CEDS_sector_v_2019_12_23.csv
- CEDS_SO2_emissions_by_country_CEDS_sector_v_2019_12_23.csv
Edited and debugged the G1 script to generate .nc files from these, and stored those files in the intermediate output of the CEDS directory.
Edits and bugs found in the G1 script:
- Changed the relevant atmospheric metric on line 39 of the module G script to: if ( is.na( em ) ) em <- "NOx" # change to "NOx" or "SO2" as needed
- Module G bug identified (the CEDS team is now fixing it): added ", meta = FALSE" to all readData() calls.
Wrote the script below, using these files, to grab data at ForestGEO sites:
https://github.com/forestgeo/Climate/blob/master/Other_environmental_data/so2_nox_data/forest_geo_nos_so2.R
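The "grab data at ForestGEO sites" step boils down to a nearest-grid-cell lookup in the gridded emissions arrays. The linked script is in R; the idea, sketched in Python (the (time, lat, lon) array layout is an assumption about the gridded product):

```python
import numpy as np

def extract_at_site(grid, lats, lons, site_lat, site_lon):
    """Return the time series from the grid cell nearest to a site.

    grid: array of shape (time, lat, lon); lats, lons: 1-D coordinate axes.
    """
    i = int(np.abs(lats - site_lat).argmin())  # nearest latitude index
    j = int(np.abs(lons - site_lon).argmin())  # nearest longitude index
    return grid[:, i, j]
```

Looping this over the table of ForestGEO site coordinates gives one emissions time series per site.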
Info from Mike Dietze, Sept. 2020:
All the met tools are within pecan/modules/data.atmosphere.
The default gapfilling, metgapfill.R, was written by Ankur Desai and leverages the Ameriflux MDS approach with a few extra special cases. It is generally good for filling small gaps (e.g. QC flagged data), but not for filling large gaps (e.g. months) when systems are down.
There’s an even simpler version based on splines and linear models -- again it’s really only good for small gaps
For larger gaps I’d use the tdm_* scripts, which Christy Rollinson built with help from one of Ankur’s grad students. This code is really for downscaling, not gapfilling, but for large gaps what I’d do is use the downscaling code to downscale a spatially-coarser reanalysis product (I’m particularly fond of ERA5, and there’s code for downloading that in the data.atmosphere module). Christy’s code needs a training data set at both scales (local and coarse) and builds a complex series of GAMs across variables and across a moving average through day-of-year. It can also produce ensembles of outputs to capture the uncertainty associated with downscaling/gapfilling. It’s more computationally demanding, but more sophisticated than anything else I’ve got. Once you do the downscaling, it should be pretty easy to substitute these values for any NAs in your original time series.
From Christy, Sept. 2020:
Like Mike says, the TDM scripts weren’t built for gap filling, but what Mike suggests with downscaling a coarser product and inserting the values into your gaps should work. However, right now this could be a bit buggy since the code was developed with producing ensembles that propagate uncertainty in mind. In the work for a different project (MANDIFORE), I discovered that the single-use instance where you don’t produce ensembles appears to be buggy and needs more than the quick hack I had put in there.
One key thing that might be important depending on what kind of weather station variables you need is that I spent a lot of time playing with the equations in my scripts to at least try to preserve met variable covariance. I suspect this is more focused on variables/resolutions you may not care about like sub daily long/shortwave radiation, temperature, and wind, but if that is important, my workflow might be worth the trouble.
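The small-gap idea Mike describes (fill short QC-flagged holes, leave long outages alone) can be sketched like this; linear interpolation is shown for simplicity, and this is an illustration rather than PEcAn's metgapfill.R:

```python
import numpy as np
import pandas as pd

def fill_small_gaps(series, max_gap=3):
    """Interpolate runs of <= max_gap consecutive NAs; leave longer gaps as NA."""
    s = series.copy()
    na = s.isna()
    run_id = (na != na.shift()).cumsum()           # label consecutive runs
    run_len = na.groupby(run_id).transform("sum")  # NA-run length at each point
    small = na & (run_len <= max_gap)
    filled = s.interpolate(limit_area="inside")    # fill everything, then keep
    s[small] = filled[small]                       # only the small-gap values
    return s
```

For the month-long outages, this leaves the NAs in place, which is where the downscaled-reanalysis substitution described above would come in.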
find SPEI source
see if possible to download for all locations and store here
also download for New Mexico and store in new directory here
if all of the above is done in a timely manner, search for PDSI data and repeat the above steps.
This Readme could use some work.
when you have a chance, it would be great if you could add a description of the folders/files you added here to the README:
previously here: http://www.teamnetwork.org/gridsphere/gridsphere?cid=search
MODIS-derived global land products of shortwave radiation and diffuse and total photosynthetically active radiation at 5 km resolution from 2000. These provide total solar radiation, PAR, and diffuse PAR, which are more informative than cloudiness.
Paper: http://www.sciencedirect.com/science/article/pii/S0034425717304327?via%3Dihub
Hi all -- working on my first issue to get the hang of this and also to request some feedback. If you're available, please review the visualization script found below.
My main question is, would it be easy for anyone to use these scripts to visualize CRU data? @rudeboybert @teixeirak
See scripts here: https://github.com/forestgeo/Climate/tree/master/scripts
Hi @teixeirak,
I'm making progress on adding data use and attribution sections to each of the climate data sources, and I wanted to ask you about Met Station data.
Should we copy a more generalized attribution, like is currently the case for HKK?
Or should I aim to stick with the more formal data citation? If so, my approach would be:
What's your preference?
Take care,
Caly
Hello, @teixeirak and @biancaglez,
I think that it would be helpful to add a section to the CRU README and/or to create a new README within the figures/ForestGEO_sites_TS.plots.by.month folder to highlight and contextualize the data visualizations from the CRU data.
I'm happy to work up a draft if you could confirm/correct the following information:
@biancaglez created these figures in summer 2020 using these scripts based on CRU v4.04 data. If data users should desire even more up-to-date figures, they could recreate them using the script, right?
I understand the 3 letter code between the plot name and "CRU" to indicate the variable that is illustrated, but what do the different plot designations within one field site mean?
How shall we frame the intended use of these illustrations: to quickly identify trends? to be used as supporting figures in a manuscript? something else?
Related to the above, what shall we say by way of data use and attribution? Citation for original CRU data + attribution to @biancaglez ? to ForestGEO Climate Data Portal?
Take care,
Caly
from Yao, 2018:
I am not familiar with the data structure, it would be good if SCBI can produce