wswup / pymetric
Python implementation of the METRIC model
License: Apache License 2.0
Need to add functionality to support monthly bias correction rasters instead of uniform monthly values
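A rough sketch of what the change might look like (the function name and the bias-dictionary layout are hypothetical, not from the codebase): the same lookup can serve both the current uniform monthly values and per-cell monthly rasters, since NumPy broadcasting handles scalars and arrays identically.

```python
import numpy as np

def monthly_bias_correction(etr, month, bias):
    """Apply a bias correction to an ETr array.

    ``bias`` maps month (1-12) to either a scalar (current uniform
    behavior) or a raster-shaped ndarray (proposed behavior).
    """
    # np.asarray handles both the scalar and raster cases via broadcasting
    return etr * np.asarray(bias[month])

# Uniform monthly value (existing behavior)
etr = np.ones((2, 3))
uniform = {m: 1.1 for m in range(1, 13)}
corrected_uniform = monthly_bias_correction(etr, 7, uniform)

# Per-cell monthly raster (proposed behavior)
rasters = {m: np.full((2, 3), 0.9 + 0.01 * m) for m in range(1, 13)}
corrected_raster = monthly_bias_correction(etr, 7, rasters)
```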
There is currently no way in the workflow to set non-default hot/cold Kc values for calibration. This was originally done since the Monte Carlo script was adjusting the Kc values internally for each image.
Per dgketchum/Landsat578#28, we will likely need to update the WRS2 descending shapefile url in tools/download/download_footprints.py from:
https://landsat.usgs.gov/sites/default/files/documents/wrs2_descending.zip
to:
https://landsat.usgs.gov/sites/default/files/documents/WRS2_descending.zip
We will also need to update all the INIs or modify the download script so that the unzipped shapefile is renamed to match the old (all lowercase) style.
Computing at-surface reflectance is the most memory intensive operation in model 1 and would be a good place to try and catch memory errors and suggest to the user that they try a smaller block size.
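A minimal sketch of the suggested guard, assuming the reflectance calculation is wrapped in a function (the function below is a hypothetical stand-in, not the actual Model 1 code):

```python
import logging
import numpy as np

def at_surface_reflectance_block(toa_block):
    """Hypothetical stand-in for the Model 1 at-surface reflectance math."""
    return toa_block * 1.0  # placeholder arithmetic

def safe_reflectance(toa_block):
    """Catch MemoryError and point the user at the block size setting."""
    try:
        return at_surface_reflectance_block(toa_block)
    except MemoryError:
        logging.error(
            'MemoryError while computing at-surface reflectance; '
            'try re-running with a smaller block size in the INI file')
        raise
```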
When attempting to download the 2006 NLCD, the script produces an HTTPError: 503 and the message "NLCD raster already extracted", without downloading or extracting the 2006 NLCD data.
The function refet.calcs.saturation_vapor_pressure_func appears in cimis_daily_refet.py, but this function is not defined in the latest refet version I see. I think it needs to be changed to refet.calcs._sat_vapor_pressure.
For some reason, since I pulled the latest changes (24 July git pull), the local scripts (e.g., metric_model1.py) are raising ImportError exceptions when attempting to import python_common. I see that there are no __init__.py files in the support folder and wonder if this has something to do with it, or if the import needs to be
from support.python_common import x, y, z
Can you reproduce this error on a windows machine with the current version of master?
The goal is to be able to use one calibration for multiple images in the same path (but different rows).
The gridmet_daily_refet.py and gridmet_daily_ppt.py scripts are working fine, but I'm still getting nodata grids when running gridmet_daily_temp.py. Are there plans to update/commit this code?
It would probably be good to add a downloader script that could access the ESPA API (or use some other python ESPA module).
I would like to see an option that allows for the creation of ETrF vs NDVI scatter plots.
PyMETRIC is currently using a custom ID to uniquely identify each Landsat scene. This ID has the format LXSS_PPPRRR_YYYYMMDD (i.e. LT05_043030_20001014) and is a shortened version of the full Landsat Collection 1 Product ID. The Landsat578 tool that downloads the Landsat images also has to deal with converting the custom ID to the old style scene ID in order to download the images.
Instead of dealing with three different image ID formats, I think it would make more sense to only use the Landsat Product ID throughout the code. There is a branch landsat-product-id where I have been making the necessary changes to support this switch.
I have already switched the cloud-free-scene-counts tool to using the Landsat Product ID and additional details can be found in the closed issue 5.
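For reference, deriving the shortened custom ID from a Collection 1 Product ID is a simple field selection. This is a sketch of the mapping described above; the landsat-product-id branch may implement it differently:

```python
def product_id_to_custom_id(product_id):
    """Convert a Landsat Collection 1 Product ID
    (LXSS_LLLL_PPPRRR_YYYYMMDD_yyyymmdd_CC_TX) to the shortened
    pyMETRIC custom ID (LXSS_PPPRRR_YYYYMMDD)."""
    parts = product_id.split('_')
    # Keep the sensor, the path/row, and the acquisition date fields
    return '_'.join([parts[0], parts[2], parts[3]])

custom_id = product_id_to_custom_id('LT05_L1TP_043030_20001014_20160922_01_T1')
# custom_id == 'LT05_043030_20001014'
```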
The documentation says "This script can only download the 2006 or 2011 NLCD images" but I requested 2006 and still got 2011.
Looking at the code, it appears line 114 does not pass the year from the args to the main function, so the year is always 2011 as defined on line 16.
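A schematic of the likely fix (the function bodies are guesses at the script's structure; the real script downloads data rather than returning a string):

```python
import argparse

# Default year, mirroring the module-level constant around line 16
DEFAULT_YEAR = 2011

def main(year=DEFAULT_YEAR):
    """Stand-in for the downloader; returns the dataset it would fetch."""
    return 'NLCD {}'.format(year)

def arg_parse():
    parser = argparse.ArgumentParser(description='Download NLCD')
    parser.add_argument('--year', default=DEFAULT_YEAR, type=int,
                        choices=[2006, 2011])
    return parser

if __name__ == '__main__':
    args = arg_parse().parse_args()
    # The bug around line 114: calling main() with no arguments always
    # uses the 2011 default. The fix is to forward the parsed year:
    main(year=args.year)
```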
In the workshop, the RefET file path was invalid, which caused model 2 to fail, but the tool continues on to the next image. The interpolator still runs even though there aren't any ETrF images and makes output rasters with all nodata.
This is definitely needed in landsat_prep_path_row.py (since it is the first one a user runs) but should probably be put in some of the other scripts also.
File "D:\pyMETRIC\pymetric\code\support\et_numpy.py", line 1421
SyntaxError: Non-ASCII character '\xe2' in file D:\pyMETRIC\pymetric\code\support\et_numpy.py on line 1422, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
This raises a SyntaxError for each bibliographic reference in the docstrings. Not sure if there is a way to ensure proper encoding when these are inserted.
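On Python 2 the fix is a PEP 263 encoding declaration at the top of et_numpy.py; '\xe2' is typically the first byte of a UTF-8 dash or curly quote pasted into a reference. A small helper (hypothetical, not part of the repo) can locate the offending lines:

```python
# -*- coding: utf-8 -*-
# Under Python 2, the declaration above must appear on line 1 or 2 of
# the module (PEP 263) for the interpreter to accept non-ASCII bytes.

def find_non_ascii_lines(path):
    """Return the 1-based line numbers that contain non-ASCII bytes."""
    hits = []
    with open(path, 'rb') as f:
        for num, line in enumerate(f, 1):
            if any(byte > 127 for byte in bytearray(line)):
                hits.append(num)
    return hits
```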
Interpolate
For a brand new user, the low_etrf_limit and high_etrf_limit should not be 0-1.5, since this will tend to hide some of the problems in the initial calibrations.
The other one is that filling the ETrF from NDVI should probably be something the user explicitly chooses to do, not a default.
Pixel Rating
save_rating_rasters_flag should probably be True, so that the user can get a better sense of where the hot and cold could be.
LC08_038031_20150701 - Cloud shadows are being buffered but not clouds.
LC08_038031_20150802 - Clouds are not masked
This is a simple fix but like the others it would be good to double check all the soil paths.
This script doesn't seem to be working. I think the URL is producing a 404 error.
(1) I got messages "ERROR EXTRACTING FILE" and "ERROR WRITING FILE" and then execution failed on line 171 when attempting to convert from ascii to raster. It appears to me the gzip.open(gz_path, 'rb') command wasn't working, as the gz_path file was not compressed but already an ascii file.
(2) After working around issue 1 above, I started getting messages "Unused file/variable, skipping" even though upon inspection the skipped files did match an item in data_list.
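For issue (1), a robust workaround is to sniff the gzip magic bytes before choosing how to open the file, since some staged files keep a .gz extension but are served uncompressed. A sketch (the helper name is mine, not from the repo):

```python
import gzip

GZIP_MAGIC = b'\x1f\x8b'

def open_maybe_gzip(path):
    """Open ``path`` with gzip only if it really is gzip-compressed.

    Some files are served uncompressed despite the .gz extension,
    which breaks an unconditional gzip.open(gz_path, 'rb') call.
    """
    with open(path, 'rb') as f:
        magic = f.read(2)
    if magic == GZIP_MAGIC:
        return gzip.open(path, 'rb')
    return open(path, 'rb')
```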
When the GDAL_DATA environment variable is not set, the scripts will return "OSRCoordinateTransformationShadow" errors (see issue #9 for example). It would help avoid a lot of debugging if some or all of the scripts checked for the GDAL_DATA environment variable.
This call could be made from the main scripts (e.g. landsat_prep_path_row.py) or could be made in one of the imported modules. It might make sense to have this be a drigo function.
A slightly different approach for handling this would be to move the .transform() and osr.ImportFromEPSG() calls into try/except blocks. The code is not well setup for this though and the exceptions may not get returned all the way out to the main scripts.
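A minimal sketch of the environment-variable check (function name and message wording are suggestions, not existing code):

```python
import os
import logging

def check_gdal_data():
    """Fail fast with a clear message if GDAL_DATA is not set.

    Without this, osr coordinate transformation calls fail later with
    opaque "OSRCoordinateTransformationShadow" errors.
    """
    gdal_data = os.environ.get('GDAL_DATA')
    if not gdal_data or not os.path.isdir(gdal_data):
        logging.error(
            'The GDAL_DATA environment variable is not set (or points to '
            'a missing folder). Set it to the GDAL data directory, e.g.\n'
            '  set GDAL_DATA=C:\\Anaconda3\\envs\\pymetric\\Library\\share\\gdal')
        return False
    return True
```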
This should be an easy fix but we should check some of the other docs also.
Each script should have documentation describing specifically what the code does.
I'm having an issue where the code\local files (landsat_interpolate.py, etc.) can't import the code\support files (et_common.py, python_common.py, etc.). I wonder if a relative import needs to be used in those calls. Something like:
from ..support import et_common
or
from ..support.python_common import open_ini, read_param, call_mp
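Note that relative imports with leading dots only work when the scripts run as modules inside a package (which requires __init__.py files). An alternative that keeps the scripts directly runnable is to put the parent "code" folder on sys.path before importing. A sketch (the helper is hypothetical):

```python
import os
import sys

def add_code_dir_to_path(script_path):
    """Put the parent 'code' folder on sys.path so local scripts can do
    'from support.python_common import open_ini, read_param, call_mp'
    without any package/__init__.py changes."""
    code_dir = os.path.dirname(os.path.dirname(os.path.abspath(script_path)))
    if code_dir not in sys.path:
        sys.path.insert(0, code_dir)
    return code_dir

# In landsat_interpolate.py this would be called as:
#     add_code_dir_to_path(__file__)
```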
When running summary_histograms.py -h, I get the following warning:
C:\Anaconda3\envs\pymetric\lib\site-packages\mpl_toolkits\axes_grid\__init__.py:12: MatplotlibDeprecationWarning:
The mpl_toolkits.axes_grid module was deprecated in Matplotlib 2.1 and will be removed two minor releases later. Use mpl_toolkits.axes_grid1 and mpl_toolkits.axisartist, which provide the same functionality instead.
  obj_type='module')
usage: summary_histograms.py [-h] [--file FILE PATH] [--skip FID SKIPLIST]
                             [-bmin HISTOGRAM MIN] [-bmax HISTOGRAM MAX]
                             [-bsize HISTOGRAM BIN SIZE] [--start YYYY-MM-DD]
                             [--end YYYY-MM-DD] [--plot {all,acreage,field}]
                             [--output FOLDER] [-d]

Create Histograms

optional arguments:
  -h, --help            show this help message and exit
  --file FILE PATH      CSV File Path (default: None)
  --skip FID SKIPLIST   Comma separated list or range of FIDs to skip
                        (default: [])
  -bmin HISTOGRAM MIN   Histogram Minimum (integer) (default: 0)
  -bmax HISTOGRAM MAX   Histogram Maximum (integer) (default: 5)
  -bsize HISTOGRAM BIN SIZE
                        Histogram Bin Size (integer) (default: 0.25)
  --start YYYY-MM-DD    Start date (default: None)
  --end YYYY-MM-DD      End date (default: None)
  --plot {all,acreage,field}
                        Output Plots: all, acreage, or fields (default: all)
  --output FOLDER       Output folder (default:
                        C:\pymetric\summary_histograms)
  -d, --debug           Debug level logging (default: 20)
Some of the tiles have a slightly different naming scheme. These may be updated tiles.
n43w118: ftp://rockyftp.cr.usgs.gov/vdelivery/Datasets/Staged/Elevation/1/IMG/USGS_NED_1_n43w118_IMG.zip
n43w117: ftp://rockyftp.cr.usgs.gov/vdelivery/Datasets/Staged/Elevation/1/IMG/n43w117.zip
The NED tile downloader doesn't check which files are on the server and just blindly attempts to download the file. One approach would be to try an alternate naming format if the first one fails. The other approach would be to retrieve the full file list from the server and then identify the tiles by searching for the path/row.
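The first approach could look roughly like this (helper names are mine; the real downloader would pass its own fetch function):

```python
def ned_tile_urls(tile):
    """Candidate URLs for a 1-arc-second NED tile (e.g. 'n43w117').

    Newer tiles use the USGS_NED_1_<tile>_IMG.zip name; older ones
    use the plain <tile>.zip name.
    """
    base = ('ftp://rockyftp.cr.usgs.gov/vdelivery/Datasets/Staged/'
            'Elevation/1/IMG/')
    return [base + 'USGS_NED_1_{}_IMG.zip'.format(tile),
            base + '{}.zip'.format(tile)]

def download_first_available(tile, fetch):
    """Try each candidate URL in order; ``fetch`` raises on failure."""
    last_error = None
    for url in ned_tile_urls(tile):
        try:
            return fetch(url)
        except Exception as e:
            last_error = e
    raise last_error
```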
In landsat_prep_path_row.py the path for the wrs2_tile_utm_zones.json is hardcoded to the footprint workspace. The problem with this is that this JSON file is provided as part of the repository, but the footprints can be downloaded to a different folder (i.e. outside of pymetric). This path could be set as part of the project INI, the path could be set relative to the script location, or we might just need to add some text to the docs making it clear that the footprints should always be downloaded to the pymetric\landsat\footprints folder.
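The script-relative option is a one-liner. This sketch assumes the JSON ships alongside the script; the relative path would need to be adjusted to wherever the repository actually keeps the file:

```python
import os

def wrs2_utm_zones_path(script_file):
    """Resolve wrs2_tile_utm_zones.json relative to the calling script
    instead of the (possibly relocated) footprint workspace."""
    script_dir = os.path.dirname(os.path.abspath(script_file))
    return os.path.join(script_dir, 'wrs2_tile_utm_zones.json')

# In landsat_prep_path_row.py: wrs2_utm_zones_path(__file__)
```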
This occurred during calibration at the conference: the cold pixel was placed right at the edge of, but was still on, a cloud.
It looks like there is a small change to the GRIDMET elevation netcdf file that breaks the gridmet_ancillary.py script. The array shape was previously 3d (1, 585, 1386), but now the shape is 2d (585, 1386). I'm not sure when this change was made, but as of 17-Sep-2018, it looks like the file was last modified on 11-Sep-2018 (https://climate.northwestknowledge.net/METDATA/data/). There is a fix for this in the develop branch.
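An illustration of a shape-agnostic read (the actual fix in the develop branch may differ); np.squeeze handles both the old 3-D and new 2-D layouts without special-casing:

```python
import numpy as np

def read_gridmet_elevation(elev_array):
    """Normalize the GRIDMET elevation array to 2-D.

    The netCDF variable was previously (1, 585, 1386) and is now
    (585, 1386); squeezing the length-1 axis covers both cases.
    """
    elev = np.squeeze(np.asarray(elev_array))
    if elev.ndim != 2:
        raise ValueError('unexpected elevation shape: {}'.format(elev.shape))
    return elev
```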
Hi, just curious if there has been a change in the naming scheme of the Landsat images that isn't allowing landsat_prep_path_row.py to unpack the tar.gz files. It looks like landsat_prep_path_row.py is looking for a naming scheme that follows:
'^(LT04|LT05|LE07|LC08)(\d{3})(\d{3})(\d{4})(\d{2})(\d{2})'
However I am getting file names that look like:
LT50370321988213XXX03.tar.gz
Am I missing a step? Thanks.
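That file name uses a pre-collection scene ID (sensor, path, row, year, day-of-year packed together), not the Collection 1 style the regex above expects. A rough sketch distinguishing the two formats (the Collection pattern is copied from the script; the pre-collection pattern is my reading of the old layout):

```python
import re

# Collection 1 style expected by landsat_prep_path_row.py
collection_re = re.compile(
    r'^(LT04|LT05|LE07|LC08)(\d{3})(\d{3})(\d{4})(\d{2})(\d{2})')
# Pre-collection scene IDs: LXS PPP RRR YYYY DDD (e.g. LT5 037 032 1988 213)
precollection_re = re.compile(
    r'^(LT4|LT5|LE7|LC8)(\d{3})(\d{3})(\d{4})(\d{3})')

is_old_style = bool(precollection_re.match('LT50370321988213XXX03'))
```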
Add Bias Correction/Scaler option to gridmet_daily_refet.py
- Create both single value and monthly (12 values) options.
Currently, only the Monte Carlo script supports using the daily soil water balance Ke value when generating daily ETrF images. It seems like it would be helpful if the user could use the Ke value automatically for the Kc hot, instead of the default Kc hot from the INI file.
This might end up being related to issue #27 since there will likely need to be logic to control whether the code uses the default Kc hot (if one is not set), the Ke, or the user defined Kc hot.