
cesm_postprocessing's Introduction

CESM_postprocessing

Project repository for the CESM Python-based post-processing code, with documentation via the wiki and issue tracking.

The input data sets required by this code are separate from this repository. Instructions for accessing these data sets will be coming soon. For NCAR users, the data sets are already loaded into a central location on glade and do not need to be downloaded.

The NCAR cheyenne and DAV quick start guide, along with other documentation, is available at:

http://github.com/NCAR/CESM_postprocessing/wiki/

An NCAR/CGD mailman listserv is available by user subscription to convey information to NCAR machine users regarding the build status of the common postprocessing python virtualenvs on cheyenne and geyser.

To subscribe:

http://mailman.cgd.ucar.edu/mailman/listinfo/cesm_postprocessing

To post a message to all the list members, send email to [email protected].

IMPORTANT NOTE: The purpose of the mailman listserv is to inform NCAR machine users about the build status of the common virtualenvs on cheyenne and geyser.

All bug reports and enhancement requests should be posted in this GitHub repo's issues, not via the mailman listserv.

cesm_postprocessing's People

Contributors

alperaltuntas, bandre-ucar, bertinia, dabail10, duvivier, jedwards4b, lenae101, mnlevy1981, olyson, sherimickelson


cesm_postprocessing's Issues

Update / Replace the ocn diag timeseries awk script budget calculations and reporting

Currently the ocn diag timeseries scripts call awk scripts to look at the coupler log outputs to extract freshwater and heat budget information. This is fragile and breaks when the coupler log format changes. The information necessary for reporting budget information should be extracted directly from the coupler history output. Will work with @jtruesdal to get this functionality included in the postprocessing suite.

DIAGOBSROOT consistency across all components

Currently the DIAGOBSROOT xml setting is read for each component on each supported machine
from $POSTPROCESS_PATH/Machines/machine_postprocess.xml by the create_postprocess script.

Both the land and ice diags have been distributing their required diag input data as part of the
diag package, but this data also resides in shared locations on the supported machines. Should
we point only to shared locations for input data rather than distribute input data as part of the postprocessing package? This would require setting up these shared locations prior to running
the diags on each supported machine, but it might reduce problems associated with file contents
diverging.

The atm and ocn currently only point to shared locations because their data sets are so large.

Yrs averaged in land diagnostics plot titles are trends, not climos

The climatology years shown in the land diagnostics plot titles, e.g., set_2, are actually the trend years, not the climatology years. The plot titles take their years from the global attribute "yrs_averaged" in the climatology netCDF files, which contains the trend years rather than the climo years. I imagine this is being set somewhere in the python averager scripts.

Bug in AMWG diags: regridding

From @cecilehannay on April 21, 2017 20:46

Hi Cecile,

I don't know if you've been following this issue we've been having in ACME
on Confluence. When running the AMWG diagnostics (generating plots),
we've first been doing offline interpolation to an FV-style grid with NCO
and then running the diagnostics. However, for some plots the
diagnostics package will choke. I think (but am not 100% sure) that the
issue is that the diagnostics package mis-detects the grid as a
Gaussian grid and then tries to generate Gaussian weights. The Gaussian
weight NCL routine will fail if nlat is odd (yielding no plot). If nlat
is even, it will succeed, but then we have the problem of using
Gaussian weights instead of equi-angle weights, which introduces
a small error.

I think the problem is that the diagnostics package relies on the
presence of the CAM-FV "slat" array to determine whether it is an FV grid or
a Gaussian grid, and this array would not be present in interpolated CAM-SE
data. Have you thought about this issue? Any plans to make the
detection algorithm more robust? Algorithms I've used in the past, none
of which are perfect, are: (1) FV grids have a point on the pole and
Gaussian grids do not, or (2) check the spacing between different
latitudes (non-equal spacing suggests a Gaussian grid).

Mark

Copied from original issue: CESM-Development/cime#488
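The two detection heuristics suggested in the message above can be sketched in a few lines. This is illustrative only, not the diagnostics package's actual detection code, and assumes a 1-D latitude array in degrees:

```python
import numpy as np

def looks_like_fv_grid(lat):
    """Heuristic grid detection along the lines suggested above
    (not the package's actual algorithm): an FV grid has points on
    both poles and equally spaced latitudes; a Gaussian grid has
    neither.  `lat` is a 1-D array of latitudes in degrees."""
    lat = np.asarray(lat, dtype=float)
    # Heuristic 1: FV grids include both poles.
    has_pole_points = np.isclose(lat[0], -90.0) and np.isclose(lat[-1], 90.0)
    # Heuristic 2: FV latitudes are equally spaced; Gaussian ones are not.
    spacing = np.diff(lat)
    equally_spaced = np.allclose(spacing, spacing[0])
    return bool(has_pole_points and equally_spaced)
```

Either test alone would classify a 1-degree FV grid (181 latitudes including the poles) as non-Gaussian, regardless of whether nlat is odd or even.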

cesm_timeseries_generator.py needs to be able to "chunk" the data

Currently the cesm_timeseries_generator.py tool operates on every history time-slice file in
a given directory, regardless of the number of files. The script needs to be updated to use the <tseries_filecat_years> element in the timeseries.xml specification for each history stream and
to create variable timeseries files that span that number of years.
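A minimal sketch of the requested chunking, assuming monthly history files named `*.YYYY-MM.nc` (the filename convention and helper function are illustrative, not the tool's actual code):

```python
def chunk_history_files(files, years_per_chunk):
    """Group history time-slice files into chunks spanning
    years_per_chunk years, mirroring the proposed use of the
    <tseries_filecat_years> element (sketch only)."""
    def year_of(path):
        # e.g. 'case.cam.h0.0005-01.nc' -> 5
        return int(path.rsplit('.', 2)[1].split('-')[0])

    chunks = {}
    for f in sorted(files):
        key = (year_of(f) - 1) // years_per_chunk
        chunks.setdefault(key, []).append(f)
    return [chunks[k] for k in sorted(chunks)]
```

Each returned chunk can then be handed to the timeseries writer as one output file's worth of input.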

OCN fw and heat budget plots out of order

The python diagnostics are passing an unordered list of log files to process_cpl7b_logfiles_fw.awk and process_cpl7b_logfiles_heat.awk. The resulting text files created for the budgets contain out-of-order years, and when these text files are processed the ncl scripts plot the years in the order given, so the resulting plots are jumbled. Can the python scripts be modified to pass an alphabetically sorted list of files to the awk routines? I believe this was the default in the past.
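The requested fix amounts to a one-line change on the Python side. A sketch (the directory layout, glob pattern, and awk invocation shown in the comment are illustrative, not the package's actual code):

```python
import glob

def budget_logs(log_dir):
    """Collect coupler log paths in alphabetical order so the awk
    budget scripts see the years in sequence (the fix requested
    above).  The 'cpl.log.*' pattern is illustrative."""
    return sorted(glob.glob(log_dir + '/cpl.log.*'))

# A hypothetical invocation of the existing awk script would then be:
#   subprocess.call(['awk', '-f', 'process_cpl7b_logfiles_fw.awk'] + budget_logs('logs'))
```

Alphabetical order is sufficient here because CESM coupler log names embed zero-padded timestamps.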

env_diags_ice.xml DIAGOBSROOT

Looks like some scripts expect this to be /glade/p/cesm/pcwg while others expect it to be /glade/p/cesm/pcwg/ice/data. These need to be consistent. I would prefer the former.

Dave

bug in AMWG diag: date issues

From @cecilehannay on April 21, 2017 20:57

Dear Cecile

I just wanted to inform you of a date/time issue I noticed with the AMWG diag package. The 12 monthly climo files produced by the package have incorrect "time" values: the time value of the January file corresponds to mid-year, while the time value of the June file corresponds to the end of the year. This could cause a lot of confusion for netCDF readers like ncview.

For example:

ncdump -v time F2000.e106.f09f09.raijin.PD_ctrl_01_climo.nc

leads to the following output
...
time = 14448.5 ;
}

14448.5 is halfway through the year and so ncview will display July for this month, even though it's January. I hope that makes sense.

Kind regards

Nicholas.

Copied from original issue: CESM-Development/cime#489

finish coding and testing pp_env_get

This utility will be created in the postprocessing caseroot and can be used like the CIME xmlquery
tool to get XML environment variables from any of the env_*.xml files, except env_timeseries.xml,
which has a different format.

missing ice diag set 4 plots

I'm not getting all of my ice diagnostic plots for set 4 and am getting the following errors in the log. This used to work for me. Not sure if this is related to issue #11.

Here is a web page showing the problem:
http://webext.cgd.ucar.edu//B1850/ocn-dev/b.e15.B1850_WW3.f09_g16.lang_redi_2hr.001/ice/proc/diag/b.e15.B1850_WW3.f09_g16.lang_redi_2hr.001/b.e15.B1850_WW3.f09_g16.lang_redi_2hr.001-b.e15.B1850.f09_g16.pi_control.36/yrs36-55/index.html

NOT FOUND: /glade/scratch/jet/archive/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001/ice/proc/diag/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001//g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001-g.e15.GIAF.T62_gx1v6.e15_ctrl.002//yrs146-165//contour/diff_con_jfm_albsni_cice_NH.png
NOT FOUND: /glade/scratch/jet/archive/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001/ice/proc/diag/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001//g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001-g.e15.GIAF.T62_gx1v6.e15_ctrl.002//yrs146-165//contour/diff_con_amj_albsni_cice_NH.png
NOT FOUND: /glade/scratch/jet/archive/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001/ice/proc/diag/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001//g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001-g.e15.GIAF.T62_gx1v6.e15_ctrl.002//yrs146-165//contour/diff_con_jas_albsni_cice_NH.png
NOT FOUND: /glade/scratch/jet/archive/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001/ice/proc/diag/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001//g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001-g.e15.GIAF.T62_gx1v6.e15_ctrl.002//yrs146-165//contour/diff_con_ond_albsni_cice_NH.png
NOT FOUND: /glade/scratch/jet/archive/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001/ice/proc/diag/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001//g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001-g.e15.GIAF.T62_gx1v6.e15_ctrl.002//yrs146-165//contour/diff_con_ann_albsni_cice_NH.png
NOT FOUND: /glade/scratch/jet/archive/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001/ice/proc/diag/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001//g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001-g.e15.GIAF.T62_gx1v6.e15_ctrl.002//yrs146-165//contour/diff_con_jfm_albsni_cice_SH.png
NOT FOUND: /glade/scratch/jet/archive/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001/ice/proc/diag/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001//g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001-g.e15.GIAF.T62_gx1v6.e15_ctrl.002//yrs146-165//contour/diff_con_amj_albsni_cice_SH.png
NOT FOUND: /glade/scratch/jet/archive/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001/ice/proc/diag/g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001//g.e15.GIAF_WAV.T62_gx1v6_rx1_ww3a.lang_redi_2hr.001-g.e15.GIAF.T62_gx1v6.e15_ctrl.002//yrs146-165//contour/diff_con_jas_albsni_cice_SH.png

Bug in AMWG diags: Modification of the AMWG scripts related to Taylor diagrams

From @cecilehannay on April 21, 2017 20:42

A script for Taylor diagrams in the AMWG diagnostics package uses pressure data for
vertical interpolation of 300-mb zonal wind and for vertically weighted averages of relative humidity
and temperature. In the current script, "lev" (lev = po*(hyam+hybm)) is used for pressure, but lev
is not the actual pressure at each grid point.

Thus, pressure should be computed using the NCL function pres_hybrid_ccm, which
calculates pressure at hybrid levels [pres(k,i,j) = po*hyam(k) + ps(i,j)*hybm(k)]. Also, a variable at a
specific pressure level should be calculated with a vertical interpolation function (vinth2p) and
should not be the value at the closest pressure level.
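As a cross-check, the quantity pres_hybrid_ccm returns is easy to reproduce outside NCL (a numpy sketch; array names follow the CAM conventions used above):

```python
import numpy as np

def pres_hybrid(p0, hyam, hybm, ps):
    """Pressure at hybrid mid-levels, the quantity pres_hybrid_ccm
    computes: p(k,i,j) = p0*hyam[k] + ps[i,j]*hybm[k].  By contrast,
    the buggy 'lev' coordinate, p0*(hyam+hybm), ignores the surface
    pressure field entirely."""
    return p0 * hyam[:, None, None] + ps[None, :, :] * hybm[:, None, None]
```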

20160826-AMWG_Taylor_debug.pdf

Copied from original issue: CESM-Development/cime#486

ocn model_vs_control can't handle multiple comparisons

The ocean diagnostics for model vs control are not general enough to handle more than one comparison. Running successive model_vs_control experiments against different controls uses the same generic directory, with later comparisons overwriting earlier ones. The other components use directory names composed of the actual model and control case names, which are unique. Implementing something like this would allow several comparisons against different controls.

update the djf *_avg_generator.py scripts to more closely check the years

If the requested diags start year is the first year for which data is available, the DJF climo
calculation fails ungracefully because it can't find the previous December file. Need to
update the diagUtilsLib.py and *_avg_generator.py routines to print a warning message and modify the avg_list input to the pyAverager specifier so it uses inclusive years in the DJF spec.
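A sketch of the guard described above (the function name, message, and return convention are illustrative, not the actual diagUtilsLib.py code):

```python
def check_djf_start(start_year, available_years):
    """DJF climatologies need December of the year before start_year.
    Warn and fall back to inclusive DJF years when that previous
    December cannot exist (sketch only)."""
    if (start_year - 1) not in available_years:
        print('WARNING: no December file for year %d; '
              'using inclusive DJF years instead.' % (start_year - 1))
        return start_year       # inclusive: first DJF uses Dec of start_year
    return start_year - 1       # normal: previous December is available
```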

merge the land-ice diags into the github repo master

Jan Lenaerts has added new land-ice diagnostics to the CESM_postprocessing suite
in this sandbox:
/glade/u/home/lenaerts/work/CESM_postprocessing

The additions / changes to the lnd_diag and python wrapper scripts need to be added.

In addition, the RACMO2 files in /glade/p/work/lenaerts/diag/lnd_diag_data/obs_data
need to be added to the common lnd_diag_data location on glade as well as to
the inputdata repo.

update the ice diags generation to allow for parallelism

Currently the ice diags require that model_vs_obs be run first, followed by model_vs_model. These two diag types should be able to split the MPI communicator and run in parallel. The problem is that the same plot classes are used for both model_vs_obs and model_vs_model; the solution is to add additional plot classes to accommodate the two diag types.

cloned postprocess dirs

After cloning (create_clone) a new case where the clone directory already has a "postprocess" dir, can we get the new "postprocess" directory to work? Currently, the under-the-covers content still points to the original directory. Thanks!

create a common location for the virtualenv on yellowstone

Need to install the postprocessing virtualenv in a common location on yellowstone so users only need to cd to that dir and run create_postprocess for their particular case. This will also allow updates to the postprocessing code without requiring early adopters to update and run create_python_env.

postprocessing tasks need to be gzip aware

For older runs, some of the history and log files are only available in gzip format. The postprocessing tools need to be able to gunzip files in parallel, both via a library call and via a separate utility in the postprocessing suite of tools.
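A minimal sketch of what such a library call could look like, using only the standard library (names are illustrative, not the suite's actual API):

```python
import gzip
import shutil
from multiprocessing import Pool

def gunzip_one(path):
    """Decompress a .gz file next to the original and return the
    decompressed path (sketch of the proposed library call)."""
    out = path[:-3]  # strip the '.gz' suffix
    with gzip.open(path, 'rb') as src, open(out, 'wb') as dst:
        shutil.copyfileobj(src, dst)
    return out

def gunzip_parallel(paths, nprocs=4):
    """Decompress many files concurrently with a process pool."""
    with Pool(nprocs) as pool:
        return pool.map(gunzip_one, paths)
```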

problem with pyAverager and python dependencies on cheyenne

There's a problem with the mpi4py intel build on cheyenne and the pyAverager.
Currently, the [comp]_averages submission script includes:

module load impi/5.1.3.210
module load mpi4py/2.0.0-impi

If we use a gnu build in Sheri's anaconda install for these libraries then the pyAverager
works as expected.

So the workaround is to comment out these module loads and add these lines after all the modules are loaded:

export PATH=/glade/p/work/mickelso/anaconda/anaconda/bin/:$PATH
export PYTHONPATH=/glade/p/work/mickelso/anaconda/anaconda/lib/:/glade/p/work/mickelso/anaconda/anaconda/lib/python2.7/site-packages/:$PYTHONPATH

You're basically bypassing impi and the system version of mpi4py and using @sherimickelson's version instead. We're wondering whether the problem is with that version of mpi4py corrupting things. @sherimickelson is working with CISL.

AMWG diags: New release of CERES EBAF TOA Edition4.0

From @cecilehannay on April 21, 2017 21:2

New release of CERES EBAF TOA Edition4.0

Dear EBAF-TOA User,

The CERES team announces the release of Edition 4.0 of the Energy Balanced and Filled (EBAF) Top-of-Atmosphere (TOA) data product. EBAF-TOA Ed4.0 leverages the many algorithm improvements made in the Edition 4 suite of CERES level 1-3 data products. EBAF Ed4.0 also includes a limited set of MODIS imager-based cloud parameters alongside the EBAF-TOA fluxes.

This initial release covers the period March 2000 – June 2015. Additional months (through September 2016) will become available later this month.

For users interested in EBAF surface radiative fluxes, we anticipate releasing EBAF-Surface Ed4.0 in May 2017.

For more information, please refer to the CERES EBAF Ed4.0 Data Quality Summary (https://ceres.larc.nasa.gov/documents/DQ_summaries/CERES_EBAF_Ed4.0_DQS.pdf).

CERES EBAF-TOA Ed4.0 data are available from the CERES Ordering Tool (https://ceres.larc.nasa.gov/order_data.php), which also provides subsetting and visualization capabilities.

Copied from original issue: CESM-Development/cime#491

The ocn diags yz_plot.ncl needs to be updated

This is a common NCL routine that doesn't allow for enough user control of placement
of plots, titles, or scales. It needs to be simplified and updated with newer NCL calls
that support all the customizable changes that have been requested.

Finish getting hi-res ocean diags working

Using a case from Justin Small, we were able to get 10 years' worth of hi-res ocean data through the
pyAverager. The diags are still dying because of a missing hi-res phc RHO observation file, and maybe some other files as well.

The incomplete model vs. obs are included here:
http://webext.cgd.ucar.edu/BRCP/BRCP85C5CN_ne120_t12_pop62.c13b17.asdphys.001/index.20161004-115725.html

For the timeseries, created the horizontal zonal average files in: /glade/p/cesm/omwg/timeseries_obs_tx0.1v2_62lev

Steps that are needed to finish this:

  1. make sure all the hi-res obs files with 62 levels are in the omwg/obs_data
  2. rename the timeseries zonal average to include the res and levels in the file name.
  3. modify the call to the pyAverager in ocn_avg_generator.py for timeseries so the suffix matches
    the horizontal za file names from 2.
  4. rerun the diags all the way through for model vs. obs and timeseries
  5. add an ocn hi-res examples dir
  6. make sure the input data repo has all the new files included.

Cylc workflow integration

Successfully tested an end-to-end workflow using

  • 5 submissions of CESM concurrently with STA submission
  • all component diags from history slice data
  • timeseries generation concurrent with diags

Need to include an email address in env_postprocess.xml.

Need to update documentation for using and setting up cylc.

Bug in AMWG diags: set 4 Stationary Heat Flux

From @cecilehannay on April 21, 2017 20:24

Grad student Hansi Singh and I discovered an error in the AMWG diagnostics. The vertical integral of the zonal mean eddy heat fluxes (set 4 "Stationary Heat Flux" and "Transient Heat Flux") in functions_eddyflux.ncl is off by a factor of ~4. This happened because the computation in functions_eddyflux.ncl involves a multiplication by "coeff", which is supposed to be cp/g. However, coeff is also defined in functions_zonal.ncl, where it is 2*pi*re (re = earth radius). I put some print statements in the scripts to prove that functions_eddyflux.ncl is actually getting 2*pi*re when it should be getting cp/g.

Now these two coeff values differ by 4e5, so you would think this would be noticeable. But there is a compensating (and wrong) division by PS in the vertical integrals in functions_eddyflux.ncl, which leaves only a factor of ~4 error. The meridional eddy flux terms for moisture and momentum are vertical averages, not vertical integrals, so they should be divided by PS; they are not multiplied by cp/g, so they are fine. Only the heat flux functions are wrong, because they should be multiplied by cp/g and, being integrals, should not be divided by PS.

It is very easy to fix. It requires two steps:

  1. change coeff to another variable name in either one of these functions
  2. do not divide by PS in function get_VBSTAR_TBSTAR_2D and function get_VPTP_BAR_2D

This is probably pretty important to fix. Thanks! Cecilia

Copied from original issue: CESM-Development/cime#485

Latest tag postprocessing_20160609

I get the following error when trying to create a postprocess with the latest tag. Everything works fine with the postprocessing_20160523 tag.

./create_postprocess -backtrace -debug 3 -caseroot /glade/scratch/dbailey/b.e15.B1850.f09_g16.pi_control.all.62b
'NoneType' object has no attribute '__getitem__'
Traceback (most recent call last):
  File "./create_postprocess", line 605, in <module>
    status = main(options)
  File "./create_postprocess", line 499, in main
    envDict = initialize_main(envDict, options, standalone)
  File "./create_postprocess", line 456, in initialize_main
    if options.cesmtag[0]:
TypeError: 'NoneType' object has no attribute '__getitem__'

finish coding and testing the copy_html utility

This is a stand-alone tool that will exist in the postprocessing caseroot and will read the
env_postprocess.xml global web settings and diag options to copy html plots to an external
web server.

This extracts the step of copying html plots from the [comp]_diags_generator.py, which are run
in parallel, and creates a single utility that can be called from the workflow manager as needed.

finish coding and testing pp_env_set

This utility will be created in the postprocessing caseroot and can be used like the CIME xmlchange
tool to set XML environment variables in any of the env_*.xml files, except env_timeseries.xml,
which has a different format.

ocn diag timeseries zonal averages calculations

The zonal averages operate on the average files produced by the pyAverager. Need to investigate whether these calculations can be incorporated into python and the pyAverager, or whether the existing fortran-wrapped calls can be incorporated into the ocn_avg_generator.py script.

timeseries script

This script still lists caseroot as the "postprocess" directory when it should be one level up. Also, the code does not work with gzipped files but prints no message to that effect.

model vs model ice/lnd timeseries files in correct dirs

  1. model ice diags: the ice_vol_* (timeseries) file for the "Control" case (which is the main case, not the "Diff" case) needs to be copied from the main climo directory to the model_vs_model subdirectory
  2. same for land: the case 2 *ANN_ALL.nc file needs to be in the model_vs_model dir

integrate Chuck's WACCM diags and obs data sets into atm diags

Code location sandbox that needs to be merged into master:
/glade/p/work/bardeenc/postprocess

A test case with example XML settings:
/glade/p/work/bardeenc/cases/update2_pp

Chuck is working on updating the metadata for the obs data sets required by the new diag code.

Bug in AMWG diags: significance coder

From @cecilehannay on April 21, 2017 20:45

Hi there -

I'm trying to run the AMWG, but it's dying when I turn on significance.

CODE_BASE
-- /glade/p/cesm/amwg/amwg_diagnostics

Error:
-- fatal:Undefined identifier: (get_PRECC_VARIANCE) is undefined, can't continue

Problem:
so I started poking around and it looks like the version of plot_surfaces_cons.ncl in SVN is different from the one on glade, which has these extra lines of code:

if (vars(i) .eq. "PRECC") then
  A = get_PRECC (inptr1,outptr1)
  if (sig_plot .eq. "True") then
    A_var = get_PRECC_VARIANCE (meansptr1,varptr1)
  end if
end if

if (vars(i) .eq. "PRECL") then
  A = get_PRECL (inptr1,outptr1)
  if (sig_plot .eq. "True") then
    A_var = get_PRECL_VARIANCE (meansptr1,varptr1)
  end if
end if

The problem is that funcs_surf_variance.ncl doesn't have a matching get_PRECC_VARIANCE function call.

Solution:
I guess either funcs_surf_variance.ncl needs a function call, OR these lines need to be removed from plot_surfaces_cons.ncl.

Thanks! (p.s., This is the problem file:)

-- /glade/p/cesm/amwg/amwg_diagnostics/code/plot_surfaces_cons.ncl

Copied from original issue: CESM-Development/cime#487

lnd diag regridding does not work in parallel

Problem tracked down to the hardcoding of a temp file name in an ESMF library routine, which causes
a race condition when the NCL regridding scripts are called in parallel.

Solution is to create separate regridding working directories in the lnd_diags_generator.py script for the following files:

b.e12.B1850C5CN.ne30_g16.control.012.clm2.h0.0043-0062._MAM_climo.nc
b.e12.B1850C5CN.ne30_g16.control.012.clm2.h0.0043-0062._SON_climo.nc
b.e12.B1850C5CN.ne30_g16.control.012.clm2.h0.0043-0062._JJA_climo.nc
b.e12.B1850C5CN.ne30_g16.control.012.clm2.h0.0043-0062._DJF_climo.nc
b.e12.B1850C5CN.ne30_g16.control.012.clm2.h0.0043-0062._ANN_climo.nc
b.e12.B1850C5CN.ne30_g16.control.012.clm2.h0.0043-0062.MONS_climo.nc
b.e12.B1850C5CN.ne30_g16.control.012.clm2.h0.0043-0062._JJA_means.nc
b.e12.B1850C5CN.ne30_g16.control.012.clm2.h0.0043-0062._MAM_means.nc
b.e12.B1850C5CN.ne30_g16.control.012.clm2.h0.0043-0062._SON_means.nc
b.e12.B1850C5CN.ne30_g16.control.012.clm2.h0.0043-0062._ANN_means.nc
b.e12.B1850C5CN.ne30_g16.control.012.clm2.h0.0043-0062._DJF_means.nc
b.e12.B1850C5CN.ne30_g16.control.012.clm2.h0.0001-0062.ANN_ALL.nc
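The proposed fix can be sketched with the standard library's tempfile module (the helper name and the invocation in the comment are illustrative, not the actual lnd_diags_generator.py code):

```python
import os
import tempfile

def private_workdir(climo_file):
    """Create a unique working directory per climo file so that
    concurrent NCL regridding jobs never share the ESMF library's
    hard-coded temp file (sketch of the proposed fix)."""
    base = os.path.basename(climo_file)
    return tempfile.mkdtemp(prefix='regrid_' + base + '_')

# Each parallel regridding task would then run with its own cwd, e.g.:
#   subprocess.check_call(['ncl', 'regrid.ncl'], cwd=private_workdir(f))
```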

Integrate CAM-CHEM diags into python suite

Need to integrate Simone's updated CAM-CHEM diags into the python postprocessing suite.

Complete suite is available here:

/glade/p/acd/tilmes/amwg/aerodiag

This will be the diag suite used in Simone's 2017 ASD runs on cheyenne.

AMWG diags: Long-term satellite-based Precipitation Product

From @cecilehannay on April 21, 2017 20:59

Regarding the rainfall product I told you about: as part of my dissertation, a long-term global rainfall precipitation product was developed using the historical archive of infrared observations from multiple satellites (ISCCP GridSat-B1) and climatology information from NASA's GPCP. The product is called Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network - Climate Data Record (PERSIANN-CDR). PERSIANN-CDR provides rainfall estimates at daily, 0.25-degree resolution from 1983 to 2014 (delayed present). PERSIANN-CDR is now part of the U.S. national Climate Data Record Program and is available via the NOAA NCDC servers. You can find PERSIANN-CDR via the following link under the "Atmospheric CDRs" category.
National Climatic Data Center (NCDC) | Climate Data Records (CDR) Program

The paper of this work has recently been accepted to be published at the Bulletin of American Meteorological Society (BAMS). I have attached the paper for your perusal. It is also accessible from here. AMS Journals Online - PERSIANN-CDR: Daily Precipitation Climate Data Record from Multi-Satellite Observations for Hydrological and Climate Studies

Regarding the precipitation-related slides you showed, it would be great to see how PERSIANN-CDR performs in capturing the extremes in your research of interest. In fact, one thing I hope to do is apply PERSIANN-CDR to evaluating how well climate model simulations capture historical rainfall patterns as well as extreme events. It would be my great pleasure to collaborate with you and others at NCAR on this, and I would be more than happy to help you with the data and other things.

Please let me know what you think.

Sincerely,

Hamed

P.S. The other part of my dissertation was to use satellite-based observation to study climate and precipitation extremes, such as flood, drought, hurricanes, tropical cyclones. I was humbled to receive a NASA fellowship for my proposal to NASA on studying extreme precipitation events using satellite-based observation, considering the impacts of climate change.
CEE Graduate Student Awarded NASA Earth and Space Science Fellowship Award | The Henry Samueli School of Engineering at UC Irvine


Copied from original issue: CESM-Development/cime#490

need check_input_data like tool for CESM postprocessing

If a user wants to include their own input data sets to the diagnostics for testing and/or development, there currently isn't a clear process in place. One idea is as follows:

  1. Allow for multiple ':' separated paths in *_DIAGOBSPATH XML variable.
  2. The check_input_data module would aggregate files in the paths string and then create symlinks to all input data files in $PP_CASE_PATH/[comp]_data
  3. The *_DIAGOBSROOT single path setting would then be set to the $PP_CASE_PATH/[comp]_data before being passed to the NCL.

This has the advantage of not requiring SVN gatekeeper or common obs_data local dir permissions to work with user specified input files. The user can then later ask to have their datasets included in the WG obs_data repo for wider use.
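Step 2 of the idea above can be sketched as follows (the function name and directory layout are illustrative; later duplicates are skipped, so user-supplied paths listed first take precedence over the shared data):

```python
import os

def link_obs_data(diagobspath, target_dir):
    """Aggregate every file found in the ':'-separated paths of
    diagobspath by symlinking them into one per-component directory
    (sketch of the proposed check_input_data behavior)."""
    os.makedirs(target_dir, exist_ok=True)
    for root in diagobspath.split(':'):
        for name in sorted(os.listdir(root)):
            src = os.path.join(root, name)
            dst = os.path.join(target_dir, name)
            # First path wins: skip names already linked.
            if os.path.isfile(src) and not os.path.exists(dst):
                os.symlink(src, dst)
    return target_dir
```

The *_DIAGOBSROOT setting would then simply point at target_dir.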

cesm_timeseries_generator.py needs to update the monthly average time coordinate for each variable

Currently, the reporting of the monthly time average date in the history slice files (and the
subsequent variable timeseries files) is misleading to users, because the average value for a
month is written at the first time step of the following month. Other models report the average
value at the mid-point of the month.

Kevin has a stand-alone serial tool that goes back and updates the files with the non-misleading time coordinate. This tool should be called from the cesm_timeseries_generator.py script as well.
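Kevin's tool is not shown here, but the correction itself is simple given the time bounds that the files carry; a sketch (assuming a `time_bnds` array of shape (n, 2) in days since the reference date):

```python
import numpy as np

def midmonth_time(time_bounds):
    """Recompute each record's time coordinate as the midpoint of
    its bounds, instead of the end-of-month value CESM writes
    (a sketch of the correction, not Kevin's actual tool)."""
    tb = np.asarray(time_bounds, dtype=float)
    return tb.mean(axis=1)
```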
