I’m a firm believer in the power of open collaboration. If you’re interested in our mission, feel free to join our Turing Environment and Sustainability Slack Workspace!
Website | Google Scholar | Twitter | LinkedIn
Making it far easier to read in and work with large volumes of climate model output from CMIP5/6
import numpy as np
import numpy.ma as ma
from scipy import stats

missing_value = scs_full_cube.data.fill_value

# compute a linear trend at each grid point
# (assumes cube data dimensions are ordered (time, lat, lon))
slope = np.zeros(cube.data[0, :, :].shape)
for y in range(len(lats)):
    for x in range(len(lons)):
        arr = cube.data.data[:, y, x]   # time series at this grid point
        ind = np.where(arr < 366)[0]    # indices of valid (non-missing) values
        if len(ind) < len(arr) * 0.8:
            # if more than 20% of points are missing, set to the missing
            # value so this grid point is ignored in the plot
            slope[y, x] = missing_value
        else:
            slope[y, x], intercept, r_value, p_value, std_err = \
                stats.linregress(years[ind], arr[ind])

# create a new mask and use it to build a masked array
new_mask = np.zeros(slope.shape)
new_mask[slope == missing_value] = 1.
new_mask = new_mask.astype(bool)  # convert to a boolean array (True/False)
mx = ma.masked_array(slope, mask=new_mask)
catlg = bp.catalogue(dataset='happi', Experiment='All-Hist', Var='ua', RunID='run001', Model='MIROC5')
cube = bp.get_cube(catlg)
in <module>()
----> 1 cube = bp.get_cube(catlg)
/home/users/shosking/PYTHON/baspy/_iris/get_cubes.py in get_cube(filt_cat, constraints, verbose, nearest_lat_lon)
194 if (len(filt_cat.index) == 1):
195 cube = get_cubes(filt_cat, constraints=constraints, verbose=verbose,
--> 196 nearest_lat_lon=nearest_lat_lon)
197 cube = cube[0]
198
/home/users/shosking/PYTHON/baspy/_iris/get_cubes.py in get_cubes(filt_cat, constraints, verbose, nearest_lat_lon)
172
173 ### Apply Fixes to enable cubes in tmp_cubelist to concatenate ###
--> 174 tmp_cubelist = fix_cubelist_before_concat(tmp_cubelist, __current_dataset, model, freq)
175
176 ### if the number of netcdf files (and cubes) >1 then
/home/users/shosking/PYTHON/baspy/_iris/cube_fixes.py in fix_cubelist_before_concat(cubelist, dataset, model, freq)
58 coords = [dc.long_name for dc in c.dim_coords]
59 for axis in coords:
---> 60 if c.coord(axis).points.dtype == 'float64':
61 c.coord(axis).points = c.coord(axis).points.astype('float32')
62 for axis in ['time','latitude','longitude']:
/home/users/shosking/miniconda/lib/python2.7/site-packages/iris/cube.pyc in coord(self, name_or_coord, standard_name, long_name, var_name, attributes, axis, contains_dimension, dimensions, coord_system, dim_coords)
1455 msg = 'Expected to find exactly 1 %s coordinate, but found '
1456 'none.' % bad_name
-> 1457 raise iris.exceptions.CoordinateNotFoundError(msg)
1458
1459 return coords[0]
CoordinateNotFoundError: u'Expected to find exactly 1 pressure coordinate, but found none.'
I get a warning
_catalogue.py:305: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead
As far as I can tell, the output is as expected and this does not cause a problem.
e.g., bp.catalogue(dataset='cmip', Var=['tos', 'tas'], SubModel=['ocean', 'atmos'])
The ordering of SubModel should match that of Var; this ensures we do not read a Var name that is present in both ocean and atmos (e.g., 'pr').
Hi Scott, I'm trying to use baspy to get data from a modelling experiment which is stored on JASMIN. I've added the relevant info to datasets.py but I'm getting the error that it is looking for the _catalogues.csv file and can't find it. Is there a way to generate this file for a new dataset? I can't easily see how. Thanks!
PS: I'm not currently in bas_climate but have requested access.
At the moment the code assumes you are a member of our JASMIN GWS - we should use publicly readable locations for the catalogue files (HTTP?).
Also make util.region_masking generic for continents, countries, subregions, etc.
👏 for releasing this excellent and useful open source tool!
I wanted to invite you to collaborate with some broader efforts related to processing CMIP data. Much of this work is also related to cloud-based CMIP6 data access. Some of the relevant repos are here:
We have a group involving folks from LDEO, GFDL, NCAR, and CEDA that meets every other Friday; details here: https://github.com/pangeo-forge/cmip6-pipeline#coordination-meetings
Please feel free to have anyone from your team join this meeting if you're interested in learning what others are up to. We would certainly love to hear about baspy and how you are using it in your research.
All netcdf files found within each path should be listed within the catalogues
Create an efficient way to extract a lat-lon cube from a cube of high dimensions.
e.g., take the first index from all other dimensions
xy_cube = cube[0,0,0,:,:,0,0]
^ here we need to work out which are the lat and lon dimensions.
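One way to work that out, sketched here with plain NumPy (with an iris cube, the dimension indices could come from `cube.coord_dims('latitude')` and `cube.coord_dims('longitude')`; `extract_latlon_slice` is a hypothetical helper):

```python
import numpy as np

def extract_latlon_slice(data, lat_dim, lon_dim):
    # Take index 0 along every dimension except the lat and lon ones,
    # leaving a 2-D lat-lon slice.
    index = [0] * data.ndim
    index[lat_dim] = slice(None)
    index[lon_dim] = slice(None)
    return data[tuple(index)]

# e.g. for a (time, level, lat, lon) array:
arr = np.zeros((3, 5, 7, 11))
xy = extract_latlon_slice(arr, lat_dim=2, lon_dim=3)  # shape (7, 11)
```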
Add start and end date to each item within the catalogues. These can be obtained from the netcdf file names
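Since CMIP-style filenames typically end with a date range (e.g. `tas_Amon_MIROC5_historical_r1i1p1_185001-200512.nc`), the dates could be pulled out with a regex. A sketch, assuming that filename convention (`dates_from_filename` is a hypothetical helper):

```python
import re

def dates_from_filename(fname):
    # Match a trailing 'YYYYMM-YYYYMM'-style date range before '.nc';
    # CMIP date strings vary from 4 to 12 digits depending on frequency.
    m = re.search(r'_(\d{4,12})-(\d{4,12})\.nc$', fname)
    if m is None:
        return None, None  # no date range in the filename (e.g. fixed fields)
    return m.group(1), m.group(2)

start, end = dates_from_filename('tas_Amon_MIROC5_historical_r1i1p1_185001-200512.nc')
# start == '185001', end == '200512'
```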