CESM CIME Case Control System configuration files
The NAG compiler flag "-nan", which initializes all variables to NaN, should be used when DEBUG=TRUE.
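As a sketch, a nag.cmake macro fragment enabling this (the exact structure of the existing NAG macro file is assumed, not confirmed) could look like:

```cmake
# Assumed cmake_macros-style fragment: initialize all variables to NaN
# in debug builds so use of uninitialized values is caught at run time.
if (DEBUG)
  string(APPEND FFLAGS " -nan")
endif()
```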
Getting my feet wet with adding some new grids with nuopc. With a 15km MPAS grid + gx1v7 mask (a homebrew, admittedly, but templated using existing MPAS grids in CIME) I get the below 10m wind field for Hurricane Harvey that clearly shows a low resolution lnd/ocn mask.
<model_grid alias="mpasa15natl_mpasa15natl" not_compset="_POP">
<grid name="atm">mpasa15natl</grid>
<grid name="lnd">mpasa15natl</grid>
<grid name="ocnice">mpasa15natl</grid>
<mask>gx1v7</mask>
</model_grid>
You can see the "blockiness" associated with the land/sea mask and the corresponding fluxes provided to CAM (in this case, the land tiles are draggier than the ocean tiles, making for a sharp gradient in surface wind).
If I change the mask to tx0.1v2 in modelgrid_aliases_nuopc.xml...
<model_grid alias="mpasa15natl_mpasa15natl" not_compset="_POP">
<grid name="atm">mpasa15natl</grid>
<grid name="lnd">mpasa15natl</grid>
<grid name="ocnice">mpasa15natl</grid>
<mask>tx0.1v2</mask>
</model_grid>
I get a much better lnd/ocn mask more amenable to the high-resolution grid.
Perhaps a dumb question (feel free to close it if it is!), but for all not_compset="_POP" combos that are of higher resolution than gx1v7, should we be using tx0.1v2 by default?
For example, all the MPAS grid configs (even the ones used for Earthworks, e.g., mpasa3p75_mpasa3p75_mg17?) are using the gx1v7 for masking by default, which would seem to have a similar issue as above.
Currently all queues on a system are assumed to have
the same set of flags, but we may want to submit some
jobs of a workflow with a flag such as -x and others without that flag.
I have several mpi-serial cases that break on cheyenne with ccs_config_cesm0.0.24 and later. I did a git bisect to trace it to that exact change.
An example test that fails is:
SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.cheyenne_intel.clm-USUMB_nuopc
Also SMS_Ly1_Mmpi-serial.1x1_brazil.IHistClm50BgcQianRs.cheyenne_intel.clm-output_bgc_highfreq
All I get in the cesm.log is:
Completion(send) value=1 tag=1
Completion(send) value=1 tag=2
Completion(send) value=1 tag=3
Completion(send) value=1 tag=4
Completion(send) value=0 tag=1
Completion(send) value=0 tag=2
Completion(send) value=0 tag=3
Completion(send) value=0 tag=4
The drv.log and med.log show that it's in the middle of initialization. There is also a lnd.log and an atm.log.
There is one mpi-serial test that is in place for CESM alpha testing that looks like it's passing: SMS_D_Ld1_Mmpi-serial.f45_f45_mg37.I2000Clm50SpRs.cheyenne_intel.clm-ptsRLA
There are several changes that come together in this version, updating ESMF, mpt, and pio altogether. I'm going to see if I can separate that out.
I'd like to be able to add "invalid" to the ffpe-trap list for gnu. However, this currently causes problems: gfortran's isnan (which is called in cime via the CPRGNU-specific shr_infnan_isnan) causes a floating point exception when called on a signaling NaN. We use isnan in various places in CESM, so this would presumably cause problems when running in debug mode.
This appears to be a known gnu bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66462
Once gnu fixes that bug and we can rely on having versions of gcc/gfortran with that bug fixed, we should add "invalid" to the ffpe-trap list for gnu.
(This is a revision of ESMCI/cime#1763 . See that issue for details of things I tried, unsuccessfully, to workaround gnu's current issues.)
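Once that is possible, the change itself would be small; a hypothetical gnu.cmake fragment (the existing trap list is assumed, not confirmed) might be:

```cmake
# Hypothetical gnu.cmake fragment: extend the debug-mode trap list.
# "invalid" is the addition currently blocked by GCC bug 66462
# (isnan on a signaling NaN raises the very exception being trapped).
if (DEBUG)
  string(APPEND FFLAGS " -ffpe-trap=zero,overflow,invalid")
endif()
```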
There's a grid entry for ne240np4.pg3 in ccs_config/component_grids_nuopc.xml but no mesh file specified. Add mesh file:
/glade/work/aherring/grids/uniform-res/ne240np4.pg3/grids/ne240pg3_ESMFmesh_cdf5_c230728.nc
Please verify this grid runs, as I have not tested it.
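For reference, a hypothetical component_grids_nuopc.xml entry, modeled on the other np4.pg3 grids; the nx value assumes 6*ne^2*3^2 physics columns, and the desc/support text is a placeholder to verify:

```xml
<!-- Hypothetical entry; nx and desc/support are assumptions to check. -->
<domain name="ne240np4.pg3">
  <nx>3110400</nx> <ny>1</ny>
  <mesh>/glade/work/aherring/grids/uniform-res/ne240np4.pg3/grids/ne240pg3_ESMFmesh_cdf5_c230728.nc</mesh>
  <desc>ne240np4 is Spectral Elem ~0.125-deg grid with a 3x3 FVM physics grid:</desc>
  <support>EXPERIMENTAL FVM physics grid</support>
</domain>
```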
We need to modify the configuration so that jobs using < 128 tasks are set to 128 tasks, in order for mpibind to work correctly on derecho.
Some of the ESMF mesh files under share/meshes on cheyenne don't have area with units of radians^2. This is potentially a problem if conservative remapping is used between grids that have different units. CESM also does a scaling based on area; if that scaling doesn't have consistent units and the area from the mesh file is used, the scaling would be wrong.
The following have units of m^2:
/glade/p/cesmdata/cseg/inputdata/share/meshes/C12_181018_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/C192_181018_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/C24_181018_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/C384_181018_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/C48_181018_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/C5_181018_ESMFmesh.nc
The following have area but don't have units defined for it. If the units are radians^2 this would be fine, but when the units are not defined on the files it's unclear:
/glade/p/cesmdata/cseg/inputdata/share/meshes/0.125nldas2_ESMFmesh_cd5_241220.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/5x5pt-amazon_navy_ESMFmesh_cd5_c20210107.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/bias_correction_gpcp_cmap.Prec_ESMFmesh_cdf5_110121.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/bias_correction_gpcp_cruncep.Prec_ESMFmesh_cdf5_110121.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/bias_correction_gpcp_qian.Prec_ESMFmesh_cdf5_110121.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/C96_181018_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/fv0.47x0.63_141008_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/fv0.9x1.25_141008_polemod_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/fv1.9x2.5_141008_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/gland_20km_c150511_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/gland_5km_c150511_ESMFmesh_c20190929.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/greenland_4km_epsg3413_c170414_ESMFmesh_c20190729.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/gx1v6_090205_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/gx1v7_151008_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/gx3v7_120309_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/JRA025m.170209_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/ne16np4_scrip_171002_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/rx1_nomask_181022_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/TL319_151007_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/TL639_200618_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/tx0.1v2_ESMFmesh_cd5_c20210105.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/tx0.25v1_190204_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/tx0.66v1_180604_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/tx0.66v1_190314_ESMFmesh_c20190714.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/tx0.66v1_190314_ESMFmesh.nc
/glade/p/cesmdata/cseg/inputdata/share/meshes/wtx0.66v1_210917_ESMFmesh.nc
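A quick way to audit these files is to read the units attribute on the area variable and bucket each file into the categories above. This is a sketch, assuming the netCDF4 package is available and that the mesh stores cell areas in a variable named "elementArea" (an assumption; adjust to the actual variable name in these files):

```python
# Sketch of a units audit for ESMF mesh files. The variable name
# "elementArea" is an assumption about the mesh file layout.

def area_units_category(units):
    """Classify an area units attribute into the categories above."""
    if units is None:
        return "undefined"
    u = units.strip().lower()
    if u in ("radians^2", "radian^2", "radians2"):
        return "radians^2"
    if u in ("m^2", "m2", "meters^2"):
        return "m^2"
    return "other"

def check_mesh(path):
    """Return the units category for one mesh file."""
    from netCDF4 import Dataset  # third-party; assumed installed
    with Dataset(path) as nc:
        var = nc.variables.get("elementArea")
        if var is None:
            return "no area variable"
        return area_units_category(getattr(var, "units", None))
```

Running check_mesh over the paths listed above would reproduce the m^2 vs. undefined-units split described in this issue.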
I see the following issue with case.setup when I try to run cheyenne_nvhpc. The specific test case is:
SMS_D.f10_f10_mg37.I2000Clm51BgcCrop.cheyenne_nvhpc.clm-crop
using ctsm5.1.dev091 which has ccs_config_cesm0.0.15
ERROR: module command /glade/u/apps/ch/opt/lmod/7.5.3/lmod/lmod/libexec/lmod python load esmf-8.3.0b05-ncdfio-openmpi-g openmpi/4.1.1 netcdf-mpi/4.8.1 pnetcdf/1.12.2 ncarcompilers/0.5.0 pio/2.5.6d failed with message:
Lmod has detected the following error: These module(s) exist but cannot be loaded as requested: "pio/2.5.6d"
Try: "module spider pio/2.5.6d" to see how to load the module(s).
It looks like the fix is pretty simple and I can make a PR for it.
Hello, trying to run this test
SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc
With these externals in ctsm5.1.dev158
diff --git a/Externals.cfg b/Externals.cfg
index a17f8e2ec..b29af5c64 100644
--- a/Externals.cfg
+++ b/Externals.cfg
@@ -34,7 +34,7 @@ hash = 34723c2
required = True
[ccs_config]
-tag = ccs_config_cesm0.0.84
+tag = ccs_config_cesm0.0.87
protocol = git
repo_url = https://github.com/ESMCI/ccs_config_cesm.git
local_path = ccs_config
@@ -44,11 +44,11 @@ required = True
local_path = cime
protocol = git
repo_url = https://github.com/ESMCI/cime
-tag = cime6.0.175
+tag = cime6.0.198
required = True
[cmeps]
-tag = cmeps0.14.43
+tag = cmeps0.14.47
protocol = git
repo_url = https://github.com/ESCOMP/CMEPS.git
local_path = components/cmeps
fails for me as follows.
qcmd -- ./create_test SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc -r .
Waiting on job launch; 9351378.casper-pbs with qsub arguments:
qsub -l select=1:ncpus=1:mem=10GB -A P93300606 -q casper@casper-pbs -l walltime=01:00:00
Warning: no access to tty (Inappropriate ioctl for device).
Thus no job control in this shell.
Testnames: ['SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc']
Using project from .cesm_proj: P93300041
create_test will do up to 1 tasks simultaneously
create_test will use up to 45 cores simultaneously
Creating test directory /glade/work/erik/ctsm_worktrees/external_updates/cime/scripts/SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc.20240112_170555_qa7smi
RUNNING TESTS:
SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc
Starting CREATE_NEWCASE for test SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc with 1 procs
Finished CREATE_NEWCASE for test SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc in 186.715961 seconds (PASS)
Starting XML for test SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc with 1 procs
Finished XML for test SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc in 119.385811 seconds (PASS)
Starting SETUP for test SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc with 1 procs
Finished SETUP for test SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc in 1.602533 seconds (FAIL). [COMPLETED 1 of 1]
Case dir: /glade/work/erik/ctsm_worktrees/external_updates/cime/scripts/SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc.20240112_170555_qa7smi
Errors were:
ERROR: module command /glade/u/apps/casper/23.10/spack/opt/spack/lmod/8.7.24/gcc/7.5.0/m4jx/lmod/lmod/libexec/lmod python load ncarenv/23.10 cmake/3.26.3 intel/2023.2.1 mkl/2023.2.0 netcdf/4.9.2 ncarcompilers/0.5.0 parallelio/2.6.2 esmf/8.5.0 ncarcompilers/1.0.0 failed with message:
Lmod has detected the following error: The following module(s) are unknown:
"ncarcompilers/0.5.0"
Please check the spelling or version number. Also try "module spider ..."
It is also possible your cache file is out-of-date; it may help to try:
$ module --ignore_cache load "ncarcompilers/0.5.0"
Also make sure that all modulefiles written in TCL start with the string
#%Module
Waiting for tests to finish
FAIL SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc (phase SETUP)
Case dir: /glade/work/erik/ctsm_worktrees/external_updates/cime/scripts/SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc.20240112_170555_qa7smi
Due to presence of batch system, create_test will exit before tests are complete.
To force create_test to wait for full completion, use --wait
test-scheduler took 380.3022334575653 seconds
casper-login1 cime/scripts> cd SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc.20240112_170555_qa7smi/
Directory: /glade/work/erik/ctsm_worktrees/external_updates/cime/scripts/SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc.20240112_170555_qa7smi
casper-login1 scripts/SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc.20240112_170555_qa7smi> cat TestStatus
PASS SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc CREATE_NEWCASE
PASS SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc XML
FAIL SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc SETUP
casper-login1 scripts/SMS_D_Lm1_Mmpi-serial.CLM_USRDAT.I1PtClm50SpRs.casper_intel.clm-USUMB_nuopc.20240112_170555_qa7smi> ./case.setup
ERROR: module command /glade/u/apps/casper/23.10/spack/opt/spack/lmod/8.7.24/gcc/7.5.0/m4jx/lmod/lmod/libexec/lmod python load ncarenv/23.10 cmake/3.26.3 intel/2023.2.1 mkl/2023.2.0 netcdf/4.9.2 ncarcompilers/0.5.0 parallelio/2.6.2 esmf/8.5.0 ncarcompilers/1.0.0 failed with message:
Lmod has detected the following error: The following module(s) are unknown: "ncarcompilers/0.5.0"
Please check the spelling or version number. Also try "module spider ..."
It is also possible your cache file is out-of-date; it may help to try:
$ module --ignore_cache load "ncarcompilers/0.5.0"
Also make sure that all modulefiles written in TCL start with the string #%Module
Per discussion in ESMCI/cime#3854, the general agreement was to go with this solution:
Still maintain cprnc builds on cheyenne and izumi and point to them with CCSM_CPRNC, but point to a versioned location. Rather than pointing to some generic location like $ENV{CESMDATAROOT}/tools/cime/tools/cprnc/cprnc.cheyenne, the path would include some version information; I'd suggest using the short version of the cime hash from which the build was made. Then, if we want to use an updated build of cprnc, we would create a new directory with a new cime version, build cprnc there, and then update cime to point to this new version. This means we would still need to maintain a cprnc build on the system, but it would get around some of the biggest issues with doing so (listed at the top of this issue). Side note: we would maintain a symlink pointing to the most recent version, so you can invoke 'cprnc' manually and get the most recent version.
@fischer-ncar given that you have become the de facto maintainer of cprnc builds for cesm, would you be willing to do this when you get a chance (no rush)?
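The proposed layout can be sketched in a few commands; all paths and the hash below are placeholders, not the real locations:

```shell
# Hypothetical layout for a versioned cprnc install: build under a
# directory named for the short cime hash, then point a symlink at
# the most recent build. All paths and the hash are placeholders.
CPRNC_ROOT="${CESMDATAROOT:-/tmp/cesmdata}/tools/cprnc"
HASH=abc1234                      # short cime hash (placeholder)
mkdir -p "$CPRNC_ROOT/$HASH"
# ... build cprnc into "$CPRNC_ROOT/$HASH" here ...
ln -sfn "$HASH" "$CPRNC_ROOT/cprnc"   # 'cprnc' always points at latest
```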
Updating from ccs_config_cesm0.0.64 to ccs_config_cesm0.0.65 the test:
SMS_D_Mmpi-serial.5x5_amazon_r05.I2000Clm50SpMizGs.cheyenne_intel.mizuroute-default
fails on Cheyenne. It just gets to initialization and looks like it's an issue with mesh creation for mizuRoute.
Since this is a Cheyenne-only update, this is a won't-fix. But it's here for documentation.
The template scripts (template.case.run, etc.) have paths to CIME tools that are no longer correct. In testing on izumi with a recent version of cime, this gave the following error at the start of the test job (immediately after the submitted job began to run):
Traceback (most recent call last):
File "/var/spool/torque/mom_priv/jobs/559137.izumi.cgd.ucar.edu.SC", line 21, in <module>
from standard_script_setup import *
ModuleNotFoundError: No module named 'standard_script_setup'
I can't figure out why I'm getting this problem on izumi but not on cheyenne, and @ekluzek reports not seeing it on either machine. One reason you potentially wouldn't get this error is if you are using an older model checkout where cime/scripts/Tools used to exist at one point before you updated cime: in this case, you can still have a stale .pyc file in cime/scripts/Tools that allows this to work. I encountered this issue with a fresh checkout. (But @ekluzek says he tried it with a fresh checkout.) On cheyenne, it looks like sys.path already contains the needed entry (cime/CIME/Tools) on entry to the .case.test script, so the addition of a wrong path (cime/scripts/Tools) does no harm; but I can't tell why cime/CIME/Tools is already in sys.path on entry to .case.test on cheyenne but not on izumi. (I see there are some places in CIME that modify PYTHONPATH, and maybe that modification behaves differently in different situations?)
Fix incoming.
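For context, the direction of the fix can be sketched as follows, a minimal illustration with hypothetical paths (the real template scripts compute the CIME root themselves):

```python
import os
import sys

# The template scripts should put cime/CIME/Tools (not the stale
# cime/scripts/Tools) on sys.path before importing
# standard_script_setup. Paths here are illustrative only.
cimeroot = os.environ.get("CIMEROOT", "/path/to/cime")
tools_path = os.path.join(cimeroot, "CIME", "Tools")
if tools_path not in sys.path:
    sys.path.insert(0, tools_path)
```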
Since there are no intel-classic equivalents to intel-oneapi.cmake and intel-oneapi_derecho.cmake in machines/cmake_macros, builds of CESM compsets will fail in the share external due to the missing CPRINTEL preprocessor macro in shr_infnan_mod.F90.in.
Example error messages:
path_to/CESM/share/src/shr_infnan_mod.F90.in(71): error #7950: Procedure name in MODULE PROCEDURE statement must be the name of accessible module procedure. [SHR_INFNAN_ISNAN_DOUBLE]
module procedure shr_infnan_isnan_double
--------------------^
path_to/CESM/share/src/shr_infnan_mod.F90.in(73): error #7950: Procedure name in MODULE PROCEDURE statement must be the name of accessible module procedure. [SHR_INFNAN_ISNAN_REAL]
module procedure shr_infnan_isnan_real
--------------------^
path_to/CESM/share/src/shr_infnan_mod.F90.in(71): error #7407: Unresolved MODULE PROCEDURE specification name. [SHR_INFNAN_ISNAN_DOUBLE]
module procedure shr_infnan_isnan_double
--------------------^
path_to/CESM/share/src/shr_infnan_mod.F90.in(73): error #7407: Unresolved MODULE PROCEDURE specification name. [SHR_INFNAN_ISNAN_REAL]
module procedure shr_infnan_isnan_real
--------------------^
compilation aborted for shr_infnan_mod.F90 (code 1)
Steps to reproduce:
Run checkout_externals in a clone of CESM (e.g., cesm2_3_alpha17a), then build with --compiler intel-classic (tested with FHS94).
Proposed fix:
Create softlinks in machines/cmake_macros: intel-classic.cmake to intel.cmake, and intel-classic_derecho.cmake to intel_derecho.cmake.
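The softlink fix, expressed as commands (run inside machines/cmake_macros of the ccs_config checkout):

```shell
# Make the intel-classic compiler resolve to the existing intel macros.
ln -sf intel.cmake intel-classic.cmake
ln -sf intel_derecho.cmake intel-classic_derecho.cmake
```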
I'm filing this as a wontfix, because the fix needs to go into mizuRoute. But I am filing it here so that the versions of ccs_config where this is an issue for mizuRoute can be documented.
mizuRoute fails on cheyenne with the update to ccs_config_cesm1.0.65 as seen in #141. The problem is the update of ESMF which isn't working with mizuRoute. The ESMF version in the update is:
<modules compiler="intel" mpilib="mpt" DEBUG="TRUE">
<command name="use">/glade/p/cesmdata/cseg/PROGS/modulefiles/esmfpkgs/intel/19.1.1/</command>
- <command name="load">esmf-8.4.1b02-ncdfio-mpt-g</command>
+ <command name="load">esmf-8.5.0b19-ncdfio-mpt-g</command>
</modules>
mizuRoute fails on izumi with ccs_config_cesm1.0.74 with the ESMF update as:
<modules compiler="intel" mpilib="mpt" DEBUG="TRUE">
<command name="use">/glade/p/cesmdata/cseg/PROGS/modulefiles/esmfpkgs/intel/19.1.1/</command>
- <command name="load">esmf-8.5.0b19-ncdfio-mpt-g</command>
+ <command name="load">esmf-8.5.0-ncdfio-mpt-g</command>
</modules>
Please add a module for pnetcdf/1.12.3 for the gnu/openmpi combination on cheyenne.
The current config_grids.xml that has just been migrated to this repository is confusing.
config_grids_mct.xml
component_grids_mct.xml
modelgrid_aliases_mct.xml
maps_mct.xml
config_grids_nuopc.xml
component_grids_nuopc.xml
modelgrid_aliases_nuopc.xml
maps_nuopc.xml
component_grids_xxx.xml - specifies the component grids that are available. Since mapping files are normally not needed for nuopc, supporting a new grid will normally just entail creating a new mesh file.
modelgrid_aliases_xxx.xml - specifies the aliases that are used when calling create_newcase. This is really only needed when new model grids are supported and used in testing; for just experimenting with a new grid, this file will not need to be changed.
maps_xxx.xml - specifies the mapping files between components that are needed. For nuopc this includes rof->ocn, glc->ocn and glc->ice mapping files, and also new mizuroute mapping files between lnd->rof that are needed for performance reasons.
Since the grid is specified at create_newcase time, introducing new grids cannot be put in SourceMods. But it seems fairly lightweight now to just add one entry to component_grids_nuopc.xml.
There are some changes required to ccs_config for the mksurfdata_esmf build.
CMake must be updated
ESMF needs to be updated
NetCDF-MPI needs to be 4.8.1
PIO updated to 2.5.7
The environment variable PIO must be set to the top-level directory of the PIO build. This is done by the module load on cheyenne, but on other machines it may need to be added in.
The NUOPC driver requires the ESMF library to be prebuilt. On Casper, the ESMF library is available for nvhpc/21.11 and nvhpc/22.1 with openmpi/4.1.1. However, I encountered the following runtime error with both nvhpc compiler versions when using the NUOPC driver:
/glade/scratch/sunjian/cam6_run/F2000climo.f19_f19_mg17.casper.nvhpc-gpu.gpu02_pcols00384_mpi036_nuopc/bld/cesm.exe: symbol lookup error: /glade/u/apps/dav/opt/openmpi/4.1.1/nvhpc/22.1/lib/openmpi/mca_op_avx.so: undefined symbol: llvm.x86.avx512.pmins.d
There seems to be a bug in the nvhpc compiler regarding the AVX instruction (open-mpi/ompi#9444). Is it used somewhere in the ESMF lib or NUOPC driver?
Sam Rabin saw this issue. The wallclock time is hard-coded to 12 hours for Derecho in machines/config_batch.xml, so if you set it in your test list it ends up getting ignored and 12 hours is used. We'd like to be able to set it and have that value used on submission to Derecho.
Cheyenne has recently updated the default library versions, including mpt/2.25 and netcdf-mpi/4.8.1. Is there a reason that these versions are not updated accordingly in the config_machines.xml file?
In order to run Fortran unit testing on Derecho, we need a version of the library built there, as well as the path for it added to the intel_derecho.cmake file under machines.
The Cheyenne version has this:
if (MPILIB STREQUAL mpi-serial AND NOT compile_threaded)
set(PFUNIT_PATH "$ENV{CESMDATAROOT}/tools/pFUnit/pFUnit4.7.0_cheyenne_Intel19.1.1_noMPI_noOpenMP")
endif()
Something like that needs to be done, including building the PFUnit library for version 4.7.0.
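For Derecho, the analogous fragment would be something like the following; the path is hypothetical until the library is actually built there:

```cmake
if (MPILIB STREQUAL mpi-serial AND NOT compile_threaded)
  # Hypothetical location; fill in once pFUnit 4.7.0 is built on derecho
  set(PFUNIT_PATH "$ENV{CESMDATAROOT}/tools/pFUnit/pFUnit4.7.0_derecho_Intel_noMPI_noOpenMP")
endif()
```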
@billsacks it looks like you were involved in this before. Is this something that you can do?
We need to add some new grids for mizuRoute that will have lakes in them. The grid is different when lakes are explicitly modeled versus when they are not.
The component grids for HDMAmz will also need a lake version.
Within nvhpc_derecho.cmake and nvhpc_gust.cmake, -target=zen3 is used where the statement should actually be -tp=zen3. The only accepted arguments for -target= are gpu or multicore, according to the NVHPC documentation.
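A sketch of the correction (the surrounding macro structure and variable name are assumed):

```cmake
# Per the issue: -tp selects a CPU target like zen3, while -target
# only accepts gpu or multicore. Replace -target=zen3 with:
string(APPEND CMAKE_Fortran_FLAGS " -tp=zen3")
```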
[Note I originally opened this issue in CAM: https://github.com/ESCOMP/CAM/issues/665]
Replace ESMF mesh files for the ARCTIC, ARCTICGRIS and CONUS grids in ccs_config, using an improved algorithm. By improved, I mean the control volumes are more regular, with less frequent occurrences of non-convex polygons (non-convexity tends to reduce the accuracy of the mapping, although I'm told ESMF can handle it fine). This is achieved by only allowing up to 6-sided polygons, as opposed to the current algorithm, which permits 14-sided polygons. I'm referring to this new algorithm as "chevrons" since it does tend to produce some non-convex 6-sided chevrons (however, this is preferable to the egregiously non-convex 12-sided "stars" that result from the current algorithm).
Changing the mesh files will change answers for anything that uses mapping weights. But it's not clear whether the F-compsets will have answer changes because nothing is mapped -- everything is on the same grid (maybe the SST/ICE dataset is mapped using the mesh files?).
I've verified that the new grids aren't of a lower quality by checking their conservation properties (mapping them to themselves using ESMF and then checking those maps) compared to the current ones. The global area has smaller errors, and the relative/maximum/least-squares errors are all reduced using these new mesh files.
/glade/work/aherring/grids/var-res/ne0np4.ARCTIC.ne30x4/grids/ne0ARCTICne30x4_esmf_chevrons_c220827.nc
/glade/work/aherring/grids/var-res/ne0np4.ARCTICGRIS.ne30x8/grids/ne0ARCTICGRISne30x8_esmf_chevrons_c220928.nc
/glade/work/aherring/grids/var-res/ne0np4.CONUS.ne30x8/grids/ne0CONUSne30x8_esmf_chevrons_c220928.nc
These will need to be added to the inputdata and paths updated in ccs_config/component_grids_nuopc.xml
. I don't think we should bother updating the MCT mapping files, as MCT will be deprecated soon/eventually.
Question. In the inputdata repo, the meshes belong in inputdata/share/meshes/ ... do they need to be consistent with the equivalent SCRIP grid files in inputdata/share/scripgrids/? If so, then I will need to add these new SCRIP files to the inputdata too.
@PeterHjortLauritzen
@patcal
@jedwards4b
@fischer-ncar
@cacraigucar
I get the following error when trying to build a case on cheyenne_intel under ctsm5.1.dev090 with ccs_config_cesm0.0.21 (or 0.0.15 as well) for the case...
SMS_D_Ld3.f10_f10_mg37.I1850Clm50BgcCrop.cheyenne_intel.clm-default
./xmlchange MPILIB=openmpi
ERROR: module command /glade/u/apps/ch/opt/lmod/7.5.3/lmod/lmod/libexec/lmod python load esmf-8.2.0b23-ncdfio-mpt-g ncarcompilers/0.5.0 pio/2.5.6d failed with message:
Lmod has detected the following error: These module(s) exist but cannot be
loaded as requested: "pio/2.5.6d"
Try: "module spider pio/2.5.6d" to see how to load the module(s).
We talked about this at the Nov/15/2022 CSEG meeting. We agreed on a few things to put into practice for a naming convention of grid aliases.
Examples:
Some of the things we said in the 11/15 meeting were:
Grid for mizuRoute with lakes
Erik - I need to distinguish the grid for mizuRoute when lakes are active versus not. So I plan to have a grid alias like f09_f09_mg17_rHDMA-lake.
Jim would prefer if this alias were shorter.
Mike suggests _rH and _rHl or something like that
Cheryl likes this, too; prefers not having a dash
Where should the mask go in the alias? Note this is currently inconsistent between runoff vs glacier aliases. Feeling is we should be consistent, and probably have the mask at the end. That would also be consistent with the ordering in long names.
The component grids are not set correctly (set to UNSET) for the f05_f05_mg17 grid alias when the nuopc driver is used.
Results in group build_grid
ATM_GRID: UNSET
GLC_GRID: null
GRID: a%0.47x0.63_l%0.47x0.63_oi%0.47x0.63_r%r05_g%null_w%null_z%null_m%gx1v7
ICE_GRID: UNSET
LND_GRID: UNSET
MASK_GRID: gx1v7
OCN_GRID: UNSET
ROF_GRID: r05
WAV_GRID: null
Results in group case_last
GRIDS_SPEC_FILE: /glade/work/fvitt/camdev/waccmx_fixes/ccs_config/config_grids.xml
Note: the GRIDS_SPEC_FILE file does not exist.
I'm not sure what version of cime to use with ccs_config0.0.87 for a set of externals that will work together.
With ccs_config_cesm0.0.87 and the latest master of cime (cime6.0.194-16-g102d408fb), but also with cime6.0.193. I haven't seen an update to the XSD file in cime for a long time, and I also don't see an issue or PR in cime about this.
It looks like the idea is to have NODENAME_REGEX set for all machines in the main config_machines.xml, and then each machine has a subdirectory with what's needed for just that machine. This is a great refactoring to do; it just looks like some coordination needs to happen in cime.
./create_test SMS_Lm13.1x1_brazil.I2000Clm50FatesCruRsGs.casper_intel.clm-FatesCold -r . --no-build
ERROR: Command: '/usr/bin/xmllint --xinclude --noout --schema /glade/work/erik/ctsm_worktrees/external_updates/cime/CIME/data/config/xml_schemas/config_machines.xsd /glade/work/erik/ctsm_worktrees/external_updates/ccs_config/machines/config_machines.xml' failed with error '/glade/work/erik/ctsm_worktrees/external_updates/ccs_config/machines/config_machines.xml:52: element NODENAME_REGEX: Schemas validity error : Element 'NODENAME_REGEX': This element is not expected. Expected is ( machine ).
/glade/work/erik/ctsm_worktrees/external_updates/ccs_config/machines/config_machines.xml fails to validate' from dir '/glade/work/erik/ctsm_worktrees/external_updates/cime/scripts'
(ctsm_pylib) casper-login2 cime/scripts> /usr/bin/xmllint --xinclude --noout --schema /glade/work/erik/ctsm_worktrees/external_updates/cime/CIME/data/config/xml_schemas/config_machines.xsd /glade/work/erik/ctsm_worktrees/external_updates/ccs_config/machines/config_machines.xml
/glade/work/erik/ctsm_worktrees/external_updates/ccs_config/machines/config_machines.xml:52: element NODENAME_REGEX: Schemas validity error : Element 'NODENAME_REGEX': This element is not expected. Expected is ( machine ).
Remove _CLM from "not_compset" for the ne16pg3_ne16pg3_mg17 alias in modelgrid_aliases_nuopc.xml. This is to permit functional support for the 2-deg spectral-element CSLAM in F-cases, which is being introduced in CTSM: ESCOMP/CTSM#1973.
This is the ccs_config issue associated with installing the ne3pg3 grid into CESM. This is the corresponding CAM issue: ESCOMP/CAM#726 and this is the corresponding CTSM issue:
ESCOMP/CTSM#1927.
This issue needs to be addressed first, and then the CAM and CTSM issues can be addressed.
Here are the code mods needed for modelgrid_aliases_nuopc.xml:
<model_grid alias="ne3pg3_ne3pg3_mg37" not_compset="_POP">
<grid name="atm">ne3np4.pg3</grid>
<grid name="lnd">ne3np4.pg3</grid>
<grid name="ocnice">ne3np4.pg3</grid>
<mask>gx3v7</mask>
</model_grid>
And here's the entry for component_grids_nuopc.xml:
<domain name="ne3np4.pg3">
<nx>486</nx> <ny>1</ny>
<mesh>/glade/work/aherring/grids/uniform-res/ne3np4.pg3/grids/ne3pg3_ESMFmesh_c221214_cdf5.nc</mesh>
<desc>ne3np4 is Spectral Elem 10-deg grid with a 3x3 FVM physics grid:</desc>
<support>EXPERIMENTAL FVM physics grid</support>
</domain>
It looks like an error came in with 7461ce9 in config_batch.xml. It now doesn't match the schema...
This is in ccs_config_cesm0.0.49
(npl) ccs_config/machines> /glade/u/apps/opt/conda/envs/npl/bin/xmllint --xinclude --noout --schema /glade/work/erik/ctsm_worktrees/mizuRoute/cime/CIME/data/config/xml_schemas/config_batch.xsd config_batch.xml
config_batch.xml:59: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:82: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:103: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:150: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:197: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:328: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:337: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:353: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:370: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:382: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:429: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:465: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:479: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:562: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:593: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:605: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:621: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:636: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:651: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:666: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:682: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:695: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:717: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:737: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml:752: element argument: Schemas validity error : Element 'argument': This element is not expected. Expected is ( arg ).
config_batch.xml fails to validate. I'm 99% sure this is a ccs_config_cesm issue, but it might be something in CIME?
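The schema errors suggest the batch directive elements were renamed. A minimal sketch of the change, assuming the updated schema wants `<arg>` elements with flag/name attributes (the exact attribute split is an assumption to verify against the schema file):

```xml
<!-- old form, now rejected by the schema -->
<submit_args>
  <argument>-l walltime=$JOB_WALLCLOCK_TIME</argument>
</submit_args>

<!-- assumed new form -->
<submit_args>
  <arg flag="-l" name="walltime=$JOB_WALLCLOCK_TIME"/>
</submit_args>
```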
Background: Greenplanet is kind of a Frankenstein machine, where there are different queues for different groups of nodes with very different properties... so when you are logged in and run create_newcase, the default detected machine is greenplanet-sky24, which uses a queue that only runs on nodes with 40 cores / node, but you can specify --mach greenplanet-sib2.9 to use a group of nodes with 16 cores / node instead.
Problem Description: In cesm2_3_beta07, which uses cime6.0.12 (predating the creation of this repository), I can build and run on either machine. In cesm2_3_beta08, which uses cime6.0.15 and ccs_config_cesm0.0.16, I have the following problems:
(1) I can create a case with --mach greenplanet-sib2.9, but when I try to run ./case.setup I get the error:
$ ./case.setup
ERROR: Current machine greenplanet-sky24 does not match case machine greenplanet-sib29.
(2) I can create and set up a case on greenplanet-sky24, but the job aborts before even executing cesm.exe with the error:
$ cat run.${CASE}
ERROR: Could not initialize machine object from ${CESMROOT}/ccs_config/machines/config_machines.xml. This machine is not available for the target CIME_MODEL.
For (1), it is recognizing my hostname as being tied to greenplanet-sky24 and as a result it assumes it doesn't have access to greenplanet-sib29. Is there an option I can add to config_machines.xml to make it clear that the greenplanet login nodes have access to both machines?
For (2), I don't know where to begin troubleshooting. For what it's worth, I do have a ~/.cime/config file that sets CIME_MODEL=CESM.
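For the login-node question in (1), one untested possibility would be to make the NODENAME_REGEX of both machine entries also match the shared login nodes. The regex values below are made-up placeholders:

```xml
<machine MACH="greenplanet-sib2.9">
  <!-- placeholder regex: match the shared login nodes as well as sib2.9 compute nodes -->
  <NODENAME_REGEX>gplogin.*|sib29-.*</NODENAME_REGEX>
</machine>
```

If both entries match the login nodes, CIME will presumably still pick one of them deterministically, so an explicit --mach would likely remain necessary; the question is whether case.setup would then stop rejecting the other machine.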
Does anyone have a reason to keep this file around? It is already creating confusion in the user community. Perhaps if we still need it we can add a banner at the top indicating that the file is deprecated.
The build of mpi-serial cases with the Intel compiler and DEBUG on is failing on Derecho in ccs_config_cesm0.0.84, with ctsm5.1.dev156-43-g84bab54dc in what will become ctsm5.1.dev157 (ESCOMP/CTSM#2269).
Two test cases that fail are:
ERS_D_Mmpi-serial_Ld5.1x1_brazil.I2000Clm50FatesCruRsGs.derecho_intel.clm-FatesCold
SMS_Lm3_D_Mmpi-serial.1x1_brazil.I2000Clm50FatesCruRsGs.derecho_intel.clm-FatesColdHydro
The build fails at the link step, as follows, with undefined references to MPI for mpich, which is odd because this is built with mpi-serial, so mpich shouldn't be anywhere in here.
model_only is True
- Building atm Library
Building atm with output to /glade/derecho/scratch/erik/tests_ctsm51d155derechofs/ERS_D_Mmpi-serial_Ld5.1x1_brazil.I2000Clm50FatesCruRsGs.derecho_intel.clm-FatesCold.GC.ctsm51d155derechofs_int/bld/atm.bldlog.231201-010530
datm built in 0.957645 seconds
Building cesm from /glade/work/erik/ctsm_worktrees/external_updates/components/cmeps/cime_config/buildexe with output to /glade/derecho/scratch/erik/tests_ctsm51d155derechofs/ERS_D_Mmpi-serial_Ld5.1x1_brazil.I2000Clm50FatesCruRsGs.derecho_intel.clm-FatesCold.GC.ctsm51d155derechofs_int/bld/cesm.bldlog.231201-010530
Component cesm exe build complete with 43 warnings
Building test for ERS in directory /glade/derecho/scratch/erik/tests_ctsm51d155derechofs/ERS_D_Mmpi-serial_Ld5.1x1_brazil.I2000Clm50FatesCruRsGs.derecho_intel.clm-FatesCold.GC.ctsm51d155derechofs_int
ld: /opt/cray/pe/mpich/8.1.25/ofi/intel/19.0/lib/libmpi_intel.so.12: undefined reference to `fi_strerror@FABRIC_1.0'
ld: /opt/cray/pe/mpich/8.1.25/ofi/intel/19.0/lib/libmpi_intel.so.12: undefined reference to `fi_fabric@FABRIC_1.1'
ld: /opt/cray/pe/mpich/8.1.25/ofi/intel/19.0/lib/libmpi_intel.so.12: undefined reference to `fi_getinfo@FABRIC_1.3'
ld: /opt/cray/pe/mpich/8.1.25/ofi/intel/19.0/lib/libmpi_intel.so.12: undefined reference to `fi_dupinfo@FABRIC_1.3'
ld: /opt/cray/pe/mpich/8.1.25/ofi/intel/19.0/lib/libmpi_intel.so.12: undefined reference to `fi_freeinfo@FABRIC_1.3'
ld: /opt/cray/pe/mpich/8.1.25/ofi/intel/19.0/lib/libmpi_intel.so.12: undefined reference to `fi_version@FABRIC_1.0'
I can see references for mpich in my software_env.txt for my case, which seems odd...
software_environment.txt:LMOD_SYSTEM_DEFAULT_MODULES=ncarenv/23.09:craype/2.7.23:intel/2023.2.1:ncarcompilers/1.0.0:cray-mpich/8.1.27:netcdf/4.9.2
software_environment.txt:PBS_O_PATH=/glade/u/apps/derecho/23.06/spack/opt/spack/netcdf/4.9.2/oneapi/2023.0.0/iijr/bin:/glade/u/apps/derecho/23.06/spack/opt/spack/hdf5/1.12.2/oneapi/2023.0.0/d6xa/bin:/glade/u/apps/derecho/23.06/spack/opt/spack/ncarcompilers/1.0.0/oneapi/2023.0.0/ec7b/bin/mpi:/opt/cray/pe/pals/1.2.11/bin:/opt/cray/libfabric/1.15.2.0/bin:/opt/cray/pe/mpich/8.1.25/ofi/intel/19.0/bin:/opt/cray/pe/mpich/8.1.25/bin:/glade/u/apps/derecho/23.06/spack/opt/spack/ncarcompilers/1.0.0/oneapi/2023.0.0/ec7b/bin:/glade/u/apps/common/23.04/spack/opt/spack/intel-oneapi-compilers/2023.0.0/compiler/2023.0.0/linux/lib/oclfpga/bin:/glade/u/apps/common/23.04/spack/opt/spack/intel-oneapi-compilers/2023.0.0/compiler/2023.0.0/linux/bin/intel64:/glade/u/apps/common/23.04/spack/opt/spack/intel-oneapi-compilers/2023.0.0/compiler/2023.0.0/linux/bin:/opt/cray/pe/craype/2.7.20/bin:/glade/u/apps/derecho/23.06/opt/utils/bin:/opt/clmgr/sbin:/opt/clmgr/bin:/opt/sgi/sbin:/opt/sgi/bin:/glade/u/home/erik/bin:/usr/sbin:/opt/c3/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin:/opt/pbs/bin:/glade/u/apps/derecho/23.06/opt/bin:/usr/local/bin:/usr/bin:/sbin:/bin:/opt/cray/pe/bin
Is there a correct config_machines.xml file available for Perlmutter? Some of the modules in the current file are outdated.
The path to the nag_mpi_argument.txt file needs to be updated; it currently assumes a path under cime.
Here's how I changed it, assuming ccs_config is parallel to the CIMEROOT directory. That might not be the best way to handle this...
diff --git a/machines/config_compilers.xml b/machines/config_compilers.xml
index 0aa7cda..1da28b1 100644
--- a/machines/config_compilers.xml
+++ b/machines/config_compilers.xml
@@ -357,7 +357,7 @@ using a fortran linker.
<FFLAGS>
<!-- The indirect flag below is to deal with MPI functions that violate -->
<!-- the Fortran standard, by adding a large set of arguments from a file. -->
- <base>-Wp,-macro=no_com -convert=BIG_ENDIAN -indirect $ENV{CIMEROOT}/config/cesm/machines/nag_mpi_argument.txt</base>
+ <base>-Wp,-macro=no_com -convert=BIG_ENDIAN -indirect $ENV{CIMEROOT}/../ccs_config/machines/nag_mpi_argument.txt</base>
<!-- DEBUG vs. non-DEBUG runs. -->
<append DEBUG="FALSE"> -ieee=full -O2 </append>
<!-- The "-gline" option is nice, but it doesn't work with OpenMP. -->
There are some files for different grids with mizuRoute that need to be updated:
<!-- hcru:-->
./xmlchange LND_DOMAIN_MESH='$DIN_LOC_ROOT/share/meshes/360x720_120830_ESMFmesh_c20210507_cdf5.nc'
./xmlchange ATM_DOMAIN_MESH='$DIN_LOC_ROOT/share/meshes/360x720_120830_ESMFmesh_c20210507_cdf5.nc'
<!-- MERIT: -->
./xmlchange ROF_DOMAIN_MESH='$DIN_LOC_ROOT/rof/mizuRoute/meshes/polygon_centroid/MERITmz_global_ctrcrd_sqr_cdf5_ESMFmesh_c20230510.nc'
<!-- _amazon_rHDMA -->
./xmlchange --force LND2ROF_FMAPNAME='$DIN_LOC_ROOT/rof/mizuRoute/gridmaps/map_5x5_amazon_TO_HDMAmz_5x5_amazon_aave.201028.nc'
./xmlchange --force ROF2LND_FMAPNAME='$DIN_LOC_ROOT/rof/mizuRoute/gridmaps/map_HDMAmz_5x5_amazon_TO_5x5_amazon_aave.201028.nc'
CPU nodes have 128 MAX_TASKS_PER_NODE while GPU nodes have 64. How do we handle each case independently, and how do we handle the hybrid case?
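One conceivable shape for this would be a queue or partition selector on the existing setting. This is purely illustrative; no such attribute exists in the current schema:

```xml
<!-- hypothetical per-queue values; the queue attribute is not in the current schema -->
<MAX_TASKS_PER_NODE queue="main">128</MAX_TASKS_PER_NODE>
<MAX_TASKS_PER_NODE queue="gpu">64</MAX_TASKS_PER_NODE>
```

The hybrid case would still need a policy for jobs that span both node types.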
@briandobbins suggested this and I think it's a good idea.
The config_*.xml files are large and unwieldy. The proposal is to separate them by machine. So config_machines.xml will contain only the NODENAME_REGEX field for each machine and the rest of the content will be moved to a subdirectory by machine name.
machines/config_machines.xml
machines/derecho/config_machines.xml
machines/perlmutter/config_machines.xml
etc. I would implement this change in a backward-compatible manner so that the current config file format would continue to work. This would also be done for the config_batch.xml file.
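Under this proposal the top-level file might shrink to just the discovery entries, something like the sketch below (the regex values and version attribute are placeholders):

```xml
<!-- machines/config_machines.xml: only machine discovery remains at the top level -->
<config_machines version="3.0">
  <machine MACH="derecho">
    <NODENAME_REGEX>derecho.*</NODENAME_REGEX>
  </machine>
  <machine MACH="perlmutter">
    <NODENAME_REGEX>login.*perlmutter.*</NODENAME_REGEX>
  </machine>
  <!-- full definitions move to machines/<machine>/config_machines.xml -->
</config_machines>
```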
The build on hobart is failing because of a missing ESMF directory. See as follows...
Create namelist for component sesp
Calling /scratch/cluster/erik/ctsm5.1.dev096/cime/CIME/non_py/src/components/stub_comps_nuopc/sesp/cime_config/buildnml
2022-05-19 23:34:11 cpl
Create namelist for component drv
Calling /scratch/cluster/erik/ctsm5.1.dev096/components/cmeps/cime_config/buildnml
ERROR: ESMFMKFILE not found /home/dunlap/ESMF-INSTALL/8.0.0bs16/lib/libg/Linux.intel.64.mvapich2.default/esmf.mk
Now on Derecho there is an additional option to specify job priorities (-l job_priority=premium/regular/economy/preempt).
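In config_batch.xml terms, supporting this might look like the sketch below; the `<arg>` syntax and a $JOB_PRIORITY case variable are assumptions, not existing settings:

```xml
<!-- hypothetical: expose PBS job priority on derecho via a case variable -->
<submit_args>
  <arg flag="-l" name="job_priority=$JOB_PRIORITY"/>
</submit_args>
```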
Currently a case can be created with a fully resolved compset name and no compset alias,
but you cannot do this with a grid - the alias is required. We should be able to specify a fully
resolved grid for a case.
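For context, a fully resolved grid name is the long form that an alias expands to; an illustrative example (the component resolutions here are arbitrary) looks like:

```
a%ne30np4_l%ne30np4_oi%gx1v7_r%r05_g%null_w%null_m%gx1v7
```

Allowing --res to accept this long form directly would mirror what already works for compset long names.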
We need runoff mapping files for CESM-CISM's Antarctica grid, replacing the placeholders here:
We need to add a grid alias for fully coupled runs with mizuRoute to MOM. The proposed alias is:
f09_t232_rHDMAlk
So 1-degree CAM, t232 for MOM and the HDMA lake grid for mizuRoute.
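Following the model_grid entries shown earlier in this document, the alias might be declared roughly as follows. The component grid names, especially the MOM t232 grid name and the HDMA lake grid name, are guesses to be checked against component_grids_nuopc.xml:

```xml
<model_grid alias="f09_t232_rHDMAlk">
  <grid name="atm">0.9x1.25</grid>
  <grid name="lnd">0.9x1.25</grid>
  <grid name="ocnice">tx2_3v2</grid>
  <grid name="rof">HDMAlk</grid>
  <mask>tx2_3v2</mask>
</model_grid>
```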
The following meshes need to be added to component_grids_nuopc.xml for CAM nuopc tests:
Add a grid alias for new workhorse for F-cases (land, ocean, atm on same grid) with 2/3deg MOM6 mask:
ne30pg3_ne30pg3_mt232
The -xCORE-AVX2 Intel compiler option in Depends.intel for the PUMAS_MG_OBJS source files is causing issues for the Sandy Bridge and Ivy Bridge processors on Pleiades. The Intel AVX instruction settings in the cmake_macros/intel_*.cmake files should be sufficient. Is there a reason we should not remove the -xCORE-AVX2 setting from Depends.intel?
--- a/machines/Depends.intel
+++ b/machines/Depends.intel
@@ -46,6 +46,6 @@ ifeq ($(DEBUG),FALSE)
$(SHR_RANDNUM_C_OBJS): %.o: %.c
$(CC) -c $(INCLDIR) $(INCS) $(CFLAGS) -O3 -fp-model fast $<
$(PUMAS_MG_OBJS): %.o: %.F90
- $(FC) -c $(INCLDIR) $(INCS) $(FFLAGS) $(FREEFLAGS) -O3 -xCORE-AVX2 -no-fma -ftz -no-prec-sqrt -qoverride-limits -no-inline-max-total-size -inline-factor=200 -qopt-report=5 $<
+ $(FC) -c $(INCLDIR) $(INCS) $(FFLAGS) $(FREEFLAGS) -O3 -no-fma -ftz -no-prec-sqrt -qoverride-limits -no-inline-max-total-size -inline-factor=200 -qopt-report=5 $<
We have a calibrated site for FATES that we want to make available for CTSM/CESM users. This makes the site easier to use for users who haven't run with FATES previously, and it increases visibility of a FATES site that can be used by everyone. The site is at Barro Colorado Island in the middle of the Panama Canal. Technically it's part of US territory, and is a nature preserve.
The name above is from the convention of using lower case for the site or city name, followed by the country abbreviation in caps.
The start of the issue in CTSM is here...