
mpas's Issues

restart_timestamp does not match file name in V3

When saving restarts every three months, the restart_timestamp does not match any file name. It looks like a 30-day interval is being used rather than a true monthly interval. Are we allowed only single monthly intervals?

FYI, single yearly intervals work as expected.

mu-fe1.lanl.gov> pwd
/panfs/scratch3/vol16/mpeterse/runs/c35d
mu-fe1.lanl.gov> cat restart_timestamp 
 0005-04-01_00:00:00
mu-fe1.lanl.gov> ls restarts | cat
restart.0000-04-01_00.00.00.nc
restart.0000-06-30_00.00.00.nc
restart.0000-09-28_00.00.00.nc
restart.0000-12-27_00.00.00.nc
restart.0001-03-27_00.00.00.nc
restart.0001-06-25_00.00.00.nc
restart.0001-09-23_00.00.00.nc
restart.0001-12-22_00.00.00.nc
restart.0002-03-22_00.00.00.nc
restart.0002-06-20_00.00.00.nc
restart.0002-09-18_00.00.00.nc
restart.0002-12-17_00.00.00.nc
restart.0003-03-17_00.00.00.nc
restart.0003-06-15_00.00.00.nc
restart.0003-09-13_00.00.00.nc
restart.0003-12-12_00.00.00.nc
restart.0004-03-12_00.00.00.nc
restart.0004-06-10_00.00.00.nc
restart.0004-09-08_00.00.00.nc
restart.0004-12-07_00.00.00.nc
restart.0005-03-07_00.00.00.nc

in streams:

<immutable_stream name="restart"
                  type="input;output"
                  filename_template="restarts/restart.$Y-$M-$D_$h.$m.$s.nc"
                  filename_interval="output_interval"
                  reference_time="0000-01-01_00:00:00"
                  clobber_mode="truncate"
                  input_interval="initial_only"
                  output_interval="00-03-00_00:00:00"/>

seg fault in release-v2.0

I am getting a seg fault with the current version of release-v2.0 on a standard 120km global run. Develop, with the same IC and namelist, runs fine.

It must be a change in the last few commits.

mu0216.localdomain> pwd
/panfs/scratch3/vol16/mpeterse/runs/m69d

mu0216.localdomain> cat log.0000.err
Reading namelist.input

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image PC Routine Line Source
ocean_model_relea 00000000005C4F82 Unknown Unknown Unknown
ocean_model_relea 0000000000597310 Unknown Unknown Unknown
ocean_model_relea 000000000040BEE6 Unknown Unknown Unknown
ocean_model_relea 000000000040BDAB Unknown Unknown Unknown
ocean_model_relea 000000000040BD5C Unknown Unknown Unknown
libc.so.6 00002B9D7EBF1CDD Unknown Unknown Unknown
ocean_model_relea 000000000040BC69 Unknown Unknown Unknown

This is the last commit:

lo2-fe.lanl.gov> pwd
/usr/projects/climate/mpeterse/mpas_git/release-v2.0
lo2-fe.lanl.gov> git log | head -n 20
commit 01c479b
Author: Douglas Jacobsen [email protected]
Date: Tue Nov 5 09:51:30 2013 -0700

Removing LA-CC number from source files.

The LA-CC number can still be found in the LICENSE file.

commit fa6aff4
Author: Douglas Jacobsen [email protected]
Date: Tue Nov 5 11:08:22 2013 -0700

Cleaning up constituent numbers in registry.

Previously the number of constituents in Registry were not consistent
across all var_arrays.

commit 896a867
Author: Douglas Jacobsen [email protected]
Date: Tue Nov 5 13:12:50 2013 -0700

quad ocean grids not working

Quad grids do not work in the baroclinic channel and overflow test cases. See attached. Advection appears not to occur; cold water should be downslope at 6 hours.

My fear is that cells that are not 6-sided may not be working correctly on spherical grids.

I tested quad grids on the overflow in October 2012, and results were identical by eye to the hex grid. Advection settings for the run that worked were:
&advection
config_vert_tracer_adv = 'stencil'
config_vert_tracer_adv_order = 3
config_horiz_tracer_adv_order = 3
config_thickness_adv_order = 2
config_coef_3rd_order = 0.25
config_monotonic = .true.
config_check_monotonicity = .false.
/

@douglasjacobsen and @toddringler do you have any suggestions on isolating the problem by changing settings in the input file?

(attached image: m81t_overflow_quad_6hr)

ocean: boundary{Cell,Edge,Vertex}

It appears that the information contained in boundary{Cell,Edge,Vertex} is duplicated in {cell,edge,vertex}Mask.

{cell,edge,vertex}Mask is used throughout the ocean core.

boundary{Cell,Edge,Vertex} is never used (as best I can tell).

We should remove all occurrences of boundary{Cell,Edge,Vertex}.

Dimensions not available in mesh

I am defining two new dimensions in Registry_oac_epft.xml (see chunk of code below). However, only nBuoyancyLayers is available to me through mesh. I noticed that nVertLevelsP1 is also not available through mesh and that sometimes it just gets defined as nVertLevels + 1.

<dims>
  <dim name="nBuoyancyLayers" definition="namelist:config_nBuoyancyLayers"
         description="The number of buoyancy layers used for buoyancy coordinates."
  />
  <dim name="nBuoyancyLayersP1" definition="nBuoyancyLayers+1"
         description="The number of interfaces between buoyancy layers used for buoyancy coordinates."
  />
</dims>

Ocean loop over tracers

I don't know if there are more instances of this, but on line 832 of mpas_ocn_time_integration_split.F, the loop that reads

do i = 1, 2

is intended to loop over temperature and salinity. It should be modified to use the start and end indices of the array_group (dynamics) to be more robust.

It doesn't affect the current model in a negative way, but if the order of tracers is ever modified in Registry.xml, the answer will likely be wrong.
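
As a sketch of the suggested change (hypothetical index names; the real bounds would come from the registry-generated dynamics group indices):

    ! Minimal sketch, not the actual MPAS code: loop over an array-group index
    ! range instead of hard-coding 1 and 2, so reordering tracers in Registry.xml
    ! cannot silently change which fields are updated.
    program tracer_loop_sketch
       implicit none
       integer, parameter :: nVertLevels = 4, nCells = 3
       real :: tracersTend(2, nVertLevels, nCells)        ! (iTracer, k, iCell)
       integer :: startIndexDynamics, endIndexDynamics    ! hypothetical names
       integer :: iTracer

       tracersTend = 0.0
       startIndexDynamics = 1   ! in MPAS these would be registry-generated indices
       endIndexDynamics   = 2

       do iTracer = startIndexDynamics, endIndexDynamics
          tracersTend(iTracer, :, :) = tracersTend(iTracer, :, :) + 1.0
       end do
    end program tracer_loop_sketch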

config_pressure_gradient_type refers to undefined option "common_level_eos"

It seems that "common_level_eos" should actually be "Jacobian_from_TS"

I'm getting a seg-fault with the following stack trace:
#3 0x53B351 in __ocn_vel_pressure_grad_MOD_ocn_vel_pressure_grad_tend at mpas_ocn_vel_pressure_grad.F:274 (discriminator 32)
#4 0x4E25F0 in __ocn_tendency_MOD_ocn_tend_vel at mpas_ocn_tendency.F:278
#5 0x42ABA2 in __ocn_time_integration_split_MOD_ocn_time_integrator_split at mpas_ocn_time_integration_split.F:427
#6 0x41166E in __ocn_time_integration_MOD_ocn_timestep at mpas_ocn_time_integration.F:108
#7 0x40BC9D in __mpas_core_MOD_mpas_timestep at mpas_ocn_mpas_core.F:702
#8 0x40C538 in __mpas_core_MOD_mpas_core_run at mpas_ocn_mpas_core.F:643
#9 0x4088C7 in __mpas_subdriver_MOD_mpas_run at mpas_subdriver.F:142
#10 0x407A0A in mpas at mpas.F:16

This seems to occur because the inSituEOS package is not active. That package is currently only activated when config_pressure_gradient_type is set to 'common_level_eos'.

pgi compiler shows dependency of mpas_subdriver.F on cvmix*.mod

I corrected pgi line length errors from the release branch in my fork, branch:
pgi_line_length_coupled

On mustang, using the libraries

module purge
module load pgi mvapich2
module use --append /usr/projects/climate/SHARED_CLIMATE/modulefiles/all
module load netcdf/3.6.3
module load parallel-netcdf/1.3.1
module load pio/1.7.2

make pgi CORE=ocean

results in
PGF90-F-0004-Unable to open MODULE file cvmix_kinds_and_types.mod (mpas_subdriver.F: 14)
PGF90/x86-64 Linux 12.10-0: compilation aborted

If I put the following soft links in, the compile works:
mu-fe3.lanl.gov> pwd
/turquoise/usr/projects/climate/mpeterse/mpas_git/pgi_line_length_coupled/src/driver
mu-fe3.lanl.gov> ln -isf ../core_ocean/cvmix/*mod .

I looked for a dependence of mpas_subdriver.F on cvmix*, but I can't find it.

ocean: density listed as iro in ocean Registry

Density is listed as input (i) in Registry, but its value in the input file can be incorrect. Furthermore, density is always computed at init based on T, S, pressure, and the user-configured form of the EOS.

Having incorrect density values in the input file can be misleading to users. Maybe we should list density as simply (o, output) in Registry and remove it from input ocean.nc files.

restart timestamp should be changed after file write

Currently, the restart timestamp is written before the corresponding file is written to disk. If the run dies while writing the restart file, the timestamp points to a partial file.

Here is an example from a run that died mid-write:

ls -lh restarts/
-rw-rw-r-- 1 mpeterse mpeterse 28G Feb 11 21:24 restart.0000-01-23_23.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 28G Feb 11 21:29 restart.0000-01-24_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 11G Feb 11 21:34 restart.0000-01-24_01.00.00.nc

wf-fe1.lanl.gov> cat Restart_timestamp 
 0000-01-24_01:00:00
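
A minimal sketch of the safer ordering (hypothetical subroutine names, not the actual MPAS stream I/O calls): update restart_timestamp only after the restart file is complete on disk.

    program restart_order_sketch
       implicit none
       character(len=32) :: timeStamp

       timeStamp = '0000-01-24_01:00:00'

       call write_restart_file(timeStamp)        ! hypothetical: writes restart.<time>.nc
       call write_restart_timestamp(timeStamp)   ! hypothetical: updates restart_timestamp last

    contains

       subroutine write_restart_file(ts)
          character(len=*), intent(in) :: ts
          print *, 'writing restart file for ', trim(ts)
       end subroutine write_restart_file

       subroutine write_restart_timestamp(ts)
          character(len=*), intent(in) :: ts
          open(unit=11, file='restart_timestamp', status='replace')
          write(11, '(a)') trim(ts)
          close(11)
       end subroutine write_restart_timestamp

    end program restart_order_sketch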

output written incorrectly at every step

Beginning from a restart that was not aligned with the output interval, output is written at every time step.

wf-fe2.lanl.gov> pwd
/panfs/scratch3/vol16/mpeterse/runs/c37m
wf-fe2.lanl.gov> grep config_dt namelist.ocean_forward 
    config_dt = '00:08:00'
wf-fe2.lanl.gov> ls -lh output/output.0000-01-01_00.00.00.nc 
-rw-rw-r-- 1 mpeterse mpeterse 3.5T Dec  3 16:44 output/output.0000-01-01_00.00.00.nc
wf-fe2.lanl.gov> ncdump -v xtime output/output.0000-01-01_00.00.00.nc | tail
  "0000-10-28_23:28:00                                             ",
  "0000-10-28_23:28:00                                             ",
  "0000-10-28_23:36:00                                             ",
  "0000-10-28_23:36:00                                             ",
  "0000-10-28_23:44:00                                             ",
  "0000-10-28_23:44:00                                             ",
  "0000-10-28_23:52:00                                             ",
  "0000-10-28_23:52:00                                             ",
  "0000-10-29_00:00:00                                             " ;
}
wf-fe2.lanl.gov> cat Restart_timestamp 
 0000-10-28_00:00:00
wf-fe2.lanl.gov> ncdump -v xtime restarts/restart.0000-10-28_00.00.00.nc | tail -n 3
 xtime =
  "0000-10-28_00:00:00                                             " ;
}

At 3.5T, I just deleted the file.
Here are the relevant lines in the streams file:

<immutable_stream name="restart"
                  type="input;output"
                  filename_template="restarts/restart.$Y-$M-$D_$h.$m.$s.nc"
                  filename_interval="output_interval"
                  reference_time="0000-01-01_00:00:00"
                  clobber_mode="truncate"
                  input_interval="initial_only"
                  output_interval="00-00-10_00:00:00"/>

<stream name="output"
        type="output"
        filename_template="output/output.$Y-$M-$D_$h.$m.$s.nc"
        filename_interval="01-00-00_00:00:00"
        reference_time="0000-01-01_00:00:00"
        clobber_mode="truncate"
        output_interval="00-01-00_00:00:00">

Immutable streams cannot have the same filename_template

Commit 3f760ff allows mutable input streams to have the same filename template, but if I attempt to have two immutable input streams have the same filename_template, I get the following build-time error:

Reading registry file from standard input
ERROR: Streams input and inputHigherOrderVelocity have a conflicting filename template of landice_grid.nc.
Validation failed.....

This error occurs at src/registry/parse.c:645.

An easy workaround is to make one of my input streams mutable, so this is a low priority fix.

Dimensions of uBolusGMMeridional etc.

The dimensions of uBolusGMMeridional etc. are set to (nVertLevels, nEdges, Time), which I believe is incorrect. I am changing them to (nVertLevels, nCells, Time) in my ocean/gm branch, so the issue can be corrected when ocean/gm is pulled into the develop branch. Comments are welcome if we should proceed differently.

ocean: error in Registry description

file: core_ocean/Registry:

            <var name="vertDiffTopOfCell" type="real" dimensions="nVertLevelsP1 nCells Time" units="m^2 s^{-1}"
                 description="vertical diffusion defined at the edge (horizontally) and top (vertically)"

should read "vertical diffusion defined at the cell center (horizontally) and top (vertically)"

(maybe we can roll this in with the displacedDensity bug fix)

Problems with edgeTangentVector on a planar periodic grid

There seems to be a problem with edgeTangentVectors, at least in a previous version of develop. I observe these problems in a branch of mine that is based on an old version of develop, prior to pools:

https://github.com/jn4snz/MPAS/tree/ocean/twaFlow

I wrote code to test edgeTangentVector here, starting on line 211:
https://github.com/jn4snz/MPAS/blob/ocean/twaFlow/src/core_ocean/shared/mpas_ocn_vel_eddy_param.F

I calculate crossProductEdge = edgeNormalVectors x edgeTangentVectors using a grid from baroclinic_channel_10000m_20levs that I've passed through ocean init to modify IC and BC a bit.

Below is the third (vertical) component of the cross product. The tangent vector has magnitude 1 everywhere, and I am assuming that the normal vector is correct. Notice in the list below that I have masked the boundary edges with -1e+34. So there seems to be a problem with the tangent vector on interior edges.

I use the tangent vector to calculate the tangent bolus velocity, which is in the end what I need. I do this calculation by averaging xyz bolus velocities at cells on an edge to get the bolus velocity vector on an edge. Then this vector is multiplied by the normal and tangent vectors to get the normal and tangent components of the bolus velocity.

In order to rule out whether the problem originates in ocean init, I downloaded the baroclinic_channel_10000m_20levs directory and ran it with my namelist, i.e. I did not pass the grid file through ocean init. From a visual inspection of an ncdump, the cross product from this run is identical to the one below.

The namelist file that I use is pasted at the end.
I am attaching a figure where I masked boundary edges and drew edges with edgeTangentVector = 1 as magenta/reddish and everything else blue.

(attached screenshot: ss 0001)

output from ncdump -v test3OnEdge output.0000-01-01_00.00.00.nc
...
-1, -1e+34, -1e+34, 1, -1e+34, -1e+34, 1, -1e+34, -1e+34, 1, -1e+34,
-1e+34, 1, -1e+34, -1e+34, 1, -1e+34, -1e+34, 1, -1e+34, -1e+34, 1,
-1e+34, -1e+34, 1, -1e+34, -1e+34, 1, -1e+34, -1e+34, 1, -1e+34, -1e+34,
1, -1e+34, -1e+34, 1, -1e+34, -1e+34, 1, -1e+34, -1e+34, 1, -1e+34,
-1e+34, 1, -1e+34, -1e+34, -1, -0.450909638480581, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132438, -1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132438, -0.450909638480582, -1, -0.450909638480581, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132438, -0.450909638480582, -1, -0.450909638480582, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132438,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132438, -0.450909638480582, -1, -0.450909638480581, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132437, -0.450909638480581, -1, -0.450909638480583, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132438,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132437, -0.450909638480581, -1, -0.450909638480581, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132438, -0.450909638480583, -1, -0.450909638480581, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132438, -0.450909638480579, -1, -0.450909638480576, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132438, -0.450909638480579, -1, -0.450909638480579, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132438,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132437, -0.450909638480576, -1, -0.450909638480579, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132438,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132438, -0.450909638480579, -1, -0.450909638480576, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132438, -0.450909638480579, -1, -0.450909638480579, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132438,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132437, -0.450909638480576, -1, -0.450909638480579, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132438,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132438, -0.450909638480579, -1, -0.450909638480579, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132438,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132437, -0.450909638480576, -1, -0.450909638480576, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132439, -0.450909638480583, -1, -0.450909638480576, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132437, -0.450909638480576, -1, -0.450909638480583, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132439,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132437, -0.450909638480576, -1, -0.450909638480576, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132439, -0.450909638480583, -1, -0.450909638480576, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132437, -0.450909638480576, -1, -0.450909638480583, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132439,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132437, -0.450909638480576, -1, -0.450909638480576, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132439, -0.450909638480583, -1, -0.450909638480576, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132437, -0.450909638480576, -1, -0.450909638480583, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132439,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132437, -0.450909638480576, -1, -0.450909638480576, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
-0.856564761132439, -0.450909638480583, -1, -0.450909638480576, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -0.856564761132437,
-1e+34, -1e+34, -1e+34, -1e+34, -1e+34, -1e+34, -1e+34, -1e+34, -1e+34,
-1e+34, -1e+34, -1e+34, -1e+34, -1e+34, -1e+34, -1e+34, -1e+34, -1e+34,
-1e+34, -1e+34, -1e+34, -1e+34, -1e+34, -1e+34, -1e+34, -1e+34, -1e+34,
-1e+34, -1e+34, -1e+34, -1e+34, -1e+34 ;

namelist.ocean_forward:

&time_management
config_do_restart = .false.
config_start_time = "0000-01-01_00:00:00"
config_stop_time = "none"
config_run_duration = '0_00:00:1200'
config_calendar_type = "360day"
/
&io
config_input_name = 'step5_ocean.nc'
config_output_name = 'output.nc'
config_restart_name = 'restart.nc'
config_restart_timestamp_name = "restart_timestamp"
config_restart_interval = "060_00:00:00"
config_output_interval = '0_00:00:600'
config_stats_interval = "1_00:00:00"
config_write_stats_on_startup = .true.
config_write_output_on_startup = .true.
config_frames_per_outfile = 1000
config_pio_num_iotasks = 0
config_pio_stride = 1
/
&time_integration
config_dt = 600.0000
config_time_integrator = 'split_explicit'
/
&ALE_vertical_grid
config_vert_coord_movement = 'uniform_stretching'
config_use_min_max_thickness = .false.
config_min_thickness = 1.0
config_max_thickness_factor = 6.0
config_set_restingThickness_to_IC = .true.
config_dzdk_positive = .false.
/
&ALE_frequency_filtered_thickness
config_use_freq_filtered_thickness = .false.
config_thickness_filter_timescale = 5.0
config_use_highFreqThick_restore = .false.
config_highFreqThick_restore_time = 30.0
config_use_highFreqThick_del2 = .false.
config_highFreqThick_del2 = 100.0
/
&partial_bottom_cells
config_alter_ICs_for_pbcs = .true.
config_pbc_alteration_type = "full_cell"
config_min_pbc_fraction = 0.10
config_check_ssh_consistency = .true.
/
&decomposition
config_num_halos = 3
config_block_decomp_file_prefix = "graph.info.part."
config_number_of_blocks = 0
config_explicit_proc_decomp = .false.
config_proc_decomp_file_prefix = "graph.info.part."
/
&hmix
config_hmix_ScaleWithMesh = .false.
config_maxMeshDensity = -1.0
config_apvm_scale_factor = 0.0
/
&hmix_del2
config_use_mom_del2 = .false.
config_use_tracer_del2 = .false.
config_mom_del2 = 10.0
config_tracer_del2 = 10.0
/
&hmix_del4
config_use_mom_del4 = .true.
config_use_tracer_del4 = .false.
config_mom_del4 = 1.5e10
config_tracer_del4 = 0.0
/
&hmix_Leith
config_use_Leith_del2 = .false.
config_Leith_parameter = 1.0
config_Leith_dx = 15000.0
config_Leith_visc2_max = 2.5e3
/
&twaFlow
config_use_twa_flow = .true.
config_use_passive_twa_flow = .false.
/
&mesoscale_eddy_parameterization
config_use_standardGM = .true.
config_use_Redi_diffusion = .false.
config_standardGM_tracer_kappa = 1200
config_Redi_kappa = 300
config_diapycnal_diff = 0
config_gravWaveSpeed_trunc = 0.3
/
&hmix_del2_tensor
config_use_mom_del2_tensor = .false.
config_mom_del2_tensor = 10.0
/
&hmix_del4_tensor
config_use_mom_del4_tensor = .false.
config_mom_del4_tensor = 5.0e13
/
&Rayleigh_damping
config_Rayleigh_friction = .false.
config_Rayleigh_damping_coeff = 0.0
/
&vmix
config_convective_visc = 1.0
config_convective_diff = 1.0
/
&vmix_const
config_use_const_visc = .true.
config_use_const_diff = .true.
config_vert_visc = 0.0001
config_vert_diff = 0.0001
/
&vmix_rich
config_use_rich_visc = .true.
config_use_rich_diff = .true.
config_bkrd_vert_visc = 1.0e-4
config_bkrd_vert_diff = 1.0e-5
config_rich_mix = 0.005
/
&vmix_tanh
config_use_tanh_visc = .false.
config_use_tanh_diff = .false.
config_max_visc_tanh = 2.5e-1
config_min_visc_tanh = 1.0e-4
config_max_diff_tanh = 2.5e-2
config_min_diff_tanh = 1.0e-5
config_zMid_tanh = -100
config_zWidth_tanh = 100
/
&cvmix
config_use_cvmix = .false.
config_cvmix_prandtl_number = 1.0
config_use_cvmix_background = .false.
config_cvmix_background_diffusion = 1.0e-5
config_cvmix_background_viscosity = 1.0e-4
config_use_cvmix_convection = .false.
config_cvmix_convective_diffusion = 1.0
config_cvmix_convective_viscosity = 1.0
config_use_cvmix_kpp = .false.
config_cvmix_kpp_criticalBulkRichardsonNumber = 0.25
config_cvmix_kpp_interpolationOMLType = "quadratic"
/
&forcing
config_forcing_type = "restoring"
config_restoreT_timescale = 20.0
config_restoreS_timescale = 90.0
config_restoreT_lengthscale = 50.0
config_restoreS_lengthscale = 50.0
config_flux_attenuation_coefficient = 0.001
config_frazil_ice_formation = .false.
config_sw_absorption_type = "jerlov"
config_jerlov_water_type = 3
config_fixed_jerlov_weights = .true.
/
&advection
config_vert_tracer_adv = "stencil"
config_vert_tracer_adv_order = 3
config_horiz_tracer_adv_order = 3
config_coef_3rd_order = 0.25
config_monotonic = .true.
/
&bottom_drag
config_bottom_drag_coeff = 1.0e-2
/
&pressure_gradient
config_pressure_gradient_type = "pressure_and_zmid"
config_density0 = 1014.65
/
&eos
config_eos_type = 'linear'
/
&eos_linear
config_eos_linear_alpha = 2.55e-1
config_eos_linear_beta = 7.64e-1
config_eos_linear_Tref = 19.0
config_eos_linear_Sref = 35.0
config_eos_linear_densityref = 1025.022
/
&split_explicit_ts
config_n_ts_iter = 2
config_n_bcl_iter_beg = 1
config_n_bcl_iter_mid = 2
config_n_bcl_iter_end = 2
config_n_btr_subcycles = 20
config_n_btr_cor_iter = 2
config_vel_correction = .true.
config_btr_subcycle_loop_factor = 2
config_btr_gam1_velWt1 = 0.5
config_btr_gam2_SSHWt1 = 1.0
config_btr_gam3_velWt2 = 1.0
config_btr_solve_SSH2 = .false.
/
&testing
config_conduct_tests = .false.
config_test_tensors = .false.
config_tensor_test_function = "sph_uCosCos"
/
&debug
config_check_zlevel_consistency = .false.
config_filter_btr_mode = .false.
config_prescribe_velocity = .false.
config_prescribe_thickness = .false.
config_include_KE_vertex = .false.
config_check_tracer_monotonicity = .false.
config_disable_thick_all_tend = .false.
config_disable_thick_hadv = .false.
config_disable_thick_vadv = .false.
config_disable_thick_sflux = .false.
config_disable_vel_all_tend = .false.
config_disable_vel_coriolis = .false.
config_disable_vel_pgrad = .false.
config_disable_vel_hmix = .false.
config_disable_vel_windstress = .false.
config_disable_vel_vmix = .false.
config_disable_vel_vadv = .false.
config_disable_tr_all_tend = .false.
config_disable_tr_adv = .false.
config_disable_tr_hmix = .false.
config_disable_tr_vmix = .false.
config_disable_tr_sflux = .false.
/
&oac_global_stats
config_oac_global_stats = .false.
/
&oac_epft
config_oac_epft = .false.
config_rhomin_buoycoor = 1026.0
config_rhomax_buoycoor = 1028.0
config_oac_epft_debug = .false.
/

add i/o timers

Looking at the timer data written to log.*.out, I noticed that there is no listing of read or write operations specifically. It would be nice to include that, so we can track performance as we use different i/o configurations. For example, as you change the flags

    config_pio_num_iotasks = 0
    config_pio_stride = 1

Also, Jim Ahrens is interested in the cost of i/o for MPAS, for his studies on state storage versus recompute. It would be nice to have numbers on i/o time for that.
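
As a self-contained illustration (plain Fortran timing, not the MPAS timer API), measuring a write this way is roughly what the new i/o timers would need to report:

    ! Minimal sketch: time a write the way a dedicated i/o timer would, so the
    ! cost can be compared across different PIO settings.
    program io_timer_sketch
       implicit none
       integer :: count0, count1, countRate
       real :: buffer(1000000), elapsed

       call random_number(buffer)
       call system_clock(count0, countRate)

       open(unit=21, file='io_timer_demo.bin', form='unformatted', status='replace')
       write(21) buffer
       close(21)

       call system_clock(count1)
       elapsed = real(count1 - count0) / real(countRate)
       print '(a,f8.4,a)', 'io_write: ', elapsed, ' s'
    end program io_timer_sketch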

exact restarts don't work when KPP is on

Using tag v3.3, when I run 10 days versus 5+5 days with a restart, I get bit-for-bit agreement when cvmix is off. When cvmix is on with the namelist below, it is not bit-for-bit, and there is a mismatch in digit 7 of the global kinetic energy.

I see these variables in the restart file for KPP:

                        <var name="surfaceWindStress"/>
                        <var name="seaSurfacePressure"/>
                        <var name="boundaryLayerDepth"/>

and I confirmed that they are in the restart file for my bfb test.

My runs are at:

/lustre/scratch1/turquoise/mpeterse/runs/c45s  10 days
/lustre/scratch1/turquoise/mpeterse/runs/c45t  first 5 day
/lustre/scratch1/turquoise/mpeterse/runs/c45u  restart, second five day

and here is my namelist for cvmix:

&cvmix
    config_use_cvmix = .true.
    config_cvmix_prandtl_number = 1.0
    config_use_cvmix_background = .true.
    config_cvmix_background_diffusion = 1.0e-5
    config_cvmix_background_viscosity = 1.0e-4
    config_use_cvmix_convection = .true.
    config_cvmix_convective_diffusion = 1.0
    config_cvmix_convective_viscosity = 1.0
    config_cvmix_convective_basedOnBVF = .true.
    config_cvmix_convective_triggerBVF = 0.0
    config_use_cvmix_shear = .true.
    config_cvmix_shear_mixing_scheme = 'KPP'
    config_cvmix_shear_PP_nu_zero = 0.005
    config_cvmix_shear_PP_alpha = 5.0
    config_cvmix_shear_PP_exp = 2.0
    config_cvmix_shear_KPP_nu_zero = 0.005
    config_cvmix_shear_KPP_Ri_zero = 0.7
    config_cvmix_shear_KPP_exp = 3
    config_use_cvmix_tidal_mixing = .false.
    config_use_cvmix_double_diffusion = .false.
    config_use_cvmix_kpp = .true.
    config_cvmix_kpp_niterate = 2
    config_cvmix_kpp_criticalBulkRichardsonNumber = 0.25
    config_cvmix_kpp_matching = 'SimpleShapes'
    config_cvmix_kpp_EkmanOBL = .false.
    config_cvmix_kpp_MonObOBL = .false.
    config_cvmix_kpp_interpolationOMLType = 'quadratic'
    config_cvmix_kpp_surface_layer_extent = 0.1
/

Registry variable type that is not necessarily allocated but can be part of I/O

Hi,

The MPAS-CICE core will eventually have optional physics packages. At the moment there is no variable type in the registry that is not necessarily allocated (like scratch) but can also be read in/out (like persistent). Such a variable type is necessary for optional physics packages like BGC, where most of the time you don't want it switched on and using memory, but when it is on you want to write out the results. Presumably the ocean core will want this functionality when it adds BGC.

Cheers,
Adrian

Registry not generating calls to create_*_links()

There appears to be a bug in the registry that causes calls to create_<variable_group>_links() not to be generated for any variable group in the registry that has <=1 field in it, and for any variable group that follows a group with <=1 field.

Example:

        <var_struct name="diag_physics" time_levs="1">
                <var name="relhum"        type="real"     dimensions="nVertLevels nCells Time"     streams="o"/>
        </var_struct>

        <var_struct name="tend_physics" time_levs="1">
                <!-- ================================================================================================== -->
                <!--  Tendency arrays for CAM phys                                                                      -->
                <!-- ================================================================================================== -->
                <var name="u_cam_tend"      name_in_code="u"       type="real"   dimensions="nVertLevels nEdges Time" streams="ro" />
                <var name="ux_cam_tend"     name_in_code="ux"      type="real"   dimensions="nVertLevels nCells Time" streams="ro" />
                <var name="uy_cam_tend"     name_in_code="uy"      type="real"   dimensions="nVertLevels nCells Time" streams="ro" />
                <var name="theta_cam_tend"  name_in_code="theta"   type="real"   dimensions="nVertLevels nCells Time" streams="ro" />
                <var_array name="scalars" type="real" dimensions="nVertLevels nCells Time">
                        <var name="qv_cam_tend"     name_in_code="qv"   array_group="moist" streams="o" />
                        <var name="qc_cam_tend"     name_in_code="qc"   array_group="moist" streams="o" />
                        <var name="qr_cam_tend"     name_in_code="qr"   array_group="moist" streams="o" />
                </var_array>
        </var_struct>

With these entries at the bottom of a Registry.xml file, no calls to create_diag_physics_links() or create_tend_physics_links() are generated in the file src/inc/field_links.inc.

Timers wrapping tridiagonal solves in ocean core are expensive

This was reported to me by Pat Worley. Below are excerpts from the emails he sent me:

Email 1

I've reproduced the result by the Maryland person (my apologies that I never picked up on which of Jeff's associates did this work).

For version 2.0 and for the ocean_QU_60km benchmark on 512 processors (on eos - an XC30 somewhat similar to Edison, but with 16 cores per node instead of 24), commenting out the timer calls for

       tridiagonal_solve
       tridiagonal_solve_mult

decreased the time for a 10 timestep run from 148 seconds to 85 seconds. In particular, the time spent in the routine

       ocn_vmix_implicit

(called from within ocn_time_integrator_split) dropped from 55.6 seconds to 2.7 seconds.

(Doug, you may want to comment out these timers before using 2.0 in scaling studies or for production runs.)


Email 2

Note that I also ran with GPTL timers inside of the native timer routines followed immediately by "returns" (so same number of timer calls, and same number of MPI_WTIME calls within the timer calls, but with the rest of the native timer logic disabled). Performance results for process 0 were as follows. (Results for other processes were qualitatively similar.)

original

ocn_vmix_implicit: 55.6

GPTL timers with native timers disabled

ocn_vmix_implicit: 2.81

GPTL timers with native timers disabled and without timers for tridiagonal_solve and tridiagonal_solve_mult

ocn_vmix_implicit: 2.69

So the high overhead of the native timers was not due to the routine call overhead or due to the MPI_WTIME cost.

These particular timers did not use the "pointer" option to eliminate the need for the linked list search in the native timers. Perhaps this would also eliminate the problem. I won't be doing this experiment however, as I am more comfortable with the GPTL timers anyway. Of course, the problem is also eliminated by commenting out the timers for the tridiagonal solve routines, which is the "real" solution.

FYI.

Pat

mpas_pool_destroy_pool extra if option

In mpas_pool_destroy_pool in pool_subroutines.inc, the "else if (associated(dptr % r0a)) then" branch appears twice. It appears that the second branch is the correct one, but the first one gets executed.
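
A standalone sketch of the pattern (not the generated pool_subroutines.inc code) shows why the duplicated condition makes the second branch unreachable:

    program duplicate_branch_sketch
       implicit none
       real, pointer :: r1a(:) => null()
       real, pointer :: r0a(:) => null()

       allocate(r0a(1))

       ! The second test on r0a can never be reached: the first identical test
       ! already catches the case, so whatever cleanup the second branch was
       ! supposed to do is dead code.
       if (associated(r1a)) then
          deallocate(r1a)
       else if (associated(r0a)) then
          print *, 'first r0a branch taken'
          deallocate(r0a)
       else if (associated(r0a)) then
          print *, 'never reached: duplicated condition'
       end if
    end program duplicate_branch_sketch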

Some compilers have issues traversing multiple pointer levels

This issue was reported to me by Abhinav but was found by members of the SUPER SciDAC institute.

Email

I'm attaching to this message three reports by Linda on some experiments she has done on Edison. Here's my current understanding of her experiments:

  • The code in the excerpt from the report suffers in performance from a lot of indirection in accessing arrays. It is the indirection through the complex structure that is hurting performance. By copying into an array that is used inside the loop, Linda saw a speedup of almost 2X. I believe this is for a single core.
  • She then discovered that she could use a Fortran pointer type object to point to this array rather than doing the copy, which is the subject of the later reports.

We were thinking a copy might be needed in some places where we might be able to reorganize the data and improve both locality and simplicity of the data structure. It appears that is not needed here, but would be needed in cases where we could amortize the cost of the copy with significant performance gains due to locality.


See additional information in this report: https://www.dropbox.com/s/imgm72kcureelqf/pointer_chasing_expense.pdf

Pool error in shallow water core

When running the shallow water core, the following errors turn up:

Initial timestep 0000-01-01_00:00:00
Doing timestep 0000-01-01_00:15:00
Error: Field pv_edge has too few time levels.
Error: Field pv_edge has too few time levels.
Error: Field pv_edge has too few time levels.
Error: Field pv_edge has too few time levels.

MPAS-A output diagnostics

If output is requested after every timestep in MPAS-A, the output diagnostic fields are not calculated correctly. This appears to be a bug in the way the 'l_diags' variable is set in mpas_atmphys_manager.F.

Cannot have multiple groups with >1 time level

There exists a bug in the registry infrastructure that prevents one from declaring another group of variables with two or more time levels. The cause of the problem appears to be that the gen_inc.c code creates a variable named "provis" for any group with >1 time levels, which causes errors due to the re-declaration of provis for the second variable group.

The bug appears to have been introduced in revision 2090 in the svn repository with a set of changes that were intended to support multiple blocks.

KE from surface wind stress does not match older runs

@douglasjacobsen @toddringler

The new ocean core surface wind stress variable is not forcing the model as expected.

I expect the following two runs to be nearly identical. Instead, max KE is 2x higher for the second run after 6 hours of a 120km global run; see the attached image (m69uv_ke_6hrs).

m69u: develop, commit 05ffee Thu Oct 24, uses normalVelocityForcing
m69v: release-v2.0, commit 7d9572 Thu Nov 7 13:41, uses surfaceWindStress

lo2-fe.lanl.gov> pwd
/panfs/scratch3/vol16/mpeterse/runs
lo2-fe.lanl.gov> diff m69u/namelist.input m69v/namelist.input
120,123c120,124
< config_restoreTS = .true.
< config_restoreT_timescale = 30.0
< config_restoreS_timescale = 30.0
< config_use_coupled_forcing = .false.
---
> config_forcing_type = "restoring"
> config_restoreT_timescale = 30.0
> config_restoreS_timescale = 30.0
> config_restoreT_lengthscale = 10.01244
> config_restoreS_lengthscale = 10.01244
125a127,129
> config_sw_absorption_type = "jerlov"
> config_jerlov_water_type = 3
> config_fixed_jerlov_weights = .true.

The second run (m69v) also uses:
config_flux_attenuation_coefficient = 0.001

lo2-fe.lanl.gov> ncdump -h m69u/grid.nc | grep -i forcing
double normalVelocityForcing(nEdges, nVertLevels) ;
lo2-fe.lanl.gov> ncdump -h m69v/grid.nc | grep -i surfaceWindStress
double surfaceWindStress(nEdges) ;

These have the same values in the top layer of normalVelocityForcing.

scaleWithMesh

This is an ocean model bug (it might also show up in the atmosphere).

We have a config variable "config_hmix_ScaleWithMesh" that allows us to scale the del2 and del4 mixing coefficients based on mesh density. The problem is that this scaling assumes max(meshDensity) == 1. This is nice because then we set the del2 and del4 values based on the high-resolution region.

Recently we have been building meshes where meshDensity >> 1 and the max value is not particularly intuitive.

So I think we need to normalize meshDensity, either when it is output into grid.nc or within the core itself at runtime. The latter would be preferable, but would require an MPI global max operation.
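
A minimal sketch of the runtime option (plain MPI calls rather than the MPAS dmpar wrappers) is shown below; the global max is taken once at init and then used to rescale meshDensity:

    program normalize_mesh_density_sketch
       use mpi
       implicit none
       integer :: ierr
       real(kind=8) :: meshDensity(5), localMax, globalMax

       call MPI_Init(ierr)
       meshDensity = (/ 1.0d0, 4.0d0, 16.0d0, 64.0d0, 256.0d0 /)   ! made-up values

       ! one global max operation at init ...
       localMax = maxval(meshDensity)
       call MPI_Allreduce(localMax, globalMax, 1, MPI_DOUBLE_PRECISION, MPI_MAX, &
                          MPI_COMM_WORLD, ierr)

       ! ... then normalize so that max(meshDensity) == 1 on every process
       meshDensity = meshDensity / globalMax

       call MPI_Finalize(ierr)
    end program normalize_mesh_density_sketch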

Thoughts?

.gitignore in MPAS directory

Now that we have removed the .exe suffix from executables, our .gitignore file should be updated.

Executables

*.exe
src/registry/parse

Maybe we just want to list out the executables that should be ignored, or ignore all *_model, or something else.

location of executable

After, say, "make ifort CORE=ocean", we put the executable called "ocean_model.exe" into "src", but a symbolic link to ocean_model.exe exists at the level of the Makefile.

This can be confusing. On some systems, the link at the level of the Makefile looks just like an executable, but if you copy it and try to run it, it fails. A new user will most likely miss the fact that the executable is really in the src directory.

I propose that we produce one version of "core_model.exe" and put it at the level of the Makefile.

Defining streams on constituents of variable arrays does not set streams correctly.

Within the Registry.xml file, constituents of variable arrays have the streams attribute, so they can be set to independent streams if the developer chooses.

e.g.

<var_array name="surfaceTracerFlux" type="real" dimensions="nCells Time">
    <var name="surfaceTemperatureFlux" array_group="dynamics" units="^\circ C m s^{-1}" streams="o"
           description="Flux of temperature through the ocean surface. Positive into ocean."
    />
    <var name="surfaceSalinityFlux" array_group="dynamics" units="PSU m s^{-1}" streams="o"
           description="Flux of salinity through the ocean surface. Positive into ocean."
    />
    <var name="surfaceTracer1Flux" array_group="testing" units="percent"
            description="Flux of tracer1 through the ocean surface. Positive into ocean."
    />
</var_array>

While the first two constituents have streams="o", the last constituent does not define the streams attribute (which defaults to meaning it should not be part of any stream). However, the actual behavior is that none of these constituents is written to the output stream, because the last constituent defines the behavior for the rest of the constituents.

incorrect dimensions in subroutine call in mpas_ocn_global_diagnostics.F

From Rob Lowrie:

I should probably send this to the ocean list on github, but I wasn't sure which one:

I'm getting a segfault for certain parallel cases....here's my debug session on a certain proc:

Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x0000000106f95000
0x000000010010aac1 in __ocn_global_diagnostics_MOD_ocn_compute_field_volume_weighted_local_stats_max_level (dminfo={nprocs = 4, my_proc_id = 2, comm = 0, info = 0, using_external_comm = false}, nvertlevels=4, nelements=13111, maxlevel=0x101ca8a00, areas=0x101c58800, layerthickness={{666.66666666666663}}, field=0x108db7000, localsum=0, localmin=0, localmax=0, localvertsummin=0, localvertsummax=0) at mpas_ocn_global_diagnostics.f90:580
580 thicknessWeightedColSum = sum(layerThickness(1:maxLevel(elementIndex),elementIndex)*field(1:maxLevel(elementIndex),elementIndex))
(gdb) p elementIndex
No symbol "elementIndex" in current context.
(gdb) p elementindex
$1 = 10497
(gdb) p maxlevel(elementindex)
Invalid data type for function to be called.
(gdb) p maxlevel[elementindex]
$2 = 3
(gdb) p nvertlevels
$3 = 4
(gdb) p maxlevel[elementindex-1]
$4 = 3
(gdb) up
#1 0x000000010011fdcb in __ocn_global_diagnostics_MOD_ocn_compute_global_diagnostics (domain={blocklist = 0x101548af0, dminfo = 0x101509770, modelname = "mpas", ' ' <repeats 508 times>, corename = "ocean", ' ' <repeats 507 times>, modelversion = "0.0.0", ' ' <repeats 507 times>, history = ' ' <repeats 1024 times>}, timelevel=1, timeindex=0, dt=16) at mpas_ocn_global_diagnostics.f90:224

224 verticalSumMaxes_tmp(variableIndex))

If you look at mpas_ocn_global_diagnostics.f90:224, it's passing in "nVertLevels+1", yet the routine it's calling is taking this as nVertLevels and dimensioning things like layerThickness. So it looks like a bug, right?

This was a 4-proc job. 16 procs runs fine, but 8 and 2 procs also fail.

Thanks,
-Rob
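
For reference, a generic, self-contained sketch of the mismatch Rob describes (not the actual diagnostics routine): passing nVertLevels+1 as the leading extent makes the callee reinterpret the array shape, so indexed sums silently mix columns or walk off the end of the actual argument.

    program dim_mismatch_sketch
       implicit none
       integer, parameter :: nVertLevels = 4, nElements = 10
       real :: field(nVertLevels, nElements)

       field = 1.0
       ! passing nVertLevels+1 instead of nVertLevels is the suspected bug
       call column_sum(nVertLevels + 1, nElements, field)

    contains

       subroutine column_sum(nLevels, nCols, a)
          integer, intent(in) :: nLevels, nCols
          real, intent(in) :: a(nLevels, nCols)   ! wrong shape when nLevels is inflated
          ! sums 5 values for "column 1", one of which really belongs to column 2
          print *, sum(a(:, 1))
       end subroutine column_sum

    end program dim_mismatch_sketch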

Timers return all 0 times.

When running MPAS on my Mac, all timers report times of 0.

With advice from @douglasjacobsen , I changed

include 'mpif.h'

to

use mpi

in two locations in mpas_timer.F. This allowed the timers to behave as expected.

This is not a high priority problem, but ideally one would not have to modify the framework to get functioning timers.

MPAS commit 114b42e
openmpi 1.6.3 from macports
gfortran 4.6.3 from macports
netcdf 4.2.1.1 from macports
PIO 1.6.3
pnetcdf 1.3.1

computation of Brunt-Vaisala frequency and displacedDensity

This is a bug report with two threads. Qingshan identified this possible bug.

First, displacedDensity is computed in subroutine ocn_diagnostic_solve with the following code:


! compute displacedDensity, the potential density referenced to the top layer

call ocn_equation_of_state_density(s, grid, 1, 'relative', err)

The computed density is put into displacedDensity because the third argument is nonzero. While not a bug, the comment on line 1 and the executed code on line 2 are not at all consistent.

The result of this code is that displacedDensity(k-1) is computed at the pressure of k. This is fine.

The second thread (and the bug) is with the computation of BruntVaisalaFreqTop that is based on displacedDensity and is shown in the code below


!
! Brunt-Vaisala frequency
!
coef = -gravity/config_density0
do iCell = 1, nCells
   BruntVaisalaFreqTop(1,iCell) = 0.0
   do k = 2, maxLevelCell(iCell)
      BruntVaisalaFreqTop(k,iCell) = coef * (displacedDensity(k-1,iCell) - displacedDensity(k,iCell)) / (zMid(k-1,iCell) - zMid(k,iCell))
   end do
end do

This computation is based entirely on displacedDensity, whereas it should be comparing displacedDensity(k-1) to density(k). To that end, the proposed correction is

        BruntVaisalaFreqTop(k,iCell) = coef * (displacedDensity(k-1,iCell) - density(k,iCell)) &
          / (zMid(k-1,iCell) - zMid(k,iCell))

where displacedDensity at k-1 is compared to in-situ density at k.

(Alternatively, we can change 'relative' to 'absolute' in the displacedDensity computation. This then makes the comment and BruntVaisalaFreqTop correct.)
Comments?

MPAS Fails to build with gcc 4.8 series compilers

This is due to "improvements" to make the compiler more compliant with newer versions of the standards. It largely affects what is included in preprocessed files, and the "retention" of c/c++ style comments. Additional issues come from the fact that Fortran style string concatenation is the same as c-style comments.

You can read more about the work they did on the 4.8 series here. The specific section is: "Pre-processor pre-includes"

Essentially, cpp now (by default) includes <stdc-predef.h> on any linux system. This happens on any file we preprocess. In addition, we're currently passing in the -C flag, which when coupled with the inclusion of <stdc-predef.h> fills the header of Registry.xml with some 55 lines of c++ style block comments.

When these comments get added to the top of Registry.xml, parse can no longer parse the registry file.

There are a few solutions to this, and I'm not sure if any of them break compatibility with other systems/compilers. But since we use cpp when generating the .f90 files regardless of build target, this will eventually be an issue on other systems (large linux clusters).

The first solution is to add the -ffreestanding flag to our $(CPP) command. This prevents the inclusion of stdc-predef.h, and allows us to continue generating the .f90 files if we want to.

The second solution is to remove the -C flag; however, this has the unintended side effect of breaking Fortran string concatenation, since it is done through the // operator, which cpp treats as a C++-style comment. Currently the only place this occurs is in esmf, and that instance could easily be removed, but anyone concatenating strings like this in the future would need to be aware of the issue.
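
For example, a line like the one in this standalone sketch is exactly the kind of concatenation cpp would truncate at the // once comments are no longer preserved (illustrative only, not the esmf code):

    program concat_sketch
       implicit none
       character(len=32) :: routineName, message

       routineName = 'ocn_init'
       ! without -C, cpp can strip everything from '//' to the end of this line
       message = 'Error in ' // trim(routineName)
       print *, trim(message)
    end program concat_sketch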

Since we are using cpp in a "non-standard" way (preprocessing non-C or C++ files), the third solution is to stop using cpp on non-C/C++ files (i.e. Registry and .F or .F90 files). This would include removing the GEN_F90 options from the build system, and only allowing the compilers to preprocess the Fortran files (which they currently do in the case of GEN_F90=false).

This likely needs to be remedied quickly, as the 4.8 series is the newest compiler group, and the first group to include some OpenMP 4.0 implementations. More people are likely to be migrating to this compiler series, and would run into these same issues.

dump module mpas_constants to netcdf file

It would be a great feature enhancement for the code to dump variables in module mpas_constants to a netcdf file. Many times these constants are needed for postprocessing, and the best practice would be to read them in postprocessing scripts, rather than hard code them. Currently they have to be hardcoded as the constants are not available to be read.

Note that these constants can be written to netcdf files either as variables or attributes - both can be read in from matlab/python scripts. However, I am not sure if this can be a problem in software like Paraview.
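
A rough sketch of what this could look like (plain netcdf-fortran calls rather than MPAS I/O; the constant values here are just examples):

    program dump_constants_sketch
       use netcdf
       implicit none
       integer :: ncid, ierr
       real(kind=8), parameter :: gravity = 9.80616d0, omega = 7.29212d-5

       ! write the constants as global attributes so matlab/python scripts can
       ! read them instead of hard-coding the values
       ierr = nf90_create('mpas_constants.nc', NF90_CLOBBER, ncid)
       ierr = nf90_put_att(ncid, NF90_GLOBAL, 'gravity', gravity)
       ierr = nf90_put_att(ncid, NF90_GLOBAL, 'omega', omega)
       ierr = nf90_close(ncid)
    end program dump_constants_sketch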

MPAS freezes if 'Time' dimension in input file is not unlimited

If the 'Time' dimension in the input file is not unlimited (e.g. just has a size of 1, but without the unlimited condition enabled), MPAS freezes during init. The log simply says:
"Reading namelist.input"
with no additional information, and the run seems to get 'stuck'. On hopper, the job eventually goes out of memory. On my laptop, it completely froze up my machine and I had to manually do a hard restart of my system.

@douglasjacobsen and I had looked into this briefly awhile ago, and if I remember correctly the issue was happening inside PIO. I don't remember anymore than that.

On my laptop I am using:
MPAS 2.0
pnetcdf 1.3.1
pio Revision: 835 (1_7_2)

On hopper we are using:
MPAS 2.0
pnetcdf 1.3.1
pio 1_7_1

I think it would be desirable for the Time dimension not to be required to be unlimited, but if that is not an option, I would be happy with some sort of error message and a clean abort rather than the whole thing freezing.
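
A sketch of the kind of up-front check that would turn the hang into a clean abort (plain netcdf-fortran calls, not the PIO path MPAS actually uses):

    program check_time_unlimited
       use netcdf
       implicit none
       integer :: ncid, ierr, timeDimId, unlimDimId

       ierr = nf90_open('grid.nc', NF90_NOWRITE, ncid)
       ierr = nf90_inq_dimid(ncid, 'Time', timeDimId)
       ierr = nf90_inquire(ncid, unlimitedDimId=unlimDimId)

       if (timeDimId /= unlimDimId) then
          print *, 'ERROR: Time dimension in input file is not unlimited; stopping.'
          stop 1
       end if

       ierr = nf90_close(ncid)
    end program check_time_unlimited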

This is not a high priority issue, but I am adding it before I forget about it.

MPAS-A fails to build with 4.8 series GCC compilers

When building the atmosphere core with 4.8 series GCC compilers, an undefined reference is presented in mpas_atmphys_todynamics.F.

Here is the output from the build:
./libdycore.a(mpas_atmphys_todynamics.o): In function 'mpas_atmphys_todynamics_MOD_tend_toedges':
mpas_atmphys_todynamics.F:(.text+0x716): undefined reference to 'mpas_dmpar_exch_halo_field'
mpas_atmphys_todynamics.F:(.text+0x785): undefined reference to 'mpas_dmpar_exch_halo_field'

Adding a "use mpas_dmpar" to the top of mpas_atmphys_todynamics.F fixes the issue.

Division by zero in mpas_get_timeinterval in mpas_timekeeping

If you create a time interval by subtracting two identical times (so the interval has zero length) and then call mpas_get_timeInterval with the dt option, you get an arithmetic exception. This happens on the "dt = (days * 24 * 60 * 60) + seconds + (sn / sd)" line and is caused by sd == 0.
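
A minimal sketch of the guard (variable names taken from the line quoted above; not the actual mpas_timekeeping code):

    program zero_interval_sketch
       implicit none
       integer :: days, seconds, sn, sd
       real(kind=8) :: dt

       days = 0; seconds = 0; sn = 0; sd = 0   ! a zero-length interval

       dt = real(days * 24 * 60 * 60 + seconds, kind=8)
       ! only add the fractional part when the denominator is nonzero
       if (sd /= 0) dt = dt + real(sn, kind=8) / real(sd, kind=8)

       print *, 'dt = ', dt
    end program zero_interval_sketch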

ocean: variable not initialized

ifort-gcc, OS X, debug mode

ocn_forcing_mp_ocn_forcing_init_$ERR1' is being used without being defined

nunatak: /Users/todd/Desktop/Dropbox/github/testing/v2.0/baroclinic_channel_10000m_20levs > !mo
more log*
Reading namelist.input
Namelist record &hmix_del2_tensor not found; using default values for this namelist's variables
Namelist record &hmix_del4_tensor not found; using default values for this namelist's variables
Namelist record &cvmix not found; using default values for this namelist's variables
Namelist record &testing not found; using default values for this namelist's variables

WARNING: Variable highFreqThickness not in input file.
WARNING: Variable lowFreqDivergence not in input file.
WARNING: Variable fCell not in input file.
WARNING: Variable restingThickness not in input file.
MPAS IO Error: Bad return value from PIO
Warning: Attribute History not found in grid.nc
Setting History to ''
forrtl: severe (193): Run-Time Check Failure. The variable 'ocn_forcing_mp_ocn_forcing_init$ERR1' is being used without being defined
Image PC Routine Line Source

Stack trace terminated abnormally.
...skipping...
ocn_vmix_init complete
~

use of icc vs gcc in Makefile

Hi All,

I have been using ifort almost exclusively for the last few years. When I rebased my branch today against develop, I found that gcc has been replaced by icc. I don't have icc and have been using gcc without problems.

Was there discussion about changing the build system to use icc?

Cheers,
Todd

ocean: surfaceTracerFlux addressed out of range.

I think what is going on here is that we have two surfaceTracerFlux fields (T and S) but three tracer fields (T, S and tracer1).

When we compute the tracer tendency, we loop over iTracer from 1 to 3, but surfaceTracerFlux only has entries 1 and 2 (see the sketch after the log below).

nunatak: /Users/todd/Desktop/Dropbox/github/testing/v2.0/baroclinic_channel_10000m_20levs > more log*
Reading namelist.input
Namelist record &hmix_del2_tensor not found; using default values for this namelist's variables
Namelist record &hmix_del4_tensor not found; using default values for this namelist's variables
Namelist record &cvmix not found; using default values for this namelist's variables
Namelist record &testing not found; using default values for this namelist's variables

WARNING: Variable highFreqThickness not in input file.
WARNING: Variable lowFreqDivergence not in input file.
WARNING: Variable fCell not in input file.
WARNING: Variable restingThickness not in input file.
MPAS IO Error: Bad return value from PIO
Warning: Attribute History not found in grid.nc
Setting History to ''
MPAS IO Error: Bad return value from PIO
Initial time 0000-01-01_00:00:00
MPAS IO Error: Undefined field
Doing timestep 0000-01-01_00:05:00
forrtl: severe (408): fort: (2): Subscript #1 of the array SURFACETRACERFLUX has value 3 which is greater than the upper bound of 2

Image PC Routine Line Source

Stack trace terminated abnormally.
...skipping...
ocn_vmix_init complete
Vertical coordinate movement is: uniform_stretching
Pressure type is: pressure_and_zmid
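
A minimal, self-contained sketch of the bounds guard described above (illustrative names, not the exact tendency code):

    program surface_flux_bounds_sketch
       implicit none
       integer, parameter :: nTracers = 3, nSurfaceFluxes = 2, nCells = 4
       real :: surfaceTracerFlux(nSurfaceFluxes, nCells)
       real :: tend(nTracers, nCells)
       integer :: iTracer

       surfaceTracerFlux = 1.0
       tend = 0.0

       do iTracer = 1, nTracers
          ! only index surfaceTracerFlux for tracers that actually have a surface flux
          if (iTracer <= nSurfaceFluxes) then
             tend(iTracer, :) = tend(iTracer, :) + surfaceTracerFlux(iTracer, :)
          end if
       end do

       print *, tend(:, 1)
    end program surface_flux_bounds_sketch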

namelist.*.defaults uses .false. for all logicals

On MPAS-Dev/develop, commit b6de75e (but perhaps earlier too), namelist.*.defaults has only .false. entries, even when default_value=".true.".

For example:
grep config_write_stats_on_startup src/core_ocean/Registry.xml
<nml_option name="config_write_stats_on_startup" type="logical" default_value=".true." units="unitless"
grep config_write_stats_on_startup namelist.ocean_forward.defaults
config_write_stats_on_startup = .false.

grep true namelist.ocean_forward.defaults
--nothing--

error when compiling ocean core using pgi, on mpas/develop

Compilation ends with an error:
PGF90-F-0004-Unable to open MODULE file cvmix_kinds_and_types.mod (mpas_subdriver.F: 14)
PGF90/x86-64 Linux 12.10-0: compilation aborted

line 14 is:
type (dm_info), pointer :: dminfo

I am compiling on mustang, using
source /usr/projects/climate/SHARED_CLIMATE/scripts/mustang_pgi_mvapich.csh
make pgi CORE=ocean MODE=forward DEBUG=true

on develop, last commit 276d15b

Memory leaks in src/mpas_dmpar.F subroutine mpas_dmpar_get_exch_list

There are multiple memory leaks in the subroutine mpas_dmpar_get_exch_list in src/mpas_dmpar.F. For example, ownedlimitlist is allocated but never deallocated. Depending on compiler flags, this can cause strange crashes (examples below) if the input netCDF database has errors.

either

Program received signal SIGFPE, Arithmetic exception.
0x00000001005fc909 in ?? ()
(gdb) backtrace
#0  0x00000001005fc909 in ?? ()
#1  0x00007fff5fbecf30 in ?? ()
#2  0x00000001005ff9b6 in ?? ()
#3  0x00007fff5fbed0a8 in ?? ()
#4  0x0000000000002000 in ?? ()
#5  0x00007fff5fbed0a8 in ?? ()
#6  0x000000010370ea00 in ?? ()
#7  0x0000000000002000 in ?? ()
#8  0x0000000000017fe0 in ?? ()
#9  0x0000000000000400 in ?? ()
#10 0x000000000528f1f4 in ?? ()
#11 0x00007fff5fbed100 in ?? ()
#12 0x00000001005f0935 in ?? ()
#13 0x0000000000000000 in ?? ()

or

Reading namelist.ocean_forward
 Namelist record &hmix_del2_tensor not found; using default values for this namelist's variables
 Namelist record &hmix_del4_tensor not found; using default values for this namelist's variables
 Namelist record &cvmix not found; using default values for this namelist's variables
 Namelist record &testing not found; using default values for this namelist's variables
 Namelist record &debug not found; using default values for this namelist's variables
 Namelist record &global_stats not found; using default values for this namelist's variables
 Namelist record &zonal_mean not found; using default values for this namelist's variables
 RestartTimeStamp 0003-01-01_00:00:00
At line 1048 of file mpas_dmpar.F
Fortran runtime error: Attempting to allocate already allocated variable 'ownedlimitlist'

for example.

As discussed with Doug, the leak isn't a big deal, because the most memory that can be leaked is O(nOwnedBlocks), which is typically 1. Also, this routine is only called once, so the leak doesn't grow as a function of time.
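
For reference, a minimal sketch of the fix pattern. The loop here just stands in for whatever causes the allocate statement to be reached more than once in the real routine; the point is only that every local work array needs a matching deallocate before it is re-allocated or the routine returns:

    program exch_list_leak_sketch
       implicit none
       integer, dimension(:), allocatable :: ownedlimitlist
       integer, parameter :: nOwnedBlocks = 1   ! typically 1, as noted above
       integer :: iPass

       do iPass = 1, 2
          allocate(ownedlimitlist(nOwnedBlocks))
          ! ... build the exchange list for this pass ...
          ! without the deallocate below, the second allocate aborts with
          ! "Attempting to allocate already allocated variable 'ownedlimitlist'"
          ! under gfortran, and the memory from the first pass is leaked
          deallocate(ownedlimitlist)
       end do
    end program exch_list_leak_sketch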

streams output timing bug when dt has minutes and seconds

In a test with

    config_dt = '00:33:20'

and monthly restart output

<immutable_stream name="restart"
                  type="input;output"
                  filename_template="restarts/restart.$Y-$M-$D_$h.$m.$s.nc"
                  filename_interval="output_interval"
                  reference_time="0000-01-01_00:00:00"
                  clobber_mode="truncate"
                  input_interval="initial_only"
                  output_interval="00-01-00_00:00:00"/>

The monthly output of restart files is highly variable and incorrect. Some files contain a large number of time slices, and the xtime variable does not correspond to the file name. Note that 33:20 is 2000 seconds, which does not divide evenly into a day (86400/2000 = 43.2), so monthly boundaries referenced to 0000-01-01 generally fall between time steps; even so, each restart file should contain a single time slice whose xtime matches the file name. The output of a 240km global run is:

-rw-rw-r-- 1 mpeterse mpeterse  15M Dec  3 08:27 restart.0000-02-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 1.6G Dec  3 08:32 restart.0000-03-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 8.9M Dec  3 08:36 restart.0000-04-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 528M Dec  3 08:40 restart.0000-05-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 8.9M Dec  3 08:44 restart.0000-06-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 528M Dec  3 08:48 restart.0000-07-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 8.9M Dec  3 08:52 restart.0000-08-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 8.9M Dec  3 08:56 restart.0000-09-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 8.9M Dec  3 09:00 restart.0000-10-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 8.9M Dec  3 09:03 restart.0000-11-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 8.9M Dec  3 09:07 restart.0000-12-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 8.9M Dec  3 09:11 restart.0001-01-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 8.9M Dec  3 09:15 restart.0001-02-01_00.00.00.nc
-rw-rw-r-- 1 mpeterse mpeterse 1.3G Dec  3 09:20 restart.0001-03-01_00.00.00.nc

Note the variable file sizes. The largest files have 218 time slices, and the times look as follows:

mu-fe3.lanl.gov> ncdump -v xtime restart.0001-03-01_00.00.00.nc | tail
  "0001-03-26_10:06:40                                             ",
  "0001-03-26_10:40:00                                             ",
  "0001-03-26_10:40:00                                             ",
  "0001-03-26_11:13:20                                             ",
  "0001-03-26_11:13:20                                             ",
  "0001-03-26_11:46:40                                             ",
  "0001-03-26_11:46:40                                             ",
  "0001-03-26_12:20:00                                             ",
  "0001-03-26_12:20:00                                             " ;
}

The run can be found on the LANL turquoise at
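
For reference, a quick check of the time-step arithmetic (an observation only, not a diagnosis of the stream-manager bug):

    \[
    \Delta t = 2000\,\mathrm{s}, \qquad
    \frac{86400}{2000} = 43.2, \qquad
    \frac{31 \times 86400}{2000} = 1339.2, \qquad
    \frac{30 \times 86400}{2000} = 1296 .
    \]

So neither a day nor a 31-day month is an integer number of time steps (a 30-day span is, at 1296 steps), and monthly restart boundaries referenced to 0000-01-01 therefore generally fall between steps, which may be related to the duplicated and mismatched xtime entries above.
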
test merge branch

Time varying forcing

Hi,

The MPAS-CICE core will require time-varying forcing at some point. It would be useful for the runtime I/O layer to eventually be able to read in forcing data at specified times during a simulation run.

Cheers,
Adrian

uninitialized pointer passed to subroutine

When I run a debug build compiled with pgi, it dies on the first time step with the error message:

0: Null pointer for tmp$r36 (mpas_ocn_time_integration_split.F: 423)

I think that the call

           call ocn_vert_transport_velocity_top(meshPool, verticalMeshPool, &
              layerThicknessCur, layerThicknessEdge, normalVelocityCur, &
              sshCur, highFreqThicknessNew, dt, vertAleTransportTop, err)

is using the variable highFreqThicknessNew, but thicknessFilterActive is false, so that pointer is not set.

@douglasjacobsen, it looks like we could fix this by pointing highFreqThicknessNew at a dummy target, so it is actually associated, or by making highFreqThicknessNew an optional argument to ocn_vert_transport_velocity_top. Let me know if you have a preference, and I can fix it if you like.

Mark
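
A minimal sketch of the optional-argument variant, reduced to a toy routine (the names and the computation here are placeholders, not the actual ocn_vert_transport_velocity_top interface):

    module vert_transport_sketch
       implicit none
    contains
       subroutine vert_transport_top(thickness, vertTransport, highFreqThickness)
          real, dimension(:), intent(in)           :: thickness
          real, dimension(:), intent(out)          :: vertTransport
          real, dimension(:), intent(in), optional :: highFreqThickness

          vertTransport = thickness   ! stand-in for the real transport computation
          if (present(highFreqThickness)) then
             ! only touch the high-frequency field when the caller supplied it,
             ! so callers with the thickness filter disabled simply omit it
             vertTransport = vertTransport + highFreqThickness
          end if
       end subroutine vert_transport_top
    end module vert_transport_sketch

    program demo
       use vert_transport_sketch
       implicit none
       real :: h(3), w(3)
       h = (/ 1.0, 2.0, 3.0 /)
       call vert_transport_top(h, w)                        ! filter off: argument omitted
       call vert_transport_top(h, w, highFreqThickness=h)   ! filter on
    end program demo

(The other option mentioned above, associating highFreqThicknessNew with a dummy target before the call, would avoid changing the interface.)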

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.