boutproject / boutdata
License: GNU Lesser General Public License v3.0
This was originally added as a straight dump of the then-state of the files in BOUT-dev. We could "add back" the original history. This would completely divorce any existing tags or branches from the new history, so is maybe not such a great idea.
For reference, this is how I've just done it with Zoidberg:
mkdir zoidberg && cd zoidberg
git init
git remote add --fetch bout https://github.com/boutproject/BOUT-dev.git
git cherry-pick -Xsubtree=tools/pylib \
$(git log --reverse --pretty=format:"%h" --follow bout/next -- tools/pylib/zoidberg)
- git log ... to get all the commits that touch zoidberg files
- cherry-pick -Xsubtree=tools/pylib to add those commits one at a time, stripping the tools/pylib path so the files end up under zoidberg/
Requires some cleaning up along the way, but generally there aren't many conflicts -- it's usually just deleting other files brought in in those commits.
If we were to do this, the general plan of attack would be:
boutdata currently requires bunch, which is dead. BOUT-dev has patches to remove the dependency; they should be ported to boutdata and boututils.
I do not understand where the number of guard cells mxg, myg that we pass as input are used in the resize method:
Lines 88 to 89 in 1cb6d08
The unit tests take about 30 minutes to complete; mostly it's collect. Maybe we can mock out the netcdf calls to something much faster?
See boutproject/boututils#19 for details.
See discussion on boutproject/BOUT-dev#2151
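One way to mock out the netCDF calls would be to stand in a small in-memory fake for boututils.datafile.DataFile in the tests. This is only a sketch: the fake's interface (just __getitem__, list, and context-manager support) and the patch target are assumptions, not the real DataFile API.

```python
class FakeDataFile:
    """Minimal in-memory stand-in for boututils.datafile.DataFile.

    Only implements the handful of methods the collect tests are
    assumed to need: __getitem__, list, and context management.
    """

    # Shared fake contents, to be filled in by each test
    data = {"t_array": [0.0, 0.25, 0.5, 0.75, 1.0]}

    def __init__(self, filename, **kwargs):
        self.filename = filename

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

    def __getitem__(self, name):
        return self.data[name]

    def list(self):
        return list(self.data.keys())


# In a test, patch the name collect looks up (the path is an assumption):
# with unittest.mock.patch("boutdata.collect.DataFile", FakeDataFile):
#     result = collect("t_array", path="fake")
```

This keeps the test logic unchanged while skipping disk and netCDF entirely, which should account for most of the 30 minutes.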
Hello. Would it make sense to do a new release? There are some important changes since the last one.
Since the 0.1.2 release there have been several PRs merged. It would be nice to get a new release.
If the tests are installed, they should go under boutdata/test rather than test. Currently this results in conflicts with boututils:
Error: Transaction test error:
file /usr/lib/python3.9/site-packages/test/__pycache__/__init__.cpython-39.opt-1.pyc conflicts between attempted installs of python3-boutdata-0.1.2-1.fc33.noarch and python3-boututils-0.1.4-1.fc33.noarch
file /usr/lib/python3.9/site-packages/test/__pycache__/__init__.cpython-39.pyc conflicts between attempted installs of python3-boutdata-0.1.2-1.fc33.noarch and python3-boututils-0.1.4-1.fc33.noarch
file /usr/lib/python3.9/site-packages/test/__pycache__/test_import.cpython-39.opt-1.pyc conflicts between attempted installs of python3-boutdata-0.1.2-1.fc33.noarch and python3-boututils-0.1.4-1.fc33.noarch
file /usr/lib/python3.9/site-packages/test/__pycache__/test_import.cpython-39.pyc conflicts between attempted installs of python3-boutdata-0.1.2-1.fc33.noarch and python3-boututils-0.1.4-1.fc33.noarch
file /usr/lib/python3.9/site-packages/test/test_import.py conflicts between attempted installs of python3-boutdata-0.1.2-1.fc33.noarch and python3-boututils-0.1.4-1.fc33.noarch
At the moment we leave bool options as strings in BoutOptions. Failing to deal with this led to a bug in xSTORM, because bool("false") == True. I think we should deal with this in BoutOptions, although there's not a perfect answer because (i) BOUT++ can understand various expressions as bools (so it would be nice to convert all of them in one place), but (ii) if an option is read as a string in BOUT++, "t" and "f", etc., might be desired values, and there's no way to know just from the input file whether the option should be a bool or a string.
Options I can think of:
get_bool() that converts strings to bools in the same way as BOUT++, and raises an exception if the string is wrong
3 seems like the best option to me - I'll make a PR (should be fairly simple I hope) - but opinions would be welcome.
@ZedThree @bendudson
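A get_bool() along those lines could be sketched as follows. The accepted spellings (true/false, yes/no, t/f, y/n, 1/0) are an assumption about what BOUT++ accepts; the real implementation should mirror BOUT++'s bool parsing exactly.

```python
def get_bool(value: str) -> bool:
    """Convert a BoutOptions string to a bool, mimicking BOUT++.

    The accepted spellings here are an assumption; the real list
    should be taken from BOUT++'s option parsing.
    """
    lowered = value.strip().lower()
    if lowered in ("true", "yes", "t", "y", "1"):
        return True
    if lowered in ("false", "no", "f", "n", "0"):
        return False
    raise ValueError(f"Cannot convert option value {value!r} to bool")
```

Raising on unrecognised strings (rather than falling back to Python's bool()) is the point of the proposal: get_bool("false") returns False instead of truthy, and typos fail loudly.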
If an error happens in BoutOutputs.__init__() before self._parallel is defined, the exception message gets hidden behind one from __del__() saying AttributeError: 'BoutOutputs' object has no attribute '_parallel'. What's the 'correct' way to fix this? We could move self._parallel = parallel to the first line of __init__() and I guess that would fix this issue here, but it feels kludgy and non-general. Does anyone know of a nicer pattern?
For example, this happened in boutproject/BOUT-dev#2335 (comment)
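One general pattern (a sketch, not necessarily how BoutOutputs should do it) is to make __del__ tolerate a partially constructed object by looking attributes up with getattr, so the destructor never raises for attributes that were never assigned:

```python
class BoutOutputsSketch:
    """Hypothetical illustration of a __del__ that survives a failed
    __init__; the attribute names mirror the issue's example."""

    def __init__(self, parallel=False, fail=False):
        if fail:
            # Simulate an error before self._parallel is assigned
            raise RuntimeError("failed during __init__")
        self._parallel = parallel

    def __del__(self):
        # getattr with a default never raises AttributeError, so a
        # failure in __init__ is not masked by this destructor.
        parallel = getattr(self, "_parallel", None)
        if parallel:
            # ...clean up parallel resources here...
            pass
```

With this guard, constructing the object with fail=True surfaces only the original RuntimeError, without an AttributeError from __del__ on top.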
From the Slack:
Amy Krause 3:28 PM
Sadly that doesn't seem to work. I can resize the restart files from my simulation run. If I use to_restart() on the output files it produces the restart files, however I get an error when using restart.resize() on those
ValueError: cannot reshape array of size 5120 into shape (1,)

John Omotani 3:32 PM
that's odd. Do the restart files produced by xBOUT look OK (e.g. if you look at them with ncdump, or open them in Python with netCDF4.Dataset)?

Amy Krause 3:37 PM
the files open ok in python

John Omotani 3:37 PM
not sure what's going on then. Sorry, I haven't used restart.resize() myself. Anyone else have any ideas?

Amy Krause 3:40 PM
Writing vort
Traceback (most recent call last):
  File "src/netCDF4/_netCDF4.pyx", line 4916, in netCDF4._netCDF4.Variable.__setitem__
ValueError: cannot reshape array of size 5120 into shape (1,)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/work/d175/d175/akexcml/envs/xbout/lib/python3.8/site-packages/boututils/datafile.py", line 666, in write
    var.assignValue(data)
  File "src/netCDF4/_netCDF4.pyx", line 4963, in netCDF4._netCDF4.Variable.assignValue
  File "src/netCDF4/_netCDF4.pyx", line 4918, in netCDF4._netCDF4.Variable.__setitem__
  File "<__array_function__ internals>", line 5, in broadcast_to
  File "/work/d175/d175/akexcml/envs/xbout/lib/python3.8/site-packages/numpy/lib/stride_tricks.py", line 411, in broadcast_to
    return _broadcast_to(array, shape, subok=subok, readonly=True)
  File "/work/d175/d175/akexcml/envs/xbout/lib/python3.8/site-packages/numpy/lib/stride_tricks.py", line 348, in _broadcast_to
    it = np.nditer(
ValueError: input operand has more dimensions than allowed by the axis remapping
3:41
That's the full error, it happens when it tries to write the variable (in our case vort) - I tried specifying one variable or just leaving the default (all time-evolving vars)

John Omotani 3:58 PM
Could you post the output of ncdump -h BOUT.restart.0.nc (for one of the files produced by xBOUT) so I can check the variable attributes please?
I'm slightly suspicious of the use of multiprocessing (I don't see anything obviously wrong, but it's often tricky...). Might be worth trying to pass the argument maxProc=1 to restart.resize() so that it only uses 1 process - might be slightly safer...
Otherwise, you could try to check the dimensions of new and newData before the write here
Line 251 in 07a2473
Copying the metadata from a BOUT.restart.*.nc file to the resized BOUT.restart.*.nc file is a bad idea. I mean, you are resizing the grid, and therefore parameters like "nx" (and others) obviously should be updated to match the size of the new grid.
Line 219 in 1cb6d08
This makes sense once Travis is enabled.
The master branch should be protected, so that changes require a PR with passing tests.
Hermes-3 features species options in the below format:
type = (evolve_density, evolve_momentum, evolve_pressure,
noflow_boundary
)
BoutData does not interpret the brackets correctly. The expected behaviour would be for it to read the options in the following hierarchy:
Instead, it does the following:
It looks like it interprets the brackets as part of the option string and doesn't consider the bracket contents as children of the same option header.
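A possible fix (a sketch only; BoutOptions' actual parser lives in boutdata.data and may work quite differently) is to join continuation lines whenever a value has unbalanced parentheses, before splitting into key/value pairs:

```python
def join_bracket_continuations(lines):
    """Join lines so that values inside unbalanced (...) end up on one
    logical line. A sketch of how a BOUT.inp reader could handle
    multi-line bracketed values; not the actual BoutOptions code.
    """
    joined = []
    buffer = ""
    depth = 0
    for line in lines:
        # Strip comments (BOUT.inp uses '#') before counting brackets
        code = line.split("#", 1)[0].strip()
        buffer = (buffer + " " + code).strip() if buffer else code
        depth += code.count("(") - code.count(")")
        if depth <= 0:
            if buffer:
                joined.append(buffer)
            buffer = ""
            depth = 0
    if buffer:
        joined.append(buffer)
    return joined
```

Applied to the Hermes-3 example, the three physical lines collapse into a single logical line, so the value can then be parsed as one comma-separated list rather than as three spurious entries.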
The build is currently failing with the following backtrace
test/test_import.py:2: in <module>
    from boutdata.collect import collect
boutdata/__init__.py:9: in <module>
    from boutdata.collect import collect, attributes
boutdata/collect.py:12: in <module>
    from boututils.datafile import DataFile
../../../virtualenv/python3.7.1/lib/python3.7/site-packages/boututils/datafile.py:36: in <module>
    from netCDF4 import Dataset
../../../virtualenv/python3.7.1/lib/python3.7/site-packages/netCDF4/__init__.py:3: in <module>
    from ._netCDF4 import *
netCDF4/_netCDF4.pyx:1213: in init netCDF4._netCDF4
    ???
../../../virtualenv/python3.7.1/lib/python3.7/site-packages/cftime/__init__.py:1: in <module>
    from ._cftime import utime, JulianDayFromDate, DateFromJulianDay
__init__.pxd:918: in init cftime._cftime
    ???
E   ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject
Googling around, it turns out that some of the packages are compiled with different numpy versions. The following approaches to fix the issue have been unsuccessful:
- pinning numpy to 1.18.2 in the requirements.txt file
- pinning numpy to 1.16.1 in the requirements.txt file
See https://stackoverflow.com/a/40846742/2786884 for details.
See https://pythonclock.org for details
Although the code has been tested several times manually, automatic tests are needed in order to more easily check whether a commit breaks expected behaviour (and possibly to catch bugs that have so far gone undetected).
See discussion at #19 (comment) for details.
I would prefer to have the black CI fix issues, rather than complain.
This is especially annoying if black changes things ...
In the boutdata.restart.resize method, one can pass in the number of guard cells as arguments:
Lines 107 to 108 in 1cb6d08
To me, it seems the safest route when using this method is to resize a single restart file. In other words, one has to first create a single restart file from the simulation (BOUT.restart.0.nc) and resize that. Multiple restart files can be created after resizing if needed. A warning about this should be added for users.
1906985 is breaking BOUT v4.4.1
The error is:
Making restart I/O test
Grid sizes: 8 16 4
Traceback (most recent call last):
  File "/builddir/build/BUILD/BOUT++-v4.4.1/build_mpich/tests/integrated/test-restart-io/./runtest", line 90, in <module>
    restart.redistribute(nproc, path=restartdir, output='data')
  File "/builddir/boutdata/boutdata/restart.py", line 641, in redistribute
    jyseps2_1 = f["jyseps2_1"]
  File "/usr/lib/python3.10/site-packages/boututils/datafile.py", line 319, in __getitem__
    return self.impl.__getitem__(name)
  File "/usr/lib/python3.10/site-packages/boututils/datafile.py", line 437, in __getitem__
    raise KeyError("No variable found: " + name)
KeyError: 'No variable found: jyseps2_1'
Reverting 1906985 fixes the issue.
I sometimes find that squashoutput.squashoutput fails when I call it on SD1D simulation output. This seems to only occur for simulations that ran in parallel. All simulations I've tested on exited gracefully, creating a BOUT.stop file.
The (truncated) error message is always:
squashoutput(
  File "/marconi_work/FUA36_UKAEA_ML/gholt/miniconda3/envs/sd1d/lib/python3.9/site-packages/boutdata/squashoutput.py", line 207, in squashoutput
    var = outputs[varname]
  File "/marconi_work/FUA36_UKAEA_ML/gholt/miniconda3/envs/sd1d/lib/python3.9/site-packages/boutdata/data.py", line 1683, in __getitem__
    data = self._collect(name)
  File "/marconi_work/FUA36_UKAEA_ML/gholt/miniconda3/envs/sd1d/lib/python3.9/site-packages/boutdata/data.py", line 1484, in _collect
    return collect(
  File "/marconi_work/FUA36_UKAEA_ML/gholt/miniconda3/envs/sd1d/lib/python3.9/site-packages/boutdata/collect.py", line 237, in collect
    varname = findVar(varname, f.list())
  File "/marconi_work/FUA36_UKAEA_ML/gholt/miniconda3/envs/sd1d/lib/python3.9/site-packages/boutdata/collect.py", line 57, in findVar
    raise ValueError("Variable '" + varname + "' not found")
ValueError: Variable 'flux_ion' not found
Grepping for flux_ion in the dmp.nc files shows it is only present in one of them. However, this also seems to be the case for other, similar simulations that don't throw the above error when running squashoutput on them.
My exact call to squashoutput is:
from boutdata.squashoutput import squashoutput
datadir = <path_to_directory>
squashoutput(datadir=datadir, tind=-1)
The BOUT.inp file for reproducibility is below. Changing the neutral density from 0.0007 to 0.0001 results in a simulation that squashoutput works on.
#
#
#
NOUT = 1600 # number of output time-steps
TIMESTEP = 5000. # time between outputs
MZ = 1 # number of points in z direction (2^n + 1)
MXG = 0 # No guard cells needed in X
[mesh]
ny = 50 # Resolution along field-line
length = 25 # Length of the domain in meters
length_xpt = 12.5 # Length from midplane to X-point [m]
area_expansion = 1 # Expansion factor Area(target) / Area(midplane)
dy = length / ny # Parallel grid spacing [m]
ypos = y * length / (2*pi) # Y position [m]
nx = 1
dx = 1
ixseps1 = -1 # Branch-cut indices, specifying that
ixseps2 = -1 # the grid is in the SOL
# The following make the field-aligned
# metric tensor an identity metric
Rxy = 1
Bpxy = 1
Btxy = 0
Bxy = 1
hthe = 1
sinty = 0
symmetricGlobalY = true
##################################################
# derivative methods
[ddy]
first = C2
second = C2
upwind = W3
[solver]
type = cvode
mxstep = 100000 # Maximum number of internal steps per output
atol = 1e-10
rtol = 1e-5
use_precon=true
[SD1D]
diagnose = true # Output additional diagnostics
# Normalisation factors
Nnorm = 1e20 # Reference density [m^-3]
Tnorm = 100 # Reference temperature [eV]
Bnorm = 1.0 # Reference magnetic field [T]
AA = 2.0 # Ion atomic number
Eionize = 13.6 # Energy lost per ionisation [eV]
excitation = true # Include electron-neutral excitation
volume_source = true # Sources spread over a volume
density_upstream = 4e+19
density_controller_p = 1e-2 # Density controller 'p' parameter
density_controller_i = 1e-3 # Density controller 'i' parameter
# Model parameters
vwall = 0.0 # Velocity of neutrals at the wall, as fraction of Franck-Condon energy
frecycle = 0.95 # Recycling fraction
fredistribute = 0.0 # Fraction of recycled neutrals redistributed evenly along length
redist_weight = h(y - pi) # Weighting for redistribution
gaspuff = 0 # NOTE: In normalised units
dneut = 10.0 # Scale neutral gas diffusion rate
nloss = 1e3 # Neutral gas loss rate [1/s]
fimp = 0.05 # Impurity fraction
impurity_adas = true # Use Atomicpp to get ADAS data
impurity_species = c # Species. currently "c" (carbon) or "n" (Nitrogen)
sheath_gamma = 6 # Sheath heat transmission
neutral_gamma = 0. # Neutral gas heat transmission
density_sheath = 1 # 0 = free, 1 = Neumann, 2 = constant nV
pressure_sheath = 1 # 0 = free, 1 = Neumann, 2 = constant (5/2)Pv + (1/2)nv^3
atomic = true # Include atomic processes (CX/iz/rc)
# Set flux tube area as function of parallel grid index
# using normalised y coordinate from 0 to 2pi
area = 1 #+ (mesh:area_expansion - 1) * h(mesh:ypos - mesh:length_xpt)*(mesh:ypos - mesh:length_xpt)/(mesh:length - mesh:length_xpt)
hyper = -1 # Numerical diffusion parameter on all terms
ADpar = -1 # 4th-order numerical dissipation
viscos = -0.0001 # Parallel viscosity
ion_viscosity = false # Braginskii parallel ion viscosity (ions and neutrals)
heat_conduction = true # Heat conduction
density_form = 4
momentum_form = 6
energy_form = 8
[All]
scale = 0.0
bndry_all = neumann_o2 # Default boundary condition
# Note: Sheath boundary applied in code
[Ne] # Electron density
scale = 1
# Initial conditions
function = 0.1
flux = 4e23 # Particles per m^2 per second input
source = (flux/(mesh:length_xpt))*h(mesh:length_xpt - mesh:ypos) # Particle input source
# as function of normalised y coordinate
[NVi] # Parallel ion momentum
scale = 1
vtarg = 0.3
function = vtarg * Ne:scale * Ne:function * y / (2*pi) # Linear from 0 to 0.03 in y
bndry_target = dirichlet_o2
[P] # Plasma pressure P = 2 * Ne * T
scale = 1
function=0.1 # Initial constant pressure
powerflux = 2e7 # Input power flux in W/m^2
source = (powerflux*2/3 / (mesh:length_xpt))*h(mesh:length_xpt - mesh:ypos) # Input power as function of y
# 1e7 W/m^2 / (L/2) with L = 100 m , factor of 2 because power only from y = 0 to y=pi
# * 2/3 to get from energy to Pe
[Nn]
# Neutral density
scale = 1
function = 0.0007
[NVn]
evolve = true # Evolve neutral momentum?
[Pn]
evolve = true # Evolve neutral pressure? Otherwise Tn = Te model
Tstart = 3.5 # Starting temperature in eV
scale = 1.0
function = Nn:scale * Nn:function * Pn:Tstart / SD1D:Tnorm
I've set up a GitHub Action for automatic python publishing whenever a release is created. However, I do not have the repository permission to add PYPI_USERNAME and PYPI_PASSWORD to the repository secrets. Is this something you could fix @bendudson?
Performing a redistribute without providing an nxpe fails in Hermes-3 with the following message, which took me quite a bit of time to work back from:
Error encountered: Value for option Nd+ cannot be converted to a Field3D
Since not providing an nxpe results in an automatic choice, as in the comment below, I think the failure is because the algorithmically generated nxpe was not the same as what I have in the input file.
Lines 559 to 563 in 908a4c2
My suggestion for a fix is to print a message whenever nxpe is not provided:
Warning: nxpe not provided. This may create incompatible restart files if you set nxpe in your input file.
Alternatively, we could parse the input file and check whether nxpe is set there. If it is, and it is not provided to redistribute, we could raise an Exception. Or we could simply read the nxpe from the input file and pass it to redistribute.
Let me know what you think.