Comments (12)
Hi @DamienIrving, thanks for your careful check on this. You are right that one could find analytical results to test against, but I adopted an indirect approach here. Basically, we are not checking the streamfunction itself, but its gradient (i.e., the velocity components).
According to Helmholtz decomposition theory, a vector field can be decomposed into a rotational (nondivergent) part, described by a streamfunction, and a divergent (irrotational) part, described by a velocity potential. That is why I put asserts to see if the divergence of the rotational (streamfunction-derived) part of the flow is close to zero.
One potential problem with analytical expressions is that we use a finite-difference scheme here. With a larger grid size there would be grid-scale truncation error relative to the analytical results, which may exceed the tolerance. The current asserts, however, do not depend on grid size, because the inversion for the streamfunction, as well as the calculation of its gradient (the velocity components), all use the same consistent finite-difference scheme, which turns out to be accurate enough that the expected relations hold within a tight tolerance.
Also, there is an arbitrary constant in the Poisson solution: any streamfunction, after adding a constant, is still a solution to the Poisson equation. I guess determining this arbitrary constant could also be a problem for windspharm. That is why I really prefer the indirect way of assertion here, as it avoids the arbitrary constant entirely.
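As a toy illustration of this consistency argument (a 1-D periodic analogue with plain numpy, not xinvert's actual solver): if the same finite-difference operator is used both to invert the Poisson equation and to re-differentiate the result, the input field is recovered to within floating-point round-off at any resolution, and the arbitrary constant drops out.

```python
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

# Prescribed "vorticity" with zero mean (solvability condition on a periodic domain).
zeta = np.sin(3 * x) + 0.5 * np.cos(5 * x)

# Second-order centered-difference Laplacian with periodic boundaries.
L = (np.roll(np.eye(n), 1, axis=1) - 2 * np.eye(n)
     + np.roll(np.eye(n), -1, axis=1)) / dx**2

# The periodic Laplacian is singular (constant null space); lstsq picks one solution.
psi, *_ = np.linalg.lstsq(L, zeta, rcond=None)

# Applying the SAME finite-difference operator recovers zeta to round-off,
# and adding an arbitrary constant to psi changes nothing.
assert np.allclose(L @ psi, zeta)
assert np.allclose(L @ (psi + 123.4), zeta)
```

Comparing `psi` itself against an analytical streamfunction would instead pick up both the truncation error and the undetermined constant, which is exactly the point made above.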
from xinvert.
You are right. I know the concept (probably using the assert statement in the program), but not every detail. Could you please send me some concrete examples that I can follow?
Since it is a numerical solver, I cannot expect repeated runs of the code to reproduce exactly the same results. I only visualize the plots to see if they are close to those in the related papers.
This chapter in Research Software Engineering with Python gives an overview of general testing concepts:
https://merely-useful.tech/py-rse/testing.html
This page gives more information on using pytest:
https://realpython.com/pytest-python-testing/
In terms of a concrete example, here's a test file I wrote recently:
https://github.com/climate-innovation-hub/qqscale/blob/master/test_qdm.py
While you can't expect repeated runs to reproduce exactly the same results, you should hopefully be able to create a fixture that produces an expected result within some tolerance (i.e. assert np.allclose(expected_result, actual_result)).
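To make that concrete, here is a minimal sketch of such a test (the fixture and function names are made up for illustration, not xinvert's API): the fixture builds deterministic input from a fixed random seed, and the assertion allows for floating-point tolerance.

```python
import numpy as np
import pytest

@pytest.fixture
def vorticity():
    # Fixed seed: repeated runs always build the same input field.
    rng = np.random.default_rng(42)
    return rng.standard_normal((16, 16))

def invert_and_rediff(zeta):
    # Stand-in for the real pipeline (invert for the streamfunction, then
    # re-differentiate); a correct solver should return zeta up to round-off.
    return zeta + 1e-12

def test_roundtrip(vorticity):
    assert np.allclose(invert_and_rediff(vorticity), vorticity)
```

Running pytest in the repository root will then discover and run any test_*.py files containing such functions.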
Once you've got your tests set up, you can use GitHub Actions to run them every time you make a commit to your repository. That involves creating a tests.yml file in a new .github/workflows/ directory. It will look something like:
https://github.com/climate-innovation-hub/qqscale/blob/master/.github/workflows/tests.yml
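For reference, a minimal tests.yml along those lines might look like the sketch below (the action versions, Python version, and requirements filename are assumptions to adapt to your repository):

```yaml
name: tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: pip install -r requirements.txt pytest
      - name: Run tests
        run: pytest
```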
Hope that helps 😄
Thanks very much for your kind help. I will follow these to make auto-tests for xinvert soon.
Hi @DamienIrving, I just followed your guide and made an action for tests. Everything with pytest works fine locally on my laptop, but the Actions build raised an error: did not find a match in any of xarray's currently installed IO backends ['netcdf4', 'scipy']. I put both netcdf4 and scipy in the requirement.txt file, so I don't know how to resolve this. Could you help me find out why this is happening?
PS: I used git lfs this time because some of the sample netcdf files are larger than GitHub's size limit.
If you add an extra step in tests.yml after the "install dependencies" step to list the installed packages, you'll see that netCDF4 is successfully installed, so that isn't the problem.
- name: List installed packages
shell: bash -l {0}
run: pip list
It might be that the files are too large... you could try a test with a very small data file?
Hi @DamienIrving, thanks for your tries and suggestions. I guess I found the reason. All the netcdf files in Data have been tracked by LFS, and those are then not recognized by xr.open_dataset. I modified a small nc file but left it outside of LFS, and then the test passed (you may check the log).
So this is really embarrassing, because I need a global dataset to make a test, and that exceeds GitHub's size limit. LFS seems to upload big files transparently, but they are not openable online. Do you have any suggestion?
It sounds like you need to refactor the tests you want GitHub Actions to run so they don't require large input datasets.
For automated testing, it's most common to either have the test code create its own fake data (e.g. https://github.com/climate-innovation-hub/qqscale/blob/master/test_qdm.py#L15) so that you don't need to include any data files in your repo, or, if using "real" data is unavoidable, to use a netCDF file containing only a single grid point or a small spatial area.
You can do your own testing offline where you process and plot large global datasets, but those offline tests wouldn't be included in the tests run by GitHub Actions.
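For instance, a tiny synthetic field (the helper name here is hypothetical; it is built with xarray so it can stand in for a file read via xr.open_dataset) avoids shipping any large file:

```python
import numpy as np
import xarray as xr

def make_test_vorticity():
    # Hypothetical helper: a 7x7 single-vortex field instead of a global dataset.
    lat = np.linspace(-30, 30, 7)
    lon = np.linspace(0, 60, 7)
    data = np.zeros((7, 7))
    data[3, 3] = 1e-5  # one grid-point "vortex"
    return xr.DataArray(data, coords={"lat": lat, "lon": lon},
                        dims=["lat", "lon"])

zeta = make_test_vorticity()
assert zeta.nbytes < 1000  # hundreds of bytes, not hundreds of megabytes
```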
Does that make sense?
Well, that's right. I should skip those tests requiring large datasets (>100MB) and do them locally by hand.
Hi @DamienIrving, all the pytest tests have passed. Please see if this issue is fixed now. Thank you very much for your kind help with this.
Thanks for implementing automated testing.
Just so I understand, what are the tests in test_Poisson.py checking?
I would expect that each test would have three basic steps:
- Read in vorticity and divergence data from a file (that's the test "fixture")
- Calculate the streamfunction and velocity potential (that's the "actual result")
- Compare the actual result to an "expected result" to see if the test passes (i.e. using an assertion)
Do you have an expected result? If not, is there some vorticity and divergence data that you can create (i.e. simple example data) where you know what the streamfunction and velocity potential values should be?
Thanks for the explanation - that's a nice way to write the tests!