
aiida-plugin-cutter's Introduction


AiiDA plugin cutter

Cookie cutter recipe for AiiDA plugins. The fastest way to template a new AiiDA plugin package.

For the package structure produced by the plugin cutter, see the aiida-diff demo plugin.

Note: The plugin cutter produces plugins for aiida-core>=2.0.0. See the support/aiida-0.x branch for creating plugins for older versions of aiida-core.

Usage instructions

pip install cookiecutter black
cookiecutter https://github.com/aiidateam/aiida-plugin-cutter.git

Demo

This will produce the files and folder structure for your plugin, already adjusted for the name of your plugin.

In order to get started with development:

  1. it's often useful to create a dedicated Python virtual environment for developing your plugin, e.g. using virtualenv, virtualenvwrapper or conda

  2. you will want to install your plugin in editable mode, so that changes to the source code of your plugin are immediately visible to other packages:

    cd aiida_<name>
    pip install -e .  # install in editable mode

You are now ready to start development! See the README.md of your package (or of aiida-diff) for an explanation of the purpose of all generated files.

Features

Plugins templated using the plugin cutter

  • include a calculation, parser and data type as well as an example of how to submit a calculation
  • include basic regression tests using the pytest framework (submitting a calculation, ...)
  • can be directly pip-installed (and are prepared for submission to PyPI)
  • include a documentation template ready for Read the Docs
  • come with a GitHub Actions configuration - enable it to run tests and check test coverage at every commit
  • come with pre-commit hooks that sanitize coding style and check for syntax errors - enable via pre-commit install

For more information on how to take advantage of these features, see the developer guide of your plugin.

Developing the plugin cutter

The plugin cutter comes with rather strict continuous integration tests which

  • test that the cookiecutter recipe works
  • test that the plugin can be pip-installed
  • test that the unit tests of the plugin pass
  • test that the documentation of the plugin builds
  • test that the code of the plugin conforms to coding standards

As you develop the plugin cutter, you will want to also update the aiida-diff repository with its default output. Simply run ./update-aiida-diff.sh to create a clone of the aiida-diff repository, with the latest changes produced by the plugin cutter ready to be committed.

License

MIT

Contact

Please report issues to the GitHub issue tracker. Other inquiries may be directed to the AiiDA mailing list.

Acknowledgements

This work is supported by the MARVEL National Centre for Competency in Research funded by the Swiss National Science Foundation, as well as by the MaX European Centre of Excellence funded by the Horizon 2020 EINFRA-5 program, Grant No. 676598.



aiida-plugin-cutter's Issues

Update "install_requires" from aiida to aiida-core

One of the major reasons to do this is that readthedocs builds seem to fail with aiida (https://readthedocs.org/projects/aiida-crystal17/builds/7693840/) but not with aiida-core

I'd also note that, for some reason, I get an ImportError for pgtest during tests, unless I specifically put it in the requirements (even though it seems to be in the aiida-core[testing] requirements?)

{
    "install_requires": [
        "aiida-core>=0.12.0,<1.0.0"
    ],
    "extras_require": {
        "testing": [
            "aiida-core[testing]",
            "wheel>=0.31",
            "pgtest"
        ]
    }
}

rename new_database fixture to clear_database

This fixture only clears the database after the test function finishes. It is not guaranteed that the test function gets a fresh database at the beginning. I propose to rename it to clear_database; a sketch is given below.
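
A minimal sketch of the proposed fixture, assuming the aiida_profile fixture of aiida-core's fixture manager and its reset_db() helper (the exact reset call is an assumption, not the current code):

    import pytest

    @pytest.fixture
    def clear_database(aiida_profile):
        """Give every test a fresh database and clean up again afterwards."""
        aiida_profile.reset_db()  # assumed reset helper on the profile fixture
        yield aiida_profile
        aiida_profile.reset_db()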

add type annotations?

Type annotations are now standard in the aiida-core codebase.

We may want to add those to the sample plugin as well (although one will need to carefully think about whether it makes the code harder to read for beginners).

submit script broken

The submit script still relies on helper scripts that no longer exist.
The submit script should also be included in the pytest suite.

Voluptuous Dependency

It seems that voluptuous is veering towards being deprecated outright. Might consider updating to schema (which seems similar) or even pydantic (might be more work).
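
For illustration, a hedged sketch of the same small validation written with voluptuous and with schema (the keys shown are made up, not the plugin's actual schema):

    from voluptuous import Schema, Optional as VOptional   # current dependency
    from schema import Schema as AltSchema, Optional       # candidate replacement

    # voluptuous: calling the schema returns the validated dictionary
    params = Schema({VOptional('ignore-case'): bool})
    params({'ignore-case': True})

    # schema: validate() returns the validated dictionary
    params_alt = AltSchema({Optional('ignore-case'): bool})
    params_alt.validate({'ignore-case': True})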

Not really an issue for new plugins of course.

Fix Readthedocs build of default documentation

The latest Read the Docs build of aiida-diff failed: https://readthedocs.org/projects/aiida-diff/builds/11057632/

    raise ValueError('{} is not a valid entry point string: {}'.format(entry_point_string, exception))
ValueError: aiida.groups:core is not a valid entry point string: Entry point 'core' not found in group 'aiida.groups'. Try running `reentry scan` to update the entry point cache.

looking for now-outdated files... none found
pickling environment... done
checking consistency... done
processing aiida-diff.tex... index user_guide/index user_guide/get_started user_guide/tutorial developer_guide/index apidoc/aiida_diff apidoc/aiida_diff.data 
resolving references...
done
writing... failed

Exception occurred:
  File "/home/docs/checkouts/readthedocs.org/user_builds/aiida-diff/envs/latest/lib/python3.7/site-packages/sphinx/writers/latex.py", line 2041, in unknown_visit
    raise NotImplementedError('Unknown node: ' + node.__class__.__name__)
NotImplementedError: Unknown node: details

Testing against aiida development branch

It would be nice to have an optional test against upcoming versions of aiida, in particular v1.
I tried this:

env:
  matrix:
    - TEST_TYPE="tests" AIIDA_BRANCH="develop" TEST_AIIDA_BACKEND="django" MOCK_EXECUTABLES=true

matrix:
  allow_failures:
    - env: TEST_TYPE="tests" AIIDA_BRANCH="develop" TEST_AIIDA_BACKEND="django" MOCK_EXECUTABLES=true

install:
# Upgrade pip setuptools and wheel
- pip install -U pip wheel setuptools
- pip install .[pre-commit,testing,docs]
- >
  if [[ ! -z "${AIIDA_BRANCH}" ]]; then
    cur_path="$(pwd)";
    cd ..;
    git clone --branch=${AIIDA_BRANCH} https://github.com/aiidateam/aiida_core.git;
    cd aiida_core;
    pip install -U .[testing];
    cd "$cur_path";
  fi

but I get an error when pytest tries to initiate (https://travis-ci.org/chrisjsewell/aiida-crystal17/jobs/421127367).
Maybe a dependency version clash?

Standardised Test Suite Implementations

So this is a bit of an amalgamation of #9 and #7, but I think it's an important part of plugin development and deserves a separate issue. In essence, it would be good to have a relatively generic and endorsed implementation of a test suite that plugins can build on, which, in particular, tests the end-to-end running of a calculation.

I've made some progress on this here: https://travis-ci.org/chrisjsewell/aiida-crystal17/builds/420453585. If you look at the repo's tests, you'll see that at present I'm updating the original aiida-diff tests in parallel to the tests for my own plugin. The plan would then be to create a PR for aiida-diff and remove them from my repo.

Breaking down my understanding of the required tests:

Submission Test

At present in aiida-diff, the test_submit test only calls calc.submit. This is a bit misleading, since the submit method does not call _prepare_for_submission (as explained in this tutorial), which makes up the majority of the JobCalculation plugin code. Instead, calc.submit should be replaced with:

    from aiida.common.folders import SandboxFolder

    with SandboxFolder() as folder:
        subfolder, script_filename = calc.submit_test(folder=folder)
        print("inputs created successfully at {}".format(subfolder.abspath))

Process Execution Test

As discussed in #7, I don't at present know how to directly retrieve the results of a submitted test. However, an alternative (and potentially more modular) approach is to directly run the script_filename created by submit_test. The following code achieves this:

import os
import stat
import subprocess


def test_calculation_execution(calc, allowed_returncodes=(0,)):
    """Test that a calculation executes successfully."""
    from aiida.common.folders import SandboxFolder

    # write the input files and submission script to a temporary folder
    with SandboxFolder() as folder:

        subfolder, script_filename = calc.submit_test(folder=folder)
        print("inputs created at {}".format(subfolder.abspath))

        script_path = os.path.join(subfolder.abspath, script_filename)
        # we first need to make sure the script is executable
        st = os.stat(script_path)
        os.chmod(script_path, st.st_mode | stat.S_IEXEC)
        # now call the script; NB: bash -l -c is required to access global variables loaded in .bash_profile
        returncode = subprocess.call(["bash", "-l", "-c", script_path], cwd=subfolder.abspath)

        if returncode not in allowed_returncodes:
            # the script redirects stderr to _scheduler-stderr.txt (at least in v0.12)
            err_msg = "process failed (and couldn't find _scheduler-stderr.txt)"
            stderr_path = os.path.join(subfolder.abspath, "_scheduler-stderr.txt")
            if os.path.exists(stderr_path):
                with open(stderr_path) as f:
                    err_msg = "Process failed with stderr:\n{}".format(f.read())
            raise RuntimeError(err_msg)
        print("calculation completed execution")

Testing long running executables

In reality, testing that the physics codes actually execute would be time consuming, and for licensed software you cannot really do that on Travis anyway. Two approaches are possible:

(1) pytest allows tests to be decorated with a marker, e.g.

@pytest.mark.process_execution
def test_process(new_database):

which can then be used to ignore certain tests, e.g. pytest -v -m "not process_execution"
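
In practice, the custom marker would also be registered with pytest so that it does not trigger an "unknown marker" warning; a minimal sketch, assuming the marker name used above (placed in conftest.py):

    def pytest_configure(config):
        # register the custom marker used to tag tests that execute the real code
        config.addinivalue_line(
            "markers", "process_execution: marks tests that execute the calculation"
        )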

(2) mock executables can be created. aiida-vasp appears to do this, but I think I've come up with a fairly generic mock-executable solution, as I explain here. The executable works by creating a hash of the input file contents and attempting to match it against the expected (pre-computed) outputs. Switching between mock and non-mock executables is then simply a matter of setting the environment variable MOCK_EXECUTABLES=true; a rough sketch is given below.
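
A rough sketch of such a hash-matching mock executable, under the assumption that pre-computed outputs are stored in one folder per input hash (the registry layout and the input file name are illustrative, not the actual implementation):

    """Mock executable: hash the input file and copy the matching pre-computed outputs."""
    import hashlib
    import shutil
    import sys
    from pathlib import Path

    # pre-computed outputs, one sub-folder per input-content hash (illustrative layout)
    REGISTRY = Path(__file__).parent / "mock_registry"

    def main(input_file="aiida.in"):
        digest = hashlib.sha256(Path(input_file).read_bytes()).hexdigest()
        reference = REGISTRY / digest
        if not reference.is_dir():
            sys.exit("no pre-computed outputs found for input hash {}".format(digest))
        # copy the stored outputs into the working directory, as the real executable would write them
        for path in reference.iterdir():
            shutil.copy(path, Path.cwd() / path.name)

    if __name__ == "__main__":
        main(*sys.argv[1:])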

Output Parsing and end-to-end testing

This could be achieved in the same manner as the step above, but preferably by wrapping the calculation in a WorkChain and accessing the future of the calculation. Hopefully @sphuber can advise.

test computer not configured for user

When doing verdi run examples/submit.py, calculations end up with SUBMISSIONFAILED because of

|   Submission of calc 914 failed, computer pk= 2 (localhost-test) is not configured for aiidauser [email protected]

Need to figure out how to configure the computer programmatically.
For the moment, a workaround is to verdi run examples/submit.py once to set up the computer, and then

verdi computer configure localhost-test
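
For reference, a minimal sketch of configuring the computer programmatically, assuming the aiida-core >= 2.0 orm API (load_computer, the profile's default User and Computer.configure); the safe_interval value is illustrative for a local transport:

    from aiida import load_profile, orm

    load_profile()
    computer = orm.load_computer("localhost-test")
    user = orm.User.collection.get_default()          # default user of the profile
    computer.configure(user=user, safe_interval=0.0)  # stores the AuthInfo for this user/computer pair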

GHA: outdated actions and packages

The current CI configuration contains a couple of outdated things:

  • outdated actions like actions/setup-python@v1 (should be actions/setup-python@v2) (non-crucial)
  • mix of action versions actions/checkout@v1 and actions/checkout@v2 (non-crucial)
  • the CI installs PostgreSQL 10, while GHA's ubuntu-latest is now Ubuntu Focal, which ships PostgreSQL 12, hence the error:

    E: Unable to locate package postgresql-10
    Error: Process completed with exit code 100.

fetch top-level spec to set defaults

When overwriting input specs, you lose validators that may have been set by aiida-core (e.g. for the parser name, resources, etc.).

We should instead follow the pattern

spec.inputs['metadata']['options']['..'].default = 1234
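
A hedged sketch of this pattern inside a CalcJob's define() method, assuming the aiida.engine.CalcJob API; the class name and default values are illustrative:

    from aiida.engine import CalcJob

    class DiffCalculation(CalcJob):
        """Illustrative calculation plugin that adjusts inherited input defaults."""

        @classmethod
        def define(cls, spec):
            super().define(spec)
            # change only the defaults of the inherited ports, keeping the
            # validators that aiida-core attached to them
            spec.inputs['metadata']['options']['parser_name'].default = 'diff'
            spec.inputs['metadata']['options']['resources'].default = {
                'num_machines': 1,
                'num_mpiprocs_per_machine': 1,
            }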

replace develop git hash by 1.0.0b2

Currently, the plugin is tested using a recent commit from the develop branch.
As soon as aiida-core 1.0.0a4 is released, test against aiida-core >= 1.0.0a4 instead.

This involves:

  • Plugin cutter:
    • remove .travis-data/install_aiida_develop.sh
    • update .travis.yml:
  • Plugin:
    • remove .travis-data/install_aiida_develop.sh
    • update .travis.yml

Compatible with AiiDA 2.0?

Hi Everyone,

I was interested in maybe using this for some AiiDA 2.0 things I'm working on, but wanted to know if cookiecutter is compatible with the latest version.

Best,

Daniel

pytest fixture_manager doesn't work out of the box with `PluginTestCase`

The cookiecutter uses pytest (and the corresponding fixture_manager).

At the moment, this works together with the PluginTestCase for unittest only if there is at least one test function for pytest that uses the aiida_profile fixture, such as this one:

This repo uses both pytest-style test functions and unittest-style test classes, since it is intended to demonstrate how to use both. If there is no such function (i.e. all tests are written using the PluginTestCase), the aiida_profile fixture is never called and the PluginTestCase will exit with a message like

 "Fixture mananger has no open profile. Please use aiida.manage.fixtures.TestRunner to run these tests.")

We should see whether we can find a way to circumvent this problem (perhaps one can just put an "empty" test function in the conftest.py that does this?).

Note: For the moment, the "workaround" is to have at least one pytest test function using the aiida_profile fixture.
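
For completeness, a minimal sketch of such a workaround test (placed, for example, in conftest.py or any test module); it does nothing except request the aiida_profile fixture:

    def test_load_aiida_profile(aiida_profile):
        """No-op test whose only purpose is to trigger the aiida_profile fixture,
        so that a profile is open before the PluginTestCase tests run."""
        assert aiida_profile is not None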

todo for production

  • add test for parser class of aiida-diff plugin
  • move manage.py to click
  • add instructions for running tests & examples to README
  • add info to aiida-core docs (replaces plugin template)
