
ndx-template's Introduction

About


This repo provides a template for creating Neurodata Extensions (NDX) for the Neurodata Without Borders data standard.

This template currently supports creating Neurodata Extensions only using Python 3.8+. MATLAB support is in development.

Getting started

  1. Install Python for your operating system if it is not already installed.

  2. Install cookiecutter, pynwb, and hdmf-docutils. cookiecutter is a Python-based command-line utility that creates projects from templates.

    python -m pip install -U cookiecutter pynwb hdmf-docutils
  3. Run cookiecutter in the directory where you want to create a new directory with the extension:

    cookiecutter gh:nwb-extensions/ndx-template

    To overwrite the contents of an existing directory, use the --overwrite-if-exists flag:

    cookiecutter --overwrite-if-exists gh:nwb-extensions/ndx-template

    This can be useful if you want to populate an existing empty git repository with a new extension.

  4. Answer the prompts, which will be used to fill in the blanks throughout the template automatically. Guidelines:

  • Select a name for your extension. It must start with "ndx-" and becomes the name of your extension's namespace. It could describe the extension (e.g., "ndx-cortical-surface") or name your lab or group (e.g., "ndx-allen-institute"). The name should generally follow these conventions:
      • Use only lower-case ASCII letters (no special characters)
      • Use "-" to separate different parts of the name (no spaces allowed)
      • Be short and descriptive
    • Select an initial version string - Version of your extension. Versioning should start at 0.1.0 and follow semantic versioning guidelines
    • Select a license - Name of license used for your extension source code. A permissive license, such as BSD, should be used if possible.
  5. A new folder with the same name as your entered namespace will be created. See NEXTSTEPS.md in that folder for the next steps in creating your awesome new Neurodata Extension.

In case cookiecutter runs into problems and you want to avoid reentering all the information, you can edit the file ~/.cookiecutter_replay/ndx-template.json, and use that via cookiecutter --replay gh:nwb-extensions/ndx-template.
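Cookiecutter replay files store your previous answers under a top-level "cookiecutter" key; an abbreviated sketch of what ~/.cookiecutter_replay/ndx-template.json might contain (field values are illustrative, not the full set of prompts):

```json
{
  "cookiecutter": {
    "namespace": "ndx-my-namespace",
    "description": "An NWB extension",
    "author": "My Name",
    "version": "0.1.0"
  }
}
```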

See the PyNWB tutorial for guidance on how to write your extension.

When you are done creating your extension, we encourage you to follow the steps to publish your Neurodata Extension in the NDX Catalog for the benefit of the greater neuroscience community! :)

Running tests with breakpoint debugging

By default, to aid with debugging, the project is configured NOT to run code coverage as part of the tests. Code coverage reporting is useful when creating tests, but while it is enabled, breakpoints set for debugging with pdb are ignored. To enable code coverage reporting, uncomment the following lines in your pyproject.toml:

# uncomment below to run pytest with code coverage reporting. NOTE: breakpoints may not work
# addopts = "--cov --cov-report html"

Integrating with NWB Widgets

When answering the cookiecutter prompts, you will be asked whether you would like to create templates for integration with NWB Widgets, a library of plotting widgets for interactive visualization of NWB neurodata types within a Jupyter notebook. If you answer "yes", then an example widget and example notebook will be created for you. If you answer "no", but would like to add a widget later on, follow the instructions below:

  1. Create a directory named widgets in src/pynwb/{your_python_package_name}/.
  2. Copy __init__.py to that directory and adapt the contents to your extension.
  3. Copy tetrode_series_widget.py to that directory and adapt the contents to your extension.
  4. Create a directory named notebooks in the root of the repository.
  5. Copy example.ipynb to that directory and adapt the contents to your extension.
  6. Add nwbwidgets to requirements-dev.txt.
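As a sketch of what such a widget function might look like, here is a deliberately simplified stand-in modeled on the template's tetrode_series example (the function name, the textual summary, and the vis_spec mapping are assumptions; a real widget would return a plot built with nwbwidgets/matplotlib rather than a string):

```python
# Hypothetical widgets/__init__.py contents; a real implementation would
# return an ipywidgets/matplotlib widget instead of a string.
def tetrode_series_widget(tetrode_series, **kwargs):
    """Summarize a TetrodeSeries-like object (placeholder for a real plot)."""
    return f"TetrodeSeries with {len(tetrode_series.data)} samples"

# Mapping from neurodata type name to widget function, which the example
# notebook would register with nwbwidgets.
vis_spec = {"TetrodeSeries": tetrode_series_widget}
```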

Maintainers

Copyright

Neurodata Extensions Catalog (NDX Catalog) Copyright (c) 2021-2024, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.

If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Intellectual Property Office at [email protected].

NOTICE. This Software was developed under funding from the U.S. Department of Energy and the U.S. Government consequently retains certain rights. As such, the U.S. Government has been granted for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable, worldwide license in the Software to reproduce, distribute copies to the public, prepare derivative works, and perform publicly and display publicly, and to permit others to do so.

ndx-template's People

Contributors

rly, jcfr, bendichter, oruebel, t-b


ndx-template's Issues

Error checking for create_extension_spec.py

Current Behavior

Users can place group types such as TimeSeries into dataset specs such as NWBDatasetSpec. The .yaml files are rendered without error, and when these incorrectly defined extension types are used, HDMF throws a hard-to-interpret error:

TypeError: GroupBuilder.set_dataset: incorrect type for 'builder' (got 'GroupBuilder', expected 'DatasetBuilder')

Expected Behavior

It would be really nice if create_extension_spec.py automatically checked the neurodata_types used in the extension and ensured that each spec class is fed an appropriate neurodata_type. For example:

TypeError: NWBDataSpec: incorrect type for 'neurodata_type_inc' (got 'NWBGroup' expected 'NWBDataset')
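A hypothetical check along these lines (the function name, the hard-coded set of group types, and the error wording are all assumptions for illustration) could fail fast at spec-creation time instead of at write time:

```python
# Hypothetical validation that create_extension_spec.py could run before
# rendering the YAML files (names and the type table are assumptions).
GROUP_TYPES = {"TimeSeries", "NWBDataInterface", "DynamicTable"}

def check_neurodata_type_inc(spec_kind, neurodata_type_inc):
    """Raise early if a group type is included in a dataset spec."""
    if spec_kind == "dataset" and neurodata_type_inc in GROUP_TYPES:
        raise TypeError(
            "NWBDatasetSpec: incorrect type for 'neurodata_type_inc' "
            f"(got group type {neurodata_type_inc!r}, expected a dataset type)"
        )

check_neurodata_type_inc("dataset", "VectorData")  # OK: dataset type
```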

step 11 confusion

For src, should we include the URL of the GitHub repo page (.com) or of the repo itself (.git)?

place for widgets

It would be great to have a designated place where users could optionally define custom visualizations for their extensions, so that the visualizations could then be integrated with nwb-jupyter-widgets

names

My preference for names would be:

  • the namespace name is e.g. "speech" (speech.namespace.yaml)
  • the repo name is e.g. "ndx-speech" (git clone ...ndx-speech.git)
  • the package name is e.g. "ndx_speech" (from ndx_speech import Transcription)

We know all non-core namespaces are going to be neurodata extensions, so there isn't much point in having the ndx prefix there. It only serves a purpose for the repo and package names, where this will be alongside other repos and packages.

I know it's a little tricky having 3 different names, but that's the convention I've been using. Would it be possible to change the cookiecutter to use that convention?

One advantage to this is we can ask for the namespace name and then add on the ndx prefix ourselves, thus automatically adhering to that standard of using the ndx prefix.
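The proposal boils down to deriving all three names from one user-supplied short name; a sketch (the function name is hypothetical, not part of the template):

```python
def derive_names(short_name):
    """Derive namespace, repo, and package names from e.g. 'speech',
    per the convention proposed above (a sketch, not template behavior)."""
    namespace = short_name                  # speech.namespace.yaml
    repo = f"ndx-{short_name}"              # git clone ...ndx-speech.git
    package = repo.replace("-", "_")        # from ndx_speech import ...
    return namespace, repo, package
```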

[Feature request] Migration to `pyproject.toml` due to future deprecation of `setup.py install`

When I do pip install ., I see this warning of deprecation of setup.py. While I can use --use-pep517 to silence the warning, since many packages are switching to pyproject.toml anyway, I think it would be nice to start slowly migrating to it.

DEPRECATION: ndx-test is being installed using the legacy 'setup.py install' method, 
because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. 
pip 23.1 will enforce this behaviour change. 
A possible replacement is to enable the '--use-pep517' option.
Discussion can be found at https://github.com/pypa/pip/issues/8559
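A minimal pyproject.toml for the template might look like the sketch below (metadata values are illustrative; the real fields would mirror the existing setup.py):

```toml
[build-system]
requires = ["setuptools>=61", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "ndx-my-namespace"
version = "0.1.0"
description = "An NWB extension"
requires-python = ">=3.8"
dependencies = ["pynwb"]
```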

error on cookiecutter

$ cookiecutter gh:nwb-extensions/ndx-template
You've downloaded /Users/bendichter/.cookiecutters/ndx-template before. Is it okay to delete and re-download it? [yes]: 
namespace [ndx-my-namespace]: ndx-experimenters
description [An NWB:N extension]: allows you to list more than one experimenter per session
author [My Name]: Ben Dichter
email [[email protected]]: [email protected]
github_username [myname]: bendichter
copyright [2019, Ben Dichter]: 
version [0.1.0]: 
release [alpha]: 
license [BSD 3-Clause]: 
py_pkg_name [ndx_experimenters]: 
Creating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/source/conf.py.
Creating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/source/index.rst.
Creating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/make.bat.

Finished: An initial directory structure has been created.

You should now populate your master file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/source/index.rst and create other documentation
source files. Use the Makefile to build the docs, like so:
   make builder
where "builder" is one of the supported builders, e.g. html, latex or linkcheck.

Cleaning file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/source/conf.py
Creating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/source/_static/theme_overrides.css
Updating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/source/conf.py
Updating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/source/conf_doc_autogen.py
Creating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/Makefile
Creating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/source/description.rst
Creating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/source/release_notes.rst
Creating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/source/credits.rst
Creating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/source/format.rst
Creating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/source/index.rst
Creating file /Users/bendichter/dev/pynwb/ndx-experimenters/docs/README.md
Traceback (most recent call last):
  File "src/spec/create_extension_spec.py", line 44, in <module>
    main()
  File "src/spec/create_extension_spec.py", line 40, in main
    export_spec(ns_builder, new_data_types)
  File "/Users/bendichter/dev/pynwb/ndx-experimenters/src/spec/export_spec.py", line 20, in export_spec
    if ns_builder.name is None:
AttributeError: 'NWBNamespaceBuilder' object has no attribute 'name'
Traceback (most recent call last):
  File "/var/folders/mn/b_p5fwjx3999zx0qdqhvqjvw0000gn/T/tmp7i7d3a5t.py", line 55, in <module>
    main()
  File "/var/folders/mn/b_p5fwjx3999zx0qdqhvqjvw0000gn/T/tmp7i7d3a5t.py", line 43, in main
    _create_extension_spec()
  File "/var/folders/mn/b_p5fwjx3999zx0qdqhvqjvw0000gn/T/tmp7i7d3a5t.py", line 28, in _create_extension_spec
    check_call([sys.executable, spec_dir + "/create_extension_spec.py"])
  File "/Users/bendichter/anaconda3/lib/python3.6/subprocess.py", line 311, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/Users/bendichter/anaconda3/bin/python', 'src/spec/create_extension_spec.py']' returned non-zero exit status 1.
ERROR: Stopping generation because post_gen_project hook script didn't exit successfully
Hook script failed (exit status: 1)

Does not work out of the box due to "left over conflict markers" error from git

repo version: 65ce598 (Update setup.py, 2020-03-12)

$git --version
git version 2.26.0
$lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 10 (buster)
Release:        10
Codename:       buster

After executing cookiecutter gh:nwb-extensions/ndx-template using default values I get

$cookiecutter gh:nwb-extensions/ndx-template
You've downloaded /home/firma/.cookiecutters/ndx-template before. Is it okay to delete and re-download it? [yes]:
namespace [ndx-my-namespace]:
description [An NWB:N extension]:
author [My Name]:
email [[email protected]]:
github_username [myname]:
copyright [2020, My Name]:
version [0.1.0]:
release [alpha]:
license [BSD 3-Clause]:
py_pkg_name [ndx_my_namespace]:
------------------------------------------------------------------

Finished: An initial directory structure has been created.

You should now populate your master file /home/firma/devel/test/ndx-my-namespace/docs/source/index.rst and create other documentation
source files. Use the Makefile to build the docs, like so:
   make builder
where "builder" is one of the supported builders, e.g. html, latex or linkcheck.

Cleaning file /home/firma/devel/test/ndx-my-namespace/docs/source/conf.py
Creating file /home/firma/devel/test/ndx-my-namespace/docs/source/_static/theme_overrides.css
Updating file /home/firma/devel/test/ndx-my-namespace/docs/source/conf.py
Updating file /home/firma/devel/test/ndx-my-namespace/docs/source/conf_doc_autogen.py
Creating file /home/firma/devel/test/ndx-my-namespace/docs/Makefile
Creating file /home/firma/devel/test/ndx-my-namespace/docs/source/description.rst
Creating file /home/firma/devel/test/ndx-my-namespace/docs/source/release_notes.rst
Creating file /home/firma/devel/test/ndx-my-namespace/docs/source/credits.rst
Creating file /home/firma/devel/test/ndx-my-namespace/docs/source/format.rst
Creating file /home/firma/devel/test/ndx-my-namespace/docs/source/index.rst
Creating file /home/firma/devel/test/ndx-my-namespace/docs/README.md
Initialized empty Git repository in /home/firma/devel/test/ndx-my-namespace/.git/
docs/source/credits.rst:13: leftover conflict marker
docs/source/credits.rst:21: leftover conflict marker
Traceback (most recent call last):
  File "/tmp/tmp8y5sens4.py", line 53, in <module>
    main()
  File "/tmp/tmp8y5sens4.py", line 42, in main
    _initialize_git()
  File "/tmp/tmp8y5sens4.py", line 32, in _initialize_git
    check_call(["git", "commit", "-m", "Initial commit"])
  File "/home/firma/.pyenv/versions/3.8.2/lib/python3.8/subprocess.py", line 364, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['git', 'commit', '-m', 'Initial commit']' returned non-zero exit status 1.
ERROR: Stopping generation because post_gen_project hook script didn't exit successfully
Hook script failed (exit status: 1)

I worked around that by defining

*.* conflict-marker-size=100

in ~/.config/git/attributes.

Allow fix of errors

Thanks for the nice template. I tried to create an extension to work on the ICEphys extension and ran into a problem (or rather an inconvenience). When running cookiecutter as follows:

 cookiecutter gh:nwb-extensions/ndx-template
namespace [ndx-my-namespace]: icephys_meta_struct            
description [An NWB:N extension]: Implement proposal for hierarchical metadata strucutre for intracellular electrophysiology data  
author [My Name]: Oliver Ruebel, Ryan Ly, Benjamin Dichter, Thomas Braun, Andrew Tritt
email [[email protected]]: [email protected]
github_username [myname]: oruebel
copyright [2019, Oliver Ruebel, Ryan Ly, Benjamin Dichter, Thomas Braun, Andrew Tritt]: 
version [0.1.0]: 
release [alpha]: 
license [BSD 3-Clause]: 
py_pkg_name [icephys_meta_struct]: 

It errors out with the following message:

ERROR: The name of your NDX extension should start with "ndx-".
ERROR: Stopping generation because pre_gen_project hook script didn't exit successfully
Hook script failed (exit status: 1)

While this behavior is correct, it would be nice to get the warning immediately when the invalid name is entered so that one can correct it, rather than having to rerun the script and reenter all of the information. Again, this is not a priority; I just figured I should at least create an issue so we don't forget.

Setup Python API Docs

In addition to setting up the outline for the docs based on the schema, it would be nice if the cookiecutter could also set up a basic outline for documenting custom API classes (optionally including Sphinx galleries with tutorials for the extension). This is not critical but would further simplify things.

inner setup.py yaml links won't work

The links from the setup.py file to the yaml files are the correct relative path for the outer setup.py file but not for the inner one. Is there a way to bring the yaml files along for the inner setup.py?

2 requirements.txt files

Similar to the setup.py issue, the NEXTSTEPS document describes 2 places for requirements.txt: one in the root and one in /src/pynwb. Can we get rid of this second one if there is no additional setup.py?

Add load_namespace to default __init__

As far as I can tell, this line in the default setup.py:

dst_dir = os.path.join(project_dir, 'src', 'pynwb', '{{ cookiecutter.py_pkg_name }}', 'spec')

copies the YAML spec to a spec/ subfolder in the Python package upon install. If that is the case, should we also add a command for loading the namespace to the default __init__.py for PyNWB (https://github.com/nwb-extensions/ndx-template/blob/master/%7B%7B%20cookiecutter.namespace%20%7D%7D/src/pynwb/%7B%7B%20cookiecutter.py_pkg_name%20%7D%7D/__init__.py), since all packages would have to do this? I guess this would look something like:

import os
from pynwb import load_namespaces

{{ cookiecutter.py_pkg_name }}_specpath = os.path.join(
    os.path.dirname(__file__), 'spec',
    '{{ cookiecutter.namespace }}.namespace.yaml')
load_namespaces({{ cookiecutter.py_pkg_name }}_specpath)

CI on dev is broken

In my attempts to get Azure Pipelines to work on PRs where the source is a fork (which still does not work), I broke CI that runs after a merge to the master branch. CI still runs correctly on PRs from branches within the ndx-template repo.

The culprit is this line:

cookiecutter -v --no-input --output-dir "/home/vsts/work/1" gh:nwb-extensions/ndx-template --checkout "$(System.PullRequest.SourceBranch)"

where $(System.PullRequest.SourceBranch) is not defined when run from a merge and not a PR.

Allow users to specify github user account / organization to publish repo in `nextsteps.md`

A minor issue. The cookiecutter template asks for a github username and uses that to populate URLs such as:

https://github.com/{{ github_username_list[0] }}/{{ cookiecutter.namespace }}/releases/tag/{{ cookiecutter.version }}
src: https://github.com/{{ github_username_list[0] }}/{{ cookiecutter.namespace }}/ndx-extracellular-channels

For the majority of cases, this is OK but users may want to use a github organization or different user account instead.

Add extension function for NWBWidgets

Hi guys,

After discussing with @bendichter, we came up with a possible solution to standardize how an extension could automatically define a custom widget.

Basically, an extension could define a show_widget() function on the new class (the name should be standardized so that nwbwidgets can automatically discover custom functions).

In the pynwb.extension_name.__init__.py:

from pynwb import get_class

# Make them accessible at the package level
MyExtensionClass = get_class("MyExtensionClass", "ndx-my-extension")
# Attach the custom widget function so nwbwidgets can discover it
MyExtensionClass.show_widget = my_show_widget_function

Then, nwbwidgets.nwb2widget could automatically discover and dynamically extend the neurodata_vis_spec as follows:

def nwb2widget(node, neurodata_vis_spec=default_neurodata_vis_spec):
    # pseudo-code - could be recursive over children
    for entry in node.values():
        if hasattr(entry, "show_widget"):
            neurodata_vis_spec[type(entry)] = entry.show_widget
    return nwb2widget_base(node, neurodata_vis_spec)

This is just an idea, happy to discuss other options.

What do you guys think? @rly @oruebel @bendichter

Support list of authors and contacts

Currently, cookiecutter asks for a single author. When multiple authors are entered as a comma-separated list, they are placed in the namespace file as a single string, e.g.:

author: ' Author-A, Author-B'

instead of a YAML list:

author:
  - Author-A
  - Author-B

The same is true of the contacts. It would be nice to be able to render these as a list in YAML as well. This is by no means critical, since this can be edited very easily, but I figured I should document this in an issue.
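Rendering the prompt answer as a proper YAML list would only take a small split step in the template's generation hooks; a sketch (the function name is hypothetical):

```python
def split_authors(author_field):
    """Split the single comma-separated author string the template
    currently emits into a list suitable for YAML list rendering."""
    return [a.strip() for a in author_field.split(",") if a.strip()]
```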

Add setup.cfg configuration

In addition to the setup.py file it would be useful to also include a setup.cfg file with the recommended configurations for flake8 etc. with the template. A simple default setup.cfg could look like this (derived from what we have in HDMF):

[wheel]
universal = 1

[flake8]
max-line-length = 120
max-complexity = 17
exclude =
  .git,
  .tox,
  __pycache__,
  build/,
  dist/,
  docs/source/conf.py
  versioneer.py

[metadata]
description-file = README.rst

If we also want to recommend the use of versioneer then we should include those settings here as well.

RFC: Approach to support loading of extensions

The approaches below are not mutually exclusive.

(1) Use namespace filepath returned by extension python package

import ndx_my_extension

from pynwb import load_namespaces

load_namespaces(ndx_my_extension.namespace_filepath)

(2) systematic installation of spec files as datafiles

Instead of setting package_data, the spec files could be listed as data_files always installed under a directory called ndx_specs

(3) Use entry points to register extensions

Add support for entry points, allowing an ndx Python extension to be registered automatically after installation.

pynwb will need to be updated to support the following:

  • listing installed extensions using a pynwb function (e.g., pynwb.list_registered_namespaces())
  • implement a function named pynwb.load_registered_namespaces()
  • automatic loading of registered namespaces on import. Do we want that? @oruebel @bendichter

automatic loading of registered namespaces on import. Do we want that?

To support both cases, automatic and explicit loading of extensions, the following could be done:

pip install pynwb[extension_autoload]

By installing the extra [extension_autoload], pynwb would load the namespace of all installed extensions on import.
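Approach (3) could be sketched with standard-library entry points; the group name "pynwb.extensions" and both function names are assumptions for illustration, not an existing pynwb API:

```python
import sys
from importlib.metadata import entry_points

def list_registered_namespaces(group="pynwb.extensions"):
    """Return the entry points declared by installed extensions
    (the group name is an assumption for illustration)."""
    if sys.version_info >= (3, 10):
        eps = entry_points(group=group)
    else:  # the older API returns a dict keyed by group
        eps = entry_points().get(group, [])
    return list(eps)

def load_registered_namespaces():
    """Load every registered namespace; each entry point would expose
    the path to its .namespace.yaml file."""
    for ep in list_registered_namespaces():
        namespace_path = ep.load()
        # pynwb.load_namespaces(namespace_path) would go here
```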

suggested path forward

As a first step, I suggest going with (1); it will:

  • not prevent implementing (2) and/or (3)
  • nicely abstract the location of the spec files

Cross-reference docs to other extensions

Currently, intersphinx is used to create cross-references to other documentation, e.g. the core schema. However, this reference to the core is hard-coded in docs/source/conf.py:

intersphinx_mapping = {'core': ('https://nwb-schema.readthedocs.io/en/latest/', None)}

Change this behavior to allow cross-references to the docs of other extensions when the current extension uses them.

Namespace YAML files will have to include an (optional) URL to the documentation of the included neurodata types:

namespaces:
- author: Ben Dichter
  contact: [email protected]
  doc: ecog extensions
  name: ecog
  schema:
  - namespace: core
    neurodata_types:
    - NWBDataInterface
    - Subject
    documentation: https://nwb-schema.readthedocs.io/en/latest/
  - source: ecog.extensions.yaml
  version: 1.2.1

This will also require a change to NWBNamespaceBuilder in PyNWB and NamespaceBuilder in HDMF:

ns_builder.add_spec(ext_path, neurodata_type, doc_url)

Useful refs:
https://stackoverflow.com/questions/30939867/how-to-properly-write-cross-references-to-external-documentation-with-intersphin
