
siibra-python's Introduction


siibra - Software interface for interacting with brain atlases

Copyright 2020-2023, Forschungszentrum Jülich GmbH

Authors: Big Data Analytics Group, Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Jülich GmbH

siibra is a Python client to a brain atlas framework that integrates brain parcellations and reference spaces at different spatial scales, and connects them with a broad range of multimodal regional data features. It aims to facilitate programmatic and reproducible incorporation of brain parcellations and brain region features from different sources into neuroscience workflows.

Note: siibra-python is still in development. While care is taken that it works reliably, its API is not yet stable and you may still encounter bugs when using it.

siibra provides structured access to parcellation schemes in different brain reference spaces, including volumetric reference templates at macroscopic and microscopic resolutions as well as surface representations. It supports both discretely labelled and statistical (probabilistic) parcellation maps, which can be used to assign brain regions to spatial locations and image signals, to retrieve region-specific neuroscience datasets from multiple online repositories, and to sample information from high-resolution image data. The datasets anchored to brain regions address molecular, cellular and architectural features as well as connectivity, and are complemented with live queries to external repositories as well as dynamic extraction from "big" image volumes such as the 20-micrometer BigBrain model.

siibra was developed in the frame of the Human Brain Project for accessing the EBRAINS human brain atlas. It stores most of its contents as sustainable and open datasets in the EBRAINS Knowledge Graph, and is designed to support the openMINDS metadata standards. Its functionalities include common actions known from the interactive viewer siibra-explorer hosted at EBRAINS. In fact, the viewer is a good resource for exploring siibra’s core functionalities interactively: selecting different parcellations, browsing and searching brain region hierarchies, downloading maps, identifying brain regions, and accessing multimodal features and connectivity information associated with brain regions. Feature queries in siibra are parameterized by data modality and anatomical location, where the latter can be a brain region, a brain parcellation, or a location in reference space. Beyond the explorative focus of siibra-explorer, the Python library supports a range of data analysis functions suitable for typical neuroscience workflows.

siibra hides much of the complexity that would otherwise be required to collect and interact with the individual parcellations, templates and data repositories. By encapsulating many aspects of interacting with different maps and reference templates, it also minimizes common errors like misinterpreting coordinates from different reference spaces, confusing label indices of brain regions, or using inconsistent versions of parcellation maps. It aims to provide a safe way of using maps defined across multiple spatial scales for reproducible analysis.

Installation

siibra is available on pypi. To install the latest released version, simply run pip install siibra. In order to work with the latest version from github, use pip install git+https://github.com/FZJ-INM1-BDA/siibra-python.git@main.

There is also a Docker image based on jupyter/scipy-notebook, which already includes siibra:

docker run -dit \
      -p 10000:8888 \
      --rm \
      --name siibra \
      docker-registry.ebrains.eu/siibra/siibra-python:latest

Documentation & Help

siibra-python’s documentation is hosted on https://siibra-python.readthedocs.io. It includes a catalogue of documented code examples that walk you through the different concepts and functionalities. As a new user, we recommend working through these examples; they are short and quickly provide the code snippets you need to get started. Furthermore, a set of Jupyter notebooks demonstrating more extensive example use cases is maintained in the siibra-tutorials repository. We are working on a full API documentation of the library. You can find the current status on readthedocs, but be aware that it is not yet as complete and up-to-date as the code examples.

If you run into issues, please open a ticket on EBRAINS support or file bugs and feature requests on github. Please keep in mind that siibra-python is still in development. While care is taken to make everything work reliably, the API of the library is not yet stable, and the software is not yet fully tested.

How to contribute

If you want to contribute to siibra, feel free to fork it and open a pull request with your changes. You are also welcome to contribute to discussions in the issue tracker and of course to report issues you are facing. If you find the software useful, please reference this repository URL in publications and derived work. You can also star the project to show us that you are using it.

Acknowledgements

This software code is funded from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).

How to cite

Please cite the version used according to the citation file or all versions by Timo Dickscheid, Xiayun Gui, Ahmet Nihat Simsek, Vadim Marcenko, Louisa Köhnen, Sebastian Bludau, & Katrin Amunts. (2023). siibra-python - Software interface for interacting with brain atlases. Zenodo. https://doi.org/10.5281/zenodo.7885728.

siibra-python's People

Contributors

ahmetnsimsek, ch-schiffer, dickscheid, i-zaak, marcenko, skoehnen, xgui3783


siibra-python's Issues

Precompute regionprops for standard regions

Computing the surface area of a region is computationally expensive (marching cubes), but constant for a given region over time. The spatial properties of brain regions should be precomputed and stored with the parcellation to avoid costly recomputation on the client side.
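A minimal precomputation scheme could ship the values as static data alongside the parcellation configuration, and only fall back to computing them when missing. This is a sketch in plain Python; the keys, field names and values are made up for illustration:

```python
import json

# Precomputed spatial properties, generated once server-side (values made up)
PRECOMPUTED = json.loads("""
{
  "v1 left":  {"volume_mm3": 11200.0, "surface_mm2": 5400.0},
  "v1 right": {"volume_mm3": 11050.0, "surface_mm2": 5350.0}
}
""")

def region_props(region, compute_fallback=None):
    """Return precomputed properties; only recompute when missing."""
    if region in PRECOMPUTED:
        return PRECOMPUTED[region]
    if compute_fallback is not None:
        return compute_fallback(region)  # e.g. the expensive marching-cubes path
    raise KeyError(region)

props = region_props("v1 left")
```

The expensive computation then happens at most once per region, during data preparation rather than in every client session.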

check mirror of the predefined configurations

We are running a mirror of the predefined configurations, which is accessed if the main configuration repo is not reachable. Since v0.2 introduced breaking changes to the configurations, it is important that the mirror has the latest changes.

We need to check whether the mirroring of the configurations is working.

Region.build_mask() of left hemisphere region produces a two-hemisphere mask

To reproduce:

import siibra
from nilearn import plotting

atlas = siibra.atlases['human']
atlas.select(parcellation='julich 2.5', region='v1 left')
v1_mask = atlas.selected_region.build_mask("mni152")
plotting.plot_roi(v1_mask)

Expected: binary mask for left hemisphere's V1

Obtained: binary mask for V1 in both hemispheres.

For continuous maps, using get_regional_map, the output is correct.

Streamline volumeSrc data structure

The volumeSrc json definitions and data structures in siibra-python are not very clean. In particular, we need to discuss and revise this deep structure (a dict of dicts of lists):

if 'volumeSrc' in obj:

It is unclear and not documented what the 'key' (index to second dict hierarchy) is supposed to be and how it is interpreted. It is in general difficult to work with and understand such a deep structure, and it is unclear how to deal with multiple volume sources for the same space - which one to choose then?

Some first thoughts:

  • use the same "volumeSrc" structure in definitions of spaces, parcellation maps and regions. Currently, it's a list of volume sources in space definitions, a dict indexed by space for parcellation maps, and a 2-level dict indexed by space id and then "purpose" ("collect", "pmap") in region definitions. They should better always be lists, with appropriate attributes to disambiguate them
  • let the from_json constructor of VolumeSrc resolve appropriately, so it can be tested against space, volume_type, "purpose", etc.
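The flat-list proposal from the first bullet could look roughly like this. The field names and values below are purely illustrative, not siibra's actual schema:

```python
# Hypothetical sketch: every volume source is a flat list entry with
# explicit attributes, instead of nested dicts keyed by space and "purpose".
VOLUME_SOURCES = [
    {"space": "bigbrain", "volume_type": "neuroglancer/precomputed", "purpose": "template"},
    {"space": "mni152",   "volume_type": "nii",                      "purpose": "pmap"},
    {"space": "mni152",   "volume_type": "neuroglancer/precomputed", "purpose": "collect"},
]

def find_volume_sources(sources, **criteria):
    """Return all sources whose attributes match the given values."""
    return [
        src for src in sources
        if all(src.get(key) == value for key, value in criteria.items())
    ]

matches = find_volume_sources(VOLUME_SOURCES, space="mni152", purpose="pmap")
```

With such a structure, ambiguity between multiple sources for the same space becomes an explicit query result rather than an undocumented dict-key convention.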

Restructure connectivity modalities

From a discussion with codemart, the following idea came up:

Currently, connectivity types are split into profiles and matrices. This makes little sense, since both are the same modality, just in different formats (profiles are in principle just rows of the matrices). The format is determined by the concept used to issue the query: when querying with a brain region object, a profile is expected; when querying with a parcellation, a matrix is expected.

Instead, the modality should be split into: functional connectivity, structural connection strengths, structural connection lengths (cf. the HBP deliverable document about the multiscale connectome with some thought on this).
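The point that a profile is just a row of the connectivity matrix can be illustrated with plain Python lists (the region names and values below are made up for illustration):

```python
# Toy connectivity matrix between three regions; values are illustrative.
regions = ["V1", "V2", "V3"]
matrix = [
    [0.0, 0.8, 0.3],   # connectivity of V1 to V1, V2, V3
    [0.8, 0.0, 0.5],   # connectivity of V2 to V1, V2, V3
    [0.3, 0.5, 0.0],   # connectivity of V3 to V1, V2, V3
]

def profile(region):
    """A 'profile' is simply the matrix row of the queried region."""
    return matrix[regions.index(region)]
```

Since the profile is derivable from the matrix, splitting by modality (functional vs. structural) and deriving the format from the query concept avoids duplicating data definitions.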

cannot select ch 123 left/right

version/hash: 6831d2d
to reproduce:

import siibra as sb
atlas=sb.atlases.MULTILEVEL_HUMAN_ATLAS
atlas.select_region('123 left')
[siibra:ERROR]  Cannot select region. The spec "123 left" is not unique. It matches: Ch 123 (Basal Forebrain) - left hemisphere, Ch 123 (Basal Forebrain) - left hemisphere

Regional datasets in rat brain include results for human

To reproduce:

import siibra
atlas = siibra.atlases['rat']
nc = atlas.get_region('neocortex')
for r in siibra.get_features(nc, 'ebrains'):
    print(r.name)

Expected behavior: Returns only rat datasets.

Current behaviour: Includes several human datasets, e.g. maps from the Julich-Brain parcellation.

[question] should parcellation introduce `groupBy` functionality

in siibra-explorer, parcellations are (sometimes) grouped according to, e.g., functional modes or fiber bundles. Should siibra-python also have this functionality?

If yes to above, how should this be implemented?

  • with a parcellation.label attribute, which expects a list of strings, so users can filter by label
  • with a pseudo path as a string attribute, from which the client can reconstruct a tree, to allow for a hierarchical organisation of parcellations
  • other suggestions?

pinging @dickscheid @marcenko @skoehnen
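The pseudo-path variant from the second bullet could be reconstructed client-side roughly like this. The paths and parcellation names are hypothetical examples, not actual configuration values:

```python
from collections import defaultdict

# Hypothetical pseudo paths attached to parcellations.
parcellations = [
    ("functional modes/DiFuMo 64", "difumo-64"),
    ("functional modes/DiFuMo 128", "difumo-128"),
    ("fiber bundles/long bundles", "bundles-long"),
]

def group_by_path(items):
    """Group parcellations by the leading component of their pseudo path."""
    groups = defaultdict(list)
    for path, name in items:
        groups[path.split("/")[0]].append(name)
    return dict(groups)

groups = group_by_path(parcellations)
```

A deeper hierarchy would follow the same pattern by splitting on every path component instead of only the first.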

get_feature does not filter feature by parcellation

After select_parcellation, get_features is expected to return features from the selected parcellation. 0.0.9.dev3 currently does not.

>>> import siibra as sb
[siibra:INFO]  Selected parcellation "Julich-Brain Cytoarchitectonic Maps 2.5"
[siibra:INFO]  Version: 0.0.9.dev3
>>> atlas = sb.atlases.MULTILEVEL_HUMAN_ATLAS
>>> atlas.select_parcellation(sb.parcellations['minds/core/parcellationatlas/v1.0.0/94c1125b-b87e-45e4-901c-00daee7f2579-25'])
[siibra:INFO]  Selected parcellation "Julich-Brain Cytoarchitectonic Maps 2.5"
>>> atlas.selected_parcellation.id
'minds/core/parcellationatlas/v1.0.0/94c1125b-b87e-45e4-901c-00daee7f2579-25'
>>> atlas.select_region('hoc1 left')
[siibra:INFO]  Selected region Area hOc1 (V1, 17, CalcS) - left hemisphere
Area hOc1 (V1, 17, CalcS) - left hemisphere
>>> conn=atlas.get_features(sb.modalities.ConnectivityProfile)
>>> conn[1].parcellation.id
'minds/core/parcellationatlas/v1.0.0/94c1125b-b87e-45e4-901c-00daee7f2579'
>>> conn[1].parcellation.name
'Julich-Brain Cytoarchitectonic Maps 1.18'
>>> 

Image function of BigBrainVolume expects bounding box to be in voxel space

If a bounding box is passed to the clip argument of the Image function of BigBrainVolume, it expects the bounding box to be specified in the voxel space of the specified resolution, which is highly inconvenient for users.
The bounding box should be defined in millimeter space, and the class should do the conversion itself (which it can, using its affine matrix).
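The conversion the class would have to perform internally is a standard affine transform, sketched here with NumPy. The affine values are made up for illustration and do not correspond to any real BigBrain resolution level:

```python
import numpy as np

# Example 4x4 affine of an image at some resolution (values made up):
# 0.02 mm isotropic voxels with an arbitrary physical offset.
affine = np.array([
    [0.02, 0.0,  0.0,  -70.0],
    [0.0,  0.02, 0.0,  -60.0],
    [0.0,  0.0,  0.02, -58.0],
    [0.0,  0.0,  0.0,    1.0],
])

def mm_to_voxel(point_mm, affine):
    """Convert a physical (mm) coordinate to voxel indices via the inverse affine."""
    inv = np.linalg.inv(affine)
    homogeneous = np.append(point_mm, 1.0)  # homogeneous coordinate for the 4x4 affine
    return (inv @ homogeneous)[:3]

vox = mm_to_voxel([-3.979, -61.256, 3.906], affine)
```

Since each resolution level (mip) has its own affine, performing this conversion inside the class would also remove the burden of picking the right affine from the caller.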

`RegionProps` constructor returns error for hemisphere'ed region(s)

version/hash: 6831d2d

to reproduce:

import siibra as sb
atlas=sb.atlases.MULTILEVEL_HUMAN_ATLAS
atlas.select_region('4p left')
regionprop=sb.region.RegionProps(
    atlas.selected_region,
    sb.spaces.MNI152_2009C_NONL_ASYM
)

expected result: returns regionprop

actual result: AttributeError:

[siibra:WARNING]  No mask could be computed for Area 4p (PreCG) - left hemisphere
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/xiao/dev/projects/siibra-api/venv/lib/python3.6/site-packages/siibra/region.py", line 488, in __init__
    M = np.asanyarray(mask.dataobj) 
AttributeError: 'NoneType' object has no attribute 'dataobj'

Some diagnosis seems to suggest that the ParcellationMap is indexed by the non-hemisphered region (i.e., the parent node of '4p left').

Regions extracted using same bounding box from BigBrain histology and isocortex segmentation are not aligned

Given a bounding box, I tried to extract a region from the BigBrain histological space and the corresponding isocortex segmentation.
Since I used the same bounding box for both, I would expect the resulting volumes to be aligned (up to the precision of the cortex segmentation), but they are not.
This is the code I used:

import numpy as np
import brainscapes as bs
from nilearn import plotting
from cloudvolume import Bbox

bb_img = bs.bigbrain.BigBrainVolume(bs.spaces.BIG_BRAIN_HISTOLOGY.url)

# Define two points in BigBrain space
p0 = np.array((-3.979, -61.256, 3.906))
p1 = np.array((5.863, -55.356, -2.487))

def mm_to_vox(p0, p1, img, mip):
    # Bounding box needs to be defined in voxel space, so we need to apply the inverse affine matrix to the points
    inv_aff = np.linalg.inv(img.affine(mip))
    p0_vox = np.dot(inv_aff[:3, :3], p0) + inv_aff[:3, -1]
    p1_vox = np.dot(inv_aff[:3, :3], p1) + inv_aff[:3, -1]
    return p0_vox, p1_vox

# Read region from BigBrain
mip = 0
p0_vox, p1_vox = mm_to_vox(p0, p1, img=bb_img, mip=mip)
# Define bounding box
bbox = Bbox(p0_vox, p1_vox)
img = bb_img.Image(clip=bbox, mip=mip, force=True)

mask_url = "https://neuroglancer.humanbrainproject.eu/precomputed/BigBrainRelease.2015/classif/"
mask_img = bs.bigbrain.BigBrainVolume(mask_url)

# Bounding box needs to be redefined, as mask_img has different voxel space
p0_vox, p1_vox = mm_to_vox(p0, p1, img=mask_img, mip=mip)
# Define bounding box
bbox = Bbox(p0_vox, p1_vox)
mask = mask_img.Image(clip=bbox, mip=mip, force=True)

# Plot on top of each other. I would expect these images to be aligned, since I used the same bounding box, but they are not.
plotting.view_img(mask,
                  bg_img=img,
                  cmap="gray",
                  vmin=0,
                  vmax=255,
                  resampling_interpolation="nearest",
                  symmetric_cmap=False)

I could imagine that there is an issue with how the bounding box coordinates are internally rounded to match the voxel space.

cannot import siibra==0.0.9.dev1

environment:

> python --version
3.6.9
> pip --version
pip 21.0.1 from /home/xiao/dev/tmp/siibra-python/venv/lib/python3.6/site-packages/pip (python 3.6)

to reproduce:

> pip install siibra==0.0.9.dev1
> python
>>> import siibra as sb

error

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/xiao/dev/tmp/siibra-python/venv/lib/python3.6/site-packages/siibra/__init__.py", line 26, in <module>
    with open(path.join(PKG_DIR,"..","VERSION"),"r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/xiao/dev/tmp/siibra-python/venv/lib/python3.6/site-packages/siibra/../VERSION'

Receptor feature extraction not defined to be parcellation-sensitive

The ReceptorQuery class caches extracted feature in a class instance list to avoid redundant queries for different parcellations / region selections (since object construction of the extractor happens for every feature query). However, during the loading process, region names are decoded with the parcellation of the first object constructed. This can lead to problems if a query with a new extractor object which has a different parcellation defining the same region is called later on, for example, a different version of the same parcellation.
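One way to avoid the stale decoding would be to key the cache by parcellation, so that a query against a different parcellation (or parcellation version) triggers a fresh load. This is a generic sketch in plain Python; the class name, loader and ids are hypothetical, not siibra's implementation:

```python
class ReceptorQuerySketch:
    """Hypothetical sketch: cache extracted features per parcellation id,
    so region names are always decoded with the querying parcellation."""

    _cache = {}  # parcellation id -> decoded features (shared across instances)

    def __init__(self, parcellation_id, loader):
        if parcellation_id not in self._cache:
            # decode region names with THIS parcellation, not the first one seen
            self._cache[parcellation_id] = loader(parcellation_id)
        self.features = self._cache[parcellation_id]

calls = []
def fake_loader(pid):
    calls.append(pid)
    return [f"feature decoded with {pid}"]

q1 = ReceptorQuerySketch("julich 1.18", fake_loader)
q2 = ReceptorQuerySketch("julich 2.5", fake_loader)
q3 = ReceptorQuerySketch("julich 2.5", fake_loader)  # served from cache, no reload
```

Redundant queries for the same parcellation are still avoided, but two versions of the same parcellation no longer share decoded region names.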

pip install error

Hi,

I tried to install siibra using pip on two separate systems (Ubuntu 20.04 and Windows 10, both using the Python distribution that comes with Anaconda). In both cases, the installation process exits with errors.

From my Ubuntu system:

  Building wheel for psutil (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/mario/anaconda3/envs/hbpatlas/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-r42v1ccb/psutil_1bc071388f984cd48c81348fa3ceeec2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-r42v1ccb/psutil_1bc071388f984cd48c81348fa3ceeec2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-5f77gmol
       cwd: /tmp/pip-install-r42v1ccb/psutil_1bc071388f984cd48c81348fa3ceeec2/
  Complete output (41 lines):
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.8
  creating build/lib.linux-x86_64-3.8/psutil
  copying psutil/_pssunos.py -> build/lib.linux-x86_64-3.8/psutil
  copying psutil/_pslinux.py -> build/lib.linux-x86_64-3.8/psutil
  copying psutil/_psosx.py -> build/lib.linux-x86_64-3.8/psutil
  copying psutil/_compat.py -> build/lib.linux-x86_64-3.8/psutil
  copying psutil/_exceptions.py -> build/lib.linux-x86_64-3.8/psutil
  copying psutil/_psbsd.py -> build/lib.linux-x86_64-3.8/psutil
  copying psutil/_common.py -> build/lib.linux-x86_64-3.8/psutil
  copying psutil/__init__.py -> build/lib.linux-x86_64-3.8/psutil
  copying psutil/_pswindows.py -> build/lib.linux-x86_64-3.8/psutil
  copying psutil/_psaix.py -> build/lib.linux-x86_64-3.8/psutil
  copying psutil/_psposix.py -> build/lib.linux-x86_64-3.8/psutil
  creating build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_sunos.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_linux.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_posix.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_misc.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_system.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_connections.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_unicode.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_aix.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_bsd.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_osx.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/__init__.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/__main__.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_process.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_memory_leaks.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_contracts.py -> build/lib.linux-x86_64-3.8/psutil/tests
  copying psutil/tests/test_windows.py -> build/lib.linux-x86_64-3.8/psutil/tests
  running build_ext
  building 'psutil._psutil_linux' extension
  creating build/temp.linux-x86_64-3.8
  creating build/temp.linux-x86_64-3.8/psutil
  gcc -pthread -B /home/mario/anaconda3/envs/hbpatlas/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_VERSION=543 -DPSUTIL_LINUX=1 -DPSUTIL_ETHTOOL_MISSING_TYPES=1 -I/home/mario/anaconda3/envs/hbpatlas/include/python3.8 -c psutil/_psutil_common.c -o build/temp.linux-x86_64-3.8/psutil/_psutil_common.o
  unable to execute 'gcc': No such file or directory
  error: command 'gcc' failed with exit status 1
  ----------------------------------------
  ERROR: Failed building wheel for psutil
  Running setup.py clean for psutil
  Building wheel for posix-ipc (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/mario/anaconda3/envs/hbpatlas/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-r42v1ccb/posix-ipc_be0f64e0767d45f49caaff0f1cfc8c1a/setup.py'"'"'; __file__='"'"'/tmp/pip-install-r42v1ccb/posix-ipc_be0f64e0767d45f49caaff0f1cfc8c1a/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-ne9azowm
       cwd: /tmp/pip-install-r42v1ccb/posix-ipc_be0f64e0767d45f49caaff0f1cfc8c1a/
  Complete output (23 lines):
  ******************************************************************************
  * Setup can't determine if it needs to link to the realtime libraries on your
  * system, so it will default to 'no' which may not be correct.
  *
  * Please report this message and your operating system info to the package
  * maintainer listed in the README file.
  ******************************************************************************
  ******************************************************************************
  * Setup can't determine the value of PAGE_SIZE on your system, so it will
  * default to 4096 which may not be correct.
  *
  * Please report this message and your operating system info to the package
  * maintainer listed in the README file.
  ******************************************************************************
  running bdist_wheel
  running build
  running build_ext
  building 'posix_ipc' extension
  creating build
  creating build/temp.linux-x86_64-3.8
  gcc -pthread -B /home/mario/anaconda3/envs/hbpatlas/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/mario/anaconda3/envs/hbpatlas/include/python3.8 -c posix_ipc_module.c -o build/temp.linux-x86_64-3.8/posix_ipc_module.o
  unable to execute 'gcc': No such file or directory
  error: command 'gcc' failed with exit status 1
  ----------------------------------------
  ERROR: Failed building wheel for posix-ipc
  Running setup.py clean for posix-ipc
Failed to build psutil posix-ipc
Installing collected packages: psutil, posix-ipc, patsy, pandas, nibabel, networkx, matplotlib, json5, imageio, fpzip, fastremap, DracoPy, compressed-segmentation, cloud-files, args, statsmodels, scikit-image, python-gitlab, nilearn, memoization, cloud-volume, clint, appdirs, anytree
    Running setup.py install for psutil ... error
    ERROR: Command errored out with exit status 1:
     command: /home/mario/anaconda3/envs/hbpatlas/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-r42v1ccb/psutil_1bc071388f984cd48c81348fa3ceeec2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-r42v1ccb/psutil_1bc071388f984cd48c81348fa3ceeec2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-n2s2go14/install-record.txt --single-version-externally-managed --compile --install-headers /home/mario/anaconda3/envs/hbpatlas/include/python3.8/psutil
         cwd: /tmp/pip-install-r42v1ccb/psutil_1bc071388f984cd48c81348fa3ceeec2/
    Complete output (41 lines):
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.8
    creating build/lib.linux-x86_64-3.8/psutil
    copying psutil/_pssunos.py -> build/lib.linux-x86_64-3.8/psutil
    copying psutil/_pslinux.py -> build/lib.linux-x86_64-3.8/psutil
    copying psutil/_psosx.py -> build/lib.linux-x86_64-3.8/psutil
    copying psutil/_compat.py -> build/lib.linux-x86_64-3.8/psutil
    copying psutil/_exceptions.py -> build/lib.linux-x86_64-3.8/psutil
    copying psutil/_psbsd.py -> build/lib.linux-x86_64-3.8/psutil
    copying psutil/_common.py -> build/lib.linux-x86_64-3.8/psutil
    copying psutil/__init__.py -> build/lib.linux-x86_64-3.8/psutil
    copying psutil/_pswindows.py -> build/lib.linux-x86_64-3.8/psutil
    copying psutil/_psaix.py -> build/lib.linux-x86_64-3.8/psutil
    copying psutil/_psposix.py -> build/lib.linux-x86_64-3.8/psutil
    creating build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_sunos.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_linux.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_posix.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_misc.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_system.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_connections.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_unicode.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_aix.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_bsd.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_osx.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/__init__.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/__main__.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_process.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_memory_leaks.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_contracts.py -> build/lib.linux-x86_64-3.8/psutil/tests
    copying psutil/tests/test_windows.py -> build/lib.linux-x86_64-3.8/psutil/tests
    running build_ext
    building 'psutil._psutil_linux' extension
    creating build/temp.linux-x86_64-3.8
    creating build/temp.linux-x86_64-3.8/psutil
    gcc -pthread -B /home/mario/anaconda3/envs/hbpatlas/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_VERSION=543 -DPSUTIL_LINUX=1 -DPSUTIL_ETHTOOL_MISSING_TYPES=1 -I/home/mario/anaconda3/envs/hbpatlas/include/python3.8 -c psutil/_psutil_common.c -o build/temp.linux-x86_64-3.8/psutil/_psutil_common.o
    unable to execute 'gcc': No such file or directory
    error: command 'gcc' failed with exit status 1
    ----------------------------------------
ERROR: Command errored out with exit status 1: /home/mario/anaconda3/envs/hbpatlas/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-r42v1ccb/psutil_1bc071388f984cd48c81348fa3ceeec2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-r42v1ccb/psutil_1bc071388f984cd48c81348fa3ceeec2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-n2s2go14/install-record.txt --single-version-externally-managed --compile --install-headers /home/mario/anaconda3/envs/hbpatlas/include/python3.8/psutil Check the logs for full command output.

Accelerate first-time import of package

The first time in an empty environment, importing the siibra package takes quite a while, since it pulls the configuration and some initial data, including a list of gene names. This should be avoided; loading of required data should follow a lazy strategy.
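A lazy strategy typically defers such downloads until first access, for example via a cached property. This is a generic sketch, not siibra code; the class and the stand-in download are hypothetical:

```python
from functools import cached_property

class GeneNames:
    """Hypothetical sketch: the gene-name list is only fetched when
    first accessed, instead of at import time."""

    def _download(self):
        # stand-in for the expensive network call currently done at import
        return ["MAOA", "GABRA1", "TAC1"]

    @cached_property
    def names(self):
        return self._download()

genes = GeneNames()   # cheap: nothing is downloaded yet
first = genes.names   # first access triggers the download once
second = genes.names  # subsequent accesses are served from the cache
```

Applied to module initialization, this pattern keeps `import siibra` fast and pays the download cost only when the data is actually needed.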

[bug] cannot import Gitlab from gitlab

from etienne:

ImportError                               Traceback (most recent call last)
<ipython-input-1-0026f1f8a7a3> in <module>
----> 1 import siibra
      2 siibra.logger.setLevel('INFO')
      3 siibra.__version__

/run/media/etienne/DATA/Toolbox/BraiNets/siibra-python/siibra/__init__.py in <module>
     27 from os import path
     28
---> 29 from .space import REGISTRY as spaces
     30 from .parcellation import REGISTRY as parcellations
     31 from .atlas import REGISTRY as atlases

/run/media/etienne/DATA/Toolbox/BraiNets/siibra-python/siibra/space.py in <module>
     14
     15 from .commons import create_key
---> 16 from .config import ConfigurationRegistry
     17 from .retrieval import download_file
     18 from .bigbrain import BigBrainVolume

/run/media/etienne/DATA/Toolbox/BraiNets/siibra-python/siibra/config.py in <module>
     16 from . import logger,__version__
     17 from .commons import create_key
---> 18 from gitlab import Gitlab
     19
     20 # Until openminds is fully supported,

ImportError: cannot import name 'Gitlab' from 'gitlab' (/home/etienne/anaconda3/lib/python3.7/site-packages/gitlab/__init__.py)

querying for gene expression without selecting a region causes machine to freeze

following from #38

to reproduce:

import siibra as sb
atlas = sb.atlases.MULTILEVEL_HUMAN_ATLAS
# deliberately not selecting a region
# atlas.select_region("V1")
features = atlas.get_features(
    sb.modalities.GeneExpression,
    gene=sb.features.gene_names.MAOA)

expected result: raise an exception (need to select a region), or use a more efficient way of producing the mask

actual result: tries to calculate masks for every subregion, and eventually freezes/OOMs
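The exception-raising option could be a simple fail-fast guard at the top of the query. A sketch in plain Python; the function name and return value are hypothetical, not siibra's API:

```python
def get_features_sketch(selected_region, modality):
    """Hypothetical guard: fail fast instead of building masks for
    every subregion when no region has been selected."""
    if selected_region is None:
        raise RuntimeError(
            "No region selected - select a region before querying "
            f"{modality} features.")
    # ... proceed with the (expensive) spatial query for the one region
    return [f"{modality} features for {selected_region}"]

features = get_features_sketch("V1", "GeneExpression")
```

An immediate, explicit error is far preferable to silently exhausting memory while masking the whole region hierarchy.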

build from dockerfile also results in error

command

docker build -t brainscape .

results in:

Sending build context to Docker daemon  220.2MB
Step 1/6 : FROM python:3.8-alpine
 ---> 8744555ae7bb
Step 2/6 : RUN apk update
 ---> Using cache
 ---> 6e57a4254f4a
Step 3/6 : RUN apk add make automake gcc g++ subversion python3-dev
 ---> Using cache
 ---> 1d26bffbff6d
Step 4/6 : ADD . /brainscapes_client
 ---> fd8a352f7c37
Step 5/6 : WORKDIR /brainscapes_client
 ---> Running in b970d5d565ba
Removing intermediate container b970d5d565ba
 ---> 4d6c90a6a6de
Step 6/6 : RUN pip install -r requirements.txt
 ---> Running in bd29d796eb5f
Collecting requests
  Downloading requests-2.24.0-py2.py3-none-any.whl (61 kB)
Collecting nibabel
  Downloading nibabel-3.2.0-py3-none-any.whl (3.3 MB)
Collecting anytree
  Downloading anytree-2.8.0-py2.py3-none-any.whl (41 kB)
Collecting pandas
  Downloading pandas-1.1.3.tar.gz (5.2 MB)
  Installing build dependencies: started
  Installing build dependencies: still running...
  Installing build dependencies: still running...
  Installing build dependencies: still running...
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: still running...
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
ERROR: Could not find a version that satisfies the requirement PIL (from -r requirements.txt (line 5)) (from versions: none)
ERROR: No matching distribution found for PIL (from -r requirements.txt (line 5))
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1

issues with installing siibra via pip

Hi,

when installing siibra via the pip command, I think the wrong version of numpy gets installed alongside:

Successfully installed numpy-1.16.6

even when the newest numpy version was previously installed.

When trying to import siibra the following error occurs:

ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

After manually installing the newest numpy version subsequent to the siibra installation, the siibra import works.

Best
L

Error when getting template for space

Trying to get a template for a space throws a RuntimeError (see example below)

  • Spaces tested: all spaces in siibra-python
  • Version: latest develop version
  • System: Windows

Example:

import siibra

template = siibra.spaces.BIG_BRAIN.get_template().fetch()

results in a RuntimeError:

RuntimeError: Could not resolve template image for Big Brain. This is most probably due to a misconfiguration of the volume src.

Rename functions affine and Image of BigBrainVolume

The names of the functions affine and Image of the BigBrainVolume class are misleading.
Based on the naming, both are easily confused with attributes or properties.
I would not expect an entity called affine to be a function or to accept an argument.
In addition, Image does not conform to the naming convention for functions or attributes; it looks more like a nested class.
I would suggest renaming the functions so that the names convey what they do, for example buildAffine or getImage.
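A minimal sketch of the suggested convention, using hypothetical names (the real BigBrainVolume internals are not shown here): parameterless accessors become properties, and anything taking an argument gets a verb-based name. Note that PEP 8 would favor snake_case (build_affine, fetch_image) over camelCase:

```python
import numpy as np

class BigBrainVolume:
    """Hypothetical sketch illustrating the suggested naming convention."""

    def __init__(self, affine_matrix):
        self._affine = affine_matrix

    @property
    def affine(self):
        # No arguments: exposing this as a property matches attribute-like use.
        return self._affine

    def build_affine(self, resolution_mm):
        # Takes a parameter, so a verb-based name makes it clearly a function.
        scaled = self._affine.copy()
        scaled[:3, :3] *= resolution_mm
        return scaled

    def fetch_image(self, resolution_mm=1.0):
        # A verb-based name instead of "Image", which reads like a nested class.
        raise NotImplementedError
```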

[Bug] Error on getting RegionProps when an experimental parcellation version is forced

A short example to reproduce the error:

import siibra as bs
from siibra import parcellations

atlas_id = 'juelich/iav/atlas/v1.0.0/1'
atlas = bs.atlases[atlas_id]

parcellation_id = 'minds/core/parcellationatlas/v1.0.0/94c1125b-b87e-45e4-901c-00daee7f2579-273'
atlas.select_parcellation(parcellation_id, force=True)

reg_name = 'cerebellar nuclei'
selected_region = atlas.find_regions(reg_name)[0]
atlas.select_region(selected_region)

space_id = 'minds/core/referencespace/v1.0.0/dafcffc5-4826-4bf1-8ff6-46b8a31ff8e2'

r_props = selected_region.spatialprops(bs.spaces[space_id], force=True)

This results in IndexError: list index out of range

Changing: atlas.select_parcellation(parcellation_id, force=True)
To: atlas.select_parcellation(parcellation_id, force=False)

returns a valid result without an error. So using an experimental parcellation version prevents the creation of RegionProps.

parcellation missing `.version` metadata

For example, Julich-Brain v2.5 is the next version of Julich-Brain v1.18.

This relationship is, I believe, currently missing.

Together with the versioning metadata, we should also introduce a shortname for the version (not just for the parcellation).

Caching includes auth token in hash

The cached_get implementation in request uses all kwargs for creating the hash. This includes the authentication token for EBRAINS queries, which is not elegant, as it will refuse to use the cached data once HBP_AUTH_TOKEN is no longer set.
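A possible fix, sketched with hypothetical names (the actual cached_get signature in siibra's request module may differ): exclude authentication-related kwargs before hashing, so the cache key depends only on the query itself:

```python
import hashlib
import json

# kwargs that affect authentication but not the response content;
# this list is hypothetical and would need to match the real implementation.
NON_CACHE_KWARGS = {"headers", "auth_token", "HBP_AUTH_TOKEN"}

def cache_key(url, **kwargs):
    # Build the hash only from kwargs that influence the query result.
    relevant = {k: v for k, v in sorted(kwargs.items())
                if k not in NON_CACHE_KWARGS}
    payload = json.dumps([url, relevant], sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()
```

With this scheme, the same query with and without a token maps to the same cache entry, so previously cached data stays usable after the token expires.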

RegionProps Error

When creating RegionProps for the following input (see also the code below):

  • Parcellation: Julich-Brain Probabilistic Cytoarchitectonic Maps (v2.5)
  • Space: MNI Colin 27
  • Region: Area PFm (IPL) - right hemisphere

I get the following error: TypeError: Non-integer label_image types are ambiguous
This error also occurs with other combinations of parcellation, space, and region.

atlas = REGISTRY.MULTILEVEL_HUMAN_ATLAS

# Julich-Brain Probabilistic Cytoarchitectonic Maps (v2.5)
atlas.select_parcellation('minds/core/parcellationatlas/v1.0.0/94c1125b-b87e-45e4-901c-00daee7f2579-25')

# Region 'Area PFm (IPL) - right hemisphere'
selected_region = atlas.regiontree.find('Area PFm (IPL) - right hemisphere')
atlas.select_region(selected_region[0])

# MNI Colin 27
space_id = 'minds/core/referencespace/v1.0.0/7f39f7be-445b-47c0-9791-e971c0b6d992'

r_props = regionprops.RegionProps(atlas, find_space_by_id(atlas, space_id))

Deal gracefully with connection problems to remote repositories

Will retrieve a list of gene acronyms from Allen Atlas now.
This may take a minute.
Traceback (most recent call last):
   File "aioserver.py", line 25, in <module>
     import brainscapes
   File "/webjugex/brainscapes/brainscapes/__init__.py", line 24, in <module>
     from .atlas import REGISTRY as atlases
   File "/webjugex/brainscapes/brainscapes/atlas.py", line 21, in <module>
     from . import parcellations, spaces, features, logger
   File "/webjugex/brainscapes/brainscapes/features/__init__.py", line 30, in <module>
     extractor_types,gene_names,modalities = __init__()
   File "/webjugex/brainscapes/brainscapes/features/__init__.py", line 22, in __init__
     from .genes import AllenBrainAtlasQuery
   File "/webjugex/brainscapes/brainscapes/features/genes.py", line 65, in <module>
     class AllenBrainAtlasQuery(FeatureExtractor):
   File "/webjugex/brainscapes/brainscapes/features/genes.py", line 123, in AllenBrainAtlasQuery
     +"This may take a minute."))
   File "/usr/local/lib/python3.7/json/__init__.py", line 348, in loads
     return _default_decoder.decode(s)
   File "/usr/local/lib/python3.7/json/decoder.py", line 337, in decode
     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
   File "/usr/local/lib/python3.7/json/decoder.py", line 355, in raw_decode
     raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

This happens quite seldom, but we should catch the error when it does.

---------- old comments ---------
Dickscheid, Timo

This is a very general thing. How shall we deal with a situation where the remote repositories could not be reached? Brainscapes works offline with cached data, but without a network connection on initial setup it doesn't quite make sense to use brainscapes. To be discussed, any opinions?

To provide a suggestion, I would distinguish some cases:

  1. The remote configuration of brainscapes cannot be retrieved - fail with an exception and ask the user to check the network connection. No reasonable use of brainscapes will be possible.
  2. Retrieval of certain data features fails, as in this issue. I would print a warning about the problem but continue, with this feature type missing.

However, 2. has implications for the cache. We need to make sure we do not store an empty cache item that will be re-opened later on. If no cache item is generated, brainscapes will try to run the query again next time.
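Case 2 could be handled along these lines; a hypothetical sketch (the real query and cache plumbing in brainscapes differs, and fetch_feature_json is an assumed name):

```python
import json
import logging
import urllib.error
import urllib.request

logger = logging.getLogger(__name__)

def fetch_feature_json(url):
    """Return parsed JSON for one feature query, or None on failure.

    Returning None instead of writing an empty cache item means the
    caller can skip this feature type now and retry on the next run.
    """
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.loads(resp.read().decode())
    except urllib.error.URLError as e:
        # Feature source unreachable: warn and continue without it.
        logger.warning("Could not reach %s (%s); skipping this feature type.", url, e)
        return None
    except json.JSONDecodeError as e:
        # The server responded, but not with valid JSON (as in the traceback above).
        logger.warning("Invalid JSON from %s (%s); skipping, not caching.", url, e)
        return None
```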

`atlas.get_features` method should be cached

min reproduce:

import siibra as sb
atlas=sb.atlases.MULTILEVEL_HUMAN_ATLAS
atlas.select_region('hoc1 left')
connpr1=atlas.get_features(sb.modalities.ConnectivityProfile) # takes a very long time
connpr2=atlas.get_features(sb.modalities.ConnectivityProfile) # also takes a very long time

Expected behaviour: the second call should return the cached result rather than fetching it from scratch.
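The expected behaviour could be achieved with a memo keyed on the current selection and modality; a hypothetical sketch (the real Atlas class and its query machinery differ):

```python
class Atlas:
    """Hypothetical sketch of per-selection feature caching."""

    def __init__(self):
        self.selected_region = None
        self._feature_cache = {}
        self.fetch_count = 0  # for illustration only

    def _fetch_features(self, modality):
        # Stands in for the expensive remote query.
        self.fetch_count += 1
        return [f"{modality} feature for {self.selected_region}"]

    def get_features(self, modality):
        # Keying on the region keeps results correct when the selection changes.
        key = (self.selected_region, modality)
        if key not in self._feature_cache:
            self._feature_cache[key] = self._fetch_features(modality)
        return self._feature_cache[key]
```

With this, the second get_features call for the same selection returns instantly from the cache, while changing the selected region still triggers a fresh query.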

mildly mangled region name(s)

commit: 3eb030e

reproduce:

import siibra as sb
atlas=sb.atlases.MULTILEVEL_HUMAN_ATLAS
atlas.select_region('frontal-II gapmap left')

expected result:

Frontal-II (GapMap) - left hemisphere

or

Frontal-II (GapMap) left

actual result:

FrontalII (GapMap) left
