tvb-recon's People

Contributors

dionperd, liadomide, popaula937, sipv, sraghuvanshi, umarbrowser


tvb-recon's Issues

Error in running "run_sequentially.py"

After loading the docker image "tvb-recon" along with the path of the input data folder:

Input: python run_sequentially.py "1"

Output:

Starting to process the following subjects: %s [1]
Starting to process the subject: TVB1
Configured atlas default for patient inside folder /home/submitter/data/TVB1/configs
Checking currently running job ids...
Error:

Extra Info: You probably saw this error because the condor_schedd is not
running on the machine you are trying to query. If the condor_schedd is not
running, the Condor system will not be able to find an address and port to
connect to and satisfy this request. Please make sure the Condor daemons are
running and try again.

Extra Info: If the condor_schedd is running on the machine you are trying to
query and you still see the error, the most likely cause is that you have
setup a personal Condor, you have not defined SCHEDD_NAME in your
condor_config file, and something is wrong with your SCHEDD_ADDRESS_FILE
setting. You must define either or both of those settings in your config
file, or you must use the -name option to condor_q. Please see the Condor
manual for details on SCHEDD_NAME and SCHEDD_ADDRESS_FILE.
Currently running job ids are: []
Starting pegasus run for subject: TVB1with atlas: default
main_pegasus.sh: 7: main_pegasus.sh: Bad substitution
/opt/tvb-recon
Traceback (most recent call last):
File "/opt/conda/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"main", fname, loader, pkg_name)
File "/opt/conda/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/opt/tvb-recon/tvb/recon/dax/main.py", line 7, in
from tvb.recon.dax.configuration import Configuration, ConfigKey, SensorsType
File "tvb/recon/dax/configuration.py", line 4, in
LOGGER = get_logger(name)
File "tvb/recon/logger.py", line 25, in get_logger
_ensure_log_folder_exists()
File "tvb/recon/logger.py", line 14, in _ensure_log_folder_exists
os.mkdir(OUTPUT_FOLDER)
OSError: [Errno 13] Permission denied: 'output'
Removing Generate 5tt MIF -> Tracts SIFT
Removing gen_mapping_details -> convert_output
Removing Convert APARC+ASEG to NIFTI with good orientation -> convert_output
Removing Recon-all for T1 -> qc_snapshot
Removing Recon-all for T1 -> qc_snapshot
Traceback (most recent call last):
File "/usr/bin/pegasus-graphviz", line 507, in
main()
File "/usr/bin/pegasus-graphviz", line 504, in main
emit_dot(dag, options.label, options.outfile, options.width, options.height)
File "/usr/bin/pegasus-graphviz", line 412, in init
self.out = open(outfile, 'w')
IOError: [Errno 13] Permission denied: '/home/submitter/data/TVB1/configs/dax/main_bnm.dot'
Error: Could not open "/home/submitter/data/TVB1/configs/dax/main_bnm.png" for writing : Permission denied
2022.06.21 08:02:52.726 UTC:
2022.06.21 08:02:52.732 UTC: -----------------------------------------------------------------------
2022.06.21 08:02:52.737 UTC: File for submitting this DAG to Condor : TVB-PIPELINE-0.dag.condor.sub
2022.06.21 08:02:52.744 UTC: Log of DAGMan debugging messages : TVB-PIPELINE-0.dag.dagman.out
2022.06.21 08:02:52.750 UTC: Log of Condor library output : TVB-PIPELINE-0.dag.lib.out
2022.06.21 08:02:52.756 UTC: Log of Condor library error messages : TVB-PIPELINE-0.dag.lib.err
2022.06.21 08:02:52.762 UTC: Log of the life of condor_dagman itself : TVB-PIPELINE-0.dag.dagman.log
2022.06.21 08:02:52.768 UTC:
2022.06.21 08:02:52.785 UTC: -----------------------------------------------------------------------
2022.06.21 08:02:58.964 UTC: Created Pegasus database in: sqlite:////home/submitter/.pegasus/workflow.db
2022.06.21 08:02:58.969 UTC: Your database is compatible with Pegasus version: 4.8.2
2022.06.21 08:02:59.114 UTC: Submitting to condor TVB-PIPELINE-0.dag.condor.sub
2022.06.21 08:02:59.128 UTC: [ERROR]
2022.06.21 08:02:59.133 UTC: [ERROR] ERROR: Can't find address of local schedd
2022.06.21 08:02:59.139 UTC: [ERROR] ERROR: Running condor_submit /usr/bin/condor_submit TVB-PIPELINE-0.dag.condor.sub failed with exit code 1 at /usr/bin/pegasus-run line 327.
2022.06.21 08:02:59.145 UTC: [FATAL ERROR]
[1] java.lang.RuntimeException: Unable to submit the workflow using pegasus-run at edu.isi.pegasus.planner.client.CPlanner.executeCommand(CPlanner.java:695)
Checking currently running job ids...
Error:

Extra Info: You probably saw this error because the condor_schedd is not
running on the machine you are trying to query. If the condor_schedd is not
running, the Condor system will not be able to find an address and port to
connect to and satisfy this request. Please make sure the Condor daemons are
running and try again.

Extra Info: If the condor_schedd is running on the machine you are trying to
query and you still see the error, the most likely cause is that you have
setup a personal Condor, you have not defined SCHEDD_NAME in your
condor_config file, and something is wrong with your SCHEDD_ADDRESS_FILE
setting. You must define either or both of those settings in your config
file, or you must use the -name option to condor_q. Please see the Condor
manual for details on SCHEDD_NAME and SCHEDD_ADDRESS_FILE.
Currently running job ids are: []
Traceback (most recent call last):
File "run_sequentially.py", line 185, in
current_job_id = new_job_ids[0]
IndexError: list index out of range
submitter@5637a3c2a40e:/opt/tvb-recon/pegasus$ condor_ststus
bash: condor_ststus: command not found
submitter@5637a3c2a40e:/opt/tvb-recon/pegasus$ condor_status
Error: communication error
CEDAR:6001:Failed to connect to <172.17.0.3:9618>

** I am inside a proxy network and have configured Docker to run through the proxy.

Move datasets into an LFS

Git doesn't handle large files well. This repo has a bunch of them; let's move them into an LFS store.
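
As a first step, a quick sketch to list the files worth migrating (the 10 MB threshold is an arbitrary assumption):

    import os

    # Walk the working tree and report files over 10 MB -- LFS candidates.
    for root, dirs, files in os.walk("."):
        if ".git" in root.split(os.sep):
            continue
        for name in files:
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            if size > 10 * 1024 * 1024:
                print(size // (1024 * 1024), "MB", path)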

run_sequentially.py error

Hi, I am trying to run run_sequentially.py inside the docker container, but I am getting this error.

How should I proceed?

submitter@ome/tvb-recon/:/opt/tvb-recon:/opt/tvb-recon/pegasus$ python run_sequentially.py "1"
Starting to process the following subjects: %s [1]
Starting to process the subject: TVB1
Configured atlas default for patient inside folder /home/submitter/data/TVB1/configs
Checking currently running job ids...
Currently running job ids are: []
Starting pegasus run for subject: TVB1with atlas: default
main_pegasus.sh: 7: main_pegasus.sh: Bad substitution
/opt/tvb-recon
2021-06-12 18:33:29,386 - tvb.recon.dax.configuration - INFO - Parsing patient configuration file /home/submitter/data/TVB1/configs/patient_flow.properties
Traceback (most recent call last):
File "/opt/conda/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"main", fname, loader, pkg_name)
File "/opt/conda/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/opt/tvb-recon/tvb/recon/dax/main.py", line 55, in
tracts_generation = TractsGeneration(config.props[ConfigKey.DWI_MULTI_SHELL], config.props[ConfigKey.MRTRIX_THRDS],
KeyError: <ConfigKey.DWI_MULTI_SHELL: 'dwi.multi.shell'>
Traceback (most recent call last):
File "/usr/bin/pegasus-graphviz", line 507, in
main()
File "/usr/bin/pegasus-graphviz", line 497, in main
dag = parse_daxfile(dagfile, options.files)
File "/usr/bin/pegasus-graphviz", line 225, in parse_daxfile
f = open(fname,"r")
IOError: [Errno 2] No such file or directory: '/home/submitter/data/TVB1/configs/dax/main_bnm.dax'
Error: dot: can't open /home/submitter/data/TVB1/configs/dax/main_bnm.dot
2021.06.12 18:33:29.784 UTC: [ERROR] Problem while determining the version of dax class java.lang.RuntimeException: java.io.FileNotFoundException: The file (/home/submitter/data/TVB1/configs/dax/main_bnm.dax ) specified does not exist
2021.06.12 18:33:29.787 UTC: [FATAL ERROR]
[1]: Instantiating DAXParser at edu.isi.pegasus.planner.parser.DAXParserFactory.loadDAXParser(DAXParserFactory.java:235)
[2]: Invalid static initializer method name for DAXParser3 at edu.isi.pegasus.common.util.DynamicLoader.instantiate(DynamicLoader.java:131)
ERROR while logging metrics The metrics file location is not yet initialized
Checking currently running job ids...
Currently running job ids are: []
Traceback (most recent call last):
File "run_sequentially.py", line 185, in
current_job_id = new_job_ids[0]
IndexError: list index out of range

run_sequentially.py Error

Hello, I'm running run_sequentially.py, but without Docker; I installed the required environment according to the requirements. The following errors occurred. Can anyone help me?
[Screenshot from 2022-02-10 16-39-33]

Brain segmentation template of TVB-recon

Hello,
I used the TVB-recon pipeline to process DTI and DWI images, and it generated a connectivity.a2009s folder including a weights.txt with 167 brain nodes in total. I want to know which brain parcellation template was used to generate this weights.txt, and whether it is aparc.a2009s+aseg. I ran my own script using aparc.a2009s+aseg, but the resulting weight matrix has only 163 nodes.
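
A quick, hedged way to compare the two counts (file names taken from the question above; nibabel assumed available) -- differences usually come from which subcortical labels a pipeline keeps:

    import numpy as np
    import nibabel as nib

    # Number of connectome nodes in the generated weights matrix.
    weights = np.loadtxt("weights.txt")
    print("weights.txt nodes:", weights.shape[0])

    # Number of distinct labels in the parcellation volume,
    # minus 1 for the background label (0).
    aparc = nib.load("aparc.a2009s+aseg.nii").get_fdata().astype(int)
    print("parcellation labels:", np.unique(aparc).size - 1)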

bash: cd:pegasus: No such file or directory

Hello, I'm running tvb-recon with Docker. I used git clone to download the code locally; the saved path is /home/nana/downloads/tvb-recon. The patient folder structure is also constructed according to the requirements of the README; the saved path is /home/nana/downloads/TVB_patients. I pulled the docker image and successfully entered it, but when I cd pegasus, "no such file or directory" is displayed, even though the folder exists locally. Can you help me see what's wrong?
[Screenshot from 2022-02-13 16-35-48]

Unable to run tvb-recon docker image

I tried to run the docker image on my local machine using the command given in the README.

docker run -it -v your_path_to_TVB_patients/TVB_patients/:/home/submitter/data -v your_path_to_tvb_recon/tvb-recon/:/opt/tvb-recon popaula937/tvb-recon:master-pr50 /bin/bash

However, it shows the following error:

Unable to find image 'popaula937/tvb-recon:master-pr50' locally
docker: Error response from daemon: pull access denied for popaula937/tvb-recon, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.

Please look into it.

Provisioning

The scripts for setting up a VM are out of date,

https://github.com/ins-amu/bnm-recon-tools/tree/master/docs/setup-vm

and could be rewritten for Ansible to improve robustness. It's probably a good idea for us to be using a Vagrant (VBox) or Docker VM where possible, so that at least across dev machines, the environment is identical:

  • Vagrantfile provides a local VM so everyone's using the same environment
  • same Ansible playbook used to setup VM & cluster env
  • VM sets up the environment on a secondary (non-OS) disk so we can share it to get started quickly.

NumPy style docstrings

Since this is a project in the scientific community, docstrings are expected to be in NumPy format. PyCharm defaults to :param: style, but it will do NumPy for you; just take a look in the right place:

  • File | Settings | Tools | Python Integrated Tools for Windows and Linux
  • PyCharm | Preferences | Tools | Python Integrated Tools for OS X

and change "Docstring Format" to "NumPy". If it's not present, upgrade PyCharm.

This task is to rewrite the existing ones and add a pre-commit check.
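
For reference, a minimal (hypothetical) function documented in NumPy style:

    def normalize_weights(weights):
        """Scale a connectome weights matrix to unit maximum.

        Parameters
        ----------
        weights : numpy.ndarray
            Square matrix of connection weights.

        Returns
        -------
        numpy.ndarray
            The weights divided by their maximum value.
        """
        return weights / weights.max()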

Error in running "run_sequentially.py" "1"

Hi support team,

I am trying to run python run_sequentially.py "1" using Docker, but I am getting the following error and couldn't solve it. How can I proceed?

(base) D:\TVB\connectivity\tvb-recon>docker run -it -v D:/TVB/connectivity/tvb-recon/TVB_patients/:/home/submitter/data -v D:/TVB/connectivity/tvb-recon/:/opt/tvb-recon thevirtualbrain/tvb-recon /bin/bash
submitter@bfcd7c0961c0:/opt/tvb-recon$ sudo condor_master

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.

[sudo] password for submitter:
submitter@bfcd7c0961c0:/opt/tvb-recon$ condor_status
Name OpSys Arch State Activity LoadAv Mem ActvtyTime

slot1@bfcd7c0961c0 LINUX X86_64 Unclaimed Benchmar 0.110 496 0+00:00:04
slot2@bfcd7c0961c0 LINUX X86_64 Unclaimed Idle 0.000 496 0+00:00:05
slot3@bfcd7c0961c0 LINUX X86_64 Unclaimed Idle 0.000 496 0+00:00:06
slot4@bfcd7c0961c0 LINUX X86_64 Unclaimed Idle 0.000 496 0+00:00:07
Total Owner Claimed Unclaimed Matched Preempting Backfill

    X86_64/LINUX     4     0       0         4       0          0        0

           Total     4     0       0         4       0          0        0

submitter@bfcd7c0961c0:/opt/tvb-recon$ cd pegasus/
submitter@bfcd7c0961c0:/opt/tvb-recon/pegasus$ python run_sequentially.py "1"
Starting to process the following subjects: %s [1]
Starting to process the subject: TVB1
/home/submitter/data/TVB1/configs
Configured atlas default for patient inside folder /home/submitter/data/TVB1/configs
Checking currently running job ids...
Currently running job ids are: []
Starting pegasus run for subject: TVB1with atlas: default
: not founds.sh: 6: main_pegasus.sh:
main_pegasus.sh: 7: main_pegasus.sh: Bad substitution
: not founds.sh: 8: main_pegasus.sh:
/..n_pegasus.sh: 9: cd: can't cd to /opt/tvb-recon/pegasus
: not founds.sh: 10: main_pegasus.sh:
: not founds.sh: 11: main_pegasus.sh: pwd
: not founds.sh: 12: main_pegasus.sh:
bash: pegasus/generate_dax.sh: No such file or directory
: not founds.sh: 15: main_pegasus.sh:
Traceback (most recent call last):
File "/usr/bin/pegasus-graphviz", line 507, in
main()
File "/usr/bin/pegasus-graphviz", line 497, in main
dag = parse_daxfile(dagfile, options.files)
File "/usr/bin/pegasus-graphviz", line 225, in parse_daxfile
f = open(fname,"r")
IOError: [Errno 2] No such file or directory: '/home/submitter/data/TVB1/configs/dax/main_bnm.dax\r'
: not founds.sh: 18: main_pegasus.sh:
Error: dot: can't open /home/submitter/data/TVB1/configs/dax/main_bnm.dot
: not founds.sh: 21: main_pegasus.sh:
bash: pegasus/plan_dax.sh: No such file or directory
Checking currently running job ids...
Currently running job ids are: []
Traceback (most recent call last):
File "run_sequentially.py", line 187, in
current_job_id = new_job_ids[0]
IndexError: list index out of range
submitter@bfcd7c0961c0:/opt/tvb-recon/pegasus$

thank you for your help and support.
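
One clue in the log above: the IOError path ends in '\r', which suggests the pegasus shell scripts or config files carry Windows (CRLF) line endings; that would also explain the "Bad substitution" and ": not found" noise. A minimal sketch to normalize the endings in place (the path and glob pattern are assumptions; dos2unix would do the same):

    from pathlib import Path

    # Rewrite CRLF as LF in the pegasus shell scripts.
    for script in Path("/opt/tvb-recon/pegasus").glob("*.sh"):
        data = script.read_bytes().replace(b"\r\n", b"\n")
        script.write_bytes(data)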

TVB compatible datasets

Current bash scripts based pipeline (https://github.com/the-virtual-brain/tvb-recon/blob/94d7cd25aaf257a70e965d85adaa278901512511/bin/main.sh) does not produce TVB compatible structural datasets (i.e. files areas.txt, average_orientations.txt, weights.txt, tract_lengths.txt, cortical.txt, centres.txt), but stops with the connectome.

Some code meant to generate this TVB dataset is in the old scripts pipeline, but it is somewhat out of date (considering the versions of the software in the virtual machine). Furthermore, it is quite complex: it depends on at least six Python scripts, a remesher in C and some region mapping code in MATLAB.

I would need this functionality ASAP, but before I start playing with this (either dusting off the original python scripts, or rewriting them to the new python library) can you summarize the status of the related work in the new python version of the pipeline? @maedoc @dionperd

For example, some of the code related to remeshing (https://github.com/the-virtual-brain/tvb-recon/blob/94d7cd25aaf257a70e965d85adaa278901512511/tvb/recon/algo/head_sensors.py) and subparcellation (https://github.com/the-virtual-brain/tvb-recon/blob/94d7cd25aaf257a70e965d85adaa278901512511/tvb/recon/algo/service/subparcellation.py) is ported to the python library. Does it work, and is it a final version?

Migrate TVB to support Python 3

Hi there.

I came across a post on Medium and saw that the Python Software Foundation is going to stop supporting Python 2 in 2020, so I had the idea: why not start porting The Virtual Brain now to support the latest version of Python 3?

Core developers, please see this issue; I would like to offer my contribution.

@liadomide @bogdanneacsa @maedoc @pausz @twotribes

Thank you.

References:
https://legacy.python.org/dev/peps/pep-0373/
https://pythonclock.org/
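
One concrete Python-2-only construct already visible in the tracebacks elsewhere in this tracker is the exec statement ("exec code in run_globals"); in Python 3 it becomes a function call. A minimal illustration:

    # Python 2 syntax (statement):  exec code in run_globals
    # Python 3 syntax (function):   exec(code, run_globals)
    code = compile("x = 1 + 1", "<demo>", "exec")
    run_globals = {}
    exec(code, run_globals)
    print(run_globals["x"])  # -> 2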

Tests run depends on working directory

It would be convenient if the tests could find test data regardless of the current working directory.

Also, take inspiration from MNE's approach to handling test data.
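
A minimal sketch of the usual fix (names hypothetical): resolve data paths relative to the test module itself rather than the current working directory.

    import os

    # Directory of this test module, independent of os.getcwd().
    DATA_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "data")

    def data_path(fname):
        """Return the absolute path of a bundled test data file."""
        return os.path.join(DATA_DIR, fname)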

Bash runner

Instead of invoking commands immediately, generate a Bash script, for comparison with the existing Bash scripts.
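
A minimal sketch of the idea (class and method names hypothetical): record the commands a workflow would run, then emit them as a Bash script.

    class BashRunner(object):
        """Collects shell commands instead of executing them."""

        def __init__(self):
            self.lines = ["#!/bin/bash", "set -eu"]

        def run(self, *args):
            self.lines.append(" ".join(args))

        def save(self, path):
            with open(path, "w") as fd:
                fd.write("\n".join(self.lines) + "\n")

    runner = BashRunner()
    runner.run("mri_convert", "T1.nii.gz", "T1.mgz")
    runner.save("reconstruction.sh")  # diff against the legacy script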

About average_orientations.txt

Hello, in the output/connectivity.a2009s folder generated by TVB-recon there is a file called average_orientations.txt, and I want to know what it is. Recently I have been using OpenMEEG to generate the gain matrix for SEEG. I want to take the centre of each brain region as a point dipole, but I only have the location of each region centre, and OpenMEEG needs the orientation of the dipole as well. So I want to know whether average_orientations.txt can be used as the orientation of the dipole at the centre of each brain region.
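
If it can be used that way (assuming average_orientations.txt holds one unit vector per region, which the name suggests but should be confirmed), a hedged sketch of assembling point dipoles; the column layout of centres.txt is also an assumption here:

    import numpy as np

    # Assumption: centres.txt has a name column followed by x, y, z;
    # average_orientations.txt has one unit vector (x, y, z) per region.
    centres = np.loadtxt("centres.txt", usecols=(1, 2, 3))
    orientations = np.loadtxt("average_orientations.txt")

    # One dipole per region: position followed by orientation.
    dipoles = np.hstack([centres, orientations])
    np.savetxt("cortical_dipoles.txt", dipoles, fmt="%.6f")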

Missing config file errors

Hey all,

This is my first time using tvb-recon. I have my dataset directory structure organized as described in the README. I run the docker command like so:

docker run -it -v /Users/rafi/Documents/Stanford/Saggar_Lab/HPI_Reflect/TVB_patients/:/home/submitter/data -v /Users/rafi/Documents/Stanford/Saggar_Lab/tvb-recon/:/opt/tvb-recon thevirtualbrain/tvb-recon /bin/bash

Yet after I run sudo condor_master and cd into pegasus and run run_sequentially.py, I get the following error:

Starting to process the following subjects: %s [1]
Starting to process the subject: TVB1
Traceback (most recent call last):
  File "run_sequentially.py", line 172, in <module>
    prepare_config_for_new_atlas(current_dir, atlas)
  File "run_sequentially.py", line 97, in prepare_config_for_new_atlas
    with open(current_patient_props_path, "r") as current_patient_props_file:
IOError: [Errno 2] No such file or directory: '/home/submitter/data/TVB1/configs/patient_flow.properties'

From what I understood, the pipeline would automatically create the default configuration files. Then I tried copying the configuration files from the pegasus/configs folder into the TVB1/configs folder; I reran and I'm still getting missing file errors.

Starting to process the following subjects: %s [1]
Starting to process the subject: TVB1
Configured atlas default for patient inside folder /home/submitter/data/TVB1/configs
Checking currently running job ids...
Currently running job ids are: []
Starting pegasus run for subject: TVB1with atlas: default
main_pegasus.sh: 7: main_pegasus.sh: Bad substitution
/opt/tvb-recon
2018-08-29 23:01:39,473 - tvb.recon.dax.configuration - INFO - Parsing patient configuration file /home/submitter/data/TVB1/configs/patient_flow.properties
Traceback (most recent call last):
  File "/opt/conda/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/opt/conda/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/opt/tvb-recon/tvb/recon/dax/__main__.py", line 81, in <module>
    job_aparc_aseg_in_d, job_mapping_details)
  File "tvb/recon/dax/tracts_generation.py", line 86, in add_tracts_generation_steps
    gm_mif = File(DWIFiles.GM_MIF.value)
  File "/usr/lib/python2.7/dist-packages/enum/__init__.py", line 390, in __getattr__
    raise AttributeError(name)
AttributeError: GM_MIF
Traceback (most recent call last):
  File "/usr/bin/pegasus-graphviz", line 507, in <module>
    main()
  File "/usr/bin/pegasus-graphviz", line 497, in main
    dag = parse_daxfile(dagfile, options.files)
  File "/usr/bin/pegasus-graphviz", line 225, in parse_daxfile
    f = open(fname,"r")
IOError: [Errno 2] No such file or directory: '/home/submitter/data/TVB1/configs/dax/main_bnm.dax'
Error: dot: can't open /home/submitter/data/TVB1/configs/dax/main_bnm.dot
2018.08.29 23:01:39.741 UTC: [FATAL ERROR]
 [1]: Unable to instantiate Site Catalog  at edu.isi.pegasus.planner.catalog.site.SiteFactory.loadInstance(SiteFactory.java:234)
 [2]: edu.isi.pegasus.planner.catalog.site.impl.XML caught edu.isi.pegasus.planner.catalog.site.SiteCatalogException Cannot read or access file $path/sites.xml at edu.isi.pegasus.planner.catalog.site.impl.XML.connect(XML.java:146)
ERROR while logging metrics The metrics file location is not yet initialized
Checking currently running job ids...
Currently running job ids are: []
Traceback (most recent call last):
  File "run_sequentially.py", line 185, in <module>
    current_job_id = new_job_ids[0]
IndexError: list index out of range

I'm not really sure what to do at this point and would really appreciate any help!

Sample Input Data Files

As mentioned in the README, the input data folder must contain:

mri

  • t1_input.nii.gz
  • dwi_raw.nii
  • dwi.bvec
  • dwi.bval

However, the sample MRI files given in the repository are different: aparc+aseg.nii, brain.nii and T1.nii.
I want to run a test to see it working, so please tell me how I should lay these files out in the folder.
Thanks.

Which T1 file to use

Hello, I used MRIcron to convert DICOM files into NIfTI, but it generated three T1 .nii files: the source file plus files beginning with "o" and "co". From the relevant materials, files beginning with "o" are reoriented, while "co" files are additionally cropped at the neck. If I want to use tvb-recon to generate the connection matrix, which file should I use?

Gain matrix workflow

Gain matrices map source space (neural mass state variables) to sensor space (MEG, EEG and sEEG recordings). They are required for simulating MEG, EEG and sEEG; they are also required for fitting data with those modalities. While most software assumes one single gain matrix, in cases where we have multiple source spaces and sensor spaces (cortical + subcortical; MEG & EEG), we are computing M * N gain matrices.

Our current scripts on these topics don't work, but they outline the necessary steps, using the OpenMEEG command line utilities. OpenMEEG works primarily with text files for describing head geometry, interface conductance, source geometry & sensor geometry, and MATLAB format data files.

We can certainly format all our data to such files, but note that we want to work with corresponding time series data: we should consider using the MNE-Python library for this, as MNE data structures coordinate time series data with geometry information about source and sensor spaces (though MNE's use of OpenMEEG needs to be built). In general, for any sort of time series we have, it'd be good to build on MNE instead of recoding (badly) their functionality.

Our workflow looks like the following (a code sketch follows the list):

  • loading time series datasets
  • extracting geometry information
  • aligning with T1 coord sys (has to be done manually, but MNE has some visual tools for this)
  • building the head, sensor, source head models
  • inverting head model
  • computing gain matrix per (source space, sensor space) pair
  • saving for later use
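
A hedged sketch of those steps for one (source space, sensor space) pair using MNE-Python; the subject name and file paths are hypothetical, and the coregistration (trans file) still has to be produced manually with MNE's visual tools:

    import mne

    # Load a time series dataset; its info carries the sensor geometry.
    raw = mne.io.read_raw_fif("meg_raw.fif")

    # Result of the manual alignment with the T1 coordinate system.
    trans = "subject-trans.fif"

    # Source geometry and head model surfaces from the FreeSurfer subject.
    src = mne.setup_source_space("subject", spacing="oct6")
    bem = mne.make_bem_solution(mne.make_bem_model("subject"))

    # Head model inversion and gain computation happen inside this call.
    fwd = mne.make_forward_solution(raw.info, trans=trans, src=src, bem=bem)
    gain = fwd["sol"]["data"]  # gain matrix: sensors x sources

    # Save for later use.
    mne.write_forward_solution("subject-fwd.fif", fwd)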

Consider xarray for improved data handling

I have usually avoided enhancements over the standard ndarray, but I think xarray [1] makes a good case, combining Pandas-like convenience with labeled dimensions. Specifically, it seems like a step forward in making it easy to write correct code, especially when we might want it to be agnostic to layout. A trivial example is summing over time: with a plain array, you have to know which axis is time, then it's data.sum(axis=time_axis), whereas a labeled array knows that for you, so it's just data.sum('time'). It also helps coordinate work between multiple arrays sharing one or more axes; see [1] for more.
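
The trivial example from the paragraph above, made concrete:

    import numpy as np
    import xarray as xr

    # Label the dimensions once; downstream code needs no axis numbers.
    data = xr.DataArray(np.random.rand(64, 1000), dims=("channel", "time"))

    total_over_time = data.sum("time")  # vs. data.sum(axis=1) for plain ndarray
    print(total_over_time.dims)         # ('channel',)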

The plain, typed alternative is to write a full wrapping class around every data type, and operations for all their interactions, but this is a lot of boilerplate code for the classes themselves and the tests.

[1] http://xarray.pydata.org/en/stable/why-xarray.html

show_aparc_aseg_with_new_values fails w/ -1 index

Testing locally, I see a failure in test_show_aparc_aseg_with_new_values(), where in

            for i in xrange(aparc_aseg_matrix.shape[0]):
                for j in xrange(aparc_aseg_matrix.shape[1]):
                    if aparc_aseg_matrix[i][j] > 0:
                        if fs_to_conn_indices_mapping.has_key(aparc_aseg_matrix[i][j]):
                            aparc_aseg_matrix[i][j] = conn_measure[
                                fs_to_conn_indices_mapping.get(aparc_aseg_matrix[i][j])]
                        else:
                            aparc_aseg_matrix[i][j] = -1

I have fs_to_conn_indices_mapping.get(aparc_aseg_matrix[i][j]) returning -1.0, which is not a valid index. I don't see why this would pass on Travis but not locally (macOS 10.10).
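
A minimal guard, as a sketch only (the intended fix may differ): inside the loop quoted above, treat a mapped value of -1 as "unmapped" instead of using it to index conn_measure.

    idx = fs_to_conn_indices_mapping.get(aparc_aseg_matrix[i][j], -1)
    if idx >= 0:
        aparc_aseg_matrix[i][j] = conn_measure[int(idx)]
    else:
        aparc_aseg_matrix[i][j] = -1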

Missing documentation

This repo is largely undocumented. Getting new users up and running is a huge effort. Once the library and workflows have stabilized, we should set up Sphinx API docs, overview and worked examples.

Lost functions

In prior reorganization, several functions were lost, including ones to generate head model files for OpenMEEG. These need to be recovered from Git history.

Airflow

Airflow is Airbnb's workflow system. It's another management system based around DAGs, distributed computing, etc., and it has a nice CLI and GUI. It's also all in Python, which is a plus IMO.

I'm not saying we should use it, but if we ever need to scale up, this and Pegasus are both worth looking at.

FS6 new features

There are some other features we could take advantage of in FS6

  • Better hippocampal subfields segmentation would improve both connectivity & lead field info
  • Idem, for improvements in brain stem segmentation.
  • the -parallel flag enables coarse- and fine-grained (OpenMP) parallelism

Need to review the release notes further.
