
netneurotools's People

Contributors

eric2302, fmilisav, gkiar, gshafiei, justinehansen, koudyk, liuzhenqi77, nicolasgensollen, rmarkello, vincebaz


netneurotools's Issues

cluster.find_consensus returns labels that do not exist in the input array

Hello, I'm using cluster.find_consensus as the final step in my clustering pipeline. I first confirmed that it worked on a toy example, but with my actual data I get a few instances where the algorithm outputs labels that do not exist in the original array. While my input label matrix ranges from 0 to 4 (or 1 to 5), the consensus output can contain labels larger than 5, up to 13. Is this because the algorithm did not converge, and how can I resolve it? Should I try increasing the number of clustering solutions?
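One easy thing to rule out: community-detection labels are arbitrary integers, so a consensus label of 13 is not necessarily a 14th cluster. A minimal sketch (the `consensus` array below is hypothetical) that maps any integer labels onto a consecutive range, so you can count the actual number of distinct clusters:

```python
import numpy as np

def relabel_consecutive(labels):
    """Map arbitrary integer labels onto consecutive integers starting at 0."""
    _, relabeled = np.unique(labels, return_inverse=True)
    return relabeled

consensus = np.array([0, 0, 7, 7, 13, 13])  # hypothetical consensus output
print(relabel_consecutive(consensus))       # [0 0 1 1 2 2]
```

If the relabeled output has more distinct values than the input partitions, the extra clusters are real and more solutions (or a different threshold) may indeed be needed.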

Gamma value for consensus partition

In the example plot_consensus_clustering.py, 100 solutions are obtained using gamma = 1.5:

ci = [bct.community_louvain(nonegative, gamma=1.5)[0] for n in range(100)]

The example then proceeds to create the consensus partition. However, gamma = 1 is used throughout that step. Shouldn't this value also be set to 1.5?

Tracing the call below leads to modularity_louvain_und_sign in bct (modularity.py, L1197):

consensus = bct.clustering.consensus_und(agreement, threshold, 10)

https://github.com/aestrivex/bctpy/blob/32c7fe7345b281c2d4e184f5379c425c36f3bbc7/bct/algorithms/clustering.py#L353

https://github.com/aestrivex/bctpy/blob/32c7fe7345b281c2d4e184f5379c425c36f3bbc7/bct/algorithms/modularity.py#L1197

Under L1197, the default gamma value is 1.
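For context, the consensus step first condenses the 100 solutions into an agreement matrix before re-clustering it; a pure-NumPy sketch of that first step (equivalent in spirit to bct.agreement, with the normalization assumed):

```python
import numpy as np

def agreement_matrix(partitions):
    """Fraction of partitions in which each pair of nodes shares a community.

    partitions : list of 1-D label arrays, one per clustering solution
    """
    stacked = np.column_stack(partitions)   # shape (n_nodes, n_solutions)
    n_nodes, n_reps = stacked.shape
    agree = np.zeros((n_nodes, n_nodes))
    for r in range(n_reps):
        ci = stacked[:, r]
        agree += (ci[:, None] == ci[None, :])
    agree /= n_reps
    np.fill_diagonal(agree, 0)              # ignore trivial self-agreement
    return agree
```

Since bct.consensus_und exposes no gamma argument, re-running community_louvain on the thresholded agreement matrix with gamma=1.5 would be one workaround, though whether gamma should match across the two stages is exactly the question raised here.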

modify plot_fsaverage to manually input parcels that shouldn't be included

The names of the parcels that shouldn't be included will vary depending on the parcellation used. It might be better if the user manually inputs these names. Also, sometimes, these parcels will be located in only one of the two hemispheres, so it might be wiser to have the user input two different "drop" lists (one for the left hemisphere and another for the right).
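A sketch of what such per-hemisphere drop lists might compute internally (all names here are hypothetical, not the actual plot_fsaverage API):

```python
import numpy as np

def keep_mask(parcels, drop_lh=(), drop_rh=()):
    """Hypothetical helper: boolean mask of parcels to keep, given
    hemisphere-specific drop lists.

    parcels : list of (hemisphere, name) tuples, hemisphere in {'lh', 'rh'}
    """
    drop = {('lh', n) for n in drop_lh} | {('rh', n) for n in drop_rh}
    return np.array([p not in drop for p in parcels])

parcels = [('lh', 'unknown'), ('lh', 'precentral'),
           ('rh', 'medialwall'), ('rh', 'precentral')]
mask = keep_mask(parcels, drop_lh=['unknown'], drop_rh=['medialwall'])
```

Separate lists handle the asymmetric case mentioned above, where a parcel exists in only one hemisphere.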

Unknown vertices between fsaverage Cammoun regions

Hi, I'm trying to run an algorithm on the Cammoun parcellation in fsaverage space. However, I ran into issues and believe that they are caused by vertices that are labelled as unknown that exist between other parcels. Attached is a plot of the parcellation projected onto the inflated surface and vertices labelled as 'unknown' are shown in red:

[Screenshot: Cammoun parcellation on the inflated fsaverage surface, 'unknown' vertices shown in red]

Is there anything I can do to fix this?

`fetch_civet` missing from pypi release

I was hoping to use the function netneurotools.datasets.fetchers.fetch_civet. However, it seems like this function hasn't made it into a released version yet. Are there any plans to push a new version to PyPi in the near future?

Conversion to float in freesurfer.parcels_to_vertices

Here's a very minor issue that annoyed me this morning:
In freesurfer.parcels_to_vertices, we might want to handle cases where the parcellated data to be projected onto the vertices are integers, since np.nan cannot be inserted into integer NumPy arrays. We could, for example, convert ints to floats. Without proper handling of this case, the user may be confused about what went wrong when the "cannot convert float NaN to integer" ValueError is raised.
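The failure mode and the proposed fix are easy to sketch:

```python
import numpy as np

data = np.array([1, 2, 3])        # parcellated values arrive as ints
try:
    data[0] = np.nan              # raises: cannot convert float NaN to integer
except ValueError:
    data = data.astype(float)     # proposed handling: promote to float first
    data[0] = np.nan
```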

Add option to use Leiden algorithm in modularity.consensus_modularity()

The issue

The Louvain algorithm has some known deficiencies that can lead to badly connected (i.e., singleton or disconnected) communities. This is undesirable in many instances and can require using "bad" gamma settings when performing consensus community detection on certain networks to avoid such communities.

Proposed solution

The newly implemented Leiden algorithm proposes a solution to this issue of badly connected communities with the Louvain algorithm. Adding a parameter to the modularity.consensus_modularity() function to optionally use the Leiden algorithm would be awesome. Note, however, that the linked implementation is licensed under the GPL-3.0, so integrating it directly into netneurotools may not be desirable.

Question: functional consensus return value

Hey!

I was considering using the functional consensus utility in netneurotools, but wanted to make sure I understood the outputs before doing so; I have two (related) questions, and a third for fun:

  1. The function docstring suggests that the returned network is thresholded, and the description suggests that the thresholded (thrown-out) edges are those which change sign across bootstraps. All remaining non-zero values are then returned as the correlation computed using the entire dataset of timepoints, samples, etc. Is this a correct interpretation?
  2. Does the confidence interval influence which edges are thrown out? E.g., if set to <=50 it wouldn't throw anything out, because the sign of each correlation is consistent across at least that many bootstraps; similarly, if set to 100, only edges which never change sign are retained, and if set to 95, edges which change sign in up to 5% of bootstraps are kept. Is this correct?
  3. I'm considering using this function in a test-retest context, rather than to generate a consensus across individuals. Has it been used in a test-retest context before? While naively this seems like it shouldn't be an issue, is there any magic going on that you can think of off-hand that may lead to problems when using highly similar measurements across the "subjects" dimension (which here means "sessions" or "configurations")?
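If interpretation 2 is correct, the thresholding rule would look roughly like this (a guess at the logic, not the actual netneurotools implementation):

```python
import numpy as np

def sign_consistent_mask(boot_corrs, ci=95):
    """Keep edges whose correlation sign is consistent in at least `ci`
    percent of bootstraps.

    boot_corrs : (n_boot, n_edges) bootstrap correlation estimates
    """
    n_boot = boot_corrs.shape[0]
    frac_pos = (boot_corrs > 0).sum(axis=0) / n_boot
    frac_neg = (boot_corrs < 0).sum(axis=0) / n_boot
    return (frac_pos >= ci / 100) | (frac_neg >= ci / 100)
```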

Thanks!

fetch_fsaverage() downloads in "~\nnt-data\tpl-fsaverage"

If the user does not provide a subjects_dir in plot_fsaverage(), then fetch_fsaverage() is called to fetch the data, and subjects_dir is obtained via the _get_data_dir() function; when the environment variable is not set, this directory is "~/nnt-data". However, the data fetched by fetch_fsaverage() is downloaded by default into the "~/nnt-data/tpl-fsaverage" folder. Thus, an error is raised: "Cannot find specified subject id fsaverage in provided subject directory ~/nnt-data".

To fix this issue, the _get_data_dir() function should return the "~/nnt-data/tpl-fsaverage" folder.
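The proposed fix amounts to something like the following (the environment-variable name is an assumption; check _get_data_dir() for the real one):

```python
import os

def get_fsaverage_subjects_dir(data_dir=None):
    """Sketch of the fix: point subjects_dir at the tpl-fsaverage
    subfolder rather than the bare nnt-data directory."""
    if data_dir is None:
        data_dir = os.environ.get('NNT_DATA',  # assumed variable name
                                  os.path.expanduser('~/nnt-data'))
    return os.path.join(data_dir, 'tpl-fsaverage')
```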

Allow spins with only one hemisphere

Currently trying to use stats.gen_spinsamples() with coordinates from only one hemisphere raises the following error:

ValueError                                Traceback (most recent call last)
----> 1 stats.gen_spinsamples(coords[hemi == 0], hemi[hemi == 0], seed=1234, n_rotate=10)

~/gitrepos/netneurotools/netneurotools/stats.py in gen_spinsamples(coords, hemiid, n_rotate, check_duplicates, method, exact, seed, verbose, return_cost)
    667                          'right hemisphere coordinates, respectively. '
    668                          + 'Provided array contains values: {}'
--> 669                          .format(np.unique(hemiid)))
    670 
    671     # empty array to store resampling indices

ValueError: Hemiid must have values in {0, 1} denoting left and right hemisphere coordinates, respectively. Provided array contains values: [0]

Even if we fixed the check here, I'm not sure the rest of the function would proceed without error. Nonetheless, this is definitely worth addressing, because there are many instances where it might be desirable to work with only one hemisphere!
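Until single-hemisphere input is supported, one conceivable workaround is to mirror the coordinates across the midline to fabricate the missing hemisphere (a sketch only; it assumes x is the left-right axis, and whether spins on a mirrored brain are statistically sensible would need checking):

```python
import numpy as np

def mirror_hemisphere(coords):
    """Reflect single-hemisphere coords across x = 0 to build a
    full-brain coordinate set plus a matching hemiid array."""
    mirrored = coords * np.array([-1, 1, 1])      # flip the x axis
    full = np.vstack([coords, mirrored])
    hemiid = np.repeat([0, 1], len(coords))       # 0 = original, 1 = mirror
    return full, hemiid
```

The spin indices for the original hemisphere could then be read off the first half of the resampling array, discarding the mirrored half.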

Add dominance analysis function

I have an example script here to get some dominance analysis stats. It's faster than the original package, but can still be further optimized / parallelized. Since we are using it more, maybe we could discuss what extra features we need and integrate it into netneurotools.
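For reference, the core of dominance analysis can be written as a brute-force subset enumeration (a naive sketch, exponential in the number of predictors; any integrated version would need the optimizations mentioned above):

```python
import itertools
import numpy as np

def r2(X, y):
    """R-squared of an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def general_dominance(X, y):
    """Average R^2 gain of each predictor, averaged within and then
    across conditioning-set sizes (i.e., general dominance weights)."""
    n, p = X.shape
    dom = np.zeros(p)
    for i in range(p):
        others = [j for j in range(p) if j != i]
        level_means = []
        for k in range(p):                  # size of the conditioning set
            gains = [r2(X[:, [*s, i]], y)
                     - (r2(X[:, list(s)], y) if s else 0.0)
                     for s in itertools.combinations(others, k)]
            level_means.append(np.mean(gains))
        dom[i] = np.mean(level_means)
    return dom
```

The per-predictor values sum to the full-model R^2, which makes a handy sanity check for any faster implementation.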

Error with importing the plotting module

I get an error with importing the plotting module. That is in freesurfer.py, line 13:
ImportError: cannot import name '_stats' from 'scipy.ndimage.measurements'

scipy version: 1.10.0

Not compatible with `nilearn` version 0.10.1 onwards

There has been a significant refactoring effort on private utilities used throughout nilearn.

As a result, importing the nilearn file-fetching utilities produces an ImportError when netneurotools is used alongside a recent enough version of nilearn. It should be easily fixable with some try/except logic.
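The shim could look something like this (both module paths below are assumptions; verify the actual pre- and post-refactor locations in nilearn before relying on them):

```python
try:
    # location used by older nilearn releases (assumed path)
    from nilearn.datasets.utils import _fetch_files as fetch_files
except ImportError:
    try:
        # location after the nilearn 0.10.1 refactor (assumed path)
        from nilearn.datasets._utils import fetch_files
    except ImportError:
        fetch_files = None  # nilearn not installed at all
```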

Some ideas for improving the library

Hi all,
I've been using netneurotools for a long time and I have some things you could easily add to improve usability.

  1. Add a citation instruction. I ended up just writing down the GitHub link, but you can add a "Cite this repository" prompt using a CITATION.cff file. Here's an example: https://github.com/kuffmode/YANAT/blob/main/CITATION.cff
  2. I would add a check for signed matrices. I remember you can easily pass a signed matrix to communicability; it would work, and you would end up with a "signed communicability matrix", but as far as I know, negative values don't make sense in these models.
  3. The point plot returns a figure, which is unnecessary. I ended up tinkering with it so it returns the ax instead. That way, people can define their own figures the way they want and embed a point plot wherever they like. Check the function brain_plotter here: https://github.com/kuffmode/OI-and-CMs/blob/main/utils.py#L422
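Point 3 is the standard Matplotlib pattern of accepting and returning an Axes; a sketch (point_plot here is a stand-in, not the actual netneurotools function):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

def point_plot(data, ax=None):
    """Draw onto the caller's Axes if given; otherwise create one.
    Returning the Axes lets users embed the plot in their own figures."""
    if ax is None:
        _, ax = plt.subplots()
    ax.scatter(np.arange(len(data)), data)
    return ax

fig, axes = plt.subplots(1, 2)
point_plot([1, 3, 2], ax=axes[0])   # embedded in an existing figure
point_plot([2, 1, 4], ax=axes[1])
```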

If you prefer, I can make these changes myself and open a pull request. In any case, well done!

Add `fetch_brain_annotations()` function

Discussed in a lab meeting, this might provide users with (surface-level?) data for a number of interesting brain attributes, including (but not limited to):

  • Cortical thickness
  • T1w/T2w ratio
  • FC gradients (1 and 2?)
  • PC1 (or something similar) of whole-brain gene expression
  • FC strength
  • SC degree
  • MEG power spectra maps
  • hctsa gradients (Shafiei et al., 2020, biorxiv)

Users could then just parcellate the data (however they want) and be on their merry way. We'd need to make sure the data can be freely shared and whatnot (i.e., HCP data require registration, so they are not ideal for this...).

Can't find fetch_vazquez_rodriguez2019 attribute

Hi,

I am new to Python and I am trying to run your "Spatial permutations for significance testing" example in Jupyter Notebook. After installing netneurotools in Python 3, I get the following error:
[Screenshot: error traceback showing the missing fetch_vazquez_rodriguez2019 attribute]

I looked at fetchers.py and found the definition of the function but somehow it doesn't seem to be able to find the attribute. Am I missing something?

Best,
Giulia

modularity.get_modularity_sig returns community-level modularity

It would be great to have an optional parameter to toggle whether we're testing the significance of the modularity for each community individually or for all communities together. Sometimes we're more interested in the latter than the former, but currently only the former is supported.

This would be as simple as summing the modularity estimates across communities for each permutation and testing against these sums, rather than testing against the arrays of per-community modularity estimates.
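A sketch of what the toggle might look like (names are hypothetical):

```python
import numpy as np

def modularity_pvals(real_q, perm_q, community_level=True):
    """Permutation p-values for modularity estimates.

    real_q : (n_comm,) observed per-community modularity
    perm_q : (n_perm, n_comm) permuted estimates
    If community_level is False, sum across communities and test the
    total modularity instead, as proposed above."""
    real_q, perm_q = np.asarray(real_q), np.asarray(perm_q)
    if not community_level:
        real_q = real_q.sum()
        perm_q = perm_q.sum(axis=1)
    return (perm_q >= real_q).mean(axis=0)
```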
