netneurolab / netneurotools
Useful tools from the Network Neuroscience Lab
Home Page: https://netneurolab.github.io/netneurotools
License: BSD 3-Clause "New" or "Revised" License
Hello, I'm using cluster.find_consensus as the final step in my clustering pipeline. I first confirmed that it worked on a toy example, but now with my data I'm getting a few instances where the algorithm outputs labels that do not exist in the original array. While my input label matrix ranges from 0 to 4 (or 1 to 5), the consensus output can contain labels larger than 5, going up to 13. Is this because the algorithm did not converge, and how can I resolve it? Should I try to increase the number of clustering solutions?
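If I understand the algorithm correctly, the consensus labels index communities detected in the agreement matrix rather than reusing the input cluster ids, so they need not stay within the input range. This is not netneurotools' exact implementation (which runs community detection on the agreement matrix), but a pure-NumPy sketch of a thresholded agreement-matrix consensus shows how more groups than any single input partition can emerge:

```python
import numpy as np

# Toy agreement-matrix consensus (illustrative only, not the library's code).
# labels: (n_samples, n_partitions), entries are cluster ids per partition.
rng = np.random.default_rng(1234)
labels = rng.integers(0, 5, size=(20, 10))

# Agreement matrix: fraction of partitions in which two samples co-cluster.
agreement = np.mean(labels[:, None, :] == labels[None, :, :], axis=-1)

# Thresholding and taking connected components can yield MORE groups than
# any input partition had: consensus ids label new communities, not inputs.
thresh = agreement > 0.5
n = len(thresh)
comp = -np.ones(n, dtype=int)
cur = 0
for i in range(n):
    if comp[i] == -1:           # flood-fill labelling of components
        stack = [i]
        comp[i] = cur
        while stack:
            j = stack.pop()
            for k in np.flatnonzero(thresh[j]):
                if comp[k] == -1:
                    comp[k] = cur
                    stack.append(k)
        cur += 1
print(cur)  # number of consensus communities
```

Under this reading, a consensus label of 13 simply means the agreement matrix supported 14 communities; increasing the number of input solutions or using more similar solutions should shrink that count.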
In the example plot_consensus_clustering.py, 100 solutions were obtained using gamma equal to 1.5. The example then creates the consensus partition. However, gamma equal to 1 is used throughout the consensus step. Shouldn't this value be set to 1.5? Tracing the code leads to modularity_louvain_und_sign in bct (L1197), which has a default gamma value of 1:
netneurotools/netneurotools/cluster.py
Line 365 in 503d9ad
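A minimal sketch of the suggested fix, with illustrative stand-in names rather than the actual netneurotools internals: thread the caller's gamma through to every internal community-detection call instead of falling back to the callee's default of 1.

```python
# Names here are illustrative stubs, not the real netneurotools/bct API.
def louvain_stub(W, gamma=1.0):
    # stands in for bct's Louvain routine; returns gamma so the
    # forwarding is visible in the output
    return gamma

def consensus_partition(W, gamma=1.0, n_solutions=100):
    # forward gamma explicitly on every internal call
    return [louvain_stub(W, gamma=gamma) for _ in range(n_solutions)]

gammas = consensus_partition(None, gamma=1.5, n_solutions=3)
```

With forwarding in place, the consensus step would use the same gamma = 1.5 as the solution-generation step in the example.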
The names of the parcels that shouldn't be included will vary depending on the parcellation used. It might be better if the user manually inputs these names. Also, sometimes, these parcels will be located in only one of the two hemispheres, so it might be wiser to have the user input two different "drop" lists (one for the left hemisphere and another for the right).
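A sketch of the proposed interface, with per-hemisphere drop lists supplied by the user; the function and parameter names below are illustrative, not existing netneurotools API:

```python
# Hypothetical helper: user-specified parcels are removed per hemisphere
# instead of relying on a hard-coded list that varies by parcellation.
def filter_parcels(lh_names, rh_names, drop_lh=(), drop_rh=()):
    """Return parcel names with user-specified labels removed per hemisphere."""
    keep_lh = [n for n in lh_names if n not in set(drop_lh)]
    keep_rh = [n for n in rh_names if n not in set(drop_rh)]
    return keep_lh, keep_rh

lh, rh = filter_parcels(['precentral', 'unknown'],
                        ['precentral', 'medialwall'],
                        drop_lh=('unknown',), drop_rh=('medialwall',))
```

Two separate lists cover the case where a to-be-dropped parcel exists in only one hemisphere.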
Hi,
How can I set the colormap to always be centered on zero when max ≈ −min?
Thank you!
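Assuming matplotlib is the plotting backend here, two common approaches are symmetric color limits derived from the data, or a normalization that pins the colormap midpoint at zero:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripting
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

data = np.random.default_rng(0).normal(0.2, 1.0, (10, 10))

# Option 1: symmetric limits derived from the data
vmax = np.abs(data).max()
plt.imshow(data, cmap="RdBu_r", vmin=-vmax, vmax=vmax)

# Option 2: TwoSlopeNorm keeps the colormap midpoint at zero even when
# vmin and vmax are asymmetric
norm = TwoSlopeNorm(vcenter=0, vmin=data.min(), vmax=data.max())
plt.imshow(data, cmap="RdBu_r", norm=norm)
plt.close("all")
```

Option 1 wastes part of the color range when max and −min differ a lot; Option 2 uses the full range on each side of zero.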
Hi, I'm trying to run an algorithm on the Cammoun parcellation in fsaverage space. However, I ran into issues that I believe are caused by vertices labelled as 'unknown' that sit between other parcels. Attached is a plot of the parcellation projected onto the inflated surface, with the 'unknown' vertices shown in red:
Is there anything I can do to fix this?
I was hoping to use the function netneurotools.datasets.fetchers.fetch_civet. However, it seems this function hasn't made it into a released version yet. Are there any plans to push a new version to PyPI in the near future?
Here's a very minor issue that annoyed me this morning:
In freesurfer.parcels_to_vertices, we might want to handle cases where the parcellated data to be projected onto the vertices contain int values, since np.nan cannot be inserted into integer NumPy arrays. We could, for example, convert ints to floats. Without proper handling of this case, the user may be confused about what went wrong when the "cannot convert float NaN to integer" ValueError is raised.
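A minimal sketch of the suggested handling (the helper name is illustrative): promote integer input to float before any NaN assignment.

```python
import numpy as np

def to_nan_safe(data):
    """Return a float view/copy of `data` so np.nan can be assigned."""
    data = np.asarray(data)
    if not np.issubdtype(data.dtype, np.floating):
        data = data.astype(np.float64)
    return data

vals = to_nan_safe(np.array([1, 2, 3]))
vals[1] = np.nan  # would raise ValueError on the original int array
```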
The Louvain algorithm has some known deficiencies that can lead to badly connected (i.e., singleton or disconnected) communities. This is undesirable in many instances and can require using "bad" gamma settings when performing consensus community detection on certain networks to avoid such communities.
The newly implemented Leiden algorithm proposes a solution to this issue of badly connected communities. Adding a parameter to the modularity.consensus_modularity() function to optionally use the Leiden algorithm would be awesome. Note, however, that the linked implementation is licensed under GPL-3.0, so integrating it directly into netneurotools may not be desirable.
Hey!
I was considering using the functional consensus utility in netneurotools, but wanted to make sure I understood the outputs before doing so; I have two (related) questions, and a third for fun:
Thanks!
If the user does not provide a subjects_dir in plot_fsaverage(), then fetch_fsaverage is called, which fetches the data; subjects_dir is then obtained via the _get_data_dir() function, and when the environment variable is not set this directory is "~/nnt-data". However, the data fetched by fetch_fsaverage() is downloaded by default into the "~/nnt-data/tpl-fsaverage" folder. Thus, an error is raised: "Cannot find specified subject id fsaverage in provided subject directory ~/nnt-data".
To fix this issue, _get_data_dir() should return the "~/nnt-data/tpl-fsaverage" folder.
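A sketch of the proposed fix, following the path layout described in the report; the environment-variable name and helper name are assumptions, not verified against the library:

```python
import os

def get_fsaverage_subjects_dir(data_dir=None):
    """Return the directory that actually contains the fsaverage subject.

    Mirrors the fix suggested in the issue: the fetcher downloads into
    <data_dir>/tpl-fsaverage, so that subdirectory (not <data_dir> itself)
    is what plot_fsaverage should receive as subjects_dir.
    """
    # "NNT_DATA" is an assumed env var name for the data directory
    data_dir = data_dir or os.environ.get("NNT_DATA",
                                          os.path.expanduser("~/nnt-data"))
    return os.path.join(data_dir, "tpl-fsaverage")
```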
Many Axes3D functions are being implemented in recent versions of matplotlib. matplotlib/matplotlib/issues/1077 and matplotlib/matplotlib/issues/22570 were closed by matplotlib/matplotlib/pull/22705, matplotlib/matplotlib/pull/23017, and matplotlib/matplotlib/pull/23409, released in 3.6.0. There is also matplotlib/matplotlib/pull/23552, to be released in 3.7.0. Just make sure our change in /pull/104 would still work.
Currently, trying to use stats.gen_spinsamples() with coordinates from only one hemisphere raises the following error:
ValueError Traceback (most recent call last)
----> 1 stats.gen_spinsamples(coords[hemi == 0], hemi[hemi == 0], seed=1234, n_rotate=10)
~/gitrepos/netneurotools/netneurotools/stats.py in gen_spinsamples(coords, hemiid, n_rotate, check_duplicates, method, exact, seed, verbose, return_cost)
667 'right hemisphere coordinates, respectively. '
668 + 'Provided array contains values: {}'
--> 669 .format(np.unique(hemiid)))
670
671 # empty array to store resampling indices
ValueError: Hemiid must have values in {0, 1} denoting left and right hemisphere coordinates, respectively. Provided array contains values: [0]
Even if we fixed the check here, I'm not sure the rest of the function would proceed without error. Nonetheless, this is definitely worth addressing because there are many instances where it might be desirable to use only one hemisphere!
I have an example script here to get some dominance analysis stats. It's faster than the original package, but can still be further optimized / parallelized. Since we are using it more, maybe we could discuss what extra features we need and integrate it into netneurotools.
I get an error when importing the plotting module, from freesurfer.py line 13:
ImportError: cannot import name '_stats' from 'scipy.ndimage.measurements'
(scipy version 1.10.0)
There has been a significant refactoring effort on private utilities used throughout nilearn. As a result, importing the nilearn file-fetching utilities produces an ImportError when netneurotools is used alongside a recent enough version of nilearn. It should be easily fixable with some try/except logic.
Hi all,
I've been using netneurotools for a long time, and I have some things you could easily add to improve usability:
- A citation instruction. I ended up just writing down the GitHub link, but you can add a "Cite this repository" button using a citation.cff file. Here's an example: https://github.com/kuffmode/YANAT/blob/main/CITATION.cff
- A brain_plotter function, here: https://github.com/kuffmode/OI-and-CMs/blob/main/utils.py#L422
If you prefer, I can do these myself and open a pull request. In any case, well done!
If there's a FreeSurfer installation currently available just grab the relevant data from that rather than downloading things unnecessarily!
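FreeSurfer installations are conventionally discoverable via the FREESURFER_HOME and SUBJECTS_DIR environment variables, so a fetcher could check those before downloading; a sketch (the helper name is illustrative):

```python
import os

def find_local_fsaverage():
    """Return a path to a locally installed fsaverage directory, or None.

    Checks SUBJECTS_DIR first, then the stock subjects folder under
    FREESURFER_HOME, following standard FreeSurfer conventions.
    """
    candidates = []
    if os.environ.get("SUBJECTS_DIR"):
        candidates.append(os.environ["SUBJECTS_DIR"])
    if os.environ.get("FREESURFER_HOME"):
        candidates.append(os.path.join(os.environ["FREESURFER_HOME"],
                                       "subjects"))
    for base in candidates:
        path = os.path.join(base, "fsaverage")
        if os.path.isdir(path):
            return path
    return None
```

The fetcher could fall back to downloading only when this returns None.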
Discussed in a lab meeting, this might provide users with (surface-level?) data for a number of interesting brain attributes, including (but not limited to):
Users could then parcellate the data however they want and be on their merry way. We'd need to make sure the data can be freely shared and whatnot (e.g., HCP data require registration, so they are not ideal for this...).
Hi,
I am new to Python and I am trying to run your "Spatial permutations for significance testing" example in a Jupyter Notebook. After installing netneurotools in Python 3, I get the following error:
I looked at fetchers.py and found the definition of the function but somehow it doesn't seem to be able to find the attribute. Am I missing something?
Best,
Giulia
netneurotools.datasets.utils depends on pkg_resources, which is no longer available by default under Python 3.12 (setuptools, which provides it, is not pre-installed in 3.12 environments).
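The standard-library replacement is importlib.resources; assuming the pkg_resources usage is the common pattern of locating packaged data files, a migration sketch could look like this (the package/resource arguments in the example are just a stdlib demo, not netneurotools' actual data layout):

```python
from importlib import resources

def load_package_text(package, resource):
    """Read a text data file shipped inside `package`.

    Stdlib replacement for pkg_resources.resource_string /
    resource_filename-style lookups (Python 3.9+ files() API).
    """
    return (resources.files(package) / resource).read_text()

# Demo against a stdlib package; real usage would point at the
# library's own data files.
text = load_package_text("json", "__init__.py")
```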
Currently, the docstring erroneously describes what the function does / returns and should be updated to reflect its current behavior.
It would be great to have an optional parameter to toggle whether we're testing the significance of the modularity for each community or for all communities together. Sometimes we're more interested in the latter than the former, but currently only the former is supported.
This would be as simple as summing the modularity estimates for each of the permutations and testing against these sums rather than testing against the arrays of modularity estimates (one estimate per community)
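A sketch of that toggle (function name, parameter, and array shapes are assumptions, not the library's actual API): sum the per-community estimates within each permutation and test the empirical sum against that null distribution.

```python
import numpy as np

def modularity_pvalue(q_emp, q_perm, overall=True):
    """q_emp: (k,) per-community modularity; q_perm: (n_perm, k) nulls."""
    q_emp = np.asarray(q_emp, dtype=float)
    q_perm = np.asarray(q_perm, dtype=float)
    if overall:
        # one test on the total modularity of the whole partition
        null = q_perm.sum(axis=1)
        return (np.sum(null >= q_emp.sum()) + 1) / (len(null) + 1)
    # one test per community (the behaviour described above)
    return (np.sum(q_perm >= q_emp, axis=0) + 1) / (len(q_perm) + 1)

rng = np.random.default_rng(0)
p_all = modularity_pvalue([0.3, 0.4], rng.normal(0.1, 0.05, (1000, 2)))
```

The +1 in numerator and denominator is the usual correction so permutation p-values never reach exactly zero.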