
ExploreASL: releases can be found in the main branch or within the releases tab. If you want to contribute, please contact us at [email protected]. Development details can be found under the wiki tab. Code documentation can be found on the documentation website https://exploreasl.github.io/Documentation

Home Page: https://www.exploreasl.org

License: Other

Languages: MATLAB 98.81%, Python 0.07%, C 1.12%
Topics: image processing, matlab, toolbox, asl, exploreasl, perfusion, qc

exploreasl's Introduction

ExploreASL


Description

ExploreASL is a pipeline and toolbox for image processing and statistics of arterial spin labeling perfusion MR images. It is designed as a multi-OS, open-source, collaborative framework that facilitates cross-pollination between image processing method developers and clinical investigators.

The software provides a complete head-to-tail approach that runs fully automatically, encompassing all necessary tasks from data import and structural segmentation, registration, and normalization, up to CBF quantification. In addition, the software package includes quality control (QC) procedures and region-of-interest (ROI) as well as voxel-wise analysis of the extracted data. To date, ExploreASL has been used for processing ~10000 ASL datasets from all major MRI vendors and ASL sequences and a variety of patient populations, representing ~30 studies. The ultimate goal of ExploreASL is to combine data from multiple studies to identify disease-related perfusion patterns that may prove crucial in using ASL as a diagnostic tool and enhance our understanding of the interplay of perfusion and structural changes in neurodegenerative pathophysiology.

Additionally, this (semi-)automatic pipeline allows us to minimize manual intervention, which increases the reproducibility of studies.

Documentation

Reference manual and tutorials for each ExploreASL version are found on the GitHub website. A general description of ExploreASL is in the Neuroimage paper. Additional resources are on the ExploreASL website including the walkthrough document and how-to videos, but these are not regularly updated with new versions. For any help please use the GitHub Discussion or contact the ExploreASL team at [email protected].

Installation

To use ExploreASL within Matlab, download a stable release from the GitHub releases section or from Zenodo. The software is also available on Dockerhub. Within Matlab, navigate to the ExploreASL directory to make it the current working directory. To start ExploreASL from Matlab, type:

ExploreASL
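A minimal sketch of the steps above (the install path is illustrative):

```matlab
cd('C:\Tools\ExploreASL');   % make the unzipped ExploreASL release folder the current directory
ExploreASL                   % initialize ExploreASL from the Matlab command window
```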

Workflow

ExploreASL Workflow

Acknowledgments

This project is supported by the Dutch Heart Foundation (2020T049), by the Eurostars-2 joint programme with co-funding from the European Union Horizon 2020 research and innovation programme (ASPIRE E!113701), including the Netherlands Enterprise Agency (RvO), and by the EU Joint Program for Neurodegenerative Disease Research, including the Netherlands Organisation for Health Research and Development and Alzheimer Nederland (DEBBIE JPND2020-568-106).

This project has previously received support from the following EU/EFPIA Innovative Medicines Initiatives (1 and 2) Joint Undertakings: EPAD grant no. 115736, AMYPAD grant no. 115952, and Amsterdam Neuroscience. The authors wish to thank the COST-AID (European Cooperation in Science and Technology - Arterial spin labeling Initiative in Dementia) Action BM1103, the Open Source Initiative for Perfusion Imaging (OSIPI), and the ISMRM Perfusion Study groups for facilitating meetings for researchers to discuss the implementation of ExploreASL. The authors acknowledge Guillaume Flandin, Robert Dahnke, and Paul Schmidt for reviewing the structural module for its implementation of SPM12, CAT12, and LST, respectively; Krzysztof Gorgolewski for his advice on the BIDS implementation; Jens Maus for help with MEX compilation; and Cyril Pernet for providing the SPM Univariate Plus QC scripts.

How to cite

The following provides an example of how to correctly cite ExploreASL and its third-party tools. The versions of the included third-party tools are described in CHANGES.md for each ExploreASL release. The bare minimum of references (refs) is refs 1 and 2.

The data were analysed using ExploreASL ref1 version x.x.x ref2, including SPM12 version xxxx ref3, CAT12 version xxxx ref4, and LST version x.x.x ref5. This Matlab-based software was used with Matlab (MathWorks, MA, USA) version x.x (yearx) ref6.

References

The release numbers of ExploreASL (e.g. 1.9.0) follow semantic versioning.

  1. The ExploreASL paper, describing the full pipeline and decisions for processing steps.
  2. The Zenodo DOI for the actual ExploreASL release used to analyse the data (e.g. the latest release).
  3. The SPM12 references Ashburner, 2012 & Flandin and Friston, 2008. Note that the SPM version (e.g. 7219) is adapted and extended for use with ExploreASL.
  4. The CAT12 reference Gaser, 2009. Note that the CAT12 version (e.g. 1364) is adapted for use with ExploreASL.
  5. The LST references Schmidt, 2017 & de Sitter, 2017. Note that the LST version (e.g. 2.0.15) is adapted for use with ExploreASL.
  6. Matlab publishes a release twice yearly. You can provide the release number (e.g. 9.4) or year number (e.g. 2018a), or both.

Contributors ✨

Thanks goes to these wonderful people (emoji key):


Henk Mutsaerts: 👨‍🔬 🖋 💻
Jan Petr: 👨‍🔬 🖋 💻
Michael Stritt: 💻 🖋 📖
Mathijs Dijsselhof: 🖋 🧠
Beatriz Padrela: 🖋 🧠
Paul Groot: 💻 🖋
Pieter Vandemaele: 💻 🤔 🧠
MauricePasternak: 📊 💻 🎨
Patricia Clement: 🧠 🤔 📖
Sandeep Ganji: 🖋 🤔 🧠
Martin Craig: 🖋 💻 🧠
DaveThoma5: 🤔 🧠
Amnah Mahroo: 🤔 🧠
luislorenzini: 💻 🔧
jozsait: 💻 🚧

This project follows the all-contributors specification. Contributions of any kind welcome!


exploreasl's Issues

Make sure that all system calls to linux/Mac use xASL_adm_UnixPath

When using a system command to call a linux (or macOS) command (e.g. for faster zipping and/or zipping without the Java VM), there can be issues with filenames. E.g. Matlab allows paths with spaces, which need to be appropriately escaped before use in linux. The xASL_adm_UnixPath function does this, but needs to be applied in the appropriate locations.
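A minimal illustration of the underlying problem, not the actual xASL_adm_UnixPath implementation (the path and command are hypothetical):

```matlab
niiPath = '/Users/test/My Study/sub-001/ASL4D.nii';   % hypothetical path containing a space
system(['gzip -f ' niiPath]);                         % fails: the shell sees two separate arguments
system(['gzip -f "' niiPath '"']);                    % works: the path is quoted/escaped for the shell
```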

Mask WMH_SEGM for custom lesion masks

Lesions such as tumors are hyperintense on FLAIR and would be automatically segmented as WMH_SEGM.nii. Instead, they should be masked out whenever custom - e.g. manually segmented - lesion masks/ROIs are provided, in either FLAIR or T1w space. Also, a warning should be issued, especially if the custom lesion mask is not binary.

Lesion masks: ^Lesion_(FLAIR|T1)_\d*.nii$

Reference to field 'SUBJECTS' that does not exist

hi, ExploreASL was installed with matlab2018b. When a dataset path was input to ExploreASL, the following problem arises: Reference to field 'SUBJECTS' that does not exist.
error in xASL_Iteration (line 56)
SelectedSubjects = x.SUBJECTS;

error in ExploreASL_Master (line 99)
[~, x] = xASL_Iteration(x,'xASL_module_Structural');

Do you have a more detailed user manual?

error in Compiling ExploreASL

Compiling ExploreASL
Error using ExploreASL_make_standalone (line 223)
Error: The specified file <Modules\SubModule_Structural> was not found. If you wrapped the file name with double quotes, try replacing
the double quotes with single quotes and rerun the command.

Calculation of background suppression efficiency for pseudo-M0

When no M0 is obtained and the control image needs to be used as pseudo-M0, the suppression efficiency needs to be calculated. For 2D EPI this is slice-wise. When the option "UseControlAsM0" is selected and background suppression is on, this parameter - which should be a vector for 2D multislice - should be checked and applied when the pseudo-M0 is created.
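A minimal sketch of the idea (all variable names and values are hypothetical, not ExploreASL code): rescaling the mean control image by a slice-wise factor to recover an unsuppressed pseudo-M0.

```matlab
controlIm  = rand(64, 64, 20);                          % hypothetical mean control image, 20 slices (2D EPI)
bsupFactor = linspace(0.80, 0.95, 20);                  % hypothetical slice-wise residual signal fraction after background suppression
pseudoM0   = controlIm ./ reshape(bsupFactor, 1, 1, []);% undo the suppression per slice (implicit expansion, R2016b+)
```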

Quantification issue with Philips scale slopes

In the old import, the dcm2nii JSON was not used and all data were saved in ASL4D_parms.mat. In certain situations, the NIFTI scale slope and the DICOM scale slopes differed. Prioritizing the NIFTI scale slope in case of a difference can bias the CBF. We need to prioritize the DICOM scale slopes, but still report the difference - the new version of ExploreASL_Import should not generate these discrepancies, so this issue only concerns previously imported data.
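A minimal sketch of the proposed prioritization (field names and values are hypothetical, not the actual ASL4D_parms.mat fields):

```matlab
parms.RescaleSlopeDICOM = 1.2;    % hypothetical value read from the DICOM header
parms.ScaleSlopeNIfTI   = 1.5;    % hypothetical value read from the NIfTI header

if abs(parms.RescaleSlopeDICOM - parms.ScaleSlopeNIfTI) > 1e-6
    warning('NIfTI and DICOM scale slopes differ; using the DICOM value.');
end
scaleSlope = parms.RescaleSlopeDICOM;   % always prioritize the DICOM scale slope
```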

Prefix visualization functions

Visualization functions are now also prefixed "xASL_im"; for an easier overview it is better to separate these from the image processing functions by renaming them from xASL_im to xASL_vis (visualization).

BIDS

Preparing ExploreASL for full BIDS compatibility.

Minor bugfixes

Some bugs do not really affect the pipeline's behavior, or only do so in very special cases. These will fall under this issue.

Warnings

Certain inputs may deserve warnings to the user.

Unit testing

  • Develop individual unit tests for modules and submodules
  • Get ExploreASL closer to the goal of fulfilling SOUP/FDA requirements

DCMTK won't run in ExploreASL_Import

When running without the Image Processing Toolbox, the compiled DCMTK package should run instead.
There seems to be an incorrect input parameter check.

Add bBiasfieldCorrectionFLAIR

The biasfield correction on the FLAIR should be an option to enable or disable, where it is disabled by default. LST may itself try a biasfield correction. In FLAIRs with a large lesion load, biasfield modeling can go wrong, and in rare cases modeling the biasfield of the T1w and applying this to the FLAIR may help. In FLAIRs with low lesion load and large biasfields, this option should be enabled (see at the end HERE).

Quadrupling of backslashes in JSON

When converting data_pars JSONs from the old format to the new format, the backslashes are escaped, but instead of outputting two of them ('\\'), which is the correct way to escape a backslash in JSON, four are created ('\\\\'). This is not a real problem, since it is later corrected when used as a path, but it should still be fixed.

Issue reported by Maurice, and discovered in the variables mypath and subjectregexp.
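An illustration of the expected versus the observed output (the path value is hypothetical):

```matlab
mypath = 'C:\data\study01';   % hypothetical Windows path, one backslash per separator
% Correct JSON output escapes each backslash once:    "mypath": "C:\\data\\study01"
% The buggy conversion doubles the escaping instead:  "mypath": "C:\\\\data\\\\study01"
```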

Zipping without JVM

When running Matlab (or the compiler) without the Java Virtual Machine, gzip/gunzip will crash, so we need to add an option to run it from the CLI if ~ispc, preferably by default (as this will probably also be faster).
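A minimal sketch of such a fallback, not the actual ExploreASL implementation (the path is illustrative):

```matlab
niiPath = '/data/study/sub-001/ASL4D.nii';    % illustrative path
if ~ispc && ~usejava('jvm')
    system(['gzip -f "' niiPath '"']);        % CLI gzip: no JVM needed, often faster
else
    gzip(niiPath);                            % Matlab built-in gzip (requires the JVM)
    delete(niiPath);                          % gzip() keeps the original file, so remove it
end
```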

Add atlas startup options for mTrial

  • Precuneous in Hammers atlas -> correct spelling
  • Add options to population module
  • Create option to specify different atlases (and put MNI structural back to default)
  • For mTrial demo, add vascular atlas & HO_cort_CONN

Release summary: Add Mindboggle atlas to ExploreASL and restructure general atlas access in population module.

Create PV-corrected GM & WM CBF maps in native space

For some users it would be useful to select their own ROI and position it over a PV-corrected CBF map, rather than having to apply this at the ROI level. @jan-petr: can you add the creation of PV-corrected CBF maps in native space as a default option to the quantification submodule?

Thanks!

Presmooth takes long

When there is a considerable difference in effective spatial resolution between the T1w & ASL, pre-smoothing can take quite a long time. This can be the case with e.g. 3D GRASE (5x5x10 mm) vs a high-resolution T1w (0.7x0.7x0.7 mm). Can we speed this up?

ASL-based pre-registration to help T1w-MNI registration+segmentation

When the T1w has poor contrast (e.g. neonates) and/or is unusually deformed (e.g. craniosynostosis), other scan types with better tissue contrast can serve as a "pre-registration" to get the T1w (and other images) roughly (e.g. rigid-body or even affine) in the correct position before running the structural module. While this has been done before with DTI (I believe), this would be the first time ASL is used for this.

Revamp

Improve inline comments and headers

Implement BASIL

The following steps are needed as pre-work for BASIL (a schematic sketch of the resulting wrapper order follows this list):

  1. BASIL prefers masks to limit the voxels on which to run the modeling, but ExploreASL currently masks after the quantification. The masking wrapper should therefore be moved to before the quantification wrapper
    1. CBF will be replaced by PWI in the xASL_wrp_CreateAnalysisMask, after which this part can be moved in front of the Quantification
  2. For better modeling, BASIL requires the full 4D timeseries. Currently the resample wrapper does the averaging, unless the PWI4D option states to keep the 4D timeseries. Instead, the averaging should move from the resample wrapper to the quantification wrapper, to ensure that the quantification wrapper always has access to the full 4D timeseries. The control-label subtraction can stay in the resample wrapper. The PWI4D flag can stay, but would be applied in the quantification wrapper only, where it would disable deletion of the temporary time series (this is for advanced users who want to keep the 4D time series for modelling purposes).
    1. Keep subtraction in xASL_wrp_Resample
    2. Move averaging to xASL_wrp_Quantify
    3. Remove PWI4D option from xASL_wrp_Resample
    4. Keep PWI4D option in xASL_wrp_Quantify
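A schematic sketch of the proposed wrapper order after this pre-work (the wrapper names come from the list above; the argument lists are illustrative, not the real signatures):

```matlab
x = xASL_wrp_Resample(x);             % control-label subtraction stays here; no averaging, 4D PWI kept
x = xASL_wrp_CreateAnalysisMask(x);   % masking moves in front of quantification (uses PWI instead of CBF)
x = xASL_wrp_Quantify(x);             % averaging moves here; the PWI4D flag keeps the 4D time series
```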

Single function for user to create average maps

For CICERO specifically, but also for other users, it would be useful to have a single function with a tsv-file as input that contains the subjects/sessions from which an average map will be created (a minimal sketch follows the list below).

Input should be

  • name (of the tsv-file)
  • participants/sessions (inside the tsv-file)
  • prefix of the filetype to make an average map of (e.g. qCBF)
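A hypothetical sketch of such a helper; the function name, the tsv column names, and the use of Matlab's niftiread/niftiwrite are illustrative (the real implementation would use ExploreASL's own NIfTI I/O):

```matlab
function xASL_stat_CreateAverageMap(studyDir, pathTSV, filePrefix)
    % pathTSV    : .tsv listing the participants/sessions to average (hypothetical column names)
    % filePrefix : file type to make an average map of, e.g. 'qCBF'
    T = readtable(pathTSV, 'FileType', 'text', 'Delimiter', '\t');
    sumIm = []; n = 0;
    for i = 1:height(T)
        p  = fullfile(studyDir, T.participant_id{i}, T.session_id{i}, [filePrefix '.nii']);
        im = double(niftiread(p));
        if isempty(sumIm), sumIm = zeros(size(im)); end
        sumIm = sumIm + im; n = n + 1;
    end
    niftiwrite(single(sumIm / n), fullfile(studyDir, ['Mean_' filePrefix '.nii']));
end
```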

Improve the code for the native space processing

Native space processing needs to be cleaned.

Tasks (clear agreement on these)

  • Add header explanations & comments to xASL_wrp_PreparePV (6th step)
  • xASL_im_CreateAnalysisMasks.m: use clearer booleans - bROIStats_NativeSpace & bROIStats_StandardSpace - instead of bSkip* for xASL_im_CreateAnalysisMask & bNativeSpaceAnalysis (bNativeSpaceAnalysis is too vague, and bSkip* is vague and negative)
  • TotalGM & DeepWM maps are modified during execution, and they shouldn't be.
  • In the population module, we do not want to depend on individual participant information that we still need to calculate, since we can (potentially) get discrepancies between what was calculated inside the ASL module and the population module. So when we calculate the effective resolution in the ASL module, we save it in the JSON sidecar of CBF.nii, so that the calculation lives at a single location and the same setting is used throughout the pipeline.

To be discussed

General principles for revamping:

  • pipelines for both spaces become as equal as possible/simplification
  • everything that deals with the space happens separately from the GetROI_etc pipeline
    JAN: wrp_GetROIStatistics already doesn't have any space-related things except for parameter checking and saving. For xASL_stat_GetROIStatistics - it's nice that you want to make it space-independent, but the entire function is tailored for standard space and thus it needs to load and handle the native space data very differently. We do the same processing, but the loading, the transforming of images, etc. is just difficult to write space-independently. We can try as much as we can, but it's not possible to do this 100% without large changes.

HENK: The goal is to get rid of the "if NativeSpace" statements -> we don't want our calculations etc. to be space-dependent, and we want our code to be easy to reason about.
So either 1) there are separate functions for dealing with space (e.g., all the warping etc. is done before the calculation is done) AND/OR 2) we always warp for any space, and by default it checks and sees that no warping is necessary.

-> so A) we always do the same for each space (ASL native space, T1 native space, standard space), and B) we prefer extra (slower) checks over extra if-statements (currently we have a lot of extra if-statements).

  • It is more important that we do the same in both spaces than that we do per space what is most optimal for that specific space. E.g., if a space has more resolution for redefining an ROI, we don't care, because the ROIs always come from group templates, other cohorts, etc., so they are never really perfectly registered or perfectly representable; there is no point doing the expansion in 1 mm instead of 1.5 mm.

xASL_module_Population:

  1. remove x.S.InputDataStrNative. Only use x.S.InputDataStr and always add (q|r|) in front of xASL_adm_GetFileList for searching the datatypes. This should work in both/all spaces. Alternatively, "if native space" removes a q or r inside xASL_wrp_GetROIstatistics in the separate native space function.

The whole list below is a dissection of xASL_stats_GetROIstatistics, which should be space-independent. So I'm proposing solutions below to move the space handling of ROIs to another function/wrapper, so that GetStatistics can be exactly that: getting statistics. Another function would be e.g. xASL_stats_PrepareROIspaces or something.
JAN: I'm happy to make a preparatory function to move most things, but you will still not be able to make stats_GetROIstats completely space-independent - to start with, WBMask is preloaded for standard space, but you always have to load a new one for native space. We can of course solve this with a pre-load function. But you cycle through subjects/sessions within GetROIStats, so a preload function for native space won't solve everything and some subject-specific native space handling will have to happen inside. So we would have to separate out a sub-function of GetROIStats that would be space-independent.

HENK: No problemo. The fix would be: we always have an individual WBmask; it's just that the WBmask in standard space is the same for every individual. I'm fine using a subfunction to check this and skip saving 1000 unnecessary WBmasks on disk, but we make it the same for all cases. Note that once we're doing this, it is easiest to also make it immediately T1-space compatible.
So all ROI stats can deal with any space. File I/O-specific things go to separate sub-functions so they can easily be replaced later by BIDS (note that in BIDS it will in the future be:
/studyName/sub-001/ses-20240201/perf/*space_native_asl.nii
/studyName/sub-001/ses-20240201/perf/*space_mni_asl.nii

instead of the different folders we have now. So we should prepare ourselves for this, and keep the "hard-coded" directory/name structure that differs between spaces in a separate function.

xASL_wrp_GetROIstatistics:

  • x.S.output_ID = [x.S.output_ID '_NativeSpace']
    -> we remove this whole if-statement. There is a separate native space subfunction that adds _NativeSpace if native space. We never add _StandardSpace as this is default.

xASL_stat_GetROIstatistics:
x.S.masks.WBmask & x.LeftMask (line 92) -> somehow we use different masks depending on the space. This should be removed. We always use the same mask; there is just a general subfunction for warping to native space when needed. This should simplify 50 "if native space" statements by moving this resampling to a single location.
JAN: Of course it is a different mask - one is in native space and the other in standard space. The native space one has to be reloaded for each subject. And we can't distinguish left/right in native space by splitting the image in half. So all these differences have to be handled - there's simply no way around it.

HENK: Very simple: all mask adjustments are made in standard space (including left-right, dilations, etc.), and those individually adjusted masks are warped to native space. Otherwise, each space we add will be hell (e.g., we will add T1-native space), plus the ROIs cannot be compared between spaces.

%% 0.b Native space atlas input at line 152
Here you process differently depending on the space. This should go. We do exactly the same in both spaces except for the resampling. We don't merge or join names or whatever. Our atlases, ROIs, everything is space-independent except for the resampling step. So no separate "x.S.InputAtlasNativeName", for example...

%% 0.c Determine whether group mask exists
Again, masking is identical for both spaces. Also in native space we need the group mask; otherwise you bias the analysis and include different voxels depending on the different FoVs. So again here, all masking, susceptibility etc. is the same for both spaces, except for the need to resample.
JAN: The mask is of course the same. But standard space loads it once, native space needs to load it each time. That's the only difference in the code. Not in the mask, but in the way it's loaded.

HENK: See above. We make it the same in the code (using sub-functions). So standard space is also re-loaded every time. All spaces get the same treatment/code, except perhaps for sub-functions that make it more efficient for some spaces instead of 1000 load/save operations.

%% 1. For all ROIs, skip ROIs smaller than 1 mL (296 voxels @ 1.5x1.5x1.5 mm)
Also here, we skip exactly the same ROIs for both spaces. Calculating the volume is exactly the same because both spaces are NIfTIs with similar headers etc. No need to have an InputNativeSpace if statement.
JAN: Yes, calculating the volumes yes, but calculating the voxel volume is space-dependent.

HENK: No. For each space we calculate the voxel volume in the same way. Just never assume 1.5x1.5x1.5 mm but read it from the header. Again, you can accelerate this in a subfunction for an assumed space.

%% 2. For all ROIs, expand ROIs to contain sufficient pWM for PVEc line 285
Again here, we do exactly the same for both spaces. So either we have resampled before, and just do the same but in another space (with the same volumetric/mm/real world distance criteria), OR we expand ROIs always in standard space and warp the expanded ROI to the native space.

I WILL CONTINUE HERE, BUT DO YOU AGREE WITH THE ABOVE PRINCIPLES/EXAMPLES?
JAN: I agree with the principles, but you seem to miss important points in how the files are loaded for different spaces - the things you propose cannot be made as easily as you propose.

HENK: See my explanations. Let's discuss this before continuing.

  • Transfer atlas ROIs to the native space for native space analyses
    HENK: I agree, I would just do this in a more structured way. Now the atlas creation & ROI computation functions are mixed; they should be separate. xASL_stat_GetROIstatistics only does the ROI computation, perhaps calling other functions for creating native space ROIs or not. Or this is a general solution for any space, which always warps atlases to another space, unless the space is equal (which it is in 100% of the cases). So either/and:
  1. more general (always warp if the space differs, depending on the calculated space difference)
  2. atlas definition/warping separate from ROI computation (now the whole code is mixed and unreadable: if this then that, but if not this then we don't do this, except for when this is that and that is not this...)
    (if you have a better idea to create more structure/modularity for this, that would also be great)
  • GZIP: I would remove the zipping & deleting, and I will put the .nii.gz / .mat option on the "ExploreASL Wishlist". Note that at the end of the pipeline (Population module) everything is zipped.
    JAN: I agree in principle. But which zipping do you mean?
    HENK: I cannot remember. Perhaps you zip some things when going to native space in the population module? I'm talking about some inefficiency here.

  • Interpolation -> not sure if we need to do the presmooth>spline>p>0.5; it seems a lot of computation time for a minor change. Perhaps include the x.Quality=0 option, in which this is simply nearest-neighbour interpolated for quicker testing. This should be there in general (but I see you use the x.Quality).
    JAN: We definitely need to presmooth to have the native space masks downsampled at a correct resolution. I will check if we can presmooth an atlas once and then resample it for each subject, thus avoiding smoothing for every subject, which would accelerate things a lot. x.Quality==0 with nearest neighbour is something that I am happy to do, of course.
    HENK -> The problem is probably that you need to presmooth to the final resolution, which can change between ASL scans (i.e. we cannot assume that all scans have the same resolution; there is no "individual scan information" in the population module, there is only group information, and the group can be multi-site etc.).
    -> My point was that you are taking a population-average ROI (which can be from a different population and is never exactly optimally registered) to the native space, where we are certain there are always misalignments. The presmoothing will perhaps include a few more or fewer voxels per ROI in the ASL space, which should not change the ROI average (provided that we have a large enough ROI, without which we don't calculate anyway). Plus, ROIs need to be mutually exclusive. So if you presmooth, you need to check that a voxel at the border of two ROIs is not partly included in both ROIs in the native target space, which would mean presmoothing each label individually, warping all labels individually to the native space, and there doing the majority voting to decide which voxel gets which label.
    Because in the end we need integer labels in the native space.
    So there are several reasons why the presmoothing has a negligible effect on the accuracy of the ROI average, and it can even arbitrarily change it (e.g., who says that presmoothing is better than warping with nearest-neighbour and doing some dilations, majority voting, and ensuring mutual exclusivity of the ROIs in this way).

  • NativeSpaceAnalysis -> should go to the xASL_wrp_GetROIstatistics
    JAN: Can you be more specific.
    HENK: -> I think this refers to a (sub)function. I would need to check.

  • xASL_stat_GetROIstatistics -> x.LeftMask -> make space-independent: you always need an x.LeftMask input (which can be a 4D list of path names). So the stat function always loads the LeftMask through xASL_io_Nifti2Im, and the wrp function prepares this. It is a cell list with path names or images, so for standard space this would be an image (created in the wrp function as (1:60,:,:) = 0) and for native space this is a list of temporary mask NIfTIs. Note that xASL_io_Nifti2Im accepts both an image matrix and a path as input.
    JAN: Not sure if I understand.

HENK: What I meant here is that xASL_stat_GetROIstatistics should be space-independent, and you hacked it to make it space-dependent (if this then that, but if that then this, so 1000 if-statements that make it dirty and unreadable). xASL_stat_GetROIstatistics could easily be readable: in the LeftMask example, there is always a LeftMask NIfTI provided in the same space as the data image, and they are dealt with together. The creation of ROIs (in whatever space) is done in a separate function before xASL_stat_GetROIstatistics.

  • x.S.InputMasks -> stat function always iterates over 4D, for standard space this is simply a single 3D volume. Wrp function manages this (so copy paste this nearly equally to wrp)
    JAN: Can you be more specific about this?

  • HasGroupSusceptMask -> should be space-independent, this depends on the sequence. PM: check how we deal with this in multi-sequence study, as we did in GENFI
    JAN: Can you be more specific?

  • Skip ROIs smaller than 1 mL -> always calculate the ROI volume (also for standard space) & skip those that are too small. So this code should be space-independent.
    JAN: I agree. And we are doing this in standard space. Are we not doing it in native space??

  • x.WBmask -> is this managed correctly? Normally this is a standard space mask, this could be always the AnalysisMask (which could also be 4D)
    JAN: Can you be more specific?

  • Also the other ROI, mask, etc. loading -> can we keep everything as 4D iterations, which would be 4D for different subjects or a single 3D volume in case of standard space, where the space option is handled in the wrp function?
    The stat function should simply iterate over its inputs, not do any resampling etc.
    JAN: Can you please explain more? Native space analysis always runs over a single subject.

Docker integration

Allowing more flexible input and output, for integration with Docker.
Note that in principle, Docker should always wrap around the software; the software should not be adapted to a specific environment. ExploreASL should be as general as possible. Therefore, we should only make general improvements here that will work in more situations.

Registration error with poor CBF contrast

In cases with poor CBF contrast, the automatic registration correctly detected that the registration had worsened and reset to a control-based registration only. However, xASL_wrp_RegisterASL.m would still give an error because the scaling of Mean_PWI_Clipped.nii went wrong. This will be fixed in this issue.

xASL lesions are not correctly loaded to CAT

The lesions from Lesion.nii should be loaded into CAT12 and used for cost function masking. This was working for CAT12.5 but not for the new CAT12.7, because it was added only to cat_run_job1070.m and not to cat_run_job.m, which is now used in CAT12.7 instead of the older version.

ASL registration using Affine and DCT registration

The ASL can sometimes be deformed with respect to the T1w (or with respect to MNI in case a direct ASL-MNI registration without T1w is done). In that case, you might want to do an affine or affine+DCT registration on top of the rigid-body registration.

Quantification is off for test cases

When testing the ASL quantification between v1.2.0 and 2 weeks before v1.0.0, I noticed the following (providing a summary here):

Factor 2 higher CBF values in 1.2.0 compared to pre-1.0.0

  • Philips_2DEPI_FLAIR_DisabledQuantification_M0_LoQ

CBF values factor 2 too low

  • Philips_2DEPI_Lesion_NoFLAIR_LoQ
  • Philips_3DGRASE_FLAIR_M0_2ndRun_HiQ
  • Philips_3DGRASE_MultiSessions_T1wAlreadyDone_LoQ

See my e-mail dd 03-08-2020 for details

CustomScripts

Scripts written for specific studies, mostly import-related. They can be re-used for other studies and should be ignored by the main pipeline.

Assigning "weights" to .STATUS files

ExploreASL_GUI_temp

I have implemented a "watcher" system that will allow the GUI to have insight into the progress during an ExploreASL run. It works even under parallel-processing conditions without any fear of window freezing. Verification with more complex combinations of study numbers and cores has yet to be done.

As discussed on July 23rd's meeting, for providing more accurate progress bar visual feedback to the user, we could implement a "weights" system for each status bar.

To recap, the idea is (a minimal numeric sketch follows this list):

  1. every .STATUS file will have associated with it some positive integer representing the amount of processing power that went into completing that task. The integer value for a particular file is not as important as its magnitude relative to other STATUS files.

  2. prior to MATLAB starting up, the GUI will determine the expected number of STATUS files that should appear. The sum of the expected values for all STATUS files in all subjects/sessions will become the hidden maximum value of the progress bar. From there, it's simply a matter of incrementing the progress bar by whatever value a completed STATUS file is worth.
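A minimal numeric sketch of the weighting idea (the STATUS filenames, weights, and subject count are hypothetical, not the actual ExploreASL values):

```matlab
% Hypothetical example: three STATUS files with weights reflecting their relative effort
statusNames  = {'010_LinearReg.status', '020_Segment.status', '030_Resample.status'};
statusWeight = [1, 8, 2];                          % arbitrary positive integers
nSubjects    = 10;
expectedTotal = sum(statusWeight) * nSubjects;     % hidden maximum of the progress bar

% As STATUS files appear on disk, add their weight to the progress counter
completedWeights = [1, 8];                         % e.g. two files of one subject finished
progressPercent  = sum(completedWeights) / expectedTotal * 100;
```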

So my requests would be: if I could receive a list of all the STATUS filenames that may be generated. And for each file:

  • its weight
  • the preferred description to appear in the text output window (see GUI screenshot), as currently only the file's basename is printed out
  • and if it's not too much to ask...a brief description of what conditions could influence whether that STATUS file will appear or not, as this would greatly help with point #2 above.

Standardize JSON reading and writing

Currently we have xASL_import_json.m, spm_jsonread, spm_jsonwrite for reading and writing of JSONs
We should:

  • rename these into xASL_jsonRead & xASL_jsonWrite (like xASL_tsvRead & xASL_tsvWrite)
  • move them to SPMmodified/xASL
  • change throughout
  • xASL_import_json is already a wrapper, that needs cleaning & renaming
  • spm_jsonwrite needs a wrapper. We can move some stuff outside of our edited spm_jsonwrite perhaps

This NEEDS further discussion. spm_jsonread and spm_jsonwrite work perfectly and they do not really need wrappers, because no extra functionality is needed - that is an unnecessary overhead. All our JSON files should adhere to the JSON format and they should not need wrappers - this only makes things more complicated and introduces potential errors (several of them fixed already that could have been avoided).

There is one exception to this - the dataPar and studyPar files. There, the content of the JSONs needs extra interpretation (removing the extra x-level) and fixing (numerical arrays entered as strings). However, correct JSON fields generated by spm_jsonwrite do not save numerical arrays incorrectly as strings - so this behavior is really needed only for incorrectly hand-edited studyPar files.

Proposed solution (a minimal usage sketch follows this list):

  1. Use spm_jsonwrite everywhere - that's JSON standard and no wrapper is needed.
  2. Use spm_jsonread everywhere where possible - to read automatically and correctly saved JSON files. Go through code and change the calls of xASL_import_json to spm_jsonread where possible.
  3. Only leave xASL_import_json for those files that contain strange manually edited constructions in JSON and thus need parsing - like studyPar and DataPar. But make sure that the JSON of those templates and all examples in Flavors, TestDataSet, 10-TestDataset are corrected.
  4. Rename xASL_import_json to xASL_io_ReadDataPar - make a joint function for this with m-files. And make sure to report a warning for all incorrect constructions, so that people edit JSONs correctly and we can phase out this function in the future.
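A minimal usage sketch of the two SPM12 functions the proposal standardizes on (the file path and the edited field are illustrative):

```matlab
studyPar = spm_jsonread('/data/study/studyPar.json');   % read JSON into a Matlab struct
studyPar.LabelingDuration = 1.8;                         % hypothetical edit
spm_jsonwrite('/data/study/studyPar.json', studyPar);    % write standard-conforming JSON back
```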

Release note

JSON I/O unified, using spm_jsonread and spm_jsonwrite for all operations.

Improve internal documentation

Goal: Create documentation similar to NIFTYTORCH or MONAI?

  • Add README files to subfolders to improve the orientation within the ExploreASL project.
  • Improve code readability and overall access for new developers.
  • Integrate all the README files into an interactive documentation

CAT12.5 update to 12.7

The newer CAT12 version does a better job at segmenting T1w, especially with differently sized brains. Jan Petr is adapting and implementing this in ExploreASL.

Siemens import issue in CCLL order

Dcm2nii creates several separate control/label files instead of a 4D volume for certain Siemens datasets - especially in the GENFI cohort. These files need to be merged into a single 4D volume during the import. There is a non-trivial syntax of the file names that can be used to reconstruct the correct file order.
