
hcppipelines's People

Contributors

bpinsard, chrisgorgo, heffjos, jarodroland, jokedurnez, mgxd, monicaycli, neurosutton, pre-commit-ci[bot], remi-gau, rhancockn, roopa-pai, shubhamrajgithub, soichih, stebo85

hcppipelines's Issues

No module named 'conda' error on docker build

I am trying to run docker build:
docker build -t bids/hcppipelines -f Dockerfile .
on macOS and I get the following error:

...
The following packages will be UPDATED:

ca-certificates: 2018.03.07-0                              --> 2019.9.11-hecc5488_0 conda-forge
certifi:         2018.4.16-py27_0                          --> 2019.9.11-py37_0     conda-forge
libffi:          3.2.1-hd88cf55_4                          --> 3.2.1-he1b5a44_1006  conda-forge
libgcc-ng:       7.2.0-hdf63c60_3                          --> 9.1.0-hdf63c60_0                
libstdcxx-ng:    7.2.0-hdf63c60_3                          --> 9.1.0-hdf63c60_0                
ncurses:         6.1-hf484d3e_0                            --> 6.1-hf484d3e_1002    conda-forge
openssl:         1.0.2o-h20670df_0                         --> 1.1.1c-h516909a_0    conda-forge
pip:             10.0.1-py27_0                             --> 19.2.3-py37_0        conda-forge
python:          2.7.15-h1571d57_0                         --> 3.7.3-h33d41f4_1     conda-forge
readline:        7.0-ha6073c6_4                            --> 8.0-hf8c457e_0       conda-forge
setuptools:      39.2.0-py27_0                             --> 41.2.0-py37_0        conda-forge
six:             1.11.0-py27h5f960f1_1                     --> 1.12.0-py37_1000     conda-forge
sqlite:          3.23.1-he433501_0                         --> 3.29.0-hcee41ef_1    conda-forge
tk:              8.6.7-hc745277_3                          --> 8.6.9-hed695b0_1003  conda-forge
wheel:           0.31.1-py27_0                             --> 0.33.6-py37_0        conda-forge
zlib:            1.2.11-ha838bed_2                         --> 1.2.11-h516909a_1006 conda-forge

Downloading and Extracting Packages
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
Traceback (most recent call last):
  File "/usr/local/miniconda/bin/conda", line 7, in <module>
    from conda.cli import main
ModuleNotFoundError: No module named 'conda'
The command '/bin/sh -c curl -fsSLO https://repo.continuum.io/miniconda/Miniconda2-4.5.4-Linux-x86_64.sh && bash Miniconda2-4.5.4-Linux-x86_64.sh -b -p /usr/local/miniconda && rm Miniconda2-4.5.4-Linux-x86_64.sh && conda config --add channels conda-forge && conda install -y mkl=2019.3 mkl-service=2.0.2 numpy=1.16.4 nibabel=2.4.1 pandas=0.24.2 && sync && conda clean -tipsy && sync && pip install --no-cache-dir pybids==0.9.1' returned a non-zero code: 1

Container image seems to be corrupted(?) on Dockerhub

I am having a problem running the bids/hcppipeline container via Singularity. It looks like there is something wrong with /var/lib/apt/lists inside the container.

tar: var/lib/apt/lists/.wh.archive.ubuntu.com_ubuntu_dists_trusty_Release.gpg: Cannot open: Permission denied
tar: var/lib/apt/lists/.wh.lock: Cannot open: Permission denied
tar: var/lib/apt/lists/.wh.partial: Cannot open: Permission denied
tar: Exiting with failure status due to previous errors

The directory looks like this inside the container

$ docker run --rm -it --entrypoint=bash brainlife/bids-hcppipelines 
root@84edf7a39311:/# cd /var/lib/apt/lists
root@84edf7a39311:/var/lib/apt/lists# ls -la
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty_Release: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty_Release.gpg: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty_main_binary-amd64_Packages: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty_main_i18n_Translation-en: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty_restricted_binary-amd64_Packages: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty_restricted_i18n_Translation-en: No such file or directory
ls: cannot access lock: No such file or directory
ls: cannot access partial: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-security_InRelease: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-security_main_binary-amd64_Packages.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-security_main_source_Sources.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-security_restricted_binary-amd64_Packages.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-security_restricted_source_Sources.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-security_universe_binary-amd64_Packages.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-security_universe_source_Sources.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-updates_InRelease: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-updates_main_binary-amd64_Packages.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-updates_main_source_Sources.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-updates_restricted_binary-amd64_Packages.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-updates_restricted_source_Sources.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-updates_universe_binary-amd64_Packages.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty-updates_universe_source_Sources.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty_main_binary-amd64_Packages.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty_main_source_Sources.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty_restricted_binary-amd64_Packages.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty_restricted_source_Sources.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty_universe_binary-amd64_Packages.gz: No such file or directory
ls: cannot access archive.ubuntu.com_ubuntu_dists_trusty_universe_source_Sources.gz: No such file or directory
ls: cannot access deb.nodesource.com_node%5f4.x_dists_trusty_InRelease: No such file or directory
ls: cannot access deb.nodesource.com_node%5f4.x_dists_trusty_main_binary-amd64_Packages.gz: No such file or directory
ls: cannot access deb.nodesource.com_node%5f4.x_dists_trusty_main_source_Sources.gz: No such file or directory
ls: cannot access neurodeb.pirsquared.org_dists_data_InRelease: No such file or directory
ls: cannot access neurodeb.pirsquared.org_dists_data_contrib_binary-amd64_Packages.gz: No such file or directory
ls: cannot access neurodeb.pirsquared.org_dists_data_main_binary-amd64_Packages.gz: No such file or directory
ls: cannot access neurodeb.pirsquared.org_dists_data_non-free_binary-amd64_Packages.gz: No such file or directory
ls: cannot access neurodeb.pirsquared.org_dists_trusty_InRelease: No such file or directory
ls: cannot access neurodeb.pirsquared.org_dists_trusty_contrib_binary-amd64_Packages.gz: No such file or directory
ls: cannot access neurodeb.pirsquared.org_dists_trusty_main_binary-amd64_Packages.gz: No such file or directory
ls: cannot access neurodeb.pirsquared.org_dists_trusty_non-free_binary-amd64_Packages.gz: No such file or directory
total 4
drwxr-xr-x 1 root root 4096 Nov  2  2016 .
drwxr-xr-x 1 root root   28 Nov  2  2016 ..
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-security_InRelease
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-security_main_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-security_main_source_Sources.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-security_restricted_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-security_restricted_source_Sources.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-security_universe_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-security_universe_source_Sources.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-updates_InRelease
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-updates_main_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-updates_main_source_Sources.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-updates_restricted_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-updates_restricted_source_Sources.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-updates_universe_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty-updates_universe_source_Sources.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty_Release
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty_Release.gpg
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty_main_binary-amd64_Packages
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty_main_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty_main_i18n_Translation-en
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty_main_source_Sources.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty_restricted_binary-amd64_Packages
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty_restricted_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty_restricted_i18n_Translation-en
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty_restricted_source_Sources.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty_universe_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? archive.ubuntu.com_ubuntu_dists_trusty_universe_source_Sources.gz
?????????? ? ?    ?       ?            ? deb.nodesource.com_node%5f4.x_dists_trusty_InRelease
?????????? ? ?    ?       ?            ? deb.nodesource.com_node%5f4.x_dists_trusty_main_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? deb.nodesource.com_node%5f4.x_dists_trusty_main_source_Sources.gz
?????????? ? ?    ?       ?            ? lock
?????????? ? ?    ?       ?            ? neurodeb.pirsquared.org_dists_data_InRelease
?????????? ? ?    ?       ?            ? neurodeb.pirsquared.org_dists_data_contrib_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? neurodeb.pirsquared.org_dists_data_main_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? neurodeb.pirsquared.org_dists_data_non-free_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? neurodeb.pirsquared.org_dists_trusty_InRelease
?????????? ? ?    ?       ?            ? neurodeb.pirsquared.org_dists_trusty_contrib_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? neurodeb.pirsquared.org_dists_trusty_main_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? neurodeb.pirsquared.org_dists_trusty_non-free_binary-amd64_Packages.gz
?????????? ? ?    ?       ?            ? partial

It seems that I can run this container fine via Docker, or even via Singularity if I first convert it to a Singularity image, but not directly via Singularity using the docker://bids/hcppipeline syntax.

Error in PostFreeSurfer stage

I'm getting the following error at the PostFreeSurfer stage after apparently successful PreFreeSurfer and FreeSurfer stages:

terminate called after throwing an instance of 'RBD_COMMON::BaseException'

/opt/HCP-Pipelines/PostFreeSurfer/scripts/FreeSurfer2CaretConvertAndRegisterNonlinear.sh: line 205:  1150 Aborted                 (core dumped) ${MSMBINDIR}/msm --conf=${MSMCONFIGDIR}/MSMSulcStrainFinalconf --inmesh="$AtlasSpaceFolder"/"$NativeFolder"/${Subject}.${Hemisphere}.sphere.rot.native.surf.gii --refmesh="$AtlasSpaceFolder"/"$Subject"."$Hemisphere".sphere."$HighResMesh"k_fs_LR.surf.gii --indata="$AtlasSpaceFolder"/"$NativeFolder"/${Subject}.${Hemisphere}.sulc.native.shape.gii --refdata="$AtlasSpaceFolder"/${Subject}.${Hemisphere}.refsulc."$HighResMesh"k_fs_LR.shape.gii --out="$AtlasSpaceFolder"/"$NativeFolder"/MSMSulc/${Hemisphere}. --verbose


Traceback (most recent call last):
  File "/run.py", line 349, in <module>
    stage_func()
  File "/run.py", line 114, in run_post_freesurfer
    run(cmd, cwd=args["path"], env={"OMP_NUM_THREADS": str(args["n_cpus"])})
  File "/run.py", line 30, in run
    raise Exception("Non zero return code: %d"%process.returncode)
Exception: Non zero return code: 134

The error occurs with both the latest release version and the dev version. The call is:

docker run -ti --rm -v /mnt/clinical/PrismaQC/2018_12_07/bids:/bids_dir:ro -v ~/Temp/hcpp_output:/output_dir 415ddb225c5b bids_dir output_dir participant --participant_label qc31 --stages PostFreeSurfer --license_key FAKEKEY

Adding --privileged and/or calling with sudo did not change the result, and neither did setting the OMP_NUM_THREADS environment variable explicitly for the docker run call.

Set run.py ENTRYPOINT

The default command should be configured as

ENTRYPOINT ["/run.py"]

instead of

CMD ["/run.py"]

for Singularity compatibility.

wb_command not installed in container

When I run the latest version of the container I get the error:

/opt/HCP-Pipelines/PreFreeSurfer/scripts/BiasFieldCorrection_sqrtT1wXT2w.sh: line 145: /opt/workbench/bin_linux64/wb_command: No such file or directory
Mon Jul 26 09:46:15 EDT 2021:BiasFieldCorrection_sqrtT1wXT2w.sh: While running '/opt/HCP-Pipelines/PreFreeSurfer/scripts/BiasFieldCorrection_sqrtT1wXT2w.sh --workingdir=./outputs/sub-021/T1w/BiasFieldCorrection_sqrtT1wXT2w --T1im=./outputs/sub-021/T1w/T1w_acpc_dc --T1brain=./outputs/sub-021/T1w/T1w_acpc_dc_brain --T2im=./outputs/sub-021/T1w/T2w_acpc_dc --obias=./outputs/sub-021/T1w/BiasField_acpc_dc --oT1im=./outputs/sub-021/T1w/T1w_acpc_dc_restore --oT1brain=./outputs/sub-021/T1w/T1w_acpc_dc_restore_brain --oT2im=./outputs/sub-021/T1w/T2w_acpc_dc_restore --oT2brain=./outputs/sub-021/T1w/T2w_acpc_dc_restore_brain':
Mon Jul 26 09:46:15 EDT 2021:BiasFieldCorrection_sqrtT1wXT2w.sh: ERROR: '/opt/workbench/bin_linux64/wb_command' command failed with return code: 127
Mon Jul 26 09:46:15 EDT 2021:BiasFieldCorrection_sqrtT1wXT2w.sh: ERROR: '/opt/workbench/bin_linux64/wb_command' command failed with return code: 127

find /opt -name wb_command returned nothing, so it seems Connectome Workbench was not installed correctly. This was working in at least version v3.17.0-18:

$ find /opt -name wb_command
/opt/workbench/bin_linux64/wb_command
/opt/workbench/exe_linux64/wb_command

Python type error

Hi,

I’m trying to run the HCPPipelines BIDS app on some of my data using a singularity image that I built from the docker container (version 4.1.3-1).

There is no problem when running it with the “legacy” option. However, when running it with the “hcp” option to include the T2 image, fieldmaps, etc., I immediately get the following error message:

Traceback (most recent call last):
  File "/run.py", line 326, in <module>
    fieldmap_set["magnitude1"],
TypeError: list indices must be integers or slices, not str

Is there a way to solve this problem?

Any help is much appreciated!

Thank you,
Max
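A possible guard on the run.py side, sketched on the assumption that pybids' get_fieldmap() hands back a list of fieldmap dicts here (the variable names follow the traceback, not the actual run.py code):

    from bids import BIDSLayout

    layout = BIDSLayout("/bids_dataset", validate=False)
    t1ws = [f for f in layout.get(suffix="T1w", return_type="filename")
            if f.endswith(".nii.gz")]

    # get_fieldmap() may return a list of dicts; pick one before indexing by key.
    fieldmaps = layout.get_fieldmap(t1ws[0], return_list=True) or []
    phasediff = next((f for f in fieldmaps if f.get("suffix") == "phasediff"), None)
    if phasediff is not None:
        magnitude = phasediff["magnitude1"]  # dict access, so no TypeError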

Implement LegacyStyleData

Implement the --processing-mode="LegacyStyleData" HCP option for

  • PreFreeSurfer
  • FreeSurfer
  • PostFreeSurfer
  • fMRIVolume

ENH: Removing all the empty lines from the stdout?

Hi,

Is there a reason to have an empty line between each stdout line? E.g.:

[...]
Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: GEB0InputName: NONE

Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: TE: NONE

Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: SpinEchoPhaseEncodeNegative: NONE

Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: SpinEchoPhaseEncodePositive: NONE

Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: DwellTime: NONE

Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: SEUnwarpDir: NONE

Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: T1wSampleSpacing: NONE

Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: T2wSampleSpacing: NONE

Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: UnwarpDir: NONE

[...]

I suggest telling print not to add a newline every time it prints out, by simply changing the following line:
https://github.com/BIDS-Apps/HCPPipelines/blob/1650b0575490e05ebd1d673030c0a03ad76d6586/run.py#L25

to:

print(line, end='')

That way the output looks like:

[...]
Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: GEB0InputName: NONE
Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: TE: NONE
Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: SpinEchoPhaseEncodeNegative: NONE
Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: SpinEchoPhaseEncodePositive: NONE
Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: DwellTime: NONE
Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: SEUnwarpDir: NONE
Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: T1wSampleSpacing: NONE
Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: T2wSampleSpacing: NONE
Thu Aug 9 15:37:38 UTC 2018 - PreFreeSurferPipeline.sh: UnwarpDir: NONE
[...]
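For context, here is a minimal sketch of the kind of streaming helper run.py uses, with the suggested print(line, end='') applied; the surrounding structure is assumed from the tracebacks quoted in other issues, not copied from run.py:

    import subprocess

    def run(command, cwd=None, env=None):
        process = subprocess.Popen(command, shell=True, cwd=cwd, env=env,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                                   universal_newlines=True)
        for line in process.stdout:
            print(line, end='')  # each line already ends in '\n'; do not add another
        process.wait()
        if process.returncode != 0:
            raise Exception("Non zero return code: %d" % process.returncode)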

fieldmap for BOLD file not found even though it is present

Hello,

I posted this on the hcp-users group but it seems to be more specific to the BIDS app:

I am trying to run the HCPPipelines BIDS app on a sample subject from my colleague's data. After the PostFreeSurferPipeline is completed, I get the error:

  File "/run.py", line 435, in <module>
    f"No fieldmaps found for BOLD {fmritcs}. Consider --procesing_mode [legacy | auto ]."
AssertionError: No fieldmaps found for BOLD /bids_dataset/sub-121c/func/sub-121c_task-rest_bold.nii.gz. Consider --procesing_mode [legacy | auto ].

However in the subdirectory sub-121c/fmap, I have the following files:

sub-121c_dir-PA_epi.json    sub-121c_magnitude2.json
sub-121c_dir-PA_epi.nii.gz  sub-121c_magnitude2.nii.gz
sub-121c_magnitude1.json    sub-121c_phasediff.json
sub-121c_magnitude1.nii.gz  sub-121c_phasediff.nii.gz

In the sub-121c_dir-PA_epi.json there is the line

    "IntendedFor": "func/sub-121c_task-rest_bold.nii.gz"

and the corresponding sub-121c_dir-PA_epi.nii.gz is present.

The BIDS validator correctly identifies the corresponding BOLD run func/sub-121c_task-rest_bold.nii.gz - I checked this by modifying the IntendedFor field. However, the pipeline script somehow does not locate the fieldmap sub-121c_dir-PA_epi.nii.gz.
I would be grateful for any help.
Romuald
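One way to see what pybids itself associates with that BOLD run (a debugging sketch; the paths are the ones from this report, and the return_list keyword is assumed to be available in the installed pybids):

    from bids import BIDSLayout

    layout = BIDSLayout("/bids_dataset", validate=False)
    bold = "/bids_dataset/sub-121c/func/sub-121c_task-rest_bold.nii.gz"
    # An empty list here would mean pybids is not matching the IntendedFor entry,
    # i.e. the problem is in the metadata indexing rather than in run.py.
    print(layout.get_fieldmap(bold, return_list=True))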

Check file existence

It would be helpful to check that the required input files for each requested processing stage exist. If not, a helpful error message can be displayed.
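A minimal sketch of such a check (the function name and message wording are hypothetical, not existing run.py code), called once per requested stage before it is launched:

    import os
    import sys

    def check_stage_inputs(stage, required_paths):
        # Abort with a readable message if any input needed by this stage is missing.
        missing = [p for p in required_paths if not os.path.exists(p)]
        if missing:
            sys.exit("Stage %s is missing required input files:\n  %s"
                     % (stage, "\n  ".join(missing)))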

PostFreeSurferPipeline error

Hey everyone,

I'm trying to run the BIDS App using Singularity on a cluster, but some subjects keep failing when they reach FreeSurfer2CaretConvertAndRegisterNonlinear.sh.
I get the following error message: PostFreeSurferPipeline.sh: ERROR: '/opt/HCP-Pipelines/PostFreeSurfer/scripts/FreeSurfer2CaretConvertAndRegisterNonlinear.sh' command failed with return code: 139

Does anyone have a suggestion on how to address this?

Thank you!

BIDS issue - 'BIDS root does not exist'

Hello, I hope this finds you well.

I am trying to run the unprocessed HCP Wu-Minn 1200 dataset using the HCP PreProcessing Pipeline Docker App. To ensure my data was in BIDS format, I checked it with the BIDS validator website. I then tried to run my data and received a BIDS-related error stating the BIDS root doesn't exist (even though the path it refers to definitely exists). Help would be really appreciated!

(base) dk00549@bigdata-master01:/vol/research/nemo/HCP/Scripts$ cat c524.p0.error
Traceback (most recent call last):
  File "/run.py", line 246, in <module>
    layout = BIDSLayout(args.bids_dir, derivatives=False, absolute_paths=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/bids/layout/layout.py", line 230, in __init__
    self._validate_root()
  File "/usr/local/miniconda/lib/python3.7/site-packages/bids/layout/layout.py", line 460, in _validate_root
    raise ValueError("BIDS root does not exist: %s" % self.root)
ValueError: BIDS root does not exist: /vol/research/nemo/HCP/UnprocessedHCPSubjects

test data failing

Dear @rhancockn, @neurosutton, @bpinsard

@Remi-Gau and I are trying to get the HCPPipelines bids-app updated and tested again :)

We are currently running into the problem that the test data is not working; this is what we tried:

          wget https://raw.githubusercontent.com/bids-apps/maintenance-tools/main/utils/get_data_from_osf.sh
          bash get_data_from_osf.sh hcp_example_bids_v3
  
          docker run -ti --rm --read-only \
            -v ~/data/hcp_example_bids_v3:/bids_dataset \
              bids/${CIRCLE_PROJECT_REPONAME,,} \
                /bids_dataset \
                /outputs \
                participant --participant_label 100307 \
                --stages PreFreeSurfer \
                --processing_mode legacy \
                --license_key="*CxjskRdd7" \
                --n_cpus 2

Unfortunately this results in:

Traceback (most recent call last):
  File "/run.py", line 274, in <module>
    t1_spacing = layout.get_metadata(t1ws[0])["DwellTime"]
KeyError: 'DwellTime'

Exited with code exit status 1

Do you by any chance have an idea what this could be?

I also saw a couple of open pull requests. Would any of you be able to help us merge them and get this BIDS app updated and working nicely?

Thank you
Steffen
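A quick way to check which metadata keys the test data actually exposes for that T1w (a debugging sketch built on the same kind of pybids layout run.py creates; the path follows the commands above):

    import os.path
    from bids import BIDSLayout

    layout = BIDSLayout(os.path.expanduser("~/data/hcp_example_bids_v3"), validate=False)
    t1ws = [f for f in layout.get(subject="100307", suffix="T1w", return_type="filename")
            if f.endswith(".nii.gz")]
    # If "DwellTime" is not in this list, the sidecar simply lacks it and run.py
    # would need a fallback (e.g. passing "NONE" through to PreFreeSurfer).
    print(sorted(layout.get_metadata(t1ws[0]).keys()))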

No T1w files found for subject

I am running a Singularity container built from the Singularity file in the repository. Here is the command I am using:

singularity run \
    -B ${Freesurfer}:/opt/freesurfer \
    ${jheffernan}/singularity_images/hcppipelines-${version}.simg \
    ${BidsDir} ${DerivativesDir} participant \
    --participant_label ${Id} \
    --stages PreFreeSurfer FreeSurfer PostFreeSurfer \
    --coreg MSMSulc \
    --license_key /opt/freesurfer/license.txt

The container exits with this assertion error (shown in a screenshot attached to the original issue).

There are T1w files in the anat directory; a tree screenshot of the subject directory was attached to the original issue.

I think this occurred because I am not using sessions; the current code seems to expect a session.
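A sketch of a session-tolerant query, assuming the problem is a session filter applied even when no sessions exist (names are illustrative, not the actual run.py code):

    from bids import BIDSLayout

    def get_t1ws(layout, subject_label):
        # Only add a session filter when the subject actually has sessions.
        query = {"subject": subject_label, "suffix": "T1w", "return_type": "filename"}
        sessions = layout.get_sessions(subject=subject_label)
        if sessions:
            query["session"] = sessions
        return [f for f in layout.get(**query) if f.endswith(".nii.gz")]

    layout = BIDSLayout("/bids_dir", validate=False)
    print(get_t1ws(layout, "01"))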

Limiting the images used for anatomical processing

From what I can see in the code, it looks like HCPPipelines gets all _T1w and _T2w images and feeds them into the anatomical pipeline. However, a dataset may have some _T1w or _T2w images that are not meant to be used in FreeSurfer.

For example, we have a user who collects T1-weighted fast spin-echoes (FSE) with high in-plane resolution and thick slices for anatomical ROI delineation. These only cover a small slab of the brain. Since they are anatomical images, I place them in the anat/ folder and, since they are T1-weighted, I give them the _T1w suffix, so they are labeled sub-01_acq-fse_run-01_T1w.

When I run the pipelines, all the T1w images present in the anat/ folder are passed on, and the pipelines try to average all of them. They are averaged over the field of view common to all of them, which is only the small slab covered by the FSE run, instead of the whole brain covered by the MPRAGE (labeled sub-01_acq-highres_run-01_T1w).

Therefore, I think there should be a way of limiting which anatomical images are used by the pipelines.

This could be done by using only those images matching either a string in the filename or a label in the .json file (like the "IntendedFor" in the field maps). (Maybe this should be addressed by modifying the bids-specification?)
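One possible shape for such a filter, sketched as a hypothetical --anat_acq option rather than anything the app currently supports:

    from bids import BIDSLayout

    def get_anatomicals(layout, subject_label, suffix, acquisition=None):
        # When an acquisition label is given (e.g. "highres"), only matching images
        # are returned; with no label the behaviour stays as it is today.
        query = {"subject": subject_label, "suffix": suffix, "return_type": "filename"}
        if acquisition:
            query["acquisition"] = acquisition
        return [f for f in layout.get(**query) if f.endswith(".nii.gz")]

    layout = BIDSLayout("/bids_dataset", validate=False)
    t1ws = get_anatomicals(layout, "01", "T1w", acquisition="highres")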

ERROR: crypt() returned null with 4-line file

When I run the HCP pipelines like:

singularity run -e \
    -B $PWD/nifti:/bids_dataset:ro \
    -B $PWD/hcppipelines/sub-${sub}:/output \
    $PWD/simg/hcppipelines_latest.sif \
    /bids_dataset /output participant --participant_label $sub \
    --n_cpus 1 --processing-mode legacy --license_key "XXXXX"

it's erroring out with an odd message: ERROR: crypt() returned null with 4-line file. Does anyone have any idea what that error could indicate? Below is the full output, written to /output/sub-01/T1w/sub-01/scripts/recon-all.log

[...]
Tue Mar  9 16:27:16 EST 2021:PreFreeSurferPipeline.sh: Completed!

FreeSurfer in LegacyStyleData mode
/opt/HCP-Pipelines/FreeSurfer/FreeSurferPipeline.sh --subject="sub-01" --subjectDIR="/output/sub-01/T1w" --t1="/output/sub-01/T1w/T1w_acpc_dc_restore.nii.gz" --t1brain="/output/sub-01/T1w/T1w_acpc_dc_restore_brain.nii.gz" --processing-mode="LegacyStyleData"
========================================
  DIRECTORY: /opt/HCP-Pipelines  
    PRODUCT: HCP Pipeline Scripts
    VERSION: v4.1.3
========================================
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: HCPPIPEDIR: /opt/HCP-Pipelines
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: FREESURFER_HOME: /opt/freesurfer
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: Platform Information Follows:
Linux pennsive06 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: WARNING: We were not able to locate one of the following required tools:
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: WARNING: recon-all.v6.hires, conf2hires, or longmc
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: WARNING:
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: WARNING: To be able to run this script using the standard versions of these tools,
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: WARNING: we added /opt/HCP-Pipelines/FreeSurfer/custom to the beginning of the PATH.
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: WARNING:
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: WARNING: If you intended to use some other version of these tools, please configure
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: WARNING: your PATH before invoking this script, such that the tools you intended to
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: WARNING: use can be found on the PATH.
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: WARNING:
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: WARNING: PATH set to: /opt/HCP-Pipelines/FreeSurfer/custom:/usr/local/fsl/bin:/usr/local/miniconda/bin:/opt/freesurfer/bin:/opt/freesurfer/fsfast/bin:/opt/freesurfer/tktools:/opt/freesurfer/mni/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: Showing HCP Pipelines version
========================================
  DIRECTORY: /opt/HCP-Pipelines  
    PRODUCT: HCP Pipeline Scripts
    VERSION: v4.1.3
========================================
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: Showing recon-all.v6.hires version
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: /opt/HCP-Pipelines/FreeSurfer/custom/recon-all.v6.hires
freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.1-f53a55a
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: Showing tkregister version
/opt/freesurfer/tktools/tkregister
stable6
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: Showing mri_concatenate_lta version
/opt/freesurfer/bin/mri_concatenate_lta
stable6
Tue Mar  9 16:27:21 EST 2021:FreeSurferPipeline.sh: Showing mri_surf2surf version
/opt/freesurfer/bin/mri_surf2surf
stable6
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: Showing fslmaths location
/usr/local/fsl/bin/fslmaths
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: INFO: Determined that FreeSurfer full version string is: freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.1-f53a55a
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: INFO: Determined that FreeSurfer version is: 6.0.1
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: Using named parameters
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: LegacyStyleData mode requested.
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: NOTICE: You are using a mode that enables processing of acquisitions that do not
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh:         conform to the HCP specification as described in Glasser et al. (2013)!
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh:         Be aware that if the HCP requirements are not met, the level of data quality
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh:         can not be guaranteed and the Glasser et al. (2013) paper should not be used
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh:         in support of this workflow. A manuscript with comprehensive evaluation for
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh:         the LegacyStyleData processing mode is in active preparation and should be
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh:         appropriately cited when published.
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh:
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: The following LegacyStyleData settings were requested: --t2w-image= or --t2= not present or set to NONE
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: Subject Directory: /output/sub-01/T1w
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: Subject: sub-01
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: T1w Image: /output/sub-01/T1w/T1w_acpc_dc_restore.nii.gz
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: T1w Brain: /output/sub-01/T1w/T1w_acpc_dc_restore_brain.nii.gz
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: Include -conf2hires flag in recon-all: TRUE
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: ProcessingMode: LegacyStyleData
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: Starting main functionality
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: Retrieve positional parameters
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: SubjectDIR: /output/sub-01/T1w
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: SubjectID: sub-01
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: T1wImage: /output/sub-01/T1w/T1w_acpc_dc_restore.nii.gz
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: T1wImageBrain: /output/sub-01/T1w/T1w_acpc_dc_restore_brain.nii.gz
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: T2wImage: NONE
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: recon_all_seed:
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: flair: FALSE
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: existing_subject: FALSE
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: extra_reconall_args:
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: conf2hires: TRUE
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: Figure out the number of cores to use.
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: num_cores: 1
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: Thresholding T1w image to eliminate negative voxel values
Tue Mar  9 16:27:22 EST 2021:FreeSurferPipeline.sh: ...This produces a new file named: /output/sub-01/T1w/T1w_acpc_dc_restore_zero_threshold.nii.gz
Tue Mar  9 16:27:23 EST 2021:FreeSurferPipeline.sh: Call custom recon-all: recon-all.v6.hires
Tue Mar  9 16:27:23 EST 2021:FreeSurferPipeline.sh: ...recon_all_cmd: recon-all.v6.hires -subjid sub-01 -sd /output/sub-01/T1w -all -i /output/sub-01/T1w/T1w_acpc_dc_restore_zero_threshold.nii.gz -emregmask /output/sub-01/T1w/T1w_acpc_dc_restore_brain.nii.gz -openmp 1 -conf2hires
Subject Stamp: freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.1-f53a55a
Current Stamp: freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.1-f53a55a
INFO: SUBJECTS_DIR is /output/sub-01/T1w
Actual FREESURFER_HOME /opt/freesurfer
Linux pennsive06 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
'/opt/HCP-Pipelines/FreeSurfer/custom/recon-all.v6.hires' -> '/output/sub-01/T1w/sub-01/scripts/recon-all.local-copy'
/output/sub-01/T1w/sub-01

 mri_convert /output/sub-01/T1w/T1w_acpc_dc_restore_zero_threshold.nii.gz /output/sub-01/T1w/sub-01/mri/orig/001.mgz

mri_convert.bin /output/sub-01/T1w/T1w_acpc_dc_restore_zero_threshold.nii.gz /output/sub-01/T1w/sub-01/mri/orig/001.mgz
$Id: mri_convert.c,v 1.226 2016/02/26 16:15:24 mreuter Exp $
reading from /output/sub-01/T1w/T1w_acpc_dc_restore_zero_threshold.nii.gz...
ERROR: crypt() returned null with 4-line file
Linux pennsive06 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

recon-all -s sub-01 exited with ERRORS at Tue Mar  9 16:27:25 EST 2021

For more details, see the log file /output/sub-01/T1w/sub-01/scripts/recon-all.log
To report a problem, see http://surfer.nmr.mgh.harvard.edu/fswiki/BugReporting

Tue Mar  9 16:27:25 EST 2021:FreeSurferPipeline.sh: While running '/opt/HCP-Pipelines/FreeSurfer/FreeSurferPipeline.sh --subject=sub-01 --subjectDIR=/output/sub-01/T1w --t1=/output/sub-01/T1w/T1w_acpc_dc_restore.nii.gz --t1brain=/output/sub-01/T1w/T1w_acpc_dc_restore_brain.nii.gz --processing-mode=LegacyStyleData':
Tue Mar  9 16:27:25 EST 2021:FreeSurferPipeline.sh: ERROR: recon-all command failed with return_code: 1
Tue Mar  9 16:27:25 EST 2021:FreeSurferPipeline.sh: ERROR: recon-all command failed with return_code: 1
Tue Mar  9 16:27:25 EST 2021:FreeSurferPipeline.sh: ABORTING

Traceback (most recent call last):
  File "/run.py", line 402, in <module>
    stage_func()
  File "/run.py", line 103, in run_freesurfer
    "OMP_NUM_THREADS": str(args["n_cpus"])})
  File "/run.py", line 31, in run
    raise Exception("Non zero return code: %d"%process.returncode)
Exception: Non zero return code: 1
[INFO   ] == Command exit (modification check follows) =====
[INFO   ] The command had a non-zero exit code. If this is expected, you can save the changes with 'datalad save -d . -r -F .git/COMMIT_EDITMSG'
CommandError: 'singularity run -e -B /scratch/8275975/ds/nifti:/bids_dataset:ro -B /scratch/8275975/ds/hcppipelines/sub-01:/output /scratch/8275975/ds/simg/hcppipelines_latest.sif /bids_dataset /output participant --participant_label 01 --n_cpus 1 --processing-mode legacy --license_key 46946' failed with exitcode 1 under /scratch/8275975/ds

Originally I posted this issue in the HCP pipelines repository but the error seems to be coming from the BIDS app.

circle CI data

Hi,

Not sure where to ask this question...
I've been working on this app, and I had to update the bids-validator. Unfortunately, the test data is now no longer compatible with the validator (there is no dataset_description.json file). How can I change the test data on OSF?

@chrisfilo @oesteban

Joke

ICA-FIX implementation for BIDS

Hi,

I ran the HCP Pipeline on a BIDS dataset using a Singularity container and I'm up to the denoising stage.
I was wondering if anyone had already implemented ICA-FIX?

Thank you for your help!

argument for session

Is there any way to specify a session name (versus running through all existing sessions for every subject) in the --participant_label argument?

Thanks,
Paola

BIDS validation fails inside container on previously validated dataset

I'm working on using the recently released docker container for HCPPipelines v4.1.3 to process the HCP-Aging dataset. I've converted the dataset to BIDS format to the point that it passes validation with the BIDS validator docker app. However, inside the HCP container, validation fails with this message:

bids-validator /cifs/hariri-long/Studies/HCP-Aging/BIDS/sourcedata
1: Files with such naming scheme are not part of BIDS specification. This error is most commonly caused by typos in file names that make them not BIDS compatible. Please consult the specification and make sure your files are named correctly. If this is not a file naming issue (for example when including files not yet covered by the BIDS specification) you should include a ".bidsignore" file in your dataset. Please note that derived (processed) data should be placed in /derivatives folder and source data (such as DICOMS or behavioural logs in proprietary formats) should be placed in the /sourcedata folder. (code: 1 - NOT_INCLUDED)

            ./sub-6005242/func/sub-6005242_task-carit_dir-PA_bold.json
            ./sub-6005242/func/sub-6005242_task-facename_dir-PA_bold.nii.gz
            ./sub-6005242/func/sub-6005242_task-vismotor_dir-PA_bold.nii.gz

...
[plus a jillion more files]

Is there something I seem to be missing here? Thanks!

Error getting OMP_NUM_THREADS

Hi,

I've been trying to run HCPPipelines in a singularity container but I get the following error:

Traceback (most recent call last):
  File "/run.py", line 347, in <module>
    stage_func()
  File "/run.py", line 71, in run_pre_freesurfer
    run(cmd, cwd=args["path"], env={"OMP_NUM_THREADS": str(args["n_cpus"])})
  File "/run.py", line 30, in run
    raise Exception("Non zero return code: %d"%process.returncode)
Exception: Non zero return code: 1

This is my singularity call:
singularity run -H ${tmp_dir} -B ${bids_dir}:/bids -B ${output_dir}:/out ${sing_container} /bids /out participant --participant_label UT1UT10006 --license_key="*CYjRpyK" --n_cpus 3

I've also tried not including --n_cpus in my singularity call, leaving the default, and still get the same error.

FSL version

Reading WashU's HCP Pipelines documentation, FSL 5.0.6 is recommended, as 5.0.7+ causes the Task Analysis to fail. However, the Dockerfile builds FSL 5.0.9; is this intended?

Using the {participant} level mode

Hello,

I am trying to optimize my usage of this amazing tool and take advantage of parallelization on my university's server system. So far, I've been running the script in "participant level mode (for one participant)", but I have several cores available, and I need to run about 60 subjects or so.

Aside from specifying "--n_cpus", how can I use the "{participant}" positional argument? Would this help with parallelization? What options other than "participant" will that argument accept?

Thanks in advance for your help!

Best,
Paola

Error with fMRI Volume

Hello,

I am running into a fatal error that I cannot quite track down. I start the fMRIVolume processing step; the script gets to DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased.sh and then the following error message is printed:

Traceback (most recent call last):
  File "/run.py", line 410, in <module>
    stage_func()
  File "/run.py", line 139, in run_generic_fMRI_volume_processsing
    run(cmd, cwd=args["path"], env={"OMP_NUM_THREADS": str(args["n_cpus"])})
  File "/run.py", line 29, in run
    raise Exception("Non zero return code: %d"%process.returncode)
Exception: Non zero return code: 1

This is the command that I typed:

docker run -v /home/timothy/sandbox_DO_NOT_DELETE/BIDS/263_ETOH_BIDS/local:/bids_dataset:ro -v /home/timothy/sandbox_DO_NOT_DELETE/BIDS/263_ETOH_BIDS/hcp_output/:/outputs bids/hcppipelines /bids_dataset /outputs participant --participant_label 8404 --stages fMRIVolume --n_cpus 1 --license_key 27185

BIDS-Model Event files to FSL EV's

Hi @chrisfilo and @effigies ,

I'm starting to work on some new tasks for HCP and thinking about how to get the fMRIVolume and fMRISurface HCP stages processed using the BIDS-apps container.

The main sticking-point that I see is converting the BIDS event files (long-form run/duration/onset) to FSL EV files (vectors per condition) for HCP. I played around with fitlins and the BIDS model extension a few months ago, but didn't get very far before being pulled away to other things.

Do you think the model spec is mature enough to add as an extension to the run.py adapter you have in this container, e.g. including a bids-model json file, converting .tsv -> ev, and using those ev's in the HCP fMRI stages? Or does this sound like more work than it's worth and something that other people wouldn't necessarily use, and I should just focus on shoe-horning in precomputed ev's to a standard HCPPIPE (non-bids) container?

I suspect it would be straightforward if fitlins has code I could piggyback on to easily create the EVs (as opposed to estimating the whole thing), but not trivial if it doesn't. I poked around and didn't see anything, but maybe there's another package you know of that would do this? Or maybe that would actually be the more useful contribution to the community?

Anyway, I'm probably going to take a crack at this and just wanted to open the issue early. Any suggestions along the way would be appreciated. Thanks!
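For what it's worth, the tsv-to-EV step on its own is small; here is a sketch with pandas (already shipped in the container), ignoring the model-spec question entirely (column and output naming are assumptions):

    import os
    import pandas as pd

    def events_tsv_to_fsl_evs(events_tsv, out_dir):
        # Write one FSL 3-column EV file (onset, duration, weight) per trial_type.
        os.makedirs(out_dir, exist_ok=True)
        events = pd.read_csv(events_tsv, sep="\t")
        for condition, rows in events.groupby("trial_type"):
            ev = rows[["onset", "duration"]].copy()
            ev["weight"] = 1.0
            ev.to_csv(os.path.join(out_dir, "%s.txt" % condition),
                      sep="\t", header=False, index=False)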

bad pull

Hello,

I am getting errors trying to pull this image (see below). Thoughts on why this is?

Best,
Scott

burwell@lts3:~ 9% udocker pull bids/hcppipelines
Downloading layer: sha256:c60055a51d748f34ebd4a7c4872c5d727e0ef96fbf9cd9b248e931b222828c23
Downloading layer: sha256:755da0cdb7d25b74b205ff1eccd26ea4eede693ec7cf2150ae4c1caafe6394b1
Downloading layer: sha256:969d017f67e62ae323a3e8077e3ac4a5b1bf4a27c349148c1f6c28bd6ca3bbb8
Downloading layer: sha256:37c9a911359525fa28aa16715d36954723a8924492b5216cc97d1099251a5023
Downloading layer: sha256:a3d9f847978686a04b694253ea6c6873fb60a495dc742a92d097ccc3c2855641
Downloading layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
Downloading layer: sha256:e181682a933139381e55c59ab909e3cebfc34514a07c76492a0543e209955cd8
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
Downloading layer: sha256:6243852112bddcf8cdcaac4a148f877de8ecbd555bdf32c1819949e5037a6886
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:915b678c9ba22a2e07c60857f558e975b99719e1299026ff7c8c92554aef5c30
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:825e18fbb6991a9fe54c29022675c97b3dad9291791adb118b167a029f4371f0
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:676ed76fca1eac6f026ef41a288a5b5e4e3dee87a4b19ff89b7c6e5cb19e3da5
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:ec91459debe6273c95ecfbe28ffd3dc49daa90a0d9239fcc85a14e9027327e7d
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
Downloading layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
Downloading layer: sha256:8e3ad1b8a71bb2b884d8e38f6fb582bad28eef30c3f71bc57bbebad491445dfe
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:9e2eed054ccbf087ca3a597dcb617b49a4e77cecc1655b8a3699ed475b128545
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
Downloading layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
Downloading layer: sha256:fc012fef585cb8a7bb16a82e88806e09e2d1b22315e17a4dd801e5983cd57dcb
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
Downloading layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
Downloading layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
Downloading layer: sha256:a6211a86ba4f27ea770b5c1abfd5c4f5dacbcc63e1ce1861a246a2f956dfef86
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:d52aa2b163ad7696c7f90dc322dfbf34a6d49bb2dc381d1a4a33d028d687c1b3
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:07505ae8f4eaf0f2851bd84bf3bc933d44706b8d981359665c4cc979fd6ceece
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:9fa2eedcd764378a973d0600588d5c713af198398413b11ebcc0c5e9cd6091b4
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
Downloading layer: sha256:e559a577c20a136c4d7afe95f75970ff8048acdaa71cd3e7944224f2e690632e
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:6d19080e63a7fca396af7295f13f2b69bc7eab64e47cf2c880a0ce969f1d49e8
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:06ce73715c8574b29e4e7a8f0d2c176cd717d3d2a23677afc7becab5f189d524
Error: in download: HTTP/1.1 400 Bad Request
Downloading layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4

KeyError: 'DwellTime'

With the dockerized version v4.1.3-1 I get the following error:

  File "/run.py", line 275, in <module>
    t1_spacing = layout.get_metadata(t1ws[0])["DwellTime"]
KeyError: 'DwellTime'

Any advice? Thanks!
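A possible softening on the run.py side (a sketch; the "NONE" fallback mirrors what the PreFreeSurfer logs in other issues show for unknown sample spacing):

    def sample_spacing(layout, image_path):
        # Fall back to the HCP scripts' "NONE" convention instead of raising
        # KeyError when the sidecar has no DwellTime field.
        return layout.get_metadata(image_path).get("DwellTime", "NONE")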

Type error in the pre freesurfer step

I got an error when using the FreeSurfer pipeline that should not be too difficult to fix. If there is no fieldmap for the T1/T2, the parameters are set to "NONE", which triggers the error in the function run_pre_freesurfer(): the function requires integer, not string, parameters.
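A sketch of the kind of guard being asked for; the exact formatting PreFreeSurfer expects is an assumption, not taken from run.py:

    def fmt_number_or_none(value):
        # PreFreeSurfer accepts the literal string "NONE" for missing parameters,
        # so only genuine numbers get formatted as numbers.
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            return "%.9f" % value
        return "NONE"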

Reduce the size of the Docker image

Hi,

The size of the Docker image pulled from Docker hub is huge, since it contains FSL, FreeSurfer, etc. However, as far as I can tell, there are many, many files from those distributions that are not used by the pipelines.

By deleting a lot of unused files and staging the build of the Docker image, I have managed to reduce the image size from 26.9 GB to 9.1 GB. A smaller Docker image not only saves disk space, it also means faster launching of new containers.

I have tested it on some datasets, but it could be (although I think it is unlikely) that running it on other datasets will fail, because of their peculiarities, due to the files I removed from the Docker image.

So my questions are:

  1. Would you be interested in reducing the image size?
  2. If so, do you have a test dataset I could check my image against, or could you check it against your test dataset?
    You can pull my image from DockerHub:
    docker pull cbinyu/bids_hcppipelines:latest
    After running the tests I could start a pull request incorporating my changes.

Validate stage options

Check that the user-provided options to the --stages argument are valid before proceeding with the pipeline.
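A sketch of one way to do this with argparse choices; the stage names come from the usage shown in other issues, and the default list is an assumption:

    import argparse

    parser = argparse.ArgumentParser()
    # Reject typos like "PostFreesurfer" at parse time instead of mid-pipeline.
    parser.add_argument("--stages", nargs="+",
                        choices=["PreFreeSurfer", "FreeSurfer", "PostFreeSurfer",
                                 "fMRIVolume", "fMRISurface"],
                        default=["PreFreeSurfer", "FreeSurfer", "PostFreeSurfer",
                                 "fMRIVolume", "fMRISurface"])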

No T2w files found for sub-{subject_label}. Consider --procesing_mode [legacy | auto ].

Hi,

I am trying to run BIDS_Apps/HCPPipelines to process fMRI data. I am testing with one subject that has two sessions and 4 resting-state runs. We collected T2w and T1w for both sessions, but I am getting the following error.

File "/mnt/tools/bids_HCP/HCPPipelines/run.py", line 290
f"No T2w files found for sub-{subject_label}. Consider --procesing_mode [legacy | auto ]."

Then I chose --processing_mode[auto] and I still get the same error. However, we want to process this data using --processing_mode[hcp].

Here's the command I used to run HCPPipeline - BIDS_Apps

python /mnt/tools/bids_HCP/HCPPipelines/run.py [--processing_mode {hcp}] [--stages {PreFreeSurfer,FreeSurfer,PostFreeSurfer,fMRIVolume,fMRISurface}] [--coreg {FS}] --license_key $PWD/license.txt [-v] /mnt/fMRIprep/scR21_for_bids_app /mnt/fMRIprep/scR21_HCP_bids_outputs --license_key $PWD/license.txt participant

This input dataset is BIDS compliant.

Can anyone help me to fix this issue?

Thank you
Best Regards
Sameera

Error in finding an MNI template (MNI152_T1_1.0mm)

Hi,
I'm trying to run the HCPpipeline on one of our servers and I get the following error message:

Image Exception : #22 :: ERROR: Could not open image /opt/HCP-Pipelines/global/templates/MNI152_T1_1.0mm
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
/opt/HCP-Pipelines/PreFreeSurfer/scripts/ACPCAlignment.sh: line 84: 2598 Aborted (core dumped) ${FSLDIR}/bin/flirt -interp spline -in "$WD"/robustroi.nii.gz -ref "$Reference" -omat "$WD"/roi2std.mat -out "$WD"/acpc_final.nii.gz -searchrx -30 30 -searchry -30 30 -searchrz -30 30

In the output directory, 3 new directories (MNINonLinear, T1w and T2w) are generated and the T1 files are also copied, but the xfms directories within them are empty.

Any idea what may be causing the issue and how I can fix it?
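A quick check that the template named in the error is actually present in the container (a debugging sketch; the path is taken from the error message, and FSL-style images may carry a .nii.gz extension):

    import glob

    # An empty list here would point at a broken container or installation
    # rather than at ACPCAlignment.sh itself.
    print(glob.glob("/opt/HCP-Pipelines/global/templates/MNI152_T1_1.0mm*"))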

list type for "IntendedFor" field

In bids/hcppipelines:v3.17.0-12, there is an error when there is more than one target scan for a fieldmap:

Traceback (most recent call last):
  File "/run.py", line 239, in <module>
    fieldmap_set = layout.get_fieldmap(t1ws[0])
  File "/usr/local/lib/python2.7/dist-packages/bids/grabbids/bids_layout.py", line 70, in get_fieldmap
    if path.endswith(metadata["IntendedFor"]):
TypeError: endswith first arg must be str, unicode, or tuple, not list


FYI: In the JSON file: "IntendedFor": ["func/sub-01_task-emotionregulation_run-01_bold.nii.gz", "func/sub-01_task-rest_run-01_bold.nii.gz"]

This passes the bids-validator as BIDS compliant.

Example log file is attached.

log1102.txt

Thanks!
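A sketch of a fix on the lookup side, accepting both a single string and a list for IntendedFor (the function name is illustrative; the real change would belong in pybids/grabbids or run.py):

    def intended_for_matches(path, metadata):
        # IntendedFor may be a plain string or a list of relative paths.
        intended = metadata.get("IntendedFor", [])
        if isinstance(intended, str):
            intended = [intended]
        return any(path.endswith(target) for target in intended)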

Running FAST segmentation Exception: Not enough classes detected to init KMeans

Hello. We are running the HCP pipeline via docker://bids/hcppipelines:v4.1.3-1. When we feed it T1w and T2w input files, we run into the following error message.

Fri Feb 18 18:17:36 EST 2022:T2wToT1wReg.sh: START: T2w2T1Reg
Running FAST segmentation
Exception: Not enough classes detected to init KMeans
Image Exception : #63 :: No image files match: /output/sub-NDARINV0LEM88KP/T2w/T2wToT1wReg/T2w2T1w_fast_pve_2
terminate called after throwing an instance of 'std::runtime_error'
  what():  No image files match: /output/sub-NDARINV0LEM88KP/T2w/T2wToT1wReg/T2w2T1w_fast_pve_2
/usr/local/fsl/bin/epi_reg: line 320: 13055 Aborted                 $FSLDIR/bin/fslmaths ${vout}_fast_pve_2 -thr 0.5 -bin ${vout}_fast_wmseg
Image Exception : #63 :: No image files match: /output/sub-NDARINV0LEM88KP/T2w/T2wToT1wReg/T2w2T1w_fast_wmseg
terminate called after throwing an instance of 'std::runtime_error'
  what():  No image files match: /output/sub-NDARINV0LEM88KP/T2w/T2wToT1wReg/T2w2T1w_fast_wmseg
/usr/local/fsl/bin/epi_reg: line 329: 13086 Aborted                 $FSLDIR/bin/fslmaths ${vout}_fast_wmseg -edge -bin -mas ${vout}_fast_wmseg ${vout}_fast_wmedge

I think this means that FAST segmentation is failing to generate the output file T2w2T1w_fast_pve_2 because something is not right with the input files. Is that correct? If so, do you have any suggestions on how we should troubleshoot this problem?

We are running the container with the following options

https://github.com/brainlife/app-hcp-pipeline/blob/v4.1.3/main

singularity exec -e \
    -B `pwd`/bids:/bids \
    -B `pwd`/output:/output \
    docker://bids/hcppipelines:v4.1.3-1 \
    ./run.py /bids /output participant \
    --n_cpus 8 \
    --stages $stage \
    --license_key "$FREESURFER_LICENSE" \
    --participant_label $sub \
    --processing_mode $processing_mode \
    $skipbidsvalidation

We are running a slightly modified version of run.py for our jobs and here is the current content > https://github.com/brainlife/app-hcp-pipeline/blob/v4.1.3/run.py
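A quick sanity check on the image feeding FAST (a debugging sketch with nibabel, which the container ships; the T2w2T1w filename is inferred from the epi_reg naming in the log, so adjust it if it differs):

    import nibabel as nib

    img = nib.load("/output/sub-NDARINV0LEM88KP/T2w/T2wToT1wReg/T2w2T1w.nii.gz")
    data = img.get_fdata()
    # FAST's KMeans init needs more than one intensity class; an (almost) constant
    # or empty registered T2w usually points at a failed registration or brain mask.
    print("min/max:", data.min(), data.max(), "nonzero voxels:", int((data != 0).sum()))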

enh: Documentation Needed on BIDS layout or change BIDS layout requirements

Hello,

I ran into an issue with the fMRI volume portion of the BIDS app. At first I thought there might be some sort of error in the code, but eventually I realized that the level of metadata verbosity I chose for the JSON sidecars during my BIDS conversion caused the script to fail.

I used the heudiconv tool for conversion; however, I used the --minmeta flag to minimize the amount of metadata collected, in order to limit the storage space taken up by my BIDS conversions.

It would be beneficial to users with storage constraints to be able to run this app with minimal BIDS metadata.
