
ReproMan


ReproMan aims to simplify the creation and management of computing environments in Neuroimaging. While it concentrates on Neuroimaging use cases, it is by no means limited to that field of science, and its tools should find utility in other fields as well.

Status

ReproMan is under rapid development. While the code base is still growing, the focus is increasingly shifting towards robust and safe operation with a sensible API. There has been no major public release yet, as organization and configuration are still subject to considerable reorganization and standardization.

See CONTRIBUTING.md if you are interested in internals and/or contributing to the project.

Installation

ReproMan requires Python 3 (>= 3.8).

Linux and OSX (Windows yet TODO) - via pip

By default, installation via pip (pip install reproman) installs the core functionality of reproman, allowing for managing datasets etc. Additional installation schemes are available, so you can request an enhanced installation via pip install 'reproman[SCHEME]', where SCHEME can be

  • tests to also install dependencies used by the reproman unit-test battery
  • full to install all possible dependencies, e.g. DataLad

Installation through pip does not provide some external dependencies (e.g. docker, singularity, etc.); for those, please refer to the next section.

Debian-based systems

On Debian-based systems we recommend enabling NeuroDebian, from which we will soon provide recent releases of ReproMan. We will also provide backports of all necessary packages from that repository.

Dependencies

Python 3.8+ with header files, which may be needed to build some extensions without wheels. They are provided by python3-dev on Debian-based systems and python-devel on Red Hat systems.

Our setup.py and the corresponding packaging describe all necessary Python dependencies. On Debian-based systems we recommend enabling NeuroDebian, since we use it to provide backports of recent fixed external modules we depend upon. Additionally, if you would like to develop and run our test battery, see CONTRIBUTING.md regarding additional dependencies.

A typical workflow for reproman run

This example is heavily based on the "Typical workflow" example created for ///repronim/containers, which we refer you to for more about the YODA principles etc. In this reproman example we will pursue exactly the same goal -- running MRIQC on a sample dataset -- but this time utilizing ReproMan's ability to run the computation remotely. DataLad and ///repronim/containers will still be used for data and container logistics, while reproman will establish a little HTCondor cluster in the AWS cloud, run the analysis, and fetch the results.

Step 1: Create the HTCondor AWS EC2 cluster

If this is the first time you are using ReproMan to interact with AWS cloud services, you should first provide ReproMan with secret credentials for AWS. To do that, edit its configuration file (~/.config/reproman/reproman.cfg on Linux, ~/Library/Application Support/reproman/reproman.cfg on OSX)

[aws]
access_key_id = ...
secret_access_key = ...

filling out the ...s. If reproman fails to find this information, the error message Unable to locate credentials will appear.

Disclaimer/Warning: Never share or post those secrets publicly.
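This INI-style file can be read with Python's standard configparser; a minimal illustrative sketch of locating those credentials (the aws_credentials helper is hypothetical, not reproman's actual lookup code, and the values are placeholders):

```python
import configparser

# Example content mirroring ~/.config/reproman/reproman.cfg (placeholder values)
cfg_text = """
[aws]
access_key_id = AKIA...EXAMPLE
secret_access_key = not-a-real-secret
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)

def aws_credentials(cfg):
    """Return (key_id, secret), or None if the [aws] section is incomplete."""
    if not cfg.has_section("aws"):
        return None
    try:
        return cfg["aws"]["access_key_id"], cfg["aws"]["secret_access_key"]
    except KeyError:
        return None

creds = aws_credentials(cfg)
```

If the section or either option is missing, the helper returns None, which is the situation that would surface as the "Unable to locate credentials" error mentioned above.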

Run (this needs to be done only once; it makes the resource available for reproman login or reproman run):

reproman create aws-hpc2 -t aws-condor -b size=2 -b instance_type=t2.medium

to create a new ReproMan resource: 2 AWS EC2 instances, with HTCondor installed (we use NITRC-CE instances).

Disclaimer/Warning: It is important to monitor your cloud resources in the cloud provider dashboard(s) to ensure there are no runaway instances etc., which helps avoid incurring heavy costs for cloud services.

Step 2: Create analysis DataLad dataset and run computation on aws-hpc2

The following script is an exact replica of the one from ///repronim/containers, where only the datalad containers-run command, which fetches data and runs the computation locally and serially, is replaced with reproman run, which publishes the dataset (without data) to the remote resource, fetches the data there, runs the computation via HTCondor in parallel across 2 nodes, and then fetches the results back:

#!/bin/sh
(  # so it could be just copy pasted or used as a script
PS4='> '; set -xeu  # to see what we are doing and exit upon error
# Work in some temporary directory
cd $(mktemp -d ${TMPDIR:-/tmp}/repro-XXXXXXX)
# Create a dataset to contain mriqc output
datalad create -d ds000003-qc -c text2git
cd ds000003-qc
# Install our containers collection:
datalad install -d . ///repronim/containers
# (optionally) Freeze container of interest to the specific version desired
# to facilitate reproducibility of some older results
datalad run -m "Downgrade/Freeze mriqc container version" \
    containers/scripts/freeze_versions bids-mriqc=0.16.0
# Install input data:
datalad install -d . -s https://github.com/ReproNim/ds000003-demo sourcedata
# Setup git to ignore workdir to be used by pipelines
echo "workdir/" > .gitignore && datalad save -m "Ignore workdir" .gitignore
# Execute desired preprocessing in parallel across two subjects
# on remote AWS EC2 cluster, creating a provenance record
# in git history containing all condor submission scripts and logs, and
# fetching them locally
reproman run -r aws-hpc2 \
   --sub condor --orc datalad-pair \
   --jp "container=containers/bids-mriqc" --bp subj=02,13 --follow \
   --input 'sourcedata/sub-{p[subj]}' \
   --output . \
   '{inputs}' . participant group -w workdir --participant_label '{p[subj]}'
)
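In the reproman run call above, --bp subj=02,13 is a batch parameter: the comma-separated values fan out into one job per value (and, presumably, the Cartesian product when several --bp options are given), filling templates such as 'sourcedata/sub-{p[subj]}'. A minimal sketch of that expansion idea (not reproman's actual implementation):

```python
from itertools import product

def expand_batch_params(batch_params):
    """Expand {'subj': '02,13'} into [{'subj': '02'}, {'subj': '13'}];
    multiple parameters yield the Cartesian product of their value lists."""
    names = sorted(batch_params)
    value_lists = [batch_params[n].split(",") for n in names]
    return [dict(zip(names, combo)) for combo in product(*value_lists)]

jobs = expand_batch_params({"subj": "02,13"})
# each parameter dict then fills templates like the --input spec above
inputs = ["sourcedata/sub-{p[subj]}".format(p=p) for p in jobs]
```

With two values for subj this yields two jobs, which is why the cluster of size 2 above can process both subjects in parallel.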

The ReproMan: Execute documentation section provides more information on the principles underlying the reproman run command.

Step 3: Remove resource

Whenever everything is computed and fetched, and you are satisfied with the results, use reproman delete aws-hpc2 to terminate the remote cluster in AWS, so that it does not cause unnecessary charges.

License

MIT/Expat

Disclaimer

ReproMan is in a beta stage -- the majority of the functionality is usable, but documentation and API enhancements are a work in progress. Please do not be shy about filing an issue or a pull request. See CONTRIBUTING.md for guidance.

containers's People

Contributors

adswa, asmacdo, bpinsard, chaselgrove, dependabot[bot], jwodder, mjtravers, yarikoptic


containers's Issues

special treatment to stop underlying docker container if running via shim?

follow up to datalad/datalad-container#144
where I tried to just kill our shim, but the docker container seems to remain running:

(git-annex)lena:~/.tmp/repro-6XkWWkE/ds000003-qc[master]git-annex
$> containers/scripts/singularity_cmd: line 1: warning: wait_for: recursively setting old_sigint_handler to wait_sigint_handler: running_trap = 1

$> kill 3217993
kill: kill 3217993 failed: no such process

$> docker ps
CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS              PORTS               NAMES
39e56a15687a        repronim-containers-sing:latest   "/entrypoint.sh run …"   5 minutes ago       Up 5 minutes                            practical_gagarin

Add docker images

ATM it is singularity-only, but we could support docker as well -- largely to make the setup usable "natively" on OSX and Windows.

Support M1 (arm64) Macs

so we need to

  • establish building both amd64 and arm64 docker images (now only amd64, and likely not automated)
  • adjust singularity_cmd to switch to -arm64 whenever uname -m says arm64.
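The uname -m switch from the second bullet could be a tiny helper along these lines (the -arm64 image-name suffix follows the bullet above; also matching aarch64, and defaulting everything else to the amd64 image, are assumptions):

```python
import platform

def image_suffix(machine=None):
    """Map a `uname -m` value to an image-name suffix: arm64/aarch64 hosts
    get '-arm64', everything else falls back to the (default) amd64 image."""
    machine = machine or platform.machine()
    return "-arm64" if machine in ("arm64", "aarch64") else ""

# e.g. on an M1 Mac the shim would pick the -arm64 variant of an image name
name = "bids-mriqc--0.16.0" + image_suffix("arm64")
```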

NB I was wondering if we could move the provisioning of the image we do in https://github.com/ReproNim/containers/blob/master/scripts/Dockerfile.singularity-shim to somehow be done at initial run-time -- then we could avoid needing to build our image at all. But it would also make it less trivial to update and harder to troubleshoot, so let's stay with built images, but maybe move over to quay.io for them.

Robustify deployment or script to not produce broken singularity images

Prompted by #65. See my last comment there for more detail. If the docker partition runs out of space, create_singularities doesn't detect that and produces an incomplete/broken image. Could be due to a bug in the elderly singularity we use there (need to upgrade smaug finally to upgrade singularity as well), or it might be that our script doesn't handle failed singularity execution properly.

Add fMRIprep 22.0.1

A new fMRIprep patch was just released that some folks would like to use for projects - can this new image be added?

"Remote origin not usable by git-annex"

When I try to install this dataset, I get the following errors:

(main) login4.frontera(1136)$ datalad install https://github.com/ReproNim/containers.git
[INFO ] Remote origin not usable by git-annex; setting annex-ignore
[INFO ] https://github.com/ReproNim/containers.git/config download failed: Not Found
install(ok): /scratch1/03201/jbwexler/openneuro_derivatives/containers (dataset)

It still seems to work properly and allows me to datalad get files. However, things periodically break and I have to delete the entire dataset and reinstall. For example, I recently updated the dataset (datalad update --merge) and tried to get the newest fmriprep image, but ran into the following error:

(main) login4.frontera(1116)$ datalad get bids-fmriprep--20.2.6.sing
get(error): images/bids/bids-fmriprep--20.2.6.sing (file) [not available; (Note that these git remotes have annex-ignore set: origin)]

singularity-shim docker image is too large

I see it as

$> docker images | grep singularity-shim
mjtravers/singularity-shim                                               latest                             23ed220c0cd1        2 weeks ago         919MB
mjtravers/singularity-shim                                               <none>                             16e22847f322        5 weeks ago         919MB
mjtravers/singularity-shim                                               <none>                             6a7d567c53eb        6 weeks ago         919MB

so 900MB. The original base images on https://hub.docker.com/r/singularityware/singularity/tags are 356MB standard, and there is some -slim of 65MB. Worth checking what the difference is -- maybe -slim would be good enough for us. But I also wonder why I have a 919MB image ;) any ideas @mjtravers ?

Typical workflow failing

The typical workflow in the README fails for me.

(venv) ch:~/datalad/data/ds000003-qc$ datalad containers-run -n containers/bids-mriqc --input sourcedata --output . '{inputs}' '{outputs}' participant group
[INFO   ] Making sure inputs are available (this may take some time) 
[WARNING] Running get resulted in stderr output: download failed: Bad Request
download failed: Bad Request
download failed: Bad Request
git-annex: get: 1 failed
 
[ERROR  ] from web...; from web...; from web...; Unable to access these remotes: web; Try making some of these repositories available:; 	00000000-0000-0000-0000-000000000001 -- web;  	7c7289ab-ad08-41eb-9318-f28e4fd957e7 -- yoh@smaug:/mnt/btrfs/datasets/datalad/crawl/repronim/containers; ; (Note that these git remotes have annex-ignore set: origin) [get(/home/ch/datalad/data/ds000003-qc/containers/images/bids/bids-mriqc--0.15.2.sing)] 
get(error): /home/ch/datalad/data/ds000003-qc/containers/images/bids/bids-mriqc--0.15.2.sing (file) [from web...; from web...; from web...; Unable to access these remotes: web; Try making some of these repositories available:; 	00000000-0000-0000-0000-000000000001 -- web;  	7c7289ab-ad08-41eb-9318-f28e4fd957e7 -- yoh@smaug:/mnt/btrfs/datasets/datalad/crawl/repronim/containers; ; (Note that these git remotes have annex-ignore set: origin)]
[ERROR  ] dataset containing given paths is not underneath the reference dataset Dataset(/home/ch/datalad/data/ds000003-qc): [PosixPath('/home/ch/datalad/data/ds000003-qc')] [status(/home/ch/datalad)] 
status(error): /home/ch/datalad [dataset containing given paths is not underneath the reference dataset Dataset(/home/ch/datalad/data/ds000003-qc): [PosixPath('/home/ch/datalad/data/ds000003-qc')]]
[INFO   ] == Command start (output follows) ===== 
ERROR  : Image path containers/images/bids/bids-mriqc--0.15.2.sing doesn't exist: No such file or directory
ABORT  : Retval = 255
[INFO   ] == Command exit (modification check follows) ===== 
[INFO   ] The command had a non-zero exit code. If this is expected, you can save the changes with 'datalad save -d . -r -F .git/COMMIT_EDITMSG' 
CommandError: ''containers/scripts/singularity_cmd run containers/images/bids/bids-mriqc--0.15.2.sing '"'"'sourcedata'"'"' '"'"''"'"' participant group'' failed with exitcode 255 under /home/ch/datalad/data/ds000003-qc

System is Debian 10; Datalad is Commit b4dd76b (reports Version 0.13.1); git is Version 2.20.1; datalad-container is Version 1.0.1.

Establish testing of containerized apps

Could probably be

  • regression tests
    • eventually some more advanced "testkraut"-like tests which annotate various steps in the execution
      (maybe reproman's ability to trace into containers could be of help here, although ATM we do not react to every sub-command execution there)
  • per family generic tests (e.g. to verify that all bids-apps or boutiques have consistent interface)

Existing tests (not that many! ;) )

So it seems that we need a testing framework which, for image(s), would list a set of "test drivers" to pick up/execute tests, so we could point to various test specifications and get them executed.

  • IIRC the boutiques testing implementation had some nice "integration" with pytest but I've not tried it yet
  • good old testkraut by @mih defines a test specification (see testcase.py with examples under localtests) and populates tests for nose (or pytest!) to pick up -- so possibly something to base things on
NB with `pytest` it is possible to execute individual ones! (nose relies on import, fails)
(git)hopa:~exppsy/testkraut[master]git
$> python -m pytest -s -v testkraut/tests/test_localtests.py::LocalDogFoodTests::test_check_assertions                                  
================================================================= test session starts ==================================================================
platform linux2 -- Python 2.7.14+, pytest-3.10.1, py-1.7.0, pluggy-0.8.0 -- /home/yoh/deb/gits/pkg-exppsy/testkraut/venvs/dev/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/yoh/deb/gits/pkg-exppsy/testkraut/.hypothesis/examples')
rootdir: /home/yoh/deb/gits/pkg-exppsy/testkraut, inifile:
plugins: xdist-1.26.1, localserver-0.5.0, forked-1.0.1, cov-2.6.0, hypothesis-3.71.11, celery-4.2.1
collected 1 item                                                                                                                                       

testkraut/tests/test_localtests.py::LocalDogFoodTests::test_check_assertions PASSED

Attn @djarecka , @glatard

Quick help on container output

Does datalad containers-run ... participant without specifying --participant-label process all subjects? And does it create the BIDS folder structure in the output folder (i.e., sub-* folders)?

I ran mriqc with a --participant-label and it worked fine, but I don't want to try the full dataset without knowing what the output will be, and I don't like to play around with datalad runs, which will create extra commits and maybe more data in .git folders.

Thank you.

need to account for renaming

reported by @dnkennedy

datalad containers-run \
    -n containers/repronim-simple-workflow \
    --input 'rawdata/sub-RC4*/ses-1/anat/sub-*_ses-1_T1w.nii.gz' \
    code/simple_workflow/run_demo_workflow.py \
    -o . -w data/workdir --plugin_args 'dict(n_procs=10)' '{inputs}'

I get the following warning:
WARNING: DEPRECATED USAGE: Forwarding
SINGULARITYENV_DATALAD_CONTAINER_NAME as environment variable will not be
supported in the future, use APPTAINERENV_DATALAD_CONTAINER_NAME instead

indeed we do

$> git grep SINGULARITYENV_                      
scripts/singularity_cmd:    export SINGULARITYENV_DATALAD_CONTAINER_NAME="$DATALAD_CONTAINER_NAME

and we might need to use another name based on the singularity version (not sure if it would not puke if we set both)
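One mechanical way out would be to export the value under both prefixes and let whichever runtime is present pick up its own; a sketch of just that forwarding step (whether setting both is safe is exactly the open question above):

```python
import os

def forward_container_env(name, value,
                          prefixes=("SINGULARITYENV_", "APPTAINERENV_")):
    """Return env entries exporting `value` under both the legacy singularity
    prefix and the new apptainer prefix, so either runtime forwards it."""
    return {prefix + name: value for prefix in prefixes}

env = forward_container_env("DATALAD_CONTAINER_NAME", "bids-mriqc")
os.environ.update(env)  # what the shim would do before exec'ing the container
```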

docker Singularity "proxy" to invoke singularity containers on non-Linux systems via docker

Since singularity is Linux-specific, its images aren't usable on OSX/Windows. But I wonder if there could be a docker image/container (which we could carry here as well) with the sole purpose of "proxying" the invocation to singularity inside? Then https://github.com/ReproNim/containers/blob/master/scripts/singularity_cmd could even do the needed proxying.

on a quick try -- 2.6.1 singularity cannot work in docker
root@3dd4a262774a:/tmp# ./ReproNim-reproin-master-latest.simg 
Singularity: action-suid (U=0,P=4648)> Could not virtualize file system namespace: Operation not permitted

ERROR  : Could not virtualize file system namespace: Operation not permitted
Singularity: action-suid (U=0,P=4648)> Retval = 255

ABORT  : Retval = 255
root@3dd4a262774a:/tmp# apt-cache policy singularity-container
singularity-container:
  Installed: 2.6.1-2~nd90+1
  Candidate: 2.6.1-2~nd90+1
  Version table:
 *** 2.6.1-2~nd90+1 500
        500 http://neuro.debian.net/debian stretch/main amd64 Packages
        100 /var/lib/dpkg/status

but apparently it is just a matter of --privileged docker execution! and then it works. So we can/should work out such an adapter and just document that it requires --privileged docker execution. Aspects to keep in mind

TODOs:

  • Dockerfile to produce such proxy-to-singularity docker image/container
  • scripts/singularity_cmd_via_docker helper
    • need for bind-mounting paths into the docker container first, to then be bind-mounted into singularity... Since our helper has control over them (-W "$tmpdir" -H "$updir/binds/HOME" -B $PWD --pwd "$PWD") and we do not care to expose "outside" paths, as long as there are no more binds sneaked into the call -- we should be all set to just bind-mount $PWD, $updir/binds/HOME, and $tmpdir
    • (possibly?) need for mapping the user ID into some environment internal to docker... maybe even chmoding outputs upon completion? I have had little to no experience with that yet, besides eventually chmodding those root-owned files ;) One approach could be to have a dedicated user within the "docker proxy" image, and bind the current user to it?

related

utopia is the target: enable testing on Travis OSX instances

the tricky part might be the installation of

git-annex

Here is my elderly script for OSX to install from a .dmg, but maybe there is a better way now:

datalads-imac:~ yoh$ cat upgrade-annex.sh 
#!/bin/bash

curver=$(git annex version | awk '/version:/{print $3;}' | sed -e 's,-.*,,g')
annexdir=/Applications/git-annex.app
curverdir=$annexdir.$curver

rm -f git-annex.dmg
# release
# curl -O https://downloads.kitenet.net/git-annex/OSX/current/10.10_Yosemite/git-annex.dmg
# daily build
curl -O https://downloads.kitenet.net/git-annex/autobuild/x86_64-apple-yosemite/git-annex.dmg

hdiutil attach git-annex.dmg 

if [ ! -z "$curver" ] && [ ! -e "$curverdir" ]; then
	mv $annexdir $curverdir
fi

rsync -a /Volumes/git-annex/git-annex.app /Applications/
hdiutil  detach /Volumes/git-annex/

docker

although might already be there

Add neurodesk singularity containers to the collection

Just FYI @stebo85 - the Mr. NeuroDesk

docker singularity_cmd: Yarik's failure to succeed - quoting is not fully in effect

it works within docker/singularity sandwich:
$> docker run -it --privileged --rm -e UID=47521 -e GID=47522 -v /home/yoh/proj/repronim/containers:/home/yoh/proj/repronim/containers -v /home/yoh/proj/repronim/containers/binds/HOME:/home/yoh/proj/repronim/containers/binds/HOME -w /home/yoh/proj/repronim/containers mjtravers/singularity-shim:latest exec -e -c -B /home/yoh/proj/repronim/containers -H /home/yoh/proj/repronim/containers/binds/HOME --pwd /home/yoh/proj/repronim/containers scripts/tests/arg-test.simg  ash                                                                
Singularity> /singularity 'foo bar'
arg #1=<foo bar>
Singularity> /singularity 'foo bar' blah 45.5 /dir 'bar;' 'foo&' '${foo}'
arg #1=<foo bar>
arg #2=<blah>
arg #3=<45.5>
arg #4=</dir>
arg #5=<bar;>
arg #6=<foo&>
arg #7=<${foo}>
Singularity> %                                                                                                                                                                                              
but if passed to the actual docker call, then somewhere it loses it:
$> docker run -it --privileged --rm -e UID=47521 -e GID=47522 -v /home/yoh/proj/repronim/containers:/home/yoh/proj/repronim/containers -v /home/yoh/proj/repronim/containers/binds/HOME:/home/yoh/proj/repronim/containers/binds/HOME -w /home/yoh/proj/repronim/containers mjtravers/singularity-shim:latest exec -e -c -B /home/yoh/proj/repronim/containers -H /home/yoh/proj/repronim/containers/binds/HOME --pwd /home/yoh/proj/repronim/containers scripts/tests/arg-test.simg ash /singularity 'foo bar' blah 45.5 /dir 'bar;' 'foo&' '${foo}'
arg #1=<foo>
arg #2=<bar>
arg #3=<blah>
arg #4=<45.5>
arg #5=</dir>
arg #6=<bar>
and it does get it back if I wrap the entire call into a single argument:
$> docker run -it --privileged --rm -e UID=47521 -e GID=47522 -v /home/yoh/proj/repronim/containers:/home/yoh/proj/repronim/containers -v /home/yoh/proj/repronim/containers/binds/HOME:/home/yoh/proj/repronim/containers/binds/HOME -w /home/yoh/proj/repronim/containers mjtravers/singularity-shim:latest exec -e -c -B /home/yoh/proj/repronim/containers -H /home/yoh/proj/repronim/containers/binds/HOME --pwd /home/yoh/proj/repronim/containers scripts/tests/arg-test.simg "ash /singularity 'foo bar' blah 45.5 /dir 'bar;' 'foo&' '${foo}'" 
arg #1=<foo bar>
arg #2=<blah>
arg #3=<45.5>
arg #4=</dir>
arg #5=<bar;>
arg #6=<foo&>
arg #7=<>

my docker ATM

 *** 18.09.5+dfsg1-1 300
        300 http://http.debian.net/debian experimental/main amd64 Packages
        100 /var/lib/dpkg/status

so I guess the next thing to do is to try 18.09.6 from docker itself? (that is what you had, @kyleam and @mjtravers, right?)
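The behavior above is consistent with one extra level of shell word-splitting happening inside the sandwich: each layer that hands arguments to a shell must re-quote them first. Python's shlex shows the transformation that is needed (an illustration of the quoting problem, not the shim's actual code):

```python
import shlex

args = ["/singularity", "foo bar", "blah", "45.5", "/dir",
        "bar;", "foo&", "${foo}"]

# Naive joining loses the grouping: a later shell pass re-splits 'foo bar'
naive = " ".join(args)

# shlex.join (Python 3.8+) re-quotes each argument so that exactly one
# round of shell evaluation restores the original argument vector
quoted = shlex.join(args)
restored = shlex.split(quoted)
```

This mirrors the transcripts above: the wrapped-in-a-single-argument invocation works because the whole command undergoes only one shell evaluation, while the pass-through call effectively gets split twice.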

issue with fmriprep v21.0.0 image

The image is only 2.2GB whereas it should be 5.1GB. And it gives the following error when run: "/.singularity.d/runscript: 3: exec: /opt/conda/bin/fmriprep: Exec format error"

I don't think it's an issue with fmriprep as the image works fine when downloaded this way: singularity build fmriprep-21.0.0.simg docker://nipreps/fmriprep:21.0.0

bids/hcppipelines does not expose entrypoint, when original docker container does

(dev3) @beast:~/containers$ docker run -it --rm bids/hcppipelines --version
HCP Pielines BIDS App version v3.17.0-18

(dev3) @beast:~/containers$ singularity run images/bids/bids-hcppipelines--3.17.0-18.sing --version
GNU bash, version 4.3.30(1)-release (x86_64-pc-linux-gnu)
...

as you can see -- the entry point is "bash", not the "app"... something is not right... probably because of the "custom" Singularity file -- we are not just pulling from docker into an image, we are creating a singularity image based on the docker image and do not provide an entry point whatsoever! but it is not clear then how it works for others, e.g.:

(dev3) amanelis@beast:~/containers$ singularity run images/bids/bids-mriqc--0.15.1.sing --help | head
usage: mriqc [-h] [--version]
             [--participant_label [PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]]
             [--session-id [SESSION_ID [SESSION_ID ...]]]
             [--run-id [RUN_ID [RUN_ID ...]]]
             [--task-id [TASK_ID [TASK_ID ...]]]
             [-m [MODALITIES [MODALITIES ...]]] [--dsname DSNAME]
             [-w WORK_DIR] [--verbose-reports] [--write-graph] [--dry-run]
             [--profile] [--use-plugin USE_PLUGIN] [--no-sub] [--email EMAIL]
             [-v] [--webapi-url WEBAPI_URL] [--webapi-port WEBAPI_PORT]
...

so the issue might be container-specific. Would be nice to automate tests for all BIDS apps to verify they provide the proper interface

"Semantic" shim to provide "ReproNim"ification of outputs

so we could e.g. feed results of mriqc etc. into ReproPonds and ReproLakes. An idea which came up during the monthly ReproNim meeting. Could be done as a part of our singularity_cmd shim execution, but it would need to be app/container-specific and know where/how to parse the command line (or react to differences detected with datalad status?)

FYI: created a public clone on datasets.datalad.org

attn @mjtravers

in light of Singularity being down, I have made a clone on datasets.datalad.org, so if you add it as a remote, annex will be able to fetch images from there (for now I have published only the testing one; will publish the rest shortly)

(git-annex)hopa:~/proj/repronim/containers[master]
$> git remote add datalad.datasets.org http://datasets.datalad.org/repronim/containers/.git      
1 13118.....................................:Mon 08 Jul 2019 03:12:12 PM EDT:.
(git-annex)hopa:~/proj/repronim/containers[master]
$> git fetch datalad.datasets.org                                                     
remote: Counting objects: 252, done.
remote: Compressing objects: 100% (87/87), done.
remote: Total 252 (delta 154), reused 182 (delta 90)
Receiving objects: 100% (252/252), 16.58 KiB | 1.84 MiB/s, done.
Resolving deltas: 100% (154/154), completed with 60 local objects.
From http://datasets.datalad.org/repronim/containers/
 * [new branch]      git-annex     -> datalad.datasets.org/git-annex
 * [new branch]      master        -> datalad.datasets.org/master
 * [new branch]      synced/master -> datalad.datasets.org/synced/master
1 13119.....................................:Mon 08 Jul 2019 03:12:20 PM EDT:.
(git-annex)hopa:~/proj/repronim/containers[master]
$> git annex whereis scripts/tests/arg-test.simg 
(merging datalad.datasets.org/git-annex into git-annex...)
whereis scripts/tests/arg-test.simg (5 copies) 
  	00000000-0000-0000-0000-000000000001 -- web
   	23456fab-05d4-441e-8952-b8c6d90ad785 -- yoh@smaug:~/proj/repronim/containers [smaug]
   	3bcd23f7-1adf-4dce-9f2a-efdaab4151d8 -- yoh@hopa:~/proj/repronim/containers [here]
   	71c620b5-997f-4849-bb30-c42dbb48a51e -- yoh@falkor:/srv/datasets.datalad.org/www/repronim/containers [datalad.datasets.org]
   	7c7289ab-ad08-41eb-9318-f28e4fd957e7 -- yoh@smaug:/mnt/btrfs/datasets/datalad/crawl/repronim/containers

  web: https://www.googleapis.com/download/storage/v1/b/singularityhub/o/singularityhub%2Fgithub.com%2FReproNim%2Fcontainers%2F617bd98bd287ce90ff31d9aecca078e1464580f2%2F9a3ddb8c0a4e43776d53b272bd6a58e5%2F9a3ddb8c0a4e43776d53b272bd6a58e5.simg?generation=1561659368833857&alt=media
ok
1 13120.....................................:Mon 08 Jul 2019 03:12:29 PM EDT:.
(git-annex)hopa:~/proj/repronim/containers[master]
$> git annex drop scripts/tests/arg-test.simg
drop scripts/tests/arg-test.simg (checking datalad.datasets.org...) ok
(recording state in git...)
1 13121.....................................:Mon 08 Jul 2019 03:12:38 PM EDT:.
(git-annex)hopa:~/proj/repronim/containers[master]
$> git annex get scripts/tests/arg-test.simg    
get scripts/tests/arg-test.simg (from datalad.datasets.org...) 
(checksum...) ok                  
(recording state in git...)

Allow for setting to map PWD into consistent path inside container

https://github.com/ReproNim/containers/blob/master/scripts/singularity_cmd#L101

SARGS=( -e -B "$PWD" -H "$BHOME" --pwd "$PWD" "$@" )

The use case came from fmriprep, and we were alerted to it by @satra: for any sizeable dataset we first need to "bootstrap" the pybids DB using smth like (thanks @smeisler):

pybids layout --index-metadata --reset-db $bids_dir $database_outdir

and pass that location to fmriprep via bids-database-dir.

I even wonder if such a mode of operation should be the default one, defaulting e.g. to /tmp ?
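Mapping $PWD to a consistent in-container path would essentially change the bind spec from -B "$PWD" --pwd "$PWD" to -B "$PWD:/data" --pwd /data. A sketch of building such an argument list (the /data mount point and the helper itself are hypothetical, echoing the SARGS line quoted above):

```python
def singularity_args(host_pwd, bind_home, container_pwd=None):
    """Build singularity exec-style args; if container_pwd is given, the
    working directory is bind-mounted to a fixed in-container path rather
    than reappearing under its host path."""
    if container_pwd:
        return ["-e", "-B", f"{host_pwd}:{container_pwd}",
                "-H", bind_home, "--pwd", container_pwd]
    # current behavior: the host path is visible unchanged inside the container
    return ["-e", "-B", host_pwd, "-H", bind_home, "--pwd", host_pwd]

args = singularity_args("/home/me/study", "/home/me/study/binds/HOME", "/data")
```

With a consistent path like /data, a pybids database built in one run remains valid in later runs regardless of where the dataset lives on the host.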

fails to run (mriqc) singularity container through docker

initially reported by @dnkennedy and then troubleshot locally on Linux with REPRONIM_USE_DOCKER=1 on our example in README.md

full dump from terminal
$> REPRONIM_USE_DOCKER=1 ./containers-test.sh
> mktemp -d /home/yoh/.tmp/repro-XXXXXXX
> cd /home/yoh/.tmp/repro-6XkWWkE
> datalad create -d ds000003-qc -c text2git
[INFO   ] Creating a new annex repo at /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc 
[INFO   ] Scanning for unlocked files (this may take some time) 
[INFO   ] Running procedure cfg_text2git 
[INFO   ] == Command start (output follows) ===== 
[INFO   ] == Command exit (modification check follows) ===== 
create(ok):../ /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc (dataset)
> cd ds000003-qc
> datalad install -d . ///repronim/containers
[INFO   ] Scanning for unlocked files (this may take some time)                                                                  
install(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/containers (dataset)                                                       
add(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/containers (file)
add(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/.gitmodules (file)
save(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc (dataset)
action summary:
  add (ok: 2)
  install (ok: 1)
  save (ok: 1)
> datalad run -m Downgrade/Freeze mriqc container version containers/scripts/freeze_versions bids-mriqc=0.15.1
[INFO   ] == Command start (output follows) ===== 
I: bids-mriqc -> 0.15.1
[INFO   ] == Command exit (modification check follows) ===== 
add(ok):../ /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/containers/.datalad/config (file)
save(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/containers (dataset)
add(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/containers (file)
save(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc (dataset)
action summary:
  add (ok: 2)
  save (ok: 2)
> datalad install -d . -s https://github.com/ReproNim/ds000003-demo sourcedata
[INFO   ] Scanning for unlocked files (this may take some time)                                                                  
[INFO   ] Remote origin not usable by git-annex; setting annex-ignore                                                            
[INFO   ] access to 1 dataset sibling s3-PRIVATE not auto-enabled, enable with:
| 		datalad siblings -d "/home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/sourcedata" enable -s s3-PRIVATE 
install(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/sourcedata (dataset)
add(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/sourcedata (file)
add(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/.gitmodules (file)
save(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc (dataset)
action summary:
  add (ok: 2)
  install (ok: 1)
  save (ok: 1)
> datalad containers-run -n containers/bids-mriqc --input sourcedata --output . {inputs} {outputs} participant group
[INFO   ] Making sure inputs are available (this may take some time) 
get(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/sourcedata/sub-02/anat/sub-02_T1w.nii.gz (file) [from s3-PUBLIC...]            
get(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/sourcedata/sub-02/anat/sub-02_inplaneT2.nii.gz (file) [from s3-PUBLIC...]      
get(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/sourcedata/sub-02/func/sub-02_task-rhymejudgment_bold.nii.gz (file) [from s3-PUBLIC...]
get(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/sourcedata/sub-13/anat/sub-13_T1w.nii.gz (file) [from s3-PUBLIC...]
get(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/sourcedata/sub-13/anat/sub-13_inplaneT2.nii.gz (file) [from s3-PUBLIC...]
get(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/sourcedata/sub-13/func/sub-13_task-rhymejudgment_bold.nii.gz (file) [from s3-PUBLIC...]
get(ok): /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc/containers/images/bids/bids-mriqc--0.15.1.sing (file) [from origin...]         
[INFO   ] == Command start (output follows) ===== 
FATAL:   container creation failed: mount /proc/self/fd/5->/usr/local/singularity/var/singularity/mnt/session/rootfs error: can't mount image /proc/self/fd/5: failed to mount squashfs filesystem: invalid argument
[INFO   ] == Command exit (modification check follows) ===== 
[INFO   ] The command had a non-zero exit code. If this is expected, you can save the changes with 'datalad save -d . -r -F .git/COMMIT_EDITMSG' 
CommandError: 'containers/scripts/singularity_cmd run containers/images/bids/bids-mriqc--0.15.1.sing 'sourcedata' '' participant group' failed with exitcode 255 under /home/yoh/.tmp/repro-6XkWWkE/ds000003-qc
REPRONIM_USE_DOCKER=1 ./containers-test.sh  36.18s user 43.33s system 14% cpu 9:27.22 total

Rewrite create_singularities in Python

The goal is to rewrite the bash script https://github.com/ReproNim/containers/blob/master/scripts/create_singularities in Python.

Rationale:

  • gain in readability/maintainability
  • no need to be an "advanced bash developer" who comes up with fixes like d79d125
  • possibly ease testing of at least some helpers
  • if written as async, enjoy parallel execution/creation of containers
  • get rid of unused stuff: Singularity Hub and its API no longer exist, so that code could go

While working on it, keep in mind upcoming extensions such as #74
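For the async bullet above, here is a minimal sketch of what parallel builds could look like. All names (build_image, build_all) are hypothetical placeholders, not a design commitment; the real build step would invoke singularity via asyncio.create_subprocess_exec.

```python
import asyncio

async def build_image(name, semaphore):
    # Stand-in for the real build step; a real implementation would
    # run e.g. "singularity build ..." via asyncio.create_subprocess_exec.
    async with semaphore:  # cap the number of concurrent builds
        await asyncio.sleep(0)  # placeholder for the actual (slow) work
        return f"built {name}"

async def build_all(names, max_parallel=4):
    sem = asyncio.Semaphore(max_parallel)
    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(build_image(n, sem) for n in names))

results = asyncio.run(build_all(["bids-mriqc", "bids-fmriprep"]))
```

The semaphore keeps the number of concurrent builds bounded, which matters once builds start doing real I/O and CPU work.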

running via docker seems to lose args along the way somewhere

Might be related to, or the cause of, #13: it seems that even simple args are not passed through, or get unquoted somewhere.
Compare this plain execution (this is in current master 0.2-40-ga5d0e61):

$> scripts/singularity_cmd exec images/bids/bids-rshrf--1.0.0.sing sh -c 'echo "1234"'           
1234

to the one via docker which outputs nothing:

$> REPRONIM_USE_DOCKER=1 scripts/singularity_cmd exec images/bids/bids-rshrf--1.0.0.sing sh -c 'echo "1234"' 

or here is another version:

$> scripts/singularity_cmd exec images/bids/bids-rshrf--1.0.0.sing sh -c 'touch "1234"' && ls -l 1234       
-rw-r--r-- 1 yoh yoh 0 Jul  2 12:35 1234

$> rm 1234

$> REPRONIM_USE_DOCKER=1 scripts/singularity_cmd exec images/bids/bids-rshrf--1.0.0.sing sh -c 'touch "1234"' && ls -l 1234 
BusyBox v1.28.4 (2018-07-17 15:21:40 UTC) multi-call binary.

Usage: touch [-c] [-d DATE] [-t DATE] [-r FILE] FILE...

Update the last-modified date on the given FILE[s]

	-c	Don't create files
	-h	Don't follow links
	-d DT	Date/time to use
	-t DT	Date/time to use
	-r FILE	Use FILE's date/time

After fixing, please add a unit test for such an invocation.
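For reference, the standard way to carry an argv intact across an extra shell boundary (as the docker shim has to) is to re-quote each element. A sketch using Python's shlex, not the actual singularity_cmd logic:

```python
import shlex

def requote(args):
    """Turn an argv list into one string that a POSIX shell will
    split back into exactly the same argv."""
    return " ".join(shlex.quote(a) for a in args)

cmd = requote(["sh", "-c", 'echo "1234"'])
# splitting again must round-trip the original argv, quotes included
assert shlex.split(cmd) == ["sh", "-c", 'echo "1234"']
```

The symptom above (inner double quotes vanishing, touch seeing no FILE argument) is exactly what happens when such a round-trip is skipped at one of the wrapping layers.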

"Typical workflow" observations on Windows

As I have a Win 10 Pro build 2004 box for testing, @yarikoptic asked me to test the "Typical Workflow" from the README on Windows.

I'm testing under two different terminals/shells: A Git Bash and an Anaconda Prompt (both installed using the most recent instructions from the handbook).

Unfortunately, testing required more time than I anticipated, so I'm posting the issue "as is", although I haven't finished the workflow. I will return to this at a later point.

Git Bash:

  • Pro: Copy-pasting the script directly into the terminal works because Git Bash supports multi-line commands
  • datalad run -m 'Downgrade/Freeze mriqc container version' containers/scripts/freeze_versions bids-mriqc=0.15.1 fails when copy-pasting and when executing in a script with:
> datalad run -m 'Downgrade/Freeze mriqc container version' containers/scripts/freeze_versions bids-mriqc=0.15.1
[INFO] == Command start (output follows) =====
'"containers/scripts/freeze_versions"' is not recognized as an internal or external command,
operable program or batch file.
[INFO] == Command exit (modification check follows) =====
[INFO] The command had a non-zero exit code. If this is expected, you can save the changes with 'datalad save -d . -r -F .git\COMMIT_EDITMSG'
CommandError: '"containers/scripts/freeze_versions" "bids-mriqc=0.15.1"' failed with exitcode 1 under C:/Users/datalad/AppData/Local/Temp/repro-x8Gb5J4/ds000003-qc

A fix for this would be to invoke datalad run with bash <script>:

datalad@latitude-e7440 MINGW64 /tmp/repro-Yo4XEyy/ds000003-qc (adjusted/master(unlocked))
$ datalad run -m "Downgrade/Freeze mriqc container version" bash containers/scripts/freeze_versions bids-mriqc=0.15.1
[INFO] == Command start (output follows) =====
I: bids-mriqc -> 0.15.1
[INFO] == Command exit (modification check follows) =====
[INFO] Total: starting
[INFO]
add(ok): .datalad\config (file)
[INFO] Total: processed result for C:\Users\datalad\AppData\Local\Temp\repro-Yo4XEyy\ds000003-qc\containers
save(ok): containers (dataset)
add(ok): containers (file)
[INFO] Total: processed result for C:\Users\datalad\AppData\Local\Temp\repro-Yo4XEyy\ds000003-qc
save(ok): . (dataset)
[INFO] Total: done
action summary:
  add (ok: 2)
  save (ok: 2)

Anaconda Prompt:

  • Fail: You can't copy-paste the command into the terminal. I believe Windows needs ^ at each line ending in multi-line commands (see the last Windows-Workaround in this section).
  • I can't currently say anything definite about what happens when I execute the script via "bash <script>" in the Anaconda Prompt, because I'm seeing a very weird interaction between Windows and WSL2, where a bash session started from Windows ends up in WSL2:
(base) C:\Users\datalad\repos>echo "This is anaconda prompt on native Windows 10!"
"This is anaconda prompt on native Windows 10!"

(base) C:\Users\datalad\repos>echo %username%
datalad

(base) C:\Users\datalad\repos>bash
adina@latitude-e7440:/mnt/c/Users/datalad/repos$ groups
adina adm cdrom sudo dip plugdev
adina@latitude-e7440:/mnt/c/Users/datalad/repos$ echo "WTAF"
WTAF
adina@latitude-e7440:/mnt/c/Users/datalad/repos$

I need to redo this after wiping the machine again.

Add boutiques

Some notes from email correspondence with @glatard on the most popular Boutiques tools ;-)

There are currently 7 tools with a singularity image in Boutiques:

$ bosh search singularity
[ INFO ] Showing 7 of 7 results.
ID              TITLE                      DESCRIPTION                                DOWNLOADS
zenodo.2587160  makeblastdb                Application to create BLAST databases, v...      738
zenodo.2587157  blast_formatter            Stand-alone BLAST formatter client, vers...       19
zenodo.2587156  blastdbcheck               BLAST database integrity and validity ch...       15
zenodo.1484547  BIDS App - FreeSurfer 6.0  BIDS App version of freesurfer 6.0, from...       10
zenodo.2541125  BEst                       EEG/MEG source localisation technique...           4
zenodo.2565170  MRIQC                      Automated Quality Control and visual rep...        3
zenodo.2563057  BIDS App - fmriprep        fMRIprep is a functional magneticresonan...        3

I guess freesurfer, fmriprep and mriqc would be the most relevant.

"bosh exec prepare" will download the singularity image from the zenodo id:

$ bosh exec prepare zenodo.2563057 -x
[ INFO ] Using Zenodo endpoint https://zenodo.org
[ INFO ] Found cached file at
/home/glatard/.cache/boutiques/zenodo-2563057.json
[ INFO ] Running: singularity pull --name
"shots47s-bids-fmriprep-1.2.3.simg.tmp"
shub://shots47s/bids-fmriprep-1.2.3
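That fetch step could also be scripted. A sketch (assumes the bosh CLI is on PATH; only the command construction is shown, and the actual call is left commented out):

```python
def prepare_cmd(zenodo_id):
    # Mirror the "bosh exec prepare <id> -x" invocation shown above;
    # the -x flag is taken verbatim from that example.
    return ["bosh", "exec", "prepare", zenodo_id, "-x"]

cmd = prepare_cmd("zenodo.2563057")
# import subprocess; subprocess.run(cmd, check=True)  # to actually pull the image
```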

add qsiprep image

We have a stable version, 0.14.2; I'd be happy to add it here. The Docker image is at pennbbl/qsiprep:0.14.2

Tests fail on Debian for me

$> git describe
0.3-17-g14d96ff
(dev3) 1 27589.....................................:Thu 20 Feb 2020 03:46:27 PM EST:.
(git-annex)lena:~/proj/repronim/containers[enh-hash]
$> bats -t scripts/tests
1..5
FAIL: Arguments are not equal.
 #1=<<255>>
 #2=<<0>>
not ok 1 verifying arguments passed to singularity_cmd Docker shim
# (in test file scripts/tests/test_singularity_cmd.bats, line 29)
#   `export REPRONIM_USE_DOCKER=1' failed with status 255
# latest: Pulling from mjtravers/singularity-shim
# Digest: sha256:c9f208f8e7d27f381228ce6989d645acf879b26d95ed93e40a202e9e01ea71b7
# Status: Image is up to date for mjtravers/singularity-shim:latest
# docker.io/mjtravers/singularity-shim:latest
ok 2 verifying ability to singularity exec under /tmp subdir
FAIL: Arguments are not equal.
 #1=<<255>>
 #2=<<0>>
not ok 3 verifying ability to singularity exec under /tmp subdir (explicit use of docker)
# (from function `check_subdir' in file scripts/tests/test_singularity_cmd.bats, line 99,
#  in test file scripts/tests/test_singularity_cmd.bats, line 54)
#   `check_subdir "$(_mktemp_dir_under /tmp)"' failed with status 255
# latest: Pulling from mjtravers/singularity-shim
# Digest: sha256:c9f208f8e7d27f381228ce6989d645acf879b26d95ed93e40a202e9e01ea71b7
# Status: Image is up to date for mjtravers/singularity-shim:latest
# docker.io/mjtravers/singularity-shim:latest
ok 4 verifying ability to singularity exec under /home/yoh subdir
FAIL: Arguments are not equal.
 #1=<<255>>
 #2=<<0>>
not ok 5 verifying ability to singularity exec under /home/yoh subdir (explicit use of docker)
# (from function `check_subdir' in file scripts/tests/test_singularity_cmd.bats, line 99,
#  in test file scripts/tests/test_singularity_cmd.bats, line 65)
#   `check_subdir "$(_mktemp_dir_under $HOME)"' failed with status 255
# latest: Pulling from mjtravers/singularity-shim
# Digest: sha256:c9f208f8e7d27f381228ce6989d645acf879b26d95ed93e40a202e9e01ea71b7
# Status: Image is up to date for mjtravers/singularity-shim:latest
# docker.io/mjtravers/singularity-shim:latest

@mjtravers - do they still pass for you? I haven't investigated yet what has changed. FWIW I have

$> docker images | grep shim
mjtravers/singularity-shim              latest              adc09cd8a346        6 months ago        139MB

$> docker pull mjtravers/singularity-shim
Using default tag: latest
latest: Pulling from mjtravers/singularity-shim
Digest: sha256:c9f208f8e7d27f381228ce6989d645acf879b26d95ed93e40a202e9e01ea71b7
Status: Image is up to date for mjtravers/singularity-shim:latest
docker.io/mjtravers/singularity-shim:latest

Prototypical workflow #1

Originally "presented" in training materials issue: ReproNim/module-dataprocessing#26 (comment)

Here I would like to have it as a checklist ("(r)" marks items waiting on the release(s))

  • (r) datalad create -c text2git analysis-for-the-pi; cd analysis-for-the-pi
    text2git has an outstanding issue datalad/datalad#3361 which might redefine it, but otherwise - possible
  • datalad create -d . data/dicoms && cp ALL_DICOMS data/dicoms/
  • datalad install -d . https://github.com/ReproNim/containers/
  • work out a heuristic for heudiconv under code/heudiconv-heuristic.py
  • (r) datalad create -d . -c bids data/bids
    -c bids is coming with 0.12 release of datalad and datalad-neuroimaging some time soonish (so - partially done)
  • datalad containers-run -n containers/heudiconv -f code/heudiconv-heuristic -o data/bids --files data/dicoms (TODO - container: #2)
  • Deface! Apparently there is no "official" bids-app yet, but there are a number of defacers available, thus TODO - streamline (bids-app, container, etc.)
  • Carry out analys(es). For each one, ATM a subdataset should first be pre-created. Some (e.g., fmriprep) might benefit from custom -c configs for what should go under git/annex
    • datalad create -d . -c text2git data/mriqc
    • (r) datalad containers-run --explicit -n containers/bids-mriqc -i data/bids -o data/mriqc '{inputs}' '{outputs}' ... (TODO - test! TODO -- needs 0.3.2 release of -containers for proper '{inputs}' to not leak container file in there)
    • datalad create -d . -c text2git data/simple_workflow
    • datalad containers-run -n containers/simple_workflow -i data/bids -o data/simple_workflow ... '{inputs}' ... '{outputs}' (TODO - container: #2)
  • when all is good, look into upload to wherever (datalad create-sibling*, datalad publish) ;) TODO: full invocation example

Notes:

  • it could be argued that this steps slightly away from the YODA principle of derived datasets containing all needed information to reproduce themselves, because there is only a single containers/ subdataset at the super-dataset level, and derived datasets do not contain it. For the purpose of this workflow I am considering the top-level super-dataset as the "reproducibility target". Having access to it will provide all needed information to reproduce any particular subdataset.
  • in principle the aforementioned shortcoming could easily be resolved by installing the containers/ dataset into each result subdataset, but then it would also require installation of the original data "neighbor" dataset within. That could be a reckless clone, or benefit from CoW on filesystems such as BTRFS. But for the initial presentation/use-case I think it should be good enough
  • from the aforementioned example it seems to be very common to run a container which saves output to a new sub-dataset (if that one doesn't exist yet). I wonder if that could somehow be assisted by datalad-container (TODO - issue)

FOI: getting the OSX version from the command line

bash-3.2$ system_profiler SPSoftwareDataType
Software:

    System Software Overview:

      System Version: OS X 10.10.5 (14F2511)
      Kernel Version: Darwin 14.5.0
      Boot Volume: Macintosh HD
      Boot Mode: Normal
      Computer Name: DataLad's imac
      User Name: Yaroslav Halchenko (yoh)
      Secure Virtual Memory: Enabled
      Time since boot: 87 days 6:17
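If this ever needs to be scripted, the relevant line can be pulled out with a small parser. A sketch that works on sample text rather than shelling out (on a real box one would feed it the output of system_profiler SPSoftwareDataType):

```python
import re

def parse_os_version(text):
    # Pull the "System Version: ..." value out of system_profiler output.
    m = re.search(r"System Version:\s*(.+)", text)
    return m.group(1).strip() if m else None

sample = """\
    System Software Overview:

      System Version: OS X 10.10.5 (14F2511)
      Kernel Version: Darwin 14.5.0
"""
version = parse_os_version(sample)  # "OS X 10.10.5 (14F2511)"
```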

prototypical example fails on a fresh NITRC-CE EC2 instance: BIDS root does not exist

initial EC2 initialization and installation of datalad and datalad-container
(git)lena:~/proj/repronim/reproman[master]git
$> reproman -l debug create -t aws-ec2 my-nitrc-quick -b instance_type=t2.medium -b image=ami-0acbd99fe8c84efbb
You did not specify an EC2 SSH key-pair name to use when creating your EC2
environment.
Please enter a unique name to create a new key-pair or press [enter] to exit: my-nitrc-quick

That key name exists already, try again: my-nitrc-quick3

2020-12-15 08:35:02,531 [INFO   ] Created private key file /home/yoh/.local/share/reproman/ec2_keys/my-nitrc-quick3.pem 
2020-12-15 08:35:05,205 [INFO   ] Waiting for EC2 instance i-04cc7bfb6534fe133 to start running... 
2020-12-15 08:36:06,037 [INFO   ] EC2 instance i-04cc7bfb6534fe133 is running! 
2020-12-15 08:36:06,037 [INFO   ] Waiting for EC2 instance i-04cc7bfb6534fe133 to complete initialization... 
2020-12-15 08:38:52,706 [INFO   ] EC2 instance i-04cc7bfb6534fe133 initialized! 
2020-12-15 08:38:52,729 [INFO   ] Created the environment my-nitrc-quick 

$> reproman login my-nitrc-quick 

ubuntu@nitrcce:~$ sudo apt-get update -qqq
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://nvidia.github.io/nvidia-docker/ubuntu18.04/amd64  InRelease: The following signatures were invalid: EXPKEYSIG 6ED91CA3AC1160CD NVIDIA CORPORATION (Open Source Projects) <[email protected]>
W: Failed to fetch https://nvidia.github.io/nvidia-docker/ubuntu18.04/amd64/InRelease  The following signatures were invalid: EXPKEYSIG 6ED91CA3AC1160CD NVIDIA CORPORATION (Open Source Projects) <[email protected]>
W: Some index files failed to download. They have been ignored, or old ones used instead.

ubuntu@nitrcce:~$ sudo apt-get install -y datalad datalad-container
Reading package lists... Done
...
Get:7 http://neuro.debian.net/debian bionic/main amd64 datalad all 0.13.6-1~nd18.04+1 [137 kB]
Get:8 http://neuro.debian.net/debian bionic/main amd64 python3-datalad all 0.13.6-1~nd18.04+1 [1109 kB]
Get:9 http://neuro.debian.net/debian bionic/main amd64 datalad-container all 1.1.0-1~nd18.04+1 [26.2 kB]
Get:10 http://neuro.debian.net/debian bionic/main amd64 singularity-container amd64 2.6.1-2+nd2~nd18.04+1 [351 kB]
.... took awhile -- that instance is slow :-/ ...
configuration and failed execution
ubuntu@nitrcce:~$ datalad install ///repronim/containers
It is highly recommended to configure Git before using DataLad. Set both 'user.name' and 'user.email' configuration variables.
It is highly recommended to configure Git before using DataLad. Set both 'user.name' and 'user.email' configuration variables.
[INFO   ] *** Please tell me who you are.                                                                                                                                                                                                                       
|                                                                                                                                                                                                                                                               
| Run
|
|   git config --global user.email "[email protected]"
|   git config --global user.name "Your Name"
|
| to set your account's default identity.
| Omit --global to set the identity only in this repository.
|
| fatal: unable to auto-detect email address (got 'ubuntu@nitrcce.(none)')
install(ok): /home/ubuntu/containers (dataset)
ubuntu@nitrcce:~$ git config --global user.email "[email protected]"
ubuntu@nitrcce:~$ git config --global user.name "Your Name"
ubuntu@nitrcce:~$ cd containers/
ubuntu@nitrcce:~/containers$ bash <(sed -n -e '/^ *#!/,/^```$/p' README.md | grep -v '```')
...
> datalad containers-run -n containers/bids-mriqc --input sourcedata --output . '{inputs}' '{outputs}' participant group
[INFO   ] Making sure inputs are available (this may take some time)
get(ok): sourcedata/sub-02/anat/sub-02_T1w.nii.gz (file) [from s3-PUBLIC...]                                                                                                                                                                                    
get(ok): sourcedata/sub-02/anat/sub-02_inplaneT2.nii.gz (file) [from s3-PUBLIC...]                                                                                                                                                                              
get(ok): sourcedata/sub-02/func/sub-02_task-rhymejudgment_bold.nii.gz (file) [from s3-PUBLIC...]
get(ok): sourcedata/sub-13/anat/sub-13_T1w.nii.gz (file) [from s3-PUBLIC...]
get(ok): sourcedata/sub-13/anat/sub-13_inplaneT2.nii.gz (file) [from s3-PUBLIC...]
get(ok): sourcedata/sub-13/func/sub-13_task-rhymejudgment_bold.nii.gz (file) [from s3-PUBLIC...]
get(ok): containers/images/bids/bids-mriqc--0.15.1.sing (file) [from origin...]                                                                                                                                                                                 
[INFO   ] == Command start (output follows) =====
2020-12-15 15:35:23,242 mriqc:IMPORTANT
    Running MRIQC version 0.15.1:
      * BIDS dataset path: /tmp/repro-3vNHTmV/ds000003-qc/sourcedata.
      * Output folder: /tmp/repro-3vNHTmV/ds000003-qc.
      * Analysis levels: participant, group.

Process Process-2:
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/mriqc/bin/mriqc_run.py", line 400, in init_mriqc
    ignore=['derivatives', 'sourcedata', r'^\..*'])
  File "/usr/local/miniconda/lib/python3.7/site-packages/bids/layout/layout.py", line 185, in __init__
    self._validate_root()
  File "/usr/local/miniconda/lib/python3.7/site-packages/bids/layout/layout.py", line 318, in _validate_root
    raise ValueError("BIDS root does not exist: %s" % self.root)
ValueError: BIDS root does not exist: /tmp/repro-3vNHTmV/ds000003-qc/sourcedata
[INFO   ] == Command exit (modification check follows) =====
[INFO   ] The command had a non-zero exit code. If this is expected, you can save the changes with 'datalad save -d . -r -F .git/COMMIT_EDITMSG'
CommandError: 'containers/scripts/singularity_cmd run containers/images/bids/bids-mriqc--0.15.1.sing 'sourcedata' '' participant group' failed with exitcode 1 under /tmp/repro-3vNHTmV/ds000003-qc
More specific: reproducing it, vs. not reproducing it with a direct invocation when already in the container
ubuntu@nitrcce:/tmp/repro-3vNHTmV/ds000003-qc$ singularity exec -W /tmp/singtmp.PacrEs -B /tmp/singtmp.PacrEs/tmp:/tmp -B /tmp/singtmp.PacrEs/var/tmp:/var/tmp -e -B /tmp/repro-3vNHTmV/ds000003-qc -H /tmp/repro-3vNHTmV/ds000003-qc/containers/binds/HOME --pwd /tmp/repro-3vNHTmV/ds000003-qc containers/images/bids/bids-mriqc--0.15.1.sing bash
bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)

bidsapp@nitrcce:/tmp/repro-3vNHTmV/ds000003-qc$ export | grep SING
declare -x SINGULARITY_APPNAME=""
declare -x SINGULARITY_CONTAINER="/tmp/repro-3vNHTmV/ds000003-qc/containers/images/bids/bids-mriqc--0.15.1.sing"
declare -x SINGULARITY_NAME="bids-mriqc--0.15.1.sing"

bidsapp@nitrcce:/tmp/repro-3vNHTmV/ds000003-qc$ ls /tmp/repro-3vNHTmV/ds000003-qc/sourcedata
CHANGES  README  dataset_description.json  participants.tsv  sub-02  sub-13  task-rhymejudgment_bold.json

bidsapp@nitrcce:/tmp/repro-3vNHTmV/ds000003-qc$ /.singularity.d/runscript sourcedata '' participant group
2020-12-15 15:40:41,456 mriqc:IMPORTANT 
    Running MRIQC version 0.15.1:
      * BIDS dataset path: /tmp/repro-3vNHTmV/ds000003-qc/sourcedata.
      * Output folder: /tmp/repro-3vNHTmV/ds000003-qc.
      * Analysis levels: participant, group.
    
201215-15:40:47,656 nipype.workflow INFO:
	 Building anatomical MRIQC workflow, datasets list: ['sub-02/anat/sub-02_T1w.nii.gz', 'sub-13/anat/sub-13_T1w.nii.gz']
2020-12-15 15:40:47,656 nipype.workflow:INFO Building anatomical MRIQC workflow, datasets list: ['sub-02/anat/sub-02_T1w.nii.gz', 'sub-13/anat/sub-13_T1w.nii.gz']
^CException ignored in: <bound method InstanceState._cleanup of <sqlalchemy.orm.state.InstanceState object at 0x7f6eeb461048>>
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/sqlalchemy/orm/state.py", line 399, in _cleanup
    del self._instance_dict
KeyboardInterrupt
Traceback (most recent call last):
  File "/usr/local/miniconda/bin/mriqc", line 10, in <module>
    sys.exit(main())
  File "/usr/local/miniconda/lib/python3.7/site-packages/mriqc/bin/mriqc_run.py", line 221, in main
    plugin_settings = retval['plugin_settings']
  File "<string>", line 2, in __getitem__
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/managers.py", line 795, in _callmethod
    conn.send((self._id, methodname, args, kwds))
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

I Ctrl-C'ed the above run since it did start going.

So it feels like the bind mount of /tmp/repro-3vNHTmV/ds000003-qc did not happen... but the reason seems to lie somewhere in our singularity_cmd helper, since it fails with it:

it fails with it
ubuntu@nitrcce:/tmp/repro-3vNHTmV/ds000003-qc$ bash -x containers/scripts/singularity_cmd run containers/images/bids/bids-mriqc--0.15.1.sing 'sourcedata' . participant group
+ set -eu
++ readlink -f containers/scripts/singularity_cmd
+ thisfile=/tmp/repro-3vNHTmV/ds000003-qc/containers/scripts/singularity_cmd
++ dirname /tmp/repro-3vNHTmV/ds000003-qc/containers/scripts/singularity_cmd
+ thisdir=/tmp/repro-3vNHTmV/ds000003-qc/containers/scripts
++ dirname /tmp/repro-3vNHTmV/ds000003-qc/containers/scripts
+ updir=/tmp/repro-3vNHTmV/ds000003-qc/containers
+ BHOME=/tmp/repro-3vNHTmV/ds000003-qc/containers/binds/HOME
+ cmd=run
+ shift
+ '[' -n '' ']'
++ mktemp -d -t singtmp.XXXXXX
+ tmpdir=/tmp/singtmp.ywiVAe
+ info 'created temp dir /tmp/singtmp.ywiVAe'
+ :
+ trap 'rm -fr "$tmpdir" && info "removed temp dir $tmpdir"' exit
+ pass_git_config user.name 'ReproNim User'
+ var=user.name
+ default='ReproNim User'
++ git config user.name
+ value='Your Name'
+ for attempt in {1..5}
+ git config -f /tmp/repro-3vNHTmV/ds000003-qc/containers/binds/HOME/.gitconfig user.name
+ break
+ pass_git_config user.email [email protected]
+ var=user.email
+ [email protected]
++ git config user.email
+ [email protected]
+ for attempt in {1..5}
+ git config -f /tmp/repro-3vNHTmV/ds000003-qc/containers/binds/HOME/.gitconfig user.email
+ break
+ SARGS=(-e -B "$PWD" -H "$BHOME" --pwd "$PWD" "$@")
+ need_no_c=
+ for d in "$PWD" "$updir"
+ '[' repro-3vNHTmV/ds000003-qc '!=' /tmp/repro-3vNHTmV/ds000003-qc ']'
+ info 'Creating /tmp/repro-3vNHTmV/ds000003-qc under /tmp/singtmp.ywiVAe'
+ :
+ mkdir -p /tmp/singtmp.ywiVAe//tmp/repro-3vNHTmV/ds000003-qc /tmp/singtmp.ywiVAe/var/tmp
+ need_no_c=1
+ for d in "$PWD" "$updir"
+ '[' repro-3vNHTmV/ds000003-qc/containers '!=' /tmp/repro-3vNHTmV/ds000003-qc/containers ']'
+ info 'Creating /tmp/repro-3vNHTmV/ds000003-qc/containers under /tmp/singtmp.ywiVAe'
+ :
+ mkdir -p /tmp/singtmp.ywiVAe//tmp/repro-3vNHTmV/ds000003-qc/containers /tmp/singtmp.ywiVAe/var/tmp
+ need_no_c=1
+ '[' -z 1 ']'
+ SARGS=(-B "$tmpdir/tmp:/tmp" -B "$tmpdir/var/tmp:/var/tmp" "${SARGS[@]}")
+ hash singularity
+ '[' -z '' ']'
+ singularity run -W /tmp/singtmp.ywiVAe -B /tmp/singtmp.ywiVAe/tmp:/tmp -B /tmp/singtmp.ywiVAe/var/tmp:/var/tmp -e -B /tmp/repro-3vNHTmV/ds000003-qc -H /tmp/repro-3vNHTmV/ds000003-qc/containers/binds/HOME --pwd /tmp/repro-3vNHTmV/ds000003-qc containers/images/bids/bids-mriqc--0.15.1.sing sourcedata . participant group
2020-12-15 15:46:55,399 mriqc:IMPORTANT 
    Running MRIQC version 0.15.1:
      * BIDS dataset path: /tmp/repro-3vNHTmV/ds000003-qc/sourcedata.
      * Output folder: /tmp/repro-3vNHTmV/ds000003-qc.
      * Analysis levels: group, participant.
    
Process Process-2:
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/mriqc/bin/mriqc_run.py", line 400, in init_mriqc
    ignore=['derivatives', 'sourcedata', r'^\..*'])
  File "/usr/local/miniconda/lib/python3.7/site-packages/bids/layout/layout.py", line 185, in __init__
    self._validate_root()
  File "/usr/local/miniconda/lib/python3.7/site-packages/bids/layout/layout.py", line 318, in _validate_root
    raise ValueError("BIDS root does not exist: %s" % self.root)
ValueError: BIDS root does not exist: /tmp/repro-3vNHTmV/ds000003-qc/sourcedata
+ rm -fr /tmp/singtmp.ywiVAe
+ info 'removed temp dir /tmp/singtmp.ywiVAe'
+ :
but it works if I just go with the direct singularity command it runs:
ubuntu@nitrcce:/tmp/repro-3vNHTmV/ds000003-qc$ singularity run -W /tmp/singtmp.ywiVAe -B /tmp/singtmp.ywiVAe/tmp:/tmp -B /tmp/singtmp.ywiVAe/var/tmp:/var/tmp -e -B /tmp/repro-3vNHTmV/ds000003-qc -H /tmp/repro-3vNHTmV/ds000003-qc/containers/binds/HOME --pwd /tmp/repro-3vNHTmV/ds000003-qc containers/images/bids/bids-mriqc--0.15.1.sing sourcedata '' participant group
2020-12-15 15:47:50,992 mriqc:IMPORTANT 
    Running MRIQC version 0.15.1:
      * BIDS dataset path: /tmp/repro-3vNHTmV/ds000003-qc/sourcedata.
      * Output folder: /tmp/repro-3vNHTmV/ds000003-qc.
      * Analysis levels: participant, group.
    
201215-15:47:57,125 nipype.workflow INFO:
	 Building anatomical MRIQC workflow, datasets list: ['sub-02/anat/sub-02_T1w.nii.gz', 'sub-13/anat/sub-13_T1w.nii.gz']
2020-12-15 15:47:57,125 nipype.workflow:INFO Building anatomical MRIQC workflow, datasets list: ['sub-02/anat/sub-02_T1w.nii.gz', 'sub-13/anat/sub-13_T1w.nii.gz']
^C

So the reason really is that /tmp/singtmp.ywiVAe (and its subdirs) gets bind-mounted first, and then the elderly singularity refuses to bind mount /tmp/repro-3vNHTmV/ds000003-qc on top of it...

A quick & dirty workaround is to place TMPDIR somewhere other than /tmp, e.g. mkdir -p ~/tmp, and provide it while running the example, e.g. TMPDIR=~/tmp bash <(sed -n -e '/^ *#!/,/^```$/p' README.md | grep -v '```')

Then we do not bother to create/bind-mount those additional paths:
ubuntu@nitrcce:~/tmp/repro-wgnyF4J/ds000003-qc$ TMPDIR=~/tmp bash -x containers/scripts/singularity_cmd run containers/images/bids/bids-mriqc--0.15.1.sing 'sourcedata' . participant group
+ set -eu
++ readlink -f containers/scripts/singularity_cmd
+ thisfile=/home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc/containers/scripts/singularity_cmd
++ dirname /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc/containers/scripts/singularity_cmd
+ thisdir=/home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc/containers/scripts
++ dirname /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc/containers/scripts
+ updir=/home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc/containers
+ BHOME=/home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc/containers/binds/HOME
+ cmd=run
+ shift
+ '[' -n '' ']'
++ mktemp -d -t singtmp.XXXXXX
+ tmpdir=/home/ubuntu/tmp/singtmp.yq0pJg
+ info 'created temp dir /home/ubuntu/tmp/singtmp.yq0pJg'
+ :
+ trap 'rm -fr "$tmpdir" && info "removed temp dir $tmpdir"' exit
+ pass_git_config user.name 'ReproNim User'
+ var=user.name
+ default='ReproNim User'
++ git config user.name
+ value='Your Name'
+ for attempt in {1..5}
+ git config -f /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc/containers/binds/HOME/.gitconfig user.name
+ break
+ pass_git_config user.email [email protected]
+ var=user.email
+ [email protected]
++ git config user.email
+ [email protected]
+ for attempt in {1..5}
+ git config -f /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc/containers/binds/HOME/.gitconfig user.email
+ break
+ SARGS=(-e -B "$PWD" -H "$BHOME" --pwd "$PWD" "$@")
+ need_no_c=
+ for d in "$PWD" "$updir"
+ '[' /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc '!=' /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc ']'
+ for d in "$PWD" "$updir"
+ '[' /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc/containers '!=' /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc/containers ']'
+ '[' -z '' ']'
+ SARGS=(-c "${SARGS[@]}")
+ hash singularity
+ '[' -z '' ']'
+ singularity run -W /home/ubuntu/tmp/singtmp.yq0pJg -c -e -B /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc -H /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc/containers/binds/HOME --pwd /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc containers/images/bids/bids-mriqc--0.15.1.sing sourcedata . participant group
2020-12-15 16:24:57,803 mriqc:IMPORTANT 
    Running MRIQC version 0.15.1:
      * BIDS dataset path: /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc/sourcedata.
      * Output folder: /home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc.
      * Analysis levels: participant, group.
    
201215-16:25:04,31 nipype.workflow INFO:
	 Building anatomical MRIQC workflow, datasets list: ['sub-02/anat/sub-02_T1w.nii.gz', 'sub-13/anat/sub-13_T1w.nii.gz']
2020-12-15 16:25:04,031 nipype.workflow:INFO Building anatomical MRIQC workflow, datasets list: ['sub-02/anat/sub-02_T1w.nii.gz', 'sub-13/anat/sub-13_T1w.nii.gz']
...

A proper fix is yet to be worked out...
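The failure boils down to one bind target being a path prefix of another: the working directory lives under /tmp while /tmp itself is being replaced by a bind. A helper along these lines could detect the conflict up front (a sketch; is_under is a hypothetical name, not part of singularity_cmd):

```python
import os.path

def is_under(path, parent):
    # True if `path` equals `parent` or lives below it.
    path, parent = os.path.normpath(path), os.path.normpath(parent)
    return path == parent or path.startswith(parent + os.sep)

# workdir under /tmp while /tmp gets replaced by a bind -> conflict
assert is_under("/tmp/repro-3vNHTmV/ds000003-qc", "/tmp")
# workdir elsewhere -> no conflict, the extra binds can be skipped
assert not is_under("/home/ubuntu/tmp/repro-wgnyF4J/ds000003-qc", "/tmp")
```

With such a check, the script could either skip the /tmp and /var/tmp replica binds, or relocate the temp dir, instead of letting singularity fail at mount time.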

FOI: datasets.datalad.org is now a special git annex remote with autoenable=true

Original web URLs pointing to https://www.googleapis.com/download/storage/v1... will no longer be usable (more on that later and separately).
So I have made our clone on datasets.datalad.org known to annex as a special remote of type git. The command was:

git annex initremote datasets.datalad.org location=http://datasets.datalad.org/repronim/containers/.git type=git autoenable=true

(after first registering that remote locally with "git remote add" but under a different name -- otherwise annex complains).

So, freshly installed using datalad install, this dataset will automagically acquire that remote and images will be fetchable:

hopa:/tmp
$> rm -rf containers* ; datalad install http://github.com/ReproNim/containers && cd containers && git remote && datalad get images/bids/bids-validator--1.2.3.sing                                                              
[INFO   ] Cloning http://github.com/ReproNim/containers [1 other candidates] into '/tmp/containers' 
[INFO   ]   Remote origin not usable by git-annex; setting annex-ignore                                         
install(ok): /tmp/containers (dataset)
LICENSE  README.md  binds/  images/  scripts/
datasets.datalad.org
origin
images/bids/bids-validator--1.2.3.sing:  14%|████▎                         | 5.47M/38.0M [00:03<00:20, 1.57MB/s]^C[WARNING] Still have 1 active progress bars when stopping 
ERROR:                                                                                                          
Interrupted by user while doing magic: KeyboardInterrupt() [cmd.py:_process_one_line:358]

but if it is an existing clone (or one cloned via plain git clone), it would require a call to

git annex enableremote datasets.datalad.org

Need to avoid/tolerate race condition in calling git config

(git-annex)lena:…11ea-be9b-ff519d1f6bc9[master].reproman/jobs/local/20200522-000958-46c0
$> grep 'not lock' -3 stderr.1
> 00:10:34.142854375 [2357873] echo '[ReproMan] executing command containers/scripts/singularity_cmd run containers/images/bids/bids-mriqc--0.15.0.sing '\''data/bids'\'' '\''data/mriqc'\'' participant --participant_label '\''13'\'' -w work'
> 00:10:34.146979943 [2357873] echo '[ReproMan] ... within /home/yoh/.reproman/run-root/0a372e4e-9be2-11ea-be9b-ff519d1f6bc9'
> 00:10:34.149859825 [2357873] /bin/sh -c 'containers/scripts/singularity_cmd run containers/images/bids/bids-mriqc--0.15.0.sing '\''data/bids'\'' '\''data/mriqc'\'' participant --participant_label '\''13'\'' -w work'
error: could not lock config file /home/yoh/.reproman/run-root/0a372e4e-9be2-11ea-be9b-ff519d1f6bc9/containers/binds/HOME/.gitconfig: File exists
> 00:10:34.211776551 [2357873] echo 'failed: 255'
> 00:10:34.214277774 [2357873] mkdir -p /home/yoh/.reproman/run-root/0a372e4e-9be2-11ea-be9b-ff519d1f6bc9/.reproman/jobs/local/20200522-000958-46c0/failed
> 00:10:34.216103284 [2357873] touch /home/yoh/.reproman/run-root/0a372e4e-9be2-11ea-be9b-ff519d1f6bc9/.reproman/jobs/local/20200522-000958-46c0/failed/1

probably due to https://github.com/ReproNim/containers/blob/master/scripts/singularity_cmd#L49

see more info in ReproNim/reproman#511

most likely we should just retry up to X (5 should be enough ;)) times with some random sleep between attempts
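Such a retry loop could be sketched as a small wrapper around the race-prone `git config` call (function name and retry parameters are illustrative assumptions, not actual singularity_cmd code):

```shell
#!/bin/bash
# Hypothetical retry wrapper for a git config invocation that may race
# with concurrent jobs on the same bound HOME/.gitconfig.
retry_git_config () {
    local attempts=5 i
    for i in $(seq 1 "$attempts"); do
        git config "$@" && return 0
        # random sub-second back-off before the next attempt
        sleep "0.$((RANDOM % 9 + 1))"
    done
    echo "git config $* still failing after $attempts attempts" >&2
    return 1
}
```

A failing attempt backs off for a random fraction of a second, so concurrent jobs are unlikely to collide on the config lock file repeatedly.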

Ability to get new containers while having all prior or used containers frozen to specific version

Currently it is not possible to update from the original containers dataset with the purpose of only getting new containers while keeping current ones (possibly already used in some analysis) at their current versions -- "merging" of .datalad/config with the remote version would update all image configs to the new versions.

Possible ways:

  • provide some scripts/freeze_containers script which would make a duplicate section for the container after its original section in .datalad/config, e.g.:
[datalad "containers.bids-validator"]
	updateurl = shub://ReproNim/containers:bids-validator--1.2.3
	image = images/bids/bids-validator--1.2.3.sing
	cmdexec = {img_dspath}/scripts/singularity_cmd run {img} {cmd}

...

### FROZEN CONTAINERS

[datalad "containers.bids-validator"]
	image = images/bids/bids-validator--1.2.3.sing

so whenever a new version is to be merged, a conflict would most likely occur at the end of the file, but at least it would be easy to troubleshoot, and the original "full" record would get its new image entry without affecting the effective value of the image for the container

  • enhancement to above: We can prepopulate that trailing section within this dataset:
### FROZEN CONTAINERS

[datalad "containers.bids-validator"]
# end of  datalad "containers.bids-validator"

[datalad "containers.bids-fmriprep"]
# end of  datalad "containers.bids-fmriprep"
...

### END OF FROZEN CONTAINERS

and make sure that for every container we add (which is to stay above the ### FROZEN CONTAINERS marker) we also add such a blank section within it. Then merges should proceed fine, and users would be able to freeze the needed containers. So the only things needed within this repo are to make sure that new container entries are added correctly, and then to add that script, which would also need to understand this format to add new image entries.

  • provide some scripts/freeze_containers script which would adjust image entries within .datalad/config for specified/all containers so that a merge would cause a conflict and require conscious conflict resolution (or just git merge -s ours, though I am afraid the trailing hunk could then swallow newly added container configs) to decide whether or not to upgrade a specific image version to the new one. There could even be a custom merge helper that performs the merge by simply adopting only the new sections of the config
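The first option above could be sketched as a small helper that duplicates a container's current image entry into a trailing frozen section of .datalad/config (function name, marker, and layout here are assumptions following the proposal, not an actual implementation):

```shell
#!/bin/bash
# Hypothetical sketch of a freeze_containers helper: pin a container's
# image by appending a duplicate `image` entry in a trailing
# "FROZEN CONTAINERS" section, which git config reads last (last one wins).
freeze_container () {
    local config="$1" name="$2" image
    image=$(git config -f "$config" "datalad.containers.${name}.image") || return 1
    {
        printf '\n### FROZEN CONTAINERS\n\n'
        printf '[datalad "containers.%s"]\n' "$name"
        printf '\timage = %s\n' "$image"
    } >> "$config"
}
```

Because git config resolves duplicate keys to the last occurrence, the frozen entry at the end of the file keeps winning even after a merge updates the original section above it.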

Any other way @kyleam @mih @bpoldrack which might come to your mind?

I was not sure whether this would be anything to tackle at the datalad-container level, since it is more relevant to such a "datalad containers" distribution, so I decided to file it here first.

FYI: bats-assert

https://github.com/ztombol/bats-assert provides a nice collection of assertion helpers for use in bats, similar to the ad-hoc ones I started to introduce in #27. It would be nice to migrate those to the ones from bats-assert.

TODO to the Debian maintainer of bats -- package bats-assert as well ;-)
