Comments (13)
I think we should do the CBRAIN execution in two stages:
- Redirect to the CBRAIN portal as a first pass (i.e., send over to CBRAIN the information about which dataset and pipeline should be executed) and let users run the job through CBRAIN.
- Then move to providing modular UI components in our new interface and run the jobs through the CBRAIN API. It may not be necessary then to code up an actual connection to the API, because the React components will have that baked in.
from conp-portal.
Actually, I will close it. Feel free to reopen it if you think there is still work involved in that issue.
We should discuss the new interface in the coming weeks; I will bring point 2 to this discussion.
- Goal 1 will be tracked in #347
- Goal 2 will be tracked in #348
- Goal 3 will be tracked in #349
- Goal 4 will be tracked in #350
I agree with this suggestion. How would you like to proceed?
This has two aspects:
- Following a discussion with @shots47s: from the portal, the frontend should be developed to launch a specific pipeline on a specific dataset using CBRAIN's REST API. The new CBRAIN GUI will soon provide widgets to facilitate that.
- From the command line, this is already possible using Boutiques+DataLad. I don't think we need to add anything specific there.
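For reference, the existing command-line route can be sketched roughly as follows. This is a minimal sketch assuming the `datalad` and `bosh` CLIs are installed; the dataset URL, descriptor, and invocation file names passed in are placeholders, not real CONP paths.

```python
import subprocess

def run_pipeline_on_conp_dataset(dataset_url, descriptor, invocation, dry_run=True):
    """Fetch a CONP dataset with DataLad, then launch a Boutiques
    pipeline on it with bosh. With dry_run=True (the default), the
    commands are returned instead of executed."""
    commands = [
        ["datalad", "install", dataset_url],                 # clone the dataset
        ["datalad", "get", "."],                             # fetch the file content
        ["bosh", "exec", "launch", descriptor, invocation],  # run the pipeline
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands
```

This is the same workflow a user would type interactively; wrapping it in a function is only for illustration.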
@glatard should this issue be closed?
On the CBRAIN front, it would be useful to have a tighter integration than just redirecting to the login page. We should check with the CBRAIN team whether point 2 in @shots47s's list above would be doable.
Discussed briefly at the CONP dev call of September 30th, 2020.
We will refine and split this issue into smaller tasks at the next CONP dev call (October 7th).
@glatard should we invite people from the CBRAIN team to the next CONP dev call to discuss the plan? If so, who should be invited?
Here are a few possible actions regarding this issue, organized in the four Goals summarized below. All goals can be worked on in parallel, except Goal 3, which depends on Goals 1 and 2.
Goal 1: Run CONP pipelines in CBRAIN
Tasks
- Make sure that all CONP pipelines that are available in CBRAIN appear as such in the CONP portal.
- When a user clicks on the CBRAIN button of a CONP pipeline, redirect to the pipeline launch page instead of the generic CBRAIN login page.
How
Point 2 most likely requires storing a CBRAIN tool config id for each pipeline, preferably as a config file also available on GitHub for easier updates. This design would also solve point 1, as a pipeline will be assumed to be installed in CBRAIN if and only if it has a valid tool config id. When registering config ids, one should make sure that they match the exact same pipeline (Boutiques descriptor) as the one registered in CONP.
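A minimal sketch of the config-id mechanism described above; the JSON file name, its example contents, and the CBRAIN URL pattern are assumptions for illustration, not actual CBRAIN routes.

```python
import json

# Hypothetical config file kept in the portal repo on GitHub, e.g.:
# { "fsl_bet": 1024, "mriqc": 2048 }
CONFIG_FILE = "cbrain_tool_config_ids.json"
# The URL pattern below is an assumption, not a documented CBRAIN route.
CBRAIN_LAUNCH_URL = "https://portal.cbrain.mcgill.ca/tasks/new?tool_config_id={}"

def load_tool_config_ids(path=CONFIG_FILE):
    with open(path) as f:
        return json.load(f)

def cbrain_launch_url(pipeline_name, config_ids):
    """Return the CBRAIN launch URL for a pipeline, or None if the
    pipeline has no tool config id (i.e. is not installed in CBRAIN,
    so the portal would hide or disable its CBRAIN button)."""
    tool_config_id = config_ids.get(pipeline_name)
    if tool_config_id is None:
        return None
    return CBRAIN_LAUNCH_URL.format(tool_config_id)
```

Keeping the mapping in a flat JSON file means updating it is a one-line pull request rather than a code change.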
Who
CONP developers (@cmadjar, @mandana-mazaheri), liaise with @natacha-beck to get tool config ids.
Goal 2: Process CONP datasets in CBRAIN
Tasks
- Create a CBRAIN data provider for the whole CONP dataset and access individual datasets through it, or create a separate CBRAIN data provider for each CONP dataset.
- Store a CBRAIN data provider id for each dataset.
- On each dataset page, add a link redirecting to the corresponding dataset page in the CBRAIN portal.
How
The ideal solution would be to use CBRAIN's DataLad data provider. Otherwise, install and download the datasets on a server (suggestion: Beluga, to facilitate processing), and register this location as a regular CBRAIN data provider. Make sure that simple pipelines (e.g., Diagnostics) can be run on the files. In any case, new datasets should be registered automatically (either by creating a new data provider or by registering new files to an existing data provider).
The CBRAIN data provider id should be stored using a mechanism similar to the one used to store CBRAIN tool config ids (see previous point). Suggestion: JSON file available in the portal config on GitHub.
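The data-provider-id mapping could mirror the tool-config-id mechanism. A minimal sketch, where the JSON file name and the CBRAIN route are assumptions for illustration:

```python
import json

# Hypothetical JSON file in the portal config on GitHub, mirroring the
# tool-config-id mechanism: { "<dataset name>": <data provider id> }
PROVIDER_FILE = "cbrain_data_provider_ids.json"
# The route below is an assumption, not a documented CBRAIN URL.
CBRAIN_PROVIDER_URL = "https://portal.cbrain.mcgill.ca/data_providers/{}/browse"

def load_data_provider_ids(path=PROVIDER_FILE):
    with open(path) as f:
        return json.load(f)

def cbrain_dataset_link(dataset_name, provider_ids):
    """Return the CBRAIN page to link from a dataset page, or None if
    the dataset has no registered data provider yet."""
    provider_id = provider_ids.get(dataset_name)
    if provider_id is None:
        return None
    return CBRAIN_PROVIDER_URL.format(provider_id)
```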
Who
This is on the CBRAIN roadmap. We need to make sure that the CBRAIN DataLad provider works as expected.
Liaise with CONP developers for DataLad expertise.
Notes
Something specific has to be done for datasets that require authentication. The CBRAIN team will manually configure permissions.
Goal 3: Process CONP datasets in CBRAIN using CONP pipelines
Tasks
- In the CONP portal, create an interface to select a pipeline from a dataset, and/or to select a dataset from a pipeline
- From this interface, redirect to a pre-populated CBRAIN launch form
How
This needs discussion and might be a bit tricky, as fine-grained file selection within the dataset might be necessary.
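One possible shape for the pre-populated launch form is a redirect URL carrying the pipeline, data provider, and selected files as query parameters. A sketch, assuming hypothetical parameter names and route (CBRAIN's actual launch form may expect different ones):

```python
from urllib.parse import urlencode

# Route and parameter names are assumptions for illustration only.
CBRAIN_LAUNCH_FORM = "https://portal.cbrain.mcgill.ca/tasks/new"

def prepopulated_launch_url(tool_config_id, data_provider_id, file_ids):
    """Build a CBRAIN launch URL pre-populated with the selected
    pipeline, data provider, and fine-grained file selection."""
    params = [("tool_config_id", tool_config_id),
              ("data_provider_id", data_provider_id)]
    # One repeated parameter per selected file.
    params += [("file_ids[]", fid) for fid in file_ids]
    return CBRAIN_LAUNCH_FORM + "?" + urlencode(params)
```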
Who
CONP portal developers: @liamocn, @xlecours
Goal 4: Analytics on pipeline execution
Task
- Create a dashboard of CONP pipeline executions on CONP datasets. This dashboard would track executions done both inside and outside of CBRAIN.
How
- Regularly upload Boutiques provenance records from CBRAIN and any other execution platform.
- Pull the Boutiques provenance records and present them in graphs.
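The aggregation step for the dashboard could be as simple as counting provenance records by pipeline and platform. A sketch, where the `pipeline` and `platform` field names are placeholders for whatever the real Boutiques provenance records contain:

```python
from collections import Counter

def execution_counts(records):
    """Aggregate provenance records into (pipeline, platform) counts,
    ready to feed a bar chart. The 'pipeline' and 'platform' field
    names are placeholders for the real record schema."""
    return Counter(
        (r["pipeline"], r.get("platform", "unknown")) for r in records
    )
```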
Who
@mandana-mazaheri for the provenance dashboard, liaise with @nbeck for provenance upload from CBRAIN.
ooooops, closed the wrong issue.
This issue is stale because it has been open 5 months with no activity. Remove stale label or comment or this will be closed in 3 months.
This issue was closed because it has been stalled for 3 months with no activity.