
conp-portal's Introduction

CONP Portal

Build Status Coverage Status

Requirements

This code requires Python 3.7

Python Virtual Environment

Create a Python virtual environment called venv and install Flask dependencies

In the top level directory:

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Initialize the Flask environment

You can set environment variables in the .flaskenv file. A template is provided for you to start from.

In the top level directory:

cp flaskenv.template .flaskenv

You will need to specify a database environment to use. The easiest for testing purposes is sqlite3, a file-based database system that runs locally on your system.

First, make sure sqlite3 is installed on your system. Information can be found at https://www.sqlite.org/index.html. For Linux, we recommend using the packaged version from your distribution. To make sure it is installed, run sqlite3 from the terminal; the application should start. Type .quit and press Enter to exit.

Edit the DATABASE_URL

In .flaskenv, replace <ENTER FLASK TOP DIR> with the path to your top-level Flask directory. You should already be in it, so you can find the path with pwd.
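For example, a SQLite DATABASE_URL line in .flaskenv could look like the following (the /home/user/conp-portal path is an assumption; substitute your own pwd output):

```
DATABASE_URL=sqlite:////home/user/conp-portal/app.db
```

Note that SQLAlchemy-style sqlite URLs use three slashes plus the absolute path, hence the four slashes above.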

Initialize the test database

We provide some initial data for you to create a functioning database for testing purposes. To initialize this:

In the top level directory:

flask db upgrade
flask seed_test_db
flask update_pipeline_data
flask seed_test_experiments

Run Application

In the top level directory:

flask run

The application should now be live on http://localhost:5000/

Experiments Portal

If you would like to run the experiments portal with hot module replacement, run the following in a separate terminal:

npm start --prefix app/static/lib/experiments-portal

To build the experiments portal, run:

npm run build --prefix app/static/lib/experiments-portal

Testing

We use the pytest framework for testing all aspects of the application. Tests are run automatically by TravisCI when a pull request is made.

The tests live in the tests directory and should not affect any of the development or production builds. Please feel free to add unit and functional tests with any new feature. Pytest will automatically pick up any test file that starts with test_ placed under the tests directory. Please adhere to the structure there.

  • For unit tests of classes and utilities, use the folder tests/unit_tests
  • For database-specific testing, use the folder tests/database_tests
  • For blueprint and route testing, use the tests/blueprint_specific_tests folder and place the test in the appropriate blueprint-specific directory

Coding standards

In order to keep the Python code maintainable and readable, please run ./lint.sh to make sure the code is up to standard. TravisCI will check this.

AWS Cloud9 (Experimental)

Some experimental testing cases are being explored with AWS Cloud9.

To run the application on a Cloud9 instance:

    flask run --host=0.0.0.0 --port=8080

Deployment

This Flask application is deployed on Heroku. More information will be available soon.

Docker

docker build -t conp-portal .
docker run -d -p 4000:4000 -p 8080:8080 -v ${PWD}/app:/app/app --rm conp-portal
# replace <container-id> with the ID printed by docker run (or shown by docker ps)
docker exec -it <container-id> bash -c "cd /app/app/static/v2 && yarn dev"

conp-portal's People

Contributors

candicecz, carona898, cmadjar, dependabot[bot], desm1th, driusan, emmetaobrien, ghpbz, glatard, h0bb3s87, haoweiqiu, jbpoline, joeyzhou98, johnsaigle, joshunrau, laemtl, mandana-mazaheri, mathdugre, natacha-beck, paiva, papillonmcgill, surchs, tkkuehn, xlecours, zxenia


conp-portal's Issues

Refactor the portal app to use blueprints

Purpose

Provides a more modular framework for developing Flask apps by delineating scope among the different functions of the app. For example, the authorization portion can go in its own drop-in authorization app that is added to the main Flask app.

Context

Sets up the ability to make our overall development more modular, and also provides a path to unit testing of the platform.

Possible Implementation (optional)

Basically, there is an __init__.py for every app we would like to add that defines a Flask Blueprint. For example:

In app/auth, the __init__.py will have:

from flask import Blueprint

auth_bp = Blueprint('auth', __name__, template_folder='../templates')

from app.auth import routes

and then in app/__init__.py:

from app.auth import auth_bp

app.register_blueprint(auth_bp)

Modules like Flask-login are stubbed in the global space, and then a new function called create_app is used to create the full app.
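A minimal, runnable sketch of that factory pattern is below. The inline blueprint and the /login route are illustrative assumptions, not the portal's actual code; in the real layout the blueprint lives in app/auth/__init__.py as shown above.

```python
from flask import Blueprint, Flask


def create_app():
    """Application factory: build and configure the full app."""
    app = Flask(__name__)

    # Defined inline here to keep the sketch self-contained; normally this
    # comes from app/auth/__init__.py and imports its own routes module.
    auth_bp = Blueprint('auth', __name__)

    @auth_bp.route('/login')
    def login():
        return 'login page'

    app.register_blueprint(auth_bp)
    return app
```

Extensions such as Flask-Login would be instantiated at module level and bound inside create_app with their init_app(app) method.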

I have an example of what this looks like for a role based authentication scheme using Flask-User at https://github.com/shots47s/NRC-Map-Converter-Flask

Should updating pipelines add data to the database or should we rely on the file based search that is there now?

Purpose

The updating of pipeline data from Zenodo happens at several places in the portal workflow by spawning threads. This leads to inconsistent datasets, and the file-based nature of the process seems fragile.

Context

Should we have all of the data for the portal translated to the database, so that we have models to manipulate, or is it best to rely on the cache files used now? The database may be easier to index with something like Elasticsearch.

Possible Implementation (optional)

Rewrite the system so that updating the pipeline data modifies records in the database rather than creating them on the fly from files in a .boutiques directory.


Is a login required for retrieving open data in the portal?

Purpose

Do we require a login to access all of the data in the platform, even completely open datasets, or not (independent of what that login is)?

Context

Right now, the portal does not require a login to see and access the data that is unrestricted and open. It only requires it to see registered data (in the current case, just the PreventAD dataset). We should decide once and for all, what the model is.

  1. Have users login to use the portal and access all of the data
    a. Advantage: Can track all downloads to an individual user.

  2. Have users only need to log in to get access to data that is not unrestricted and open
    a. Advantage: More open
    b. Disadvantage: We will only be able to log access to datasets via IP address where the requests are coming from.

Add cli.py and make proper command line utilities for the app

Purpose

Create a place to build command-line management tools for the portal, such as database operations (backup, restore, and seeding) as well as other functions we may need in the future.

Context

Having a succinct place to put these tools will make future development easier and facilitate testing and translation.

Possible Implementation (optional)

Create a separate cli.py that can be imported into the app and given the proper application context, so that testing can also be done on a mock database without rewriting any code or configuration.

Check out what I did in the https://github.com/shots47s/NRC-Map-Converter-Flask for guidance.
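As a sketch, such a cli.py could register management commands directly on the Flask CLI. The command name and stub body below are assumptions for illustration, loosely mirroring the existing flask seed_test_db command:

```python
import click
from flask import Flask

app = Flask(__name__)


@app.cli.command('seed-test-db')
def seed_test_db():
    """Seed the test database (stub body for illustration)."""
    # A real implementation would create tables and insert seed rows here,
    # running inside the application context so db extensions are available.
    click.echo('seeding test database')
```

The command then runs as `flask seed-test-db`, and the same function can be invoked in tests against a mock database.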

Related issues (optional)

#56

Create distinct Development and Production configuration environments

Purpose

In order to facilitate development and deployment, create the capability to distinctly set up a development vs. production environment.

Context

This way developers can set up a local environment without needing a dedicated VM. The dev environment should be quickly deployable on a laptop.

Possible Implementation (optional)

Use a well-defined configuration definition for the Flask app, potentially one per environment, and allow a .flaskenv file to define machine-specific local variables with flask-dotenv.
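One common pattern is a set of config classes, one per environment, selected by an environment variable from .flaskenv. The class names and settings below are illustrative assumptions, not the portal's actual configuration:

```python
import os


class Config:
    """Settings shared by all environments."""
    SECRET_KEY = os.environ.get('SECRET_KEY', 'dev-only-not-for-production')
    SQLALCHEMY_TRACK_MODIFICATIONS = False


class DevelopmentConfig(Config):
    DEBUG = True
    # File-based SQLite keeps the dev environment laptop-friendly.
    SQLALCHEMY_DATABASE_URI = 'sqlite:///dev.db'


class ProductionConfig(Config):
    DEBUG = False
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL', '')
```

The application factory would then call app.config.from_object(DevelopmentConfig) or app.config.from_object(ProductionConfig) depending on the environment.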

Related issues (optional)

#71

css suggestion for img

Currently the Bootstrap CSS is allowing images to change aspect ratio as they are resized. I suggest adding the following to the CSS:

img {
   width: 100%;
   height: auto;
}

Beta Release Specs

Need to write the roadmap and beta release specs. Plan to open between July and August

Setting up VM webserver

Purpose

The conp-portal VM needs a production capable web server set up and running. Currently this does not seem to be configured correctly.

Possible Implementation

I will:

  • Set up NGINX+ uWSGI for the flask app
  • Configure the firewall / fail2ban / alerts emailed to admins for security
  • Set up letsencrypt https certs for the portal.conp.ca domain + set up auto-renewal of certs (unless you already have certs available, please let me know)
  • Ensure DNS is set up to allow the portal.conp.ca domain to be accessed by end users (this does not appear to be in place yet, but please let me know if you guys already have something ready to roll out)
  • Write an Ansible playbook to document the correct configuration and allow the VM to be re-provisioned from scratch (If a configuration management package other than Ansible is already in use by the project let me know and I'll use that instead)

@shots47s , @glatard : Thoughts?

Pipeline demo in Neurolibre

We should prepare a basic demo on how to process data with a pipeline in a Neurolibre notebook. We could for instance do brain extraction of a few subjects using zenodo.1482743. The way to do it would be to use Boutiques' desc2func feature, in a way similar to:

from boutiques.descriptor2func import function
mcflirt = function('zenodo.2602109')
mcflirt(in_file='/home/glatard/data/test.nii.gz')

@stikov @pbellec

Porting pipelines to CBRAIN

Tools found through the portal will be identified as available in CBRAIN when their online-platform-urls property (a list) contains a URL with the string "cbrain" in it. Many pipelines returned by search aren't currently available in CBRAIN, and if they are, it's likely that their descriptor doesn't include the URL of a CBRAIN portal. Pipelines returned by bosh search should be reviewed, added to CBRAIN plugins whenever relevant, and republished.

@shots47s

Pipeline execution in CBRAIN

#51 will result in the user being redirected to CBRAIN to run pipelines. We should discuss whether we would like to have a tighter integration with CBRAIN. Steps could be:

  1. Pre-populate a pipeline launch form with a dataset and default parameters, instead of just redirecting to the portal.
  2. Launch the pipeline from the CONP portal.

I think (1) is doable and (2) is not really necessary but that would require some discussion.

@shots47s

Tools & Pipelines Key Points

Overview of the search engine:
Search through tools using the bosh command line (from Boutiques). Boutiques should act as an "app store" framework, retrieving tools through bosh search. The unique tool identifier (DOI) is returned from Zenodo and can be used to allow the user to download the tool.

One key element about the search is that it should use web caching, as individual bosh searches are quite slow.
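The caching idea could be sketched as a simple time-to-live memoization decorator wrapping the slow search call. This is an illustration of the technique only; the portal's actual cache (and the timed_cache name) are not from the source:

```python
import functools
import time


def timed_cache(ttl_seconds):
    """Cache a function's results, recomputing after ttl_seconds."""
    def decorator(fn):
        cache = {}

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]          # fresh cached result
            result = fn(*args)         # the slow call, e.g. a bosh search
            cache[args] = (now, result)
            return result

        return wrapper
    return decorator
```

A search endpoint would then decorate its backend call, e.g. @timed_cache(300) on a search_tools(query) function, so repeated identical queries skip the slow bosh invocation.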

Dynamic updating of datasets from the conp-datasets repo.

Purpose

Currently, the datasets in our portal are statically updated at install time through a kludgy migration. They should be populated dynamically through calls to whatever backend API we are using, or pulled from a registry process.

Context

The portal should be dynamic in how it presents its datasets.

Possible Implementation (optional)

  1. Have this automatically done from time to time by polling the conp-dataset repo, potentially using datalad itself.
  2. Create a means of registering datasets in the platform, and defining where they may be.

We could support both, but this is an open issue that would be good to discuss amongst the team and TSC before we move forward.

Select portal framework

Assess and select suitable framework based on initial requirements, e.g. Drupal, Joomla, etc.

Missing pipelines descriptors.json

The file all_descriptors.json is not downloaded into the local .cache directory, so when I search for pipelines, an error is raised saying the file cannot be found.

Expected Behavior

The file all_descriptors.json should be downloaded before launching the application to avoid this error.

Current Behavior

Screenshot (2019-07-16) of the error page, followed by the traceback:

Traceback (most recent call last):
  File "/home/s/Work/MNI/CONP/dev/conp-portal/venv/lib/python3.5/site-packages/flask/app.py", line 2309, in __call__
    return self.wsgi_app(environ, start_response)
  File "/home/s/Work/MNI/CONP/dev/conp-portal/venv/lib/python3.5/site-packages/flask/app.py", line 2295, in wsgi_app
    response = self.handle_exception(e)
  File "/home/s/Work/MNI/CONP/dev/conp-portal/venv/lib/python3.5/site-packages/flask/app.py", line 1741, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/home/s/Work/MNI/CONP/dev/conp-portal/venv/lib/python3.5/site-packages/flask/_compat.py", line 35, in reraise
    raise value
  File "/home/s/Work/MNI/CONP/dev/conp-portal/venv/lib/python3.5/site-packages/flask/app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/s/Work/MNI/CONP/dev/conp-portal/venv/lib/python3.5/site-packages/flask/app.py", line 1815, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/s/Work/MNI/CONP/dev/conp-portal/venv/lib/python3.5/site-packages/flask/app.py", line 1718, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/home/s/Work/MNI/CONP/dev/conp-portal/venv/lib/python3.5/site-packages/flask/_compat.py", line 35, in reraise
    raise value
  File "/home/s/Work/MNI/CONP/dev/conp-portal/venv/lib/python3.5/site-packages/flask/app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/s/Work/MNI/CONP/dev/conp-portal/venv/lib/python3.5/site-packages/flask/app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/s/Work/MNI/CONP/dev/conp-portal/app/routes.py", line 417, in pipeline_search
    with open(all_desc_path, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/s/.cache/boutiques/all_descriptors.json'

Environment (optional)

This happened on Ubuntu-based systems. I was able to reproduce the bug 3 times. It does not seem to affect Mac users.


Add link to online platform where pipeline is installed

Purpose

Pipeline objects have an online-platform-urls list property that refers to Web platforms where the pipeline is installed. When this property is defined, pipeline search should add an icon linking to a platform in online-platform-urls in the pipeline record.

  • When the linked URL contains the string "cbrain", the icon should be that of CBRAIN. Otherwise, it should be a default "globe" icon.
  • When online-platform-urls has more than 1 entry, the portal should select an entry containing the string "cbrain" if available. Otherwise, it should select the first entry in the list. If multiple URLs exist that contain the string "cbrain", it should choose the first one.

Pipeline zenodo.3267250 currently has an online-platform-urls property defined with an element containing "cbrain".
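The selection rules above could be sketched as a small helper (pick_platform_url is a hypothetical name, not an existing portal function):

```python
def pick_platform_url(urls):
    """Choose which online-platform-urls entry to link.

    Prefer the first URL containing "cbrain"; otherwise take the first
    entry in the list; return None when the list is empty or missing.
    """
    if not urls:
        return None
    cbrain_urls = [u for u in urls if 'cbrain' in u]
    return cbrain_urls[0] if cbrain_urls else urls[0]
```

The caller would then show the CBRAIN icon when the chosen URL contains "cbrain", and the generic globe icon otherwise.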

Context

It is the first step toward enabling pipeline execution on CBRAIN, see #63. As part of #58, @shots47s added a CBRAIN URL to zenodo.3267250, but it is not showing in the portal.

Possible Implementation

The pipeline JS record reuses the dataset record. The link should be added to the CBRAIN logo there:
(Screenshot from 2019-07-03 showing the dataset record where the CBRAIN logo would be added.)
If the URL doesn't point to CBRAIN, a generic logo should be used, such as https://fontawesome.com/icons/globe?style=regular

Related issues

#58
#63

@joeyzhou98 would you like to take on this when done with your current task?

Zenodo vs CONP access restriction

Purpose

Ensure that datasets submitted to CONP through Zenodo have equivalent access restrictions: the access restrictions displayed and selected on Zenodo should match what is advertised on CONP.

Context

Zenodo has different access restrictions.

Improve test coverage in pytest

Purpose

Having added the framework for testing the project, tests must now be added to increase overall test coverage

Context

Test coverage can be monitored using Coveralls, which was added as part of #56

Related issues (optional)

#56

[Suggestion] Forums/Collaboration Tools

On the Forums section of the CONP portal, it would be nice to explore:

  1. Having a link to Neurostars
  2. Brainhack slack channel
  3. Discourse platform (either MCIN's or create a CONP discourse forum)

Establish CONP style guide CSS

Purpose

This may have already been started or looked at, but we should create a definitive style guide / CSS for the portal so that there is not a bunch of custom CSS floating around.

Context

Having a definitive style guide will ease development and ensure that, as we move forward, the portal has a uniform look and feel.

Possible Implementation (optional)

Review the current CSS files in the repo to clean them up and ensure we are moving in the right direction.

Pipeline UI improvements

As discussed with @paiva and @joeyzhou98, #51 will result in a minimal, usable interface for pipeline search. That minimal version should be extended to comply with the mockup initially designed. @joeyzhou98 could add more information following our discussion.

Design UX/UI

Design initial portal UX/UI, including

  • branding
  • style
  • UI layout
  • workflow / navigation

Clean up CONP VM environment

Purpose

There is a need to reorganize and clean up our current set of CONP VMs. Currently, each has been deployed with a single user account shared among several people, and none of them serves a clear purpose at this point.

In order for the project to move from a demo platform to a full fledged production environment with a geographically shared development effort, we need to bring these into the MCIN fold and reevaluate how our developers interact with the infrastructure.

Context

As we take CONP development from a demonstration effort to a long-term, sustained production environment, we need to reevaluate how we securely and appropriately utilize MCIN infrastructure. Ideally, the VMs created should be for testing and hosting deployments rather than shared environments for development, as that is not a scalable solution, especially for remote development. Additionally, it is incredibly insecure to have shared VMs with only one user account that is not tied to a centralized user accounting system, as auditing activity is impossible.

Possible Implementation (optional) (not sequential)

  1. Audit the current set of VMs deployed for CONP and determine their purpose and need.
  2. Tie all existing VMs to the MCIN LDAP with appropriate groupings to facilitate who needs what access to the VMs (e.g. who needs sudo).
  3. Ensure all developers that need access to MCIN VMs are in the LDAP and placed in appropriate groups.
  4. Develop a plan for on-boarding new developers, keeping in mind that many may be geographically distributed and will need local environments set up.

Related issues (optional)

#69

[Mobile] CSS changes from conp.ca website (optimize responsive)

/* styles the nav container */
.navbar.navbar-inverse-red {
    position: fixed;
    z-index: 99999;
    top: 0;
    left: 0;
    width: 100%;
    min-height: 60px;
    background: #fff;
    border-bottom: 2px solid;
    border-bottom-color: #FF0000;
    box-shadow: 2px 2px 8px rgba(0,0,0,.5);
}

/* overwrites bootstrap margin on nav container */
.navbar-nav {
    margin: 0 0 20px;
}

/* overwrites styles for list item anchors */
.navbar-nav>li>a {
    position: relative;
    display: block;
    padding: 10px 10px;
    font-weight: 400;
    text-decoration: none;
    line-height: 1.6em;
}

/* restyles the hamburger; the website uses Font Awesome, so this makes it look similar */
.navbar-toggle .icon-bar {
    display: block;
    width: 30px;
    height: 5px;
    border-radius: 8px;
    background-color: #343434;
}


<!-- link to google fonts for the head of the site -->
<link href="https://fonts.googleapis.com/css?family=PT+Sans+Narrow:400,700|PT+Sans:400,700" rel="stylesheet">

/* set the fonts of the site */
body {
    color: #343434;
    font-family: 'PT Sans', sans-serif;
    font-size: 16px;
    font-size: 1rem;
    line-height: 1.5;
}

h1, h2, h3, h4, h5, h6 {
    clear: both;
    font-family: 'PT Sans Narrow', sans-serif;
    line-height: 1.3em;
    margin-top: 1rem;
    margin-bottom: 1rem;
    font-weight: 400;
}

h1 { font-size: 2.4em; }
h2 { font-size: 2.2em; }
h3 { font-size: 2em; }
h4 { font-size: 1.8em; }
h5 { font-size: 1.6em; }
h6 { font-size: 1.4em; }

/* note: you'll need to reset the navigation items to the default lower case except first letter */

Dataset presentation in portal

Purpose

The synced view of Datalad datasets in the portal is awesome! But I think it would benefit from the following improvements:

  • Find a way to show logos as was the case before. We could check whether a logo.{png,jpg} file is present at the root of the dataset.
  • If the DATS.json model has no title, use the title of the dataset (currently, the PERFORM dataset shows as "No title in DATS.json").

Dataset upload requirement checklist

Purpose

Ensure that when a user tries to upload a new dataset, a message reminds them of the dataset-type (open, restricted, registered, ...) requirements connected with CONP ethics and governance.

Context

To respect the ethics and privacy of the people involved.

Fix Mobile Layouts

The layout breaks for the NeuroLibre icon, download buttons, and data-search aspects. We need to show cards instead.

Documentation and publications for Prevent-AD

Here is a block of code to add to the Prevent-AD description

<h2>Documentation</h2>
<ul>
	<li>
		<a href="https://conp.ca/wp-content/uploads/2019/04/PREVENT-AD-short-description.pdf">Prevent-AD short description</a>
	</li>
	<li>
		<a href="https://conp.ca/wp-content/uploads/2019/04/MRI-protocol-for-open-science.pdf">MRI protocol for Open Science</a>
	</li>
</ul>

<h2>List of Publications</h2>
<ol>
	<li>
		<p>Brain properties predict proximity to symptom onset in sporadic Alzheimer's disease. Vogel JW, Vachon-Presseau E, Pichet Binette A, Tam A, Orban P, La Joie R, Savard M, Picard C, Poirier J, Bellec P, Breitner JCS, Villeneuve S; Alzheimer’s Disease Neuroimaging Initiative and the PREVENT-AD Research Group. Brain. 2018 Apr 23.</p>
		
		<a href="https://www.ncbi.nlm.nih.gov/pubmed/29688388">doi: 10.1093/brain/awy093.</a>
	</li>
	<li>
		<p>Bi-directional Association of Cerebrospinal Fluid Immune Markers with Stage of Alzheimer’s Disease Pathogenesis. Meyer PF, Savard M, Poirier J, Labonté A, Rosa-Neto P, Weitz TM, Town T, Breitner, J; Alzheimer’s Disease Neuroimaging Initiative; PREVENT-AD Research Group. J Alzheimers Dis. 2018 Apr 11.</p>
		
		<a href="https://www.ncbi.nlm.nih.gov/pubmed/29660934">doi: 10.3233/JAD-170887.</a>
	</li>
	<li>
		<p>Subjective cognitive decline is associated with altered Default Mode Network connectivity in individuals with a family history of Alzheimer’s Disease, Verfaillie SCJ, Pichet Binette A, Vachon-Presseau E, Tabrizi S, Savard M, Bellec P, Ossenkoppele R, Scheltens P, van der Flier WM, Breitner JC, Villeneuve S for the PREVENT-AD Research Group. Biological Psychiatry: Cognitive Neurosciences and Neuroimaging. May 2018.</p>
		
		<a href="https://www.ncbi.nlm.nih.gov/pubmed/29735156">doi: 10.1016/j.bpsc.2017</a>
	</li>
	<li>
		<p>Proximity to Parental Symptoms Onset and Amyloid-β Burden in sporadic Alzheimer disease. Villeneuve S, Vogel JW, Gonneaud J, Pichet Binette A, Rosa-Neto P, Gauthier S, Bateman RJ, Fagan AM, Morris JC, Benzinger TL, Johnson SC, Breitner JC, Poirier J, for the Prevent-AD group. JAMA Neurology. 2018 May 1.</p>
		
		<a href="https://www.ncbi.nlm.nih.gov/pubmed/29482212">doi: 10.1001/jamaneurol.2017.5135.</a>
	</li>
	<li>
		<p>Regionally specific changes in the hippocampal circuitry accompany progression of cerebrospinal fluid biomarkers in preclinical Alzheimer’s disease, Tardif CL, Devenyi G, Amaral RSC, Pelleieux S, Poirier J, Rosa-Neto P, Breitner J, Chakravarty MM; PREVENT-AD Research Group., Hum Brain Mapp. 2017 Nov 21.</p>
		
		<a href="https://www.ncbi.nlm.nih.gov/pubmed/29164798">doi: 10.1002/hbm.23897</a>
	</li>
	<li>
		<p>Deficit in Central Auditory Processing as a Biomarker of Pre-Clinical Alzheimer’s Disease, Tuwaig, Miranda, Savard, Mélissa, Jutras, Benoît, Judes Poirier, Louis Collins, Pedro Rosa-Neto, David Fontaine and John C.S. Breitner for the PREVENT-AD Research Group, Journal of Alzheimer's Disease, vol. 60, no. 4, pp. 1589-1600, 21 Aug 2017.</p>
		
		<a href="https://www.ncbi.nlm.nih.gov/pubmed/28984583">doi: 10.3233/JAD-170545</a>
	</li>
	<li>
		<p>Validation of a Regression Technique for Segmentation of White Matter Hyperintensities in Alzheimer's Disease. Dadar M, Pascoal TA, Manitsirikul S, Misquitta K, Fonov VS, Tartaglia MC, Breitner J, Rosa-Neto P, Carmichael OT, Decarli C, Collins DL. IEEE Trans Med Imaging. 2017 Aug;36(8):1758-1768.</p>
		
		<a href="https://www.ncbi.nlm.nih.gov/pubmed/28422655">doi: 10.1109/TMI.2017.2693978</a>
	</li>
	<li>
		<p>Odor identification as a biomarker of preclinical AD in older adults at risk. Lafaille-Magnan ME, Poirier J, Etienne P, Tremblay-Mercier J, Frenette J, Rosa-Neto P, Breitner JCS; PREVENT-AD Research Group. Neurology. 2017 Jul 25;89(4):327-335.</p>
		
		<a href="https://www.ncbi.nlm.nih.gov/pubmed/28659431">doi: 10.1212/WNL.0000000000004159</a>
	</li>
	<li>
		<p>Alzheimer’s Progression Score’: Development of a Biomarker Summary Outcome for AD Prevention Trials J.-M. Leoutsakos, A.L. Gross , R.N. Jones, M.S. Albert, J.C.S. Breitner (2016) The Journal of Prevention of Alzheimer’s Disease - JPAD© Volume 3, Number 4, 2016.</p>
		
		<a href="https://www.ncbi.nlm.nih.gov/pubmed/29034223">doi: 10.14283/jpad.2016</a>
	</li>
	<li>
		<p>Rationale and Structure for a new Center for Studies on Prevention of Alzheimer’s Disease (StoP-AD) J.C.S. Breitner, J. Poirier, P.E. Etienne1, J.M. Leoutsakos for the PREVENT-AD Research Group (2016) The Journal of Prevention of Alzheimer’s Disease - JPAD© Volume 3, Number 4, 2016.</p>
		
		<a href="https://www.ncbi.nlm.nih.gov/pubmed/29199324">doi: 10.14283/jpad.2016.121.</a>
	</li>
	<li>
		<p>Test-retest resting-state fMRI in healthy elderly persons with a family history of Alzheimer's disease. Orban P, Madjar C, Savard M, Dansereau C, Tam A, Das S, Evans AC, Rosa-Neto P, Breitner JC, Bellec P; PREVENT-AD Research Group. Sci Data. 2015 Oct 13;2:150043.</p>
		
		<a href="https://www.ncbi.nlm.nih.gov/pubmed/26504522">doi: 10.1038/sdata.2015.43.</a>
	</li>
	<li>
		<p>INTREPAD: a randomized trial of naproxen to slow progress of pre-symptomatic Alzheimer’s disease, Pierre François Meyer*, MSc; Jennifer Tremblay-Mercier*, MSc; Jeannie Leoutsakos, PhD; Cécile Madjar, MSc; Marie-Élyse Lafaille-Maignan, PhD; Melissa Savard, MSc; Pedro Rosa-Neto, MD, PhD; Judes Poirier, PhD**; Pierre Etienne, MD**; John Breitner, MD, MPH** for the PREVENT-AD research group. Neurology, 2019 Apr 5 [Epub ahead of print].</p>
		
		<a href="https://www.ncbi.nlm.nih.gov/pubmed/30952794">doi: 10.1212/WNL.0000000000007232.</a>
	</li>
</ol>

Establish user stories

Establish iteration user stories, focused on:

  • General user operations and activities
  • Activities related to meta-data query and handling

[Analytics] CONP Analytics - phase 1

Summary of the analytics to be added on the short term from our Matomo server (Feb 23rd, 2021)

  • display Matomo widgets directly into the analytics page using iframes #403
  • display number of visits to the portal across time #404
  • display number of views per datasets/tools #405
  • display popular keyword searches #406
  • display analytics on dataset providers #407

Old description of the ticket (Mar 11, 2019)

It would be ideal to deploy an analytics tool and some event-tracking systems to better understand user behavior and, if possible, find out what is being done with the downloaded data.

Event tracking tags should be consistent with those to be used by the LORIS platform so that we are consistent across both platforms.

Some event tracking tools include:

  • Google Analytics
  • Snowplow Analytics
  • Piwik
  • Google WebMaster Tools

Some dashboard tools include:

  • Google Analytics
  • Grafana

Add login pages to the styleguide

Purpose

The login pages have a different base and need to be introduced into the styleguide.

Context

So that we have a definitive styleguide for the whole website.

Related issues (optional)

#88

Download datasets from portal

Purpose

We want users to be able to download datasets that they are authorized to access without additional logins. Primarily, this is a current issue for LORIS datasets, but it will also need to be addressed for datasets in other platforms (e.g. BrainCode).

Context

We want users to not need to go to other platforms to get the data unless truly necessary.

Possible Implementation (optional)

For LORIS, as a quick-and-dirty solution, we decided to create a global user that would be used to access data through the portal.
