poldrack / cogat
Cognitive Atlas
Home Page: http://www.cognitiveatlas.org
The link to the page is here
You can reproduce the error by searching for "risk aversion"
The "stop signal task" has defined the contrasts "stop-critical" and "stop-noncritical" and we do not have these contrasts defined in our openfmri dataset. This likely means that we simply did not calculate them in the analysis, but I want to make sure they are valid contrasts. We should also discuss if the cognitive atlas generally should include "all possible contrasts" or only the ones that we think are meaningful.
ironically, working memory should be removed from http://www.cognitiveatlas.org/term/id/trm_550b50095d4a3 as it will be linked via more specific terms!
our first instantiation does not have "emotional suppression" as a concept, and the second does:
http://www.cognitiveatlas.org/task/id/trm_4da890594742a
rows 129 and 57 in the spreadsheet, I think, are describing the same contrast - but one uses "emotional suppression" and the other "emotional reappraisal"
There are two instances of "probabilistic classification task" in cognitive atlas - one describes a "feedback" contrast and the other is more specific to describe a "positive feedback" and "negative feedback" contrast. I think we need to look over these two in the context of the group, because some assertions are not logical. For example, the first makes the assertion that visual form recognition is measured by both positive and negative feedback, and the second does not make any assertion between visual form recognition and feedback (but the assertion is made for visual word recognition, and positive/negative feedback also have this assertion). It could be the case that the two instantiations of the task are different, but we should double check this.
If it is incorrect, we should either be more specific for the first task (i.e., add positive/negative) or add the missing contrast (i.e., assertion that visual form recognition --> feedback)
I want to clarify the language that is used to describe relationships in the RDF. Specifically:
(note this is not all of the examples, just a distinct subset)
If we look at the stroop task page, it looks like:
narrower == is a child of (what the interface calls a progenitor, meaning someone cloned the task)
broader == is a parent of? This would mean that the "Stroop test" was cloned from the "selective attention task" but if you see the link, I don't see Stroop task as a progenitor.
I would interpret these first two "is a (something) synonym" as "is a" relationships. But then when we see this:
is descended from == parents of. This is an assertion that can be made if I click + Add Phylogeny. But what is the difference between this and "is a broader synonym?" What seems to be happening (I think) is that if a task is cloned, this defines a relationship for both the parent and child. However if a manual annotation is made with + Add Phylogeny, the relationship is not defined for whatever second task is selected. I can't "undo" relationships that I describe in this way, so I'm hesitant to test it out.
I would have guessed that "collections" encompass the "is part of" relationship (e.g., if a task belongs to a collection, we would say "Task A is_part_of Collection B"); however, I don't see any fields relevant to collections in the task RDF (the ids start with tco).
I had been capitalizing all letters of task names, but I think the standard should be lower case, with upper case used only for last names, acronyms, etc.
I was reviewing the changes from 0.3.0 -> 0.3.1 so that I could update the version we import into the NIF-Ontology and discovered that there is a major mismatch between the identifiers.
Here is a diff which makes the issue easy to see.
All of NeuroLex/InterLex, and anyone who used them, has the mapping from 0.3.0. Fortunately it doesn't look like it is too hard to fix in the existing OWL file, but the fact that this has happened probably means that the generation script has a bug.
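Since the fix amounts to rewriting stale identifiers in place in the OWL file, a minimal sketch could look like the following. The id pairs here are invented placeholders; the real mapping would be derived from the 0.3.0 -> 0.3.1 diff.

```python
# Hypothetical repair sketch: rewrite stale identifiers in the existing
# OWL file. The id pairs below are placeholders, NOT the real mismatch;
# the actual mapping would come from the version diff.
id_map = {
    "trm_oldid0000001": "trm_newid0000001",  # invented example pair
}

def remap_ids(owl_text: str, id_map: dict) -> str:
    """Replace each stale identifier with its corrected counterpart."""
    for old, new in id_map.items():
        owl_text = owl_text.replace(old, new)
    return owl_text

snippet = '<owl:Class rdf:about="http://www.cognitiveatlas.org/id/trm_oldid0000001"/>'
print(remap_ids(snippet, id_map))
```

This only works as long as the stale ids never appear as substrings of other ids; a safer version would operate on parsed rdf:about attributes.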
From Oscar Corcho:
When I download the RDF (link at the bottom of the page) I get a small RDF representation for that SKOS concept, and it has:
<skos:altLabel>EPQ, EPI</skos:altLabel>
In fact, it should be:
<skos:altLabel>EPQ</skos:altLabel>
<skos:altLabel>EPI</skos:altLabel>
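Until the export is fixed, a consumer-side workaround is to split comma-joined altLabels when loading the RDF. A minimal sketch using only the Python standard library (the wrapping rdf:Description element is invented for the example; the element and namespace names follow SKOS):

```python
# Workaround sketch: split a comma-joined skos:altLabel ("EPQ, EPI")
# into separate altLabel elements after parsing. The input snippet is
# a made-up minimal document, not the actual Cognitive Atlas export.
import xml.etree.ElementTree as ET

SKOS = "http://www.w3.org/2004/02/skos/core#"

xml = (
    '<rdf:Description xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" '
    'xmlns:skos="http://www.w3.org/2004/02/skos/core#">'
    "<skos:altLabel>EPQ, EPI</skos:altLabel>"
    "</rdf:Description>"
)

root = ET.fromstring(xml)
tag = "{%s}altLabel" % SKOS
for elem in list(root.findall(tag)):
    labels = [part.strip() for part in (elem.text or "").split(",") if part.strip()]
    if len(labels) > 1:
        root.remove(elem)              # drop the comma-joined label
        for label in labels:           # emit one altLabel per value
            ET.SubElement(root, tag).text = label

print([e.text for e in root.findall(tag)])   # ['EPQ', 'EPI']
```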
We already have "emotional face recognition"
"inhibition" should be deleted from http://www.cognitiveatlas.org/term/id/trm_4cacf3fbc503b as it will be parent of response inhibition
I duplicated this contrast with this second one here, and since the second version has more detail, I think we should delete the older one.
When cloning or updating a task, it is possible that a user would want to change / delete contrasts. When this is done, the contrasts still linger under current Concept assertions. For example, this task was just cloned, and the old assertions need to be deleted.
visual perception should be deleted from http://www.cognitiveatlas.org/term/id/trm_550b5b066d37b as it is too general
The current RDF format does not give enough detail to distinguish between parents and mapped concepts. For example, here is a modified Stroop task. I would want to programmatically retrieve the parents; however, the parents and concept assertions are both represented as follows:
<skos:related rdf:resource="http://www.cognitiveatlas.org/id/trm_4a3fd79d0af66"/>
<skos:related rdf:resource="http://www.cognitiveatlas.org/id/trm_4a3fd79d0af66"/>
<skos:related rdf:resource="http://www.cognitiveatlas.org/id/trm_4a3fd79d0af66"/>
<skos:related rdf:resource="http://www.cognitiveatlas.org/id/trm_4a3fd79d0a038"/>
<skos:related rdf:resource="http://www.cognitiveatlas.org/id/trm_5542841f3dcd5"/>
<skos:related rdf:resource="http://www.cognitiveatlas.org/id/tsk_4a57abb949e27"/>
<skos:related rdf:resource="http://www.cognitiveatlas.org/id/tsk_4a57abb949e27"/>
I can distinguish the two based on the format of the id (e.g., "tsk" vs "trm"); however, if other links are defined, I can't be confident that all "tsk" terms are indeed parents. The same assertions are also mentioned in the example fields, for example:
<skos:example>color-word stroop task is descended from Stroop task</skos:example>
<skos:example>color-word stroop task is a narrower synonym of Stroop task</skos:example>
<skos:example>response inhibition is measured by the contrast of trials in the color-word stroop task</skos:example>
<skos:example>response inhibition is measured by the contrast of incongruent - congruent trials in the color-word stroop task</skos:example>
<skos:example>response inhibition is measured by the contrast of incongruent - neutral trials in the color-word stroop task</skos:example>
<skos:example>decision making is measured by the contrast of trials in the color-word stroop task</skos:example>
So this would mean that, in order to use these relationships, I would need to do text mining for "contrast" or for "descended from."
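That text-mining workaround can be sketched as follows: split the skos:related targets by id prefix, and classify the skos:example strings with simple phrase patterns. The ids and example strings are copied from the RDF above; the patterns ("is descended from", "is measured by the contrast of ... in ...") are my guess at how the Cognitive Atlas phrases its assertions, not a documented grammar.

```python
# Workaround sketch (not the official API): distinguish skos:related
# targets by id prefix and extract relations from skos:example text.
import re

related_ids = [
    "trm_4a3fd79d0af66",
    "trm_5542841f3dcd5",
    "tsk_4a57abb949e27",
]
# Prefix-based split: "tsk" ids are (probably) parent tasks, "trm" ids concepts.
parents = [i for i in related_ids if i.startswith("tsk_")]
concepts = [i for i in related_ids if i.startswith("trm_")]

examples = [
    "color-word stroop task is descended from Stroop task",
    "response inhibition is measured by the contrast of "
    "incongruent - congruent trials in the color-word stroop task",
]

def classify(example):
    """Crude relation extraction from a skos:example sentence."""
    if m := re.match(r"(.+?) is descended from (.+)", example):
        return ("descended_from", m.group(1), m.group(2))
    if m := re.match(r"(.+?) is measured by the contrast of (.+?) in (.+)", example):
        return ("measured_by", m.group(1), m.group(2), m.group(3))
    return ("unknown", example)

print(parents)                 # ['tsk_4a57abb949e27']
print(classify(examples[0]))   # ('descended_from', 'color-word stroop task', 'Stroop task')
```

This is exactly the kind of brittleness the issue is about: the classification relies on string conventions that could change, which is why explicit predicates in the RDF would be better.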
I am currently implementing an Ontological Similarity package for the Cognitive Atlas, so accessing the data is pretty important! I can implement a workaround to do the text mining, etc., or create a small database (with some static Cognitive Atlas state) to use in the meantime, but ideally this could easily be done with the new Cognitive Atlas API. Do we have a sense of whether it will be ready soon, or should I do one of the workarounds?
working memory should not be measured by tone counting. I made the incorrect assertion and deleted it in the curation interface, but it is still there. I am hoping it has something to do with browser caching or the like.
The assertion of the concept "recognition" was made for the associative memory encoding task because it was cloned; this is an incorrect assertion that needs to be removed (I don't have the ability to delete it).
We originally were going to tag images with "choice," but a much better term is "response selection," so we should consider deleting "choice" from the atlas altogether, unless there is another motivation to keep it.
As is currently done, the Term Bibliography gets replicated when cloning a task. Given that the task is a descendant of the cloned task, this information seems redundant - we could climb up a level to get those references, and the "Term Bibliography" for the task would be best used as citations relevant to that specific sub-task. The user would likely not take the time to delete the redundant references.
The first three contrasts for the oddball task (added previously) do not coincide with what we have for our images. We should look as a group and delete, modify, etc.