aces / brainbrowser
Web-based visualization tools for neurological data.
Home Page: https://brainbrowser.cbrain.mcgill.ca/
License: GNU Affero General Public License v3.0
Would it make sense to make BrainBrowser available through cdnjs?
I saw a few examples of using requirejs to load JavaScript libraries from cdnjs into an IPython notebook, and I'm curious whether a similar approach would work for BrainBrowser. You can always download a release and configure it locally, but cdnjs looks like it might make the process more straightforward.
Would be handy to have a way to move through time using the keyboard.
Right now it requires you to give a shape name as the first parameter. I think it would be better if it defaulted to changing the transparency of all loaded shapes, and then maybe took an optional second parameter (or an options object with a "shape" property) to set the transparency on a single shape. E.g.
viewer.setTransparency(0.5, {
"shape": "left_hemisphere"
});
Actually, this type of signature might be good for other methods, like setWireframe().
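A minimal sketch of how that signature might behave. The `shapes` map (shape name to material) standing in for the viewer's internal model state is an assumption, not the actual API:

```javascript
// Sketch only: default to all shapes, optionally restrict to one via an
// options object. "shapes" is an assumed name -> material map.
function setTransparency(shapes, alpha, options) {
  options = options || {};
  Object.keys(shapes).forEach(function (name) {
    // No "shape" option means apply to everything that's loaded.
    if (options.shape === undefined || options.shape === name) {
      shapes[name].opacity = alpha;
      shapes[name].transparent = alpha < 1;
    }
  });
}
```

The same pattern (required value first, optional options object second) would carry over naturally to setWireframe() and similar methods.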
It seems that when you move the cursor on one panel, the slices on the other panels get updated but the cursor doesn't move. This was probably introduced when I refactored the loading code (07ded5e).
Unlike a lot of the graphical stuff, the tree store should be fairly straightforward to write some unit tests for. A lot of systems depend on it, so I think it would be worthwhile to do so. I'm leaning towards qUnit as the testing framework.
Would be cool if it were possible to do mobile with the Volume Viewer.
It's currently sending "text/plain" for everything which is causing annoying console errors.
The Volume Viewer's loadVolumes() always starts rendering when loading is complete. Now that volumes can be removed and reloaded (see bf86904), it would be better if loadVolumes() took an optional complete callback that could start the rendering. That way, it could be used more than once without creating multiple render loops.
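A rough shape for that signature. Everything here is an assumption about naming: `load_volume` stands in for the real per-volume loader, and the viewer object is mocked down to a `volumes` array:

```javascript
// Sketch only: loadVolumes() no longer starts rendering itself. When all
// volumes have loaded, it invokes an optional "complete" callback, so the
// caller decides when (and whether) to start a render loop.
function loadVolumes(viewer, descriptions, load_volume, complete) {
  var remaining = descriptions.length;
  descriptions.forEach(function (description) {
    load_volume(description, function (volume) {
      viewer.volumes.push(volume);
      if (--remaining === 0 && typeof complete === "function") {
        complete(viewer); // e.g. the caller calls viewer.render() here, once
      }
    });
  });
}
```

Since the callback fires exactly once per loadVolumes() call, repeated calls can no longer stack up multiple render loops.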
Events are now defined on the viewer objects. I think it would be better to centralize them in the top-level BrainBrowser namespace, so that they can be accessed in non-viewer modules (e.g. BrainBrowser.loading or BrainBrowser.utils).
Currently, the Volume Viewer has to load all volumes when it starts rendering. I'd like to make it possible to load/remove volumes dynamically. This could potentially make it possible to load volume data from files.
It updates when the cursor is moved, but you can't input a position from it.
The foundation is there. Just have to extract the networking code from the volume loaders and make it possible to switch in file loading code.
The docular pages don't provide an example of a method call, so I think adding one would be useful.
Look into MINC RGB color map files and how they might be loaded into BrainBrowser.
Create an interface to select two points on a slice and have the viewer indicate the distance in world coordinates between them.
Not sure why this was done, but the cached_slices object in the viewer object doesn't cache actual slice objects. It's some strange intermediary object that holds the slice width_space and height_space objects and the image of the slice. Slice caching is already done within the volume objects, and the image can be cached in the appropriate panel object.
Needs to be updated to include separate worker configurations for model types and intensity data types.
Didn't realize I had left a dependency on jQuery in VolumeViewer.loader. Can fix this with issue #5.
Keyboard events for the control key seem to work differently from other keys. It looks like the keydown event is repeated while the key is held, which means the distance measurement anchor will keep getting reset.
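One way to handle this might be to ignore auto-repeated keydown events, using `KeyboardEvent.repeat` where available with a manual key-state fallback. A sketch (the guard object and its method names are assumptions):

```javascript
// Sketch: only report the first keydown per physical press, so the
// measurement anchor is set once and not reset by key auto-repeat.
function createKeyGuard() {
  var down = {}; // keyCode -> currently held?
  return {
    keydown: function (event) {
      if (event.repeat || down[event.which]) {
        return false; // auto-repeat: caller should ignore this event
      }
      down[event.which] = true;
      return true; // first press: caller handles it (e.g. sets the anchor)
    },
    keyup: function (event) {
      down[event.which] = false;
    }
  };
}
```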
It should probably hold an array containing data for all the models loaded.
Needs to be documented.
As currently implemented, markers are ignored for picking. The vertex of the model is what is actually picked and is used to look up an annotation. This can be confusing in terms of the interaction. If you click on an annotation marker but miss the vertex, nothing happens. I think one solution would be to store all annotation information in the marker mesh which could then be used directly through the picking to retrieve the annotation.
Due to changes in bf86904, some aspects of the documentation are out of sync:
Currently, the Volume Viewer moves one pixel at a time, and this generally maps to one unit in world space. If a voxel is represented by more than one pixel, you have to move through several pixels to get to the next slice. It would probably be preferable to move one voxel at a time.
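One possible approach: accumulate pixel deltas and only emit a step when a whole voxel's worth of pixels has been crossed. A sketch, assuming the panel knows its current zoom as pixels per voxel:

```javascript
// Sketch: turn per-pixel cursor movement into whole-voxel steps.
// "pixels_per_voxel" (the current zoom level) is an assumed input.
function createVoxelStepper(pixels_per_voxel) {
  var accumulated = 0; // leftover pixels that haven't crossed a voxel boundary
  return function (pixel_delta) {
    accumulated += pixel_delta;
    var voxel_steps = (accumulated / pixels_per_voxel) | 0; // truncate toward zero
    accumulated -= voxel_steps * pixels_per_voxel;
    return voxel_steps;
  };
}
```

At a zoom of one pixel per voxel this degenerates to the current behaviour; at higher zooms, small movements produce no step until a voxel boundary is crossed.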
This makes sense since it's now possible to load volumes from files (bf17c08).
It should be possible to resize the Volume Viewer panels after volumes are loaded.
I've noticed a few places where the mouse has to be captured, and the code is being rewritten each time. It would be nice to have a property of the viewer object that tracks the mouse (like the captureMouse() function does in the Volume Viewer).
The new JSON loader essentially standardizes a Surface Viewer data format. This should be documented somewhere.
The mouse tracker doesn't track movement that occurs outside the canvas. It should.
Currently, it isn't very well described anywhere.
The Volume Viewer requires files to be split by the minc tools into header and raw data files prior to use. Create a little node script to make this step easier for users.
The preloading of web workers is done through AJAX requests. Currently, if the preloading isn't complete by the time the viewer starts, the viewer will throw an exception indicating that its configuration isn't complete.
I just noticed that the mongoose server has become a more closed project. It's not as easy to just compile and use as it used to be, so I think the "Getting Started" part of the README is no longer valid. I'll just create a simple node server in the examples directory, and modify the README to use that.
Came up with a rough draft of an annotation mechanism for BrainBrowser at a hackathon this weekend. Just need to refine it and integrate it into the core.
The add() and clear() methods are used internally, but if they're used directly by a user, they could leave the Surface Viewer in an invalid state (where the models loaded don't match the model data).
To write before the next release.
Louis noticed (and Cecile and I confirmed) that when stepping through one axis, each step is equal in both voxel and world coordinates. Based on the voxel-to-world conversion, this is probably not the expected behaviour.
There is a voxel-to-world MINC tool, and some explanation of world coordinates at the following URL:
http://www.bic.mni.mcgill.ca/software/minc/minc2_format/node4.html
It might be worthwhile looking at a file that is in stereotaxic space to ensure that voxel coordinates are correct first. For example look at 0,0,0 location to ensure that this is correctly encoded.
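For reference, when the direction cosines are axis-aligned (the identity case), the MINC voxel-to-world mapping reduces to `start + step * index` along each axis, which is exactly the situation where equal voxel steps produce equal world steps. A sketch of that simplified case (parameter names are assumptions; real files need the full direction-cosine matrix):

```javascript
// Simplified voxel -> world conversion, assuming identity direction cosines.
// "steps" and "starts" are the per-axis step sizes and start offsets from
// the volume header.
function voxelToWorld(voxel, steps, starts) {
  return [
    starts[0] + steps[0] * voxel[0],
    starts[1] + steps[1] * voxel[1],
    starts[2] + steps[2] * voxel[2]
  ];
}
```

With non-trivial direction cosines, a unit step along one voxel axis moves along a combination of world axes, so the equal-step behaviour observed above would not hold; checking voxel 0,0,0 against the file's known world origin is a good first sanity test.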
Method is defined in parse_intensity_data.js. It probably shouldn't be part of the public API. It makes more sense to put it as a private function in surface-viewer/modules/color.js, since that's the only place it's used. It could also probably be cleaned up a bit:
Is there any way to add labels in the Surface Viewer when loading a FreeSurfer .asc file?
I'm trying to use the Surface Viewer to select a region and get back a label that would be compatible with these: http://www.slicer.org/slicerWiki/index.php/Slicer3:Freesurfer_labels.
Reading the documentation, I see that it's possible to load a FreeSurfer .asc file; it would be very useful to have something like atlas_label for this use case as well.
@PaulMougel
Not a major issue, but it doesn't need to be split across two separate functions, and it could generally be made easier to follow.
The Surface Viewer and Volume Viewer currently each have their own data fetching methods for files and URLs (see src/surface-viewer/loading.js and src/volume-viewer/loader.js). The methods to load color maps could probably be consolidated as well. The best place for the new methods would probably be in BrainBrowser.utils.
This should also be documented.
I've been told this would be more meaningful than the current setup which syncs based on voxel coordinates.
Only some parts of the Surface Viewer are currently triggering them. I'd rather they be more general.
Currently, it assumes incoming models are polygon-based.
The Surface Viewer currently requires users to redundantly set configuration for built in workers. These workers should simply configure themselves, and the only configuration parameter that should have to be set by a user is the worker_dir. Also, I'd kind of like configuration to be handled by an object that gets and sets parameters through methods. E.g. :
BrainBrowser.config.set("worker_dir", "js/brainbrowser/workers");
BrainBrowser.config.get("model_types.mniobj.worker");
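A minimal sketch of what such a config object might look like, with dot-separated paths resolved into nested objects. The internal structure is an assumption; only the get/set calls above come from the proposal:

```javascript
// Sketch: a config store with dot-path get/set, e.g.
// config.set("model_types.mniobj.worker", "mniobj.worker.js").
var config = (function () {
  var store = {};

  function set(path, value) {
    var keys = path.split(".");
    var obj = store;
    for (var i = 0; i < keys.length - 1; i++) {
      // Create intermediate objects as needed.
      if (typeof obj[keys[i]] !== "object" || obj[keys[i]] === null) {
        obj[keys[i]] = {};
      }
      obj = obj[keys[i]];
    }
    obj[keys[keys.length - 1]] = value;
  }

  function get(path) {
    var keys = path.split(".");
    var obj = store;
    for (var i = 0; i < keys.length; i++) {
      if (obj === undefined || obj === null) {
        return undefined; // missing path resolves to undefined, not an error
      }
      obj = obj[keys[i]];
    }
    return obj;
  }

  return { set: set, get: get };
})();
```

Built-in workers could then register their own defaults through set(), leaving worker_dir as the only parameter a user has to provide.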
When you click one of the "All Slices" buttons in the Volume Viewer example, the current slice shows up as the first image in the series.
This will allow for testing of fMRI data handling and would also be a good test of the new dynamic loading features introduced in bf86904.
loadVolumes() should ideally only handle the loading of volumes into the viewer.