eLife Sciences' Projects
Python functions for visualising fMRI cortical depth sampling results created with CBS tools.
Segmentation package for yeast fluorescence microscopy, used in Weir et al. 2017
3D U-Net model for volumetric semantic segmentation, written in PyTorch
QDSpy - Python software for scripting and presenting stimuli for visual neuroscience.
ImageJ macro for splitting an image into radial and non-radial components
R Markdown source for "Rapamycin rejuvenates oral health in aging mice"
Risk Assessment Population and Identification
Import MSG TSV files into R/qtl
Readthrough analysis in S. pombe rrp6D and cut14-208 mutants
Real-Time Experimental Control with Graphical User Interface
Code used for the eLife publication Reinhard*, Li* et al., 2019. DOI:
Code used to conduct the analyses reported in "Alpha/beta power decreases track the fidelity of stimulus-specific information" [https://doi.org/10.1101/633107].
(Prototype) Recentering and subboxing of particles.
Markdown source for the Project Rephetio Manuscript
This repository contains materials for the eLife paper: Merse E. Gáspár, Pierre-Olivier Polack, Peyman Golshani, Máté Lengyel, Gergő Orbán, Representational untangling by the firing rate nonlinearity in V1 simple cells. https://doi.org/10.7554/eLife.43625
Transposable element (TE) dynamics in yeasts with respect to reproduction mode.
Analysis of convergence between organismal traits and DNA/protein sequences
Source code to accompany: Resulaj A., Ruediger S., Olsen S.R., Scanziani M. "First spikes in visual cortex enable perceptual discrimination". eLife 2018.
Repository for computer code, the full data set, and trajectories. The trajectories are the upper and lower panels from figure 6, plotted for each species. Lower panels are plotted with data points distributed equally along the x-axis (*_b.pdf) and with data points distributed according to time (*_c.pdf).
Retinal Video Analysis Suite: a utility for generating reference frames and extracting eye position traces from retinal videos recorded via scanning laser ophthalmoscopes. It also includes a set of tools for classifying eye movements into drifts and (micro)saccades, and extracting detailed information about these eye movements.
Python/C++ code for simulations in T Haga, T Fukai, Recurrent network model for learning goal-directed sequences through reverse replay
Script to normalize RGB images using a SHINE toolbox function (Willenbockel, Verena, Javid Sadr, Daniel Fiset, Greg O. Horne, Frédéric Gosselin, and James W. Tanaka. 2010. "Controlling Low-Level Image Properties: The SHINE Toolbox." Behavior Research Methods 42 (3): 671–84. doi:10.3758/BRM.42.3.671).