A new spiking neural network (SNN) model that achieves accelerated training and state-of-the-art performance across various neuromorphic datasets, without any regularisation and while using fewer spikes than standard SNNs.
Install the required dependencies and activate the dblock environment using conda:
conda env create -f environment.yml
conda activate dblock
See the notebooks/Tutorial.ipynb notebook to get started with the d-block model.
All paper results can be reproduced using the scripts in the scripts folder. Alternatively, all speedup benchmarks and pretrained models are available under the releases.
The run_benchmarks.py script benchmarks the wall-clock time of the forward and backward passes of the d-block and standard SNN models for different numbers of neurons and simulation steps:
python run_benchmarks.py
Ensure that your machine has a CUDA-capable GPU with CUDA 11.0 installed.
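At its core, a benchmark like this times repeated forward and backward passes while sweeping the number of neurons and simulation steps. The sketch below illustrates only that timing structure: the forward and backward functions are toy stand-ins, not the repo's actual models.

```python
import time

def forward(n_neurons, t_len):
    # Toy stand-in for a model's forward pass: update a state over t_len steps.
    state = [0.0] * n_neurons
    for _ in range(t_len):
        state = [s + 0.1 for s in state]
    return state

def backward(n_neurons, t_len):
    # Toy stand-in for backpropagation through time over the same steps.
    grad = [1.0] * n_neurons
    for _ in range(t_len):
        grad = [g * 0.9 for g in grad]
    return grad

def benchmark(n_neurons, t_len, repeats=3):
    """Return mean forward and backward wall-clock times in seconds."""
    fwd = bwd = 0.0
    for _ in range(repeats):
        t0 = time.perf_counter()
        forward(n_neurons, t_len)
        fwd += time.perf_counter() - t0
        t0 = time.perf_counter()
        backward(n_neurons, t_len)
        bwd += time.perf_counter() - t0
    return fwd / repeats, bwd / repeats

if __name__ == "__main__":
    # Sweep network size and simulation length, as the real script does.
    for n in (128, 256):
        for t in (100, 500):
            f, b = benchmark(n, t)
            print(f"n_neurons={n} t_len={t} fwd={f:.4f}s bwd={b:.4f}s")
```

The real script additionally needs GPU synchronisation before reading the clock, which is why a CUDA-capable machine is required.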
Follow the instructions outlined in the block repo to download and process the N-MNIST and SHD datasets. The SSC dataset can be downloaded and unzipped into the data/SSC directory.
You can train the d-block and standard SNN on the different datasets using the train.py script. For example, to train a d-block model with d=5 on the SHD dataset:
python train.py --method=fast_naive --t_len=500 --beta_requires_grad=True --d=5 --recurrent=True --n_layers=1 --n_neurons=128 --detach_recurrent_spikes=True --dataset=shd --epoch=100 --batch=128 --lr=0.001
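To compare several block sizes, the same command can be swept over d. This is a dry-run loop that only prints each invocation (the d values chosen here are illustrative); remove the echo to actually launch the runs:

```shell
# Print one train.py invocation per candidate block size d (dry run).
for d in 1 5 10; do
  echo python train.py --method=fast_naive --t_len=500 --beta_requires_grad=True \
    --d="$d" --recurrent=True --n_layers=1 --n_neurons=128 \
    --detach_recurrent_spikes=True --dataset=shd --epoch=100 --batch=128 --lr=0.001
done
```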
All speedup and training results can be rebuilt by running the notebooks/results/benchmark_results.ipynb and notebooks/results/dataset_results.ipynb notebooks. The code for the other paper figures is in the notebooks/figures directory.
(Figure: training speedup of our model.)
(Figure: analysis of our model.)