decolle-public's Issues

Issues when running a demo

Hi @eneftci,
I'd like to ask how to solve an issue I ran into when running the provided demo.
Here is the log:

(decolle) user@user-VBox:/media/user/VBoxShared/decolle-public/scripts$ python train_lenet_decolle.py
Saving results to logs/train_lenet_decolle/default/Dec13_04-24-38_user-VBox
/home/user/.local/lib/python3.9/site-packages/decolle-0.1-py3.9.egg/decolle/utils.py:137: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
The following files did not exist, will attempt download:
data/nmnist/Train
data/nmnist/Test
Traceback (most recent call last):
  File "/media/user/VBoxShared/decolle-public/scripts/train_lenet_decolle.py", line 39, in <module>
    gen_train, gen_test = create_data(chunk_size_train=params['chunk_size_train'],
  File "/home/user/.local/lib/python3.9/site-packages/torchneuromorphic-0.3.1-py3.9.egg/torchneuromorphic/nmnist/nmnist_dataloaders.py", line 180, in create_dataloader
  File "/home/user/.local/lib/python3.9/site-packages/torchneuromorphic-0.3.1-py3.9.egg/torchneuromorphic/nmnist/nmnist_dataloaders.py", line 153, in create_datasets
  File "/home/user/.local/lib/python3.9/site-packages/torchneuromorphic-0.3.1-py3.9.egg/torchneuromorphic/nmnist/nmnist_dataloaders.py", line 59, in __init__
  File "/home/user/.local/lib/python3.9/site-packages/torchneuromorphic-0.3.1-py3.9.egg/torchneuromorphic/neuromorphic_dataset.py", line 103, in __init__
  File "/home/user/.local/lib/python3.9/site-packages/torchneuromorphic-0.3.1-py3.9.egg/torchneuromorphic/nmnist/nmnist_dataloaders.py", line 81, in download
  File "/home/user/.local/lib/python3.9/site-packages/torchneuromorphic-0.3.1-py3.9.egg/torchneuromorphic/neuromorphic_dataset.py", line 171, in download
  File "/home/user/.local/lib/python3.9/site-packages/torchneuromorphic-0.3.1-py3.9.egg/torchneuromorphic/neuromorphic_dataset.py", line 83, in download_and_extract_archive
  File "/home/user/.local/lib/python3.9/site-packages/torchneuromorphic-0.3.1-py3.9.egg/torchneuromorphic/neuromorphic_dataset.py", line 45, in download_url
ModuleNotFoundError: No module named 'requests'

Thanks in advance and Best Regards
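
A likely cause, stated as an assumption rather than a confirmed resolution: the torchneuromorphic download helper imports the requests package, which is missing from the active environment. A minimal check (illustrative only, not code from the repository):

    # Sketch: verify that 'requests' is importable in the environment running the script;
    # if it is not, installing it (e.g. `pip install requests`) should let the download proceed.
    import importlib.util

    if importlib.util.find_spec("requests") is None:
        print("requests is not installed; install it with `pip install requests`")
    else:
        import requests
        print("requests", requests.__version__, "is available")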

Error running test

Hi, I'm a newbie. I was trying to run train_lenet_decolle.py but I get this error:
C:\Users\gpira\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\decolle-0.1-py3.8.egg\decolle\utils.py:98: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
[2, 32, 32]
Traceback (most recent call last):
File "train_lenet_decolle_fa.py", line 39, in
gen_train, gen_test = create_data(chunk_size_train=params['chunk_size_train'],
File "C:\Users\gpira\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\torchneuromorphic\nmnist\nmnist_dataloaders.py", line 170, in create_dataloader
train_d, test_d = create_datasets(
File "C:\Users\gpira\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\torchneuromorphic\nmnist\nmnist_dataloaders.py", line 145, in create_datasets
train_ds = NMNISTDataset(root,train=True,
File "C:\Users\gpira\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\torchneuromorphic\nmnist\nmnist_dataloaders.py", line 63, in init
self.n = f['extra'].attrs['Ntrain']
File "h5py_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "C:\Users\gpira\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\h5py-2.10.0-py3.8-win-amd64.egg\h5py_hl\attrs.py", line 60, in getitem
attr = h5a.open(self._id, self._e(name))
File "h5py_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5a.pyx", line 77, in h5py.h5a.open
KeyError: "Can't open attribute (can't locate attribute: 'Ntrain')"

I am no expert, so I can't find the solution. Can anyone help me?
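
A hedged observation rather than a confirmed answer: this kind of KeyError usually points to an incomplete or partially written N-MNIST HDF5 file, for example one left behind by an interrupted first run. A small diagnostic sketch, with the file path assumed rather than taken from the repository:

    # Sketch: inspect the generated HDF5 file to see whether the 'extra' group and its
    # 'Ntrain' attribute were actually written; if not, deleting the file and letting the
    # loader regenerate it is a reasonable next step.
    import h5py

    path = "data/nmnist/n_mnist.hdf5"  # assumed location; adjust to the actual file
    with h5py.File(path, "r") as f:
        print("top-level keys:", list(f.keys()))
        if "extra" in f:
            print("extra attrs:", dict(f["extra"].attrs))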

Confusing type error in code (Sep. 2)

Hi~ I am a student at Tsinghua University trying to test the dvs_gesture dataset, and I'm hoping to get some help from the repo. I tested the code (Sep. 2 version). Everything goes well until epoch 1 finishes. It shows:

Saving results to runs_args_lenet_decolle_cuda/Nov01_21-23-02_seallab-Precision-Tower-7910_bioplaus
/home/hewh16/master/decolle/utils.py:70: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
params = yaml.load(f)
File data/dvs_gestures_events.hdf5 exists: not re-converting DvsGesture

------Starting training with 3 DECOLLE layers-------
Epoch 0: 100%|███████████████████████████████████████████████████████████| 500/500 [00:06<00:00, 75.84it/s]
Loss 2.43e+04
---------------Epoch 1-------------
---------Saving checkpoint---------
Traceback (most recent call last):
File "train_lenet_decolle.py", line 94, in
test_loss, test_acc = test(gen_test, loss, net, params['burnin_steps'], print_error = True)
File "/home/hewh16/master/decolle/utils.py", line 108, in test
net.init(data_batch, burnin)
File "/home/hewh16/master/decolle/base_model.py", line 266, in init
self.forward(torch.Tensor(data_batch[:, i, :, :]).to(device))
File "/home/hewh16/master/decolle/lenet_decolle_model.py", line 129, in forward
u_p = pool(u)
File "/home/hewh16/miniconda3/envs/hewh16/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/home/hewh16/miniconda3/envs/hewh16/lib/python3.7/site-packages/torch/nn/modules/pooling.py", line 141, in forward
self.return_indices)
File "/home/hewh16/miniconda3/envs/hewh16/lib/python3.7/site-packages/torch/_jit_internal.py", line 138, in fn
return if_false(*args, **kwargs)
File "/home/hewh16/miniconda3/envs/hewh16/lib/python3.7/site-packages/torch/nn/functional.py", line 488, in _max_pool2d
input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: non-empty 3D or 4D input tensor expected but got ndim: 4

Sounds unbelievably confusing :(
I would appreciate it if you could help me out. By the way, if that seems hard, could you please just show me a simple demo of an MLP test on dvs_gesture? Thanks~
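
One way to narrow this down, offered as an assumption about the cause rather than a diagnosis: the max_pool2d message indicates an empty 4-D tensor, so a test batch with a zero-sized dimension may be reaching net.init. A quick check against the test generator, reusing the names from the traceback:

    # Sketch: look for a zero-sized dimension in the test batches fed to net.init;
    # an empty batch (or no time steps left after burn-in) would explain the
    # "non-empty 3D or 4D input tensor expected" error inside max_pool2d.
    for data_batch, target_batch in gen_test:
        print(data_batch.shape)  # expected [batch, time, channels, H, W]
        if 0 in data_batch.shape:
            print("empty batch found:", data_batch.shape)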

resume training fails with single learning rate

Quick report from the field on commit 61aca54 to branch update_21014 (which addressed #15 and #16).

Because decolle.utils.MultiOpt uses the semantically accurate but non-conforming method name load_state_dicts, resuming model training with train_lenet_decolle.py fails when the checkpoint comes from a single-learning-rate model whose opt is a torch.optim.Adamax object (which has only a load_state_dict method); the failure happens here.

I'd perhaps suggest simply renaming the MultiOpt method to the singular load_state_dict, to avoid messiness elsewhere around checking which attribute or object class is present.

I'm happy to submit a PR along those lines if you like; it looks like you pull the public-facing version of this repo from elsewhere, so I'd understand if that complicates your git flow for so simple a change.

Here's the error trace:

Traceback (most recent call last):
File "train_lenet_decolle.py", line 111, in
starting_epoch = load_model_from_checkpoint(checkpoint_dir, net, opt)
File "[root]/conda/lib/python3.7/site-packages/decolle-0.1-py3.7.egg/decolle/utils.py", line 166, in load_model_from_checkpoint
AttributeError: 'Adamax' object has no attribute 'load_state_dicts'
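
For concreteness, a rough sketch of the rename suggested above, assuming MultiOpt simply wraps a collection of optimizers (the constructor shown is illustrative, not copied from decolle.utils):

    # Sketch: give MultiOpt a load_state_dict method matching the torch.optim API, so
    # checkpoint-loading code can call opt.load_state_dict(...) whether opt is a plain
    # optimizer or a MultiOpt.
    class MultiOpt:
        def __init__(self, *optimizers):
            self.optimizers = list(optimizers)

        def state_dict(self):
            return [opt.state_dict() for opt in self.optimizers]

        def load_state_dict(self, state_dicts):
            # one state dict per wrapped optimizer
            for opt, sd in zip(self.optimizers, state_dicts):
                opt.load_state_dict(sd)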

Training Time

Hi, I would like to know how long one training epoch took for you.
I spent an hour training one epoch on an RTX 2080, which doesn't seem right for such a small dataset.

---------------Epoch 1-------------
---------Saving checkpoint---------
Testing: 100%|████████████████████████████████| 124/124 [03:28<00:00, 1.68s/it]
Error Rate L0 0.315 Error Rate L1 0.0407 Error Rate L2 0.0249 Error Rate L3 0.0246
Epoch 1: 100%|████████████████████████████████| 753/753 [54:31<00:00, 4.34s/it]
Loss [ 28.6935 6.8569 4.2865 895.1974]
Activity Rate [95.05911542127708, 49.73174018810146, 38.44802363960695, 66.41492680630824]
Changing learning rate from 1e-09 to 1e-09

LIFLayer dynamics

Thanks to everyone involved for sharing the detailed code and well-written paper about the method.

I apologize if this is elementary, but I'm trying to reconcile the implementation of the dynamics in class LIFLayer with the equations written in the paper.

Specifically, the forward method (here) updates Q and P using members tau_s and tau_m, respectively.

These members are set in the constructor; for example (abridged for clarity)

tau_m = 1./(1-alpha)

Then P is updated as

P = self.alpha * state.P + self.tau_m * state.Q  

Since the paper defines (Equation 4)

P_j^l \left[t+\Delta t\right] = \alpha P_j^l\left[t\right] + (1-\alpha) Q_j^l\left[t\right]

and

\alpha = \exp\left(-\frac{\Delta t}{\tau_{\mathrm{mem}}}\right),

I'm wondering why the line above isn't

P = self.alpha * state.P + (1-self.alpha) * state.Q  

(and perhaps also tau_m = - dt / log(alpha), though I'm not sure this would matter as much since tau_m seems to have no other use in this class, yet it does in class LIFLayerVariableTau).

Is it simply because

\frac{1}{1-\alpha} \approx -\frac{1}{\log \alpha}

for the range of α values in use?

In sum, what is the reason for using tau_m and tau_s for updating P and Q respectively, rather than (1-alpha) and (1-beta)?
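
As a small numerical aside in support of that last question, here is a sketch comparing the two scalings for a few illustrative α values; they agree increasingly well as α approaches 1, i.e. when τ_mem ≫ Δt:

    # Sketch: compare 1/(1 - alpha) with -1/log(alpha) for typical decay factors.
    import numpy as np

    for alpha in [0.9, 0.95, 0.99]:
        print(alpha, 1.0 / (1.0 - alpha), -1.0 / np.log(alpha))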

[Question] : neural coding for static images

Hello,

In the related Jupyter tutorial (lif-autograd repository), DECOLLE is used with the static MNIST dataset encoded with rate coding (snn_utils.image2spiketrain()). Have you tried other neural codings in addition to rate coding, for example temporal coding or phase coding? If so, could you share the results?
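
For concreteness, a toy sketch of one such temporal (time-to-first-spike) coding; it is purely illustrative and not taken from snn_utils:

    # Sketch: latency coding for a static image -- brighter pixels spike earlier.
    # Returns a binary spike train of shape [T, n_pixels].
    import numpy as np

    def latency_encode(image, T=100):
        x = image.astype(float).ravel()
        x = (x - x.min()) / (np.ptp(x) + 1e-9)               # normalize to [0, 1]
        t_spike = np.round((1.0 - x) * (T - 1)).astype(int)  # bright -> early spike
        spikes = np.zeros((T, x.size))
        spikes[t_spike, np.arange(x.size)] = 1.0
        return spikes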

By the way: thank you for your work. I especially want to thank you for the well-written tutorial in addition to the contribution itself; it is really rare to see one accompany a paper implementation.

Can't run DvsGestures example

python train_lenet_decolle.py --params_file=parameters/params_dvsgestures_torchneuromorphic.yml
Saving results to logs/train_lenet_decolle/default/Jul17_18-05-01_george-System-Product-Name
/home/george/.local/lib/python3.8/site-packages/decolle-0.1-py3.8.egg/decolle/utils.py:122: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
The following files did not exist, will attempt download:
data/dvsgesture/raw
Using downloaded and verified file: data/dvsgesture/DvsGesture.tar.gz
Extracting data/dvsgesture/DvsGesture.tar.gz to data/dvsgesture/
Traceback (most recent call last):
File "train_lenet_decolle.py", line 39, in
gen_train, gen_test = create_data(chunk_size_train=params['chunk_size_train'],
File "/home/george/.local/lib/python3.8/site-packages/torchneuromorphic-0.3.4-py3.8.egg/torchneuromorphic/dvs_gestures/dvsgestures_dataloaders.py", line 166, in create_dataloader
File "/home/george/.local/lib/python3.8/site-packages/torchneuromorphic-0.3.4-py3.8.egg/torchneuromorphic/dvs_gestures/dvsgestures_dataloaders.py", line 59, in init
File "/home/george/.local/lib/python3.8/site-packages/torchneuromorphic-0.3.4-py3.8.egg/torchneuromorphic/neuromorphic_dataset.py", line 104, in init
File "/home/george/.local/lib/python3.8/site-packages/torchneuromorphic-0.3.4-py3.8.egg/torchneuromorphic/dvs_gestures/dvsgestures_dataloaders.py", line 76, in create_hdf5
File "/home/george/.local/lib/python3.8/site-packages/torchneuromorphic-0.3.4-py3.8.egg/torchneuromorphic/dvs_gestures/create_hdf5.py", line 23, in create_events_hdf5
File "/home/george/.local/lib/python3.8/site-packages/torchneuromorphic-0.3.4-py3.8.egg/torchneuromorphic/dvs_gestures/create_hdf5.py", line 72, in gather_aedat
FileNotFoundError: DVS Gestures Dataset not found, looked at: data/dvsgesture/raw
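
A possible explanation, offered only as an assumption: the log shows the archive being extracted into data/dvsgesture/, yet the loader then looks for data/dvsgesture/raw, so the extracted folder may not have the name or location the loader expects. A quick way to see what actually landed on disk:

    # Sketch: list what the DvsGesture archive extracted, to compare against the
    # data/dvsgesture/raw path the loader is looking for.
    import os

    for root, dirs, files in os.walk("data/dvsgesture"):
        print(root, "| dirs:", dirs[:5], "| files:", files[:3])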

CUDA error on tutorial

Hello everyone,
while running the tutorial on classification using DCLL (the second tutorial), I encountered the following error while training a very simple two-layer MLP:

~/some_python_examples/VT_SNN/auxillary_files/../../pytorch-lif-autograd/decolle_public/decolle/base_model.py in forward(self, Sin_t)
    210         #print('Sin_t:', Sin_t)
    211 
--> 212         Q = self.beta * state.Q + self.tau_s * Sin_t
    213         P = self.alpha * state.P + self.tau_m * state.Q  # TODO check with Emre: Q or state.Q?
    214         R = self.alpharp * state.R - state.S * self.wrp

RuntimeError: CUDA error: an illegal memory access was encountered

I googled possible solutions, but with no success. Is there anything I am missing when setting up DECOLLE?

Thanks,
Tas
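
A generic debugging step that often helps with this class of error, suggested here rather than known to be the fix: CUDA errors are reported asynchronously, so the line in the traceback is not necessarily the operation that failed. Forcing synchronous kernel launches (or running once on CPU) usually surfaces the real culprit, often an out-of-range index or a shape mismatch:

    # Sketch: make CUDA launches synchronous so the failing operation appears at the
    # correct line in the traceback; set this before any CUDA work, then re-run.
    import os
    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"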

accuracy

Hi, I ran the script train_lenet_decolle.py with the default settings and got test_acc.npy in the logs folder. The result is shown below; am I doing this right? What is the meaning of each column?

Python 3.8.3 (default, Jul  2 2020, 16:21:59) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.load('test_acc.npy')
array([[0.67275785, 0.96008969, 0.97578475, 0.97421525],
       [0.71950673, 0.9706278 , 0.98307175, 0.98273543],
       [0.74204036, 0.97253363, 0.98430493, 0.97443946],
       [0.75257848, 0.97443946, 0.98598655, 0.98363229],
       [0.76345291, 0.97600897, 0.98688341, 0.98542601],
       [0.77399103, 0.97780269, 0.98643498, 0.9853139 ],
       [0.78195067, 0.97959641, 0.98800448, 0.98508969],
       [0.78396861, 0.97847534, 0.98553812, 0.97970852],
       [0.78251121, 0.97959641, 0.98789238, 0.98508969],
       [0.78026906, 0.97982063, 0.98766816, 0.9793722 ],
       [0.78621076, 0.97982063, 0.98665919, 0.97701794],
       [0.79091928, 0.98038117, 0.98867713, 0.98441704],
       [0.79147982, 0.98038117, 0.98834081, 0.98408072],
       [0.79248879, 0.98127803, 0.9896861 , 0.97096413],
       [0.79809417, 0.98161435, 0.98878924, 0.98206278],
       [0.79002242, 0.97959641, 0.98834081, 0.982287  ],
       [0.7970852 , 0.98049327, 0.98946188, 0.97825112],
       [0.79585202, 0.98004484, 0.98811659, 0.9808296 ],
       [0.79730942, 0.9809417 , 0.98778027, 0.9808296 ],
       [0.7941704 , 0.9809417 , 0.98856502, 0.98520179],
       [0.79495516, 0.98161435, 0.9882287 , 0.98307175],
       [0.8014574 , 0.98139013, 0.98923767, 0.97679372],
       [0.79977578, 0.9794843 , 0.98744395, 0.98026906],
       [0.79977578, 0.98139013, 0.98800448, 0.98183857],
       [0.79742152, 0.98150224, 0.98834081, 0.97780269],
       [0.8       , 0.98150224, 0.98834081, 0.98172646],
       [0.80190583, 0.98195067, 0.98912556, 0.98195067],
       [0.80403587, 0.98116592, 0.98845291, 0.98127803],
       [0.80807175, 0.98060538, 0.98811659, 0.97993274],
       [0.80257848, 0.98150224, 0.98800448, 0.97813901],
       [0.80235426, 0.98206278, 0.99058296, 0.98475336],
       [0.80493274, 0.98172646, 0.99103139, 0.98497758],
       [0.80639013, 0.98217489, 0.9896861 , 0.98452915],
       [0.80672646, 0.98273543, 0.99024664, 0.9853139 ],
       [0.80325112, 0.98172646, 0.98957399, 0.98497758],
       [0.80560538, 0.98251121, 0.99013453, 0.98598655],
       [0.80526906, 0.98172646, 0.98991031, 0.98733184],
       [0.80257848, 0.98206278, 0.99002242, 0.98396861],
       [0.80504484, 0.982287  , 0.99002242, 0.98452915],
       [0.80515695, 0.98217489, 0.99069507, 0.9867713 ],
       [0.80818386, 0.98206278, 0.99035874, 0.98553812],
       [0.80493274, 0.9823991 , 0.99035874, 0.98699552],
       [0.80661435, 0.98262332, 0.99058296, 0.98542601],
       [0.80751121, 0.98195067, 0.99002242, 0.98710762],
       [0.80795964, 0.98206278, 0.99013453, 0.98621076],
       [0.80616592, 0.98262332, 0.99024664, 0.98654709],
       [0.8059417 , 0.98217489, 0.9896861 , 0.98497758],
       [0.8058296 , 0.982287  , 0.99024664, 0.98508969],
       [0.80538117, 0.98273543, 0.99002242, 0.98452915],
       [0.807287  , 0.98217489, 0.99024664, 0.98721973],
       [0.80695067, 0.98150224, 0.98912556, 0.98587444],
       [0.80773543, 0.98295964, 0.98979821, 0.98699552],
       [0.80807175, 0.98206278, 0.98991031, 0.98587444],
       [0.8088565 , 0.98139013, 0.98991031, 0.9853139 ],
       [0.80840807, 0.98206278, 0.99035874, 0.98396861],
       [0.80695067, 0.98262332, 0.99035874, 0.98609865],
       [0.80840807, 0.98251121, 0.99013453, 0.98542601],
       [0.80426009, 0.98262332, 0.9896861 , 0.98609865],
       [0.80784753, 0.9823991 , 0.99013453, 0.98396861],
       [0.807287  , 0.982287  , 0.99024664, 0.98553812],
       [0.80695067, 0.9823991 , 0.99058296, 0.98609865],
       [0.80616592, 0.98273543, 0.99047085, 0.98587444],
       [0.80852018, 0.9823991 , 0.99069507, 0.98643498],
       [0.80639013, 0.98273543, 0.98979821, 0.98699552],
       [0.8088565 , 0.982287  , 0.99058296, 0.98699552],
       [0.80762332, 0.98195067, 0.98991031, 0.98699552],
       [0.80807175, 0.98206278, 0.99013453, 0.98553812],
       [0.80639013, 0.98262332, 0.98991031, 0.98665919],
       [0.80616592, 0.98273543, 0.99047085, 0.98699552],
       [0.80773543, 0.9823991 , 0.99080717, 0.98654709],
       [0.80639013, 0.98195067, 0.99035874, 0.98710762],
       [0.80807175, 0.98251121, 0.99047085, 0.98778027],
       [0.80997758, 0.98195067, 0.99013453, 0.98621076],
       [0.80605381, 0.982287  , 0.99047085, 0.98621076],
       [0.80952915, 0.98251121, 0.99013453, 0.98733184],
       [0.80930493, 0.98206278, 0.98979821, 0.98699552],
       [0.8073991 , 0.98262332, 0.99024664, 0.98654709],
       [0.81053812, 0.98206278, 0.99024664, 0.98665919],
       [0.80930493, 0.98251121, 0.99002242, 0.98834081],
       [0.8088565 , 0.9823991 , 0.99047085, 0.98699552],
       [0.80896861, 0.98295964, 0.98991031, 0.98688341],
       [0.80919283, 0.98251121, 0.99002242, 0.98699552],
       [0.80930493, 0.98295964, 0.99002242, 0.98665919],
       [0.80840807, 0.9823991 , 0.99024664, 0.98699552],
       [0.8117713 , 0.98217489, 0.98946188, 0.9867713 ],
       [0.80964126, 0.98307175, 0.99058296, 0.98598655],
       [0.80852018, 0.98195067, 0.98991031, 0.98665919],
       [0.8103139 , 0.982287  , 0.99002242, 0.98688341],
       [0.81143498, 0.98251121, 0.99047085, 0.98755605],
       [0.80997758, 0.98284753, 0.99024664, 0.98721973],
       [0.81087444, 0.98217489, 0.9896861 , 0.98665919],
       [0.80762332, 0.98195067, 0.99024664, 0.98688341],
       [0.81121076, 0.98262332, 0.9896861 , 0.98699552],
       [0.80930493, 0.982287  , 0.98991031, 0.98654709],
       [0.80795964, 0.98251121, 0.99002242, 0.98665919],
       [0.80840807, 0.98295964, 0.99002242, 0.9867713 ],
       [0.8103139 , 0.98273543, 0.99002242, 0.9867713 ],
       [0.81076233, 0.98284753, 0.98991031, 0.98665919],
       [0.81053812, 0.9823991 , 0.99013453, 0.98654709]])
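
A hedged interpretation, not confirmed by the maintainers in this thread: each row appears to be one test evaluation (one per epoch) and each column the test accuracy of one layer's local readout, in which case a summary can be read off directly:

    # Sketch: under the interpretation above, report the best accuracy reached by each readout.
    import numpy as np

    acc = np.load("test_acc.npy")
    print("shape (epochs, readouts):", acc.shape)
    print("best accuracy per readout:", acc.max(axis=0))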

double application of sigmoid affects last-layer accuracy calculation

I think there is a bug in the combination of the forward method of class decolle.LenetDECOLLE with the decolle.utils.test calculation.

Specifically, when iterating over the layers the model determines whether it's at the final layer, and if so, applies a sigmoid to the membrane potential rather than the surrogate gradient function:

            if i+1 == self.num_layers:
                s_ = sigmoid(u_p)
            else:
                s_ = lif.sg_function(u_p)

Because the dropout and readout functions of the final layer are the identity, the final readout output r_out[-1] of forward has been passed through the sigmoid.

When the decolle.utils.test method calculates the accuracies, it passes all of the final read-out values through a sigmoid, including the final layer's output, which has already been passed through a sigmoid.

This seems to have a detrimental effect on the reported accuracies of the final layer.

(In theory, the sigmoid is monotonic, and since the follow-up call to decolle.utils.prediction_mostcommon uses the argmax, it shouldn't matter, but since the sigmoid saturates for single-precision floating values, the argmax is likely making arbitrary choices between ties at 1.00000.)

Assuming my interpretation is correct, I would gladly submit a PR to patch, but it seems there are several larger pieces at play that may preclude an obvious fix. Options include:

  • changing the use of sigmoid (as shown above) in the decolle.LenetDECOLLE.forward method to the identity; this would break the use of cross_entropy_one_hot in the output layer training loss function.
  • adding a test for the special case n+1==len(net) in decolle.utils.test to avoid putting the readout through a second sigmoid in the last layer; I think this would "break" the behavior for nets where the with_output_layer is set to False, because of this test.

Thoughts?
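
To make the second option concrete, here is a minimal sketch; the helper is hypothetical rather than an existing decolle function, and whether with_output_layer=False complicates it is exactly the open question above:

    # Sketch: skip the extra sigmoid for the last readout in decolle.utils.test, since
    # forward() has already passed it through a sigmoid.
    import torch

    def squash_readouts(r_out, with_output_layer=True):
        # r_out: list of per-layer readout tensors returned by forward()
        squashed = []
        for n, r in enumerate(r_out):
            if with_output_layer and n + 1 == len(r_out):
                squashed.append(r)  # already sigmoid-ed inside forward()
            else:
                squashed.append(torch.sigmoid(r))
        return squashed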

run the N-Tidigits dataset

Your work is wonderful. I want to run the N-Tidigits dataset, but something went wrong, and the parameters for this dataset are not known. Could you share them?

Tutorial?

Is there any tutorial notebook? It would be really helpful. Can I try another DVS dataset (with the same file format) easily, or do I need to implement the conversion on my own? For example, the action recognition dataset at https://github.com/CrystalMiaoshu/PAFBenchmark uses the AEDAT format.

Base model init burnin

Hi, I was looking at the code to get a good understanding of how it works, and while running an example I saw that the init function of the DECOLLEBase class uses the object's burnin value in the for loop instead of the one passed as a parameter. That is always 0 by default.

for t in (range(0,max(self.burnin,1))):

Shouldn't it be like this?

for t in (range(0,max(burnin,1))):

Question about paper chapter 2

Hi, is there an easy way to understand the model you define? It is hard to match it with the I&F differential equations I learned from the book "Neuronal Dynamics". Thanks.
