spyketorch's People

Contributors

miladmozafari

spyketorch's Issues

RuntimeError when using multiple GPUs

Hello,

Thank you for this library. I have been using the mozafari.py network to train my spiking network model. Since the dataset is ImageNet, I wanted a multi-GPU setup, so I used PyTorch's DataParallel module to train with 8 GPUs as follows:

mozafari = torch.nn.DataParallel(mozafari, device_ids=[1, 7])
  File "/data-mount/spiking-CVT/SpykeTorch/snn.py", line 219, in forward
    lr[f] = torch.where(pairings[i], *(self.learning_rate[f]))
RuntimeError: Expected condition, x and y to be on the same device, but condition is on cuda:7 and x and y are on cuda:0 and cuda:0 respectively
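
A hedged workaround sketch (editor's note, not an official fix): the traceback shows the STDP learning-rate tensors staying on cuda:0 while DataParallel scatters the inputs to other devices, so the torch.where call mixes devices. Keeping the whole network and its inputs on a single device avoids the mismatch; the variable names follow the snippet above.

import torch

device = torch.device("cuda:0")      # train on one GPU instead of wrapping in DataParallel
mozafari = mozafari.to(device)       # move the whole network; assumes the STDP state is registered on the module
# ...and move each input batch to the same device before the forward pass:
# data = data.to(device)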

score can't stabilize

Hi miladmozafari, I am trying to run your SpykeTorch code and have an issue.
Using the face and motorbike images, the score increases at first during training and then decreases as the number of iterations grows; it never stabilizes. Do you know what the problem is?

Accuracy problem

Hi, I ran MozafariDeep.py and got the result below. The accuracy is not as good as the 97.2% reported in the paper.

Current Train: [0.98281667 0.01718333 0.        ]
   Best Train: [9.83033333e-01 1.69666667e-02 0.00000000e+00 6.68000000e+02]
 Current Test: [0.9637 0.0363 0.    ]
    Best Test: [9.649e-01 3.510e-02 0.000e+00 6.770e+02]
time elapsed: 95007.85 seconds

Simulation Time

Hi Milad,

How much time did your simulations take for the Deep network to run on MNIST on CPUs vs GPUs?

The accuracy on CIFAR-10 is not high

Hello, I changed the training data to CIFAR-10 and converted the images to grayscale, but the accuracy is low, only 36%. A conventional CNN reaches more than 66% on this task. Why?
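
For reference, a minimal sketch (editor's illustration, not code from the repository) of feeding grayscale CIFAR-10 through the same kind of transform pipeline the bundled MNIST scripts use; s1c1 is assumed to be the DoG-filter plus intensity-to-latency transform defined there.

import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # collapse the 3 RGB channels to one
    s1c1,                                         # assumed: the scripts' filtering/encoding transform
])
trainset = CIFAR10(root="data", train=True, download=True, transform=transform)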

global pooling in KheradpishehDeep.py

Hi miladmozafari. Thanks for providing such a useful toolbox for building convolutional spiking neural networks. I am trying to implement Kheradpisheh's work using your script (KheradpishehDeep.py). However, in Kheradpisheh's paper the threshold of the neurons in the last convolutional layer is set to infinity and global pooling is performed for classification, which is inconsistent with your code. Would you please explain this?
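
For reference, a minimal sketch (editor's illustration, assuming SpykeTorch potentials accumulate along the time dimension) of the readout described in Kheradpisheh et al.: no firing in the last convolutional layer (an effectively infinite threshold) followed by a global max over time and space for each feature map.

import torch

def global_pooling_readout(potentials):
    # potentials: output of the last snn.Convolution layer,
    # shape (time_steps, features, height, width), with no threshold applied
    last = potentials[-1]                                 # final membrane potentials
    return last.reshape(last.size(0), -1).max(dim=1)[0]   # one value per feature map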

about datasets

Hi, I have used this toolbox, but I do not know how to prepare the dataset files in the proper folder format. Could you provide a link to the datasets you used for your reimplementation, or tell me how I can prepare them myself?
Thank you!
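
A hedged note (editor's addition): the bundled shallow scripts load images with torchvision's ImageFolder, which expects one sub-folder per class under the dataset root; the class-folder names below are only illustrative.

from torchvision.datasets import ImageFolder
import SpykeTorch.utils as utils

# expected on-disk layout (illustrative class names):
#   facemotortrain/
#       face/        face_001.jpg, face_002.jpg, ...
#       motorbike/   moto_001.jpg, moto_002.jpg, ...
trainset = utils.CacheDataset(ImageFolder("facemotortrain", s1c1))  # s1c1: the scripts' filtering/encoding transform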

No learning with STDP

Hello. I am new to SpykeTorch. I am working on an anomaly-detection project and would like to learn features from spectrograms using a convolutional SNN. I have been struggling with STDP for the past week. My model achieves decent performance (~70% AUC) but learns little or nothing from STDP (a gain of 2-5% AUC). I have tried many things: parameter tuning (number of winners, firing threshold, inhibition radius), as well as an adaptive learning rate, an adaptive firing threshold, etc. I don't understand where the problem comes from, because I know I can obtain > 90% AUC with a regular CNN.

My pipeline is: signal -> spectrogram (MFSC) -> CSNN -> ML outlier-detection classifier

So I have several questions:

  • Why do you usually choose a very small number of winners? Since there is at most one winner per feature map, isn't it a good idea to choose one winner per feature map?
  • When should potentials be "modified/filtered" before choosing winners? Using things like sf.pointwise_inhibition(pot) or sf.threshold(pot) leads to worse performance in my case.
  • Is taking the mean of spikes/potentials over all timesteps a good idea for readout?

Do you have any intuition about something I am doing wrong ?

Here is my model; I can also send you my whole code if you are willing to check it out.

# imports assumed (PyTorch + SpykeTorch), as in the repository's example scripts
import torch
import torch.nn as nn
from torch.nn import Parameter
import SpykeTorch.snn as snn
import SpykeTorch.functional as sf


class CSNN(nn.Module):
    def __init__(self, input_shape):
        super(CSNN, self).__init__()

        self.ctx = {}

        out_channels = 50
        kernel_height = 7
        in_nb_spike_bins, in_channels, in_frames, in_freqs = input_shape  
        output_height = in_frames - kernel_height

        self.conv = snn.Convolution(in_channels=in_channels, out_channels=out_channels, kernel_size=(kernel_height,in_freqs), weight_mean=0.8, weight_std=0.05)
        self.stdp = snn.STDP(self.conv, learning_rate = (0.004, -0.003))
        self.pool = snn.Pooling(kernel_size = (4,1), stride = (4,1), padding = 0)
        self.firing_thr = 26
        self.nb_winners = 1
        self.inhib_rad = 0

        self.max_ap = Parameter(torch.Tensor([0.15]))

        self.mean_pot = 0
        self.counter = 0
    

    def get_thr(self):
        return self.mean_pot / self.counter


    def forward(self, input):
        input = input.float()
        if self.training:
            pot = self.conv(input)
            spk, pot = sf.fire(pot, self.firing_thr, return_thresholded_potentials=True)
            #pot = sf.pointwise_inhibition(pot)
            #spk = pot.sign() #remove spk where pot is now null
            winners = sf.get_k_winners(pot, kwta=self.nb_winners, inhibition_radius=self.inhib_rad, spikes=spk)
            self.save_stdp_data(input, pot, spk, winners)  # keep context for the later STDP update
            return spk, pot
        else:
            pot = self.conv(input)
            pot = self.pool(pot)
            self.mean_pot += pot.mean()
            self.counter += 1
            spk = sf.fire(pot, self.firing_thr)
            return spk, pot


    def save_stdp_data(self, input_spikes, potentials, output_spikes, winners):
        self.ctx['input_spikes'] = input_spikes
        self.ctx['potentials'] = potentials
        self.ctx['output_spikes'] = output_spikes
        self.ctx['winners'] = winners

        
    def update_stdp(self):
        self.stdp(self.ctx['input_spikes'], self.ctx['potentials'], self.ctx['output_spikes'], self.ctx['winners'])
            
            
    def update_learning_rate(self):
        ap = torch.tensor(self.stdp.learning_rate[0][0].item(), device=self.stdp.learning_rate[0][0].device) * 2
        ap = torch.min(ap, self.max_ap)
        an = ap * -0.75
        self.stdp.update_all_learning_rate(ap.item(), an.item())

Notes:

  • My experiments run for fewer than 10 epochs.
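
For context, a minimal sketch (editor's illustration) of how a model like the one above is usually driven during STDP training, assuming a loader named train_loader that yields one accumulative spike-wave tensor per sample, matching input_shape.

model = CSNN(input_shape=(15, 1, 64, 40))   # hypothetical shape: (time bins, channels, frames, frequencies)
model.train()
for epoch in range(5):
    for spike_wave in train_loader:
        model(spike_wave)        # the forward pass stores the STDP context (input, potentials, spikes, winners)
        model.update_stdp()      # apply the STDP update for the selected winners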

Parameter Selection: out_channel

Hello,
Thank you for the intuitive tutorial in the included notebook. I am trying to modify the provided unsupervised network to learn the MNIST handwritten-digit dataset.

I am curious how you decided on the out_channels of the conv layer and on the firing threshold. Why did you choose 20 for both, and is it coincidental that they are the same? Sorry if this is a dumb question. Do you have any recommendations for parameter selection on MNIST?

Thanks again for this implementation!
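
For illustration only (editor's sketch of the API shown elsewhere in these issues, not the author's recommendation): out_channels and the firing threshold are independent hyperparameters, so equal values are not required; apart from the two 20s mentioned in the question, the numbers below are placeholders.

conv = snn.Convolution(in_channels=6, out_channels=20, kernel_size=(5, 5),
                       weight_mean=0.8, weight_std=0.05)         # 20 learned feature maps
pot = conv(spike_wave)                                           # spike_wave: an encoded input tensor (assumed)
spk, pot = sf.fire(pot, 20, return_thresholded_potentials=True)  # firing threshold of 20 on the membrane potential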

Can the traditional STDP weight-update rule be applied in this model?

Hello,
thanks a lot for the library! I am currently trying to realize R-STDP with emerging devices, and I am wondering whether the traditional STDP weight-update rule can be applied in this model; that is, with the delta_w calculated by STDP/R-STDP being proportional to exp(-delta_t/tau) instead of proportional to w(1-w). Is it possible for the algorithm to work like that?
(I am not a native English speaker; I apologize if anything is ambiguous.)
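
For reference, a hedged sketch (editor's illustration, not library code) of the two update rules being contrasted: the simplified rule, where the magnitude depends on the current weight, versus the classical pair-based rule, where it decays exponentially with the spike-time difference.

import math

def delta_w_simplified(w, pre_before_post, a_plus=0.004, a_minus=-0.003):
    # simplified STDP (as described in the question): sign from spike order,
    # magnitude proportional to w * (1 - w)
    a = a_plus if pre_before_post else a_minus
    return a * w * (1.0 - w)

def delta_w_exponential(delta_t, a_plus=0.004, a_minus=-0.003, tau=10.0):
    # classical pair-based STDP: delta_t = t_pre - t_post,
    # magnitude proportional to exp(-|delta_t| / tau)
    a = a_plus if delta_t <= 0 else a_minus
    return a * math.exp(-abs(delta_t) / tau)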

The system cannot find the path specified: 'facemotortrain'

Hi miladmozafari, I am trying to run your SpykeTorch code and have some issues.
First, using the face and motorbike images from your ipynb file, I got this error:

ValueError: Unknown resampling filter (64). Use Image.NEAREST (0), Image.LANCZOS (1), Image.BILINEAR (2), Image.BICUBIC (3), Image.BOX (4) or Image.HAMMING (5)

Here I am using GaborFilter. Should I use DoG filters instead, or do I need to change some parameters?

Second, when I tried to run your MozafariShallow code, I got this error (screenshot attached).

Waiting for your response.
Thanks

No such file or directory: 'facemotortrain'

Hello,

when I run MozafariShallow.py, the following error occurs:

python MozafariShallow.py
Traceback (most recent call last):
  File "MozafariShallow.py", line 127, in <module>
    trainsetfolder = utils.CacheDataset(ImageFolder("facemotortrain", s1c1))
  File "/home/liqp/.local/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 229, in __init__
    is_valid_file=is_valid_file)
  File "/home/liqp/.local/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 108, in __init__
    classes, class_to_idx = self._find_classes(self.root)
  File "/home/liqp/.local/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 137, in _find_classes
    classes = [d.name for d in os.scandir(dir) if d.is_dir()]
FileNotFoundError: [Errno 2] No such file or directory: 'facemotortrain'

How to visualise features

Hi Milad!
I am confused by the feature visualization:
1. In the MozafariDeep network, conv3.weight's kernel size is small (about 5x5), but the visualized features are huge images.
2. I tried some feature-visualization methods from CNNs, but they all require gradients (backpropagation).
I don't know if there is a magic method for visualizing an STDP network's features.
3. The same confusion applies to the KheradpishehDeep network.
Thank you very much!
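
For reference, a minimal gradient-free sketch (editor's illustration, not the repository's visualization code): the simplest view is to plot each learned kernel directly; the large feature images in the papers are typically reconstructed by projecting a feature back through the kernels and pooling windows of the preceding layers, which also needs no backpropagation. The attribute name mozafari.conv3 follows the MozafariDeep script, and the weight shape is assumed to be (out_maps, in_maps, kH, kW).

import matplotlib.pyplot as plt

weights = mozafari.conv3.weight.detach().cpu()
n = min(8, weights.size(0))
fig, axes = plt.subplots(1, n, squeeze=False, figsize=(2 * n, 2))
for i in range(n):
    axes[0][i].imshow(weights[i].sum(dim=0), cmap="gray")   # collapse input maps for display
    axes[0][i].axis("off")
plt.show()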

dataset

Hi, I want to ask a question.
I am very interested in your network, but I want to use my own dataset, which is a CSV file. However, I don't know what your MNIST input format looks like: numpy, list, or tensor? If I want to prepare it myself, what format do I need?
My dataset is not an image dataset, but it can be loaded directly into a numpy array, list, or tensor.
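
A hedged sketch (editor's illustration, not the library's own encoder): SpykeTorch layers consume a binary, accumulative spike-wave tensor of shape (time_steps, channels, height, width), the same layout unpacked as input_shape in the CSNN issue above, so data from a CSV only needs to be encoded into that shape, for example:

import torch

def to_spike_wave(x, time_steps=15):
    # x: 2D float tensor (height, width), e.g. one sample loaded from the CSV
    x = x / x.max()                                           # normalize to [0, 1]
    bins = (time_steps - 1) - (x * (time_steps - 1)).long()   # larger values spike in earlier time bins
    t = torch.arange(time_steps).view(-1, 1, 1)
    return (t >= bins.unsqueeze(0)).float().unsqueeze(1)      # (time_steps, 1, height, width), accumulative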
