keras-spiking's Introduction

Nengo: Large-scale brain modelling in Python

[Image: an illustration of the three principles of the NEF]

Nengo is a Python library for building and simulating large-scale neural models. Nengo can create sophisticated spiking and non-spiking neural simulations with sensible defaults in a few lines of code. Yet, Nengo is highly extensible and flexible. You can define your own neuron types and learning rules, get input directly from hardware, build and run deep neural networks, drive robots, and even simulate your model on a completely different neural simulator or neuromorphic hardware.

Installation

Nengo depends on NumPy, and we recommend that you install NumPy before installing Nengo. If you're not sure how to do this, we recommend using Anaconda.

To install Nengo:

pip install nengo

If you have difficulty installing Nengo or NumPy, please read the more detailed Nengo installation instructions first.

If you'd like to install Nengo from source, please read the developer installation instructions.

Nengo is tested to work on Python 3.6 and above. Python 2.7 and Python 3.4 were supported up to and including Nengo 2.8.0. Python 3.5 was supported up to and including Nengo 3.1.

Examples

Here are six of many examples showing how Nengo enables the creation and simulation of large-scale neural models in a few lines of code.

  1. 100 LIF neurons representing a sine wave (a minimal sketch follows this list)
  2. Computing the square across a neural connection
  3. Controlled oscillatory dynamics with a recurrent connection
  4. Learning a communication channel with the PES rule
  5. Simple question answering with the Semantic Pointer Architecture
  6. A summary of the principles underlying all of these examples
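As a taste of the first example, here is a minimal sketch of 100 LIF neurons representing a sine wave; the parameter choices are illustrative, not necessarily those of the tutorial:

import numpy as np
import nengo

# 100 LIF neurons (Nengo's default neuron type) representing a 1 Hz sine wave.
with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, ens)
    probe = nengo.Probe(ens, synapse=0.01)  # filtered decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)  # simulate one second

print(sim.data[probe].shape)  # (n_timesteps, 1)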

Documentation

Usage and API documentation can be found at https://www.nengo.ai/nengo/.

To build the documentation yourself, run the following command:

python setup.py build_sphinx

This requires Pandoc to be installed, as well as some additional Python packages. For more details, see the Developer Guide.

Development

Information for current or prospective developers can be found at https://www.nengo.ai/contributing/.

Getting Help

Questions relating to Nengo, whether about its use or its development, should be asked on the Nengo forum at https://forum.nengo.ai.

keras-spiking's People

Contributors

arvoelke, drasmuss, hunse, tbekolay

Forkers

phzeller poen0121

keras-spiking's Issues

Which neuromorphic hardware does KerasSpiking simulate?

Hello,

I have recently been using KerasSpiking, but I am still not sure which neuromorphic hardware it simulates.
For example, I am running this code, but I am not sure which neuromorphic hardware the simulator uses to run the SNN.

Support RNNs (i.e., a way to use spiking activations within RNNs)

Would be nice to support spiking activations within RNNs such as keras-lmu or an LSTM. Ideally this would just be a matter of supplying a different activation argument to the RNN. Currently this can be done with a stochastic (i.e., stateless) spiking model (e.g., activation = StochasticSpiking("tanh")):

# The following code is made available under the KerasSpiking license:
# https://www.nengo.ai/keras-spiking/license.html

import tensorflow as tf


def pseudo_gradient(forward, backward):
    """Multiplexes between one of the forward or backward tensors."""
    # The following trick can be used to supply a pseudo gradient. It works as follows:
    # on the forwards pass, the backward terms cancel out; on the backwards pass, only
    # the backward gradient is let through. This is similar to using tf.custom_gradient,
    # but that is prone to silent errors since certain parts of the graph might not
    # exist in the context that any tf.gradients are evaluated, which can lead to None
    # gradients silently getting through.
    return backward + tf.stop_gradient(forward - backward)


class StochasticSpiking:
    """Turn a static activation into a spiking one using stochastic rounding.

    Similar to ``nengo.neurons.StochasticSpiking``.

    The effective spike rate is controlled by ``dt``. Specifically, if ``a = f(x)``
    is the static activity, then at most ``ceil(abs(a * dt))`` spikes are generated
    per time-step.

    Uses the gradient of the underlying activation function on the backward pass.
    """

    def __init__(self, wrapped_activation, dt=1):
        self._activation = tf.keras.activations.get(wrapped_activation)
        self._dt = dt

    def __call__(self, x):
        a = self._activation(x)
        y = a * self._dt

        # Apply stochastic rounding: y -> y_rounded.
        n = tf.math.floor(y)
        r = y - n
        # Match the uniform sample's dtype to x so the comparison is valid
        # for non-float32 inputs.
        y_rounded = n + tf.cast(
            tf.random.uniform(shape=tf.shape(x), dtype=x.dtype) < r, dtype=x.dtype
        )

        a_rounded = y_rounded / self._dt
        return pseudo_gradient(forward=a_rounded, backward=a)

Otherwise I've been unable to get the SpikingActivation layer or SpikingActivationCell to work within RNNs.
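For reference, a hedged sketch of how the StochasticSpiking class above might be dropped into a standard Keras RNN; this assumes Keras accepts an arbitrary callable as the activation argument, and the layer and shapes are purely illustrative:

import tensorflow as tf

# Illustrative only: wrap tanh in the stochastic spiking activation defined
# above and hand it to a standard Keras RNN layer; dt controls the effective
# spike rate per time-step.
rnn = tf.keras.layers.SimpleRNN(
    units=32,
    activation=StochasticSpiking("tanh", dt=0.01),
    return_sequences=True,
)

x = tf.random.normal((8, 100, 16))  # (batch, time, features)
y = rnn(x)
print(y.shape)  # (8, 100, 32)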

Feature requests for Lowpass: non-trainable and/or homogeneous

There are two features I've found myself needing lately w.r.t. the lowpass layer:

  1. (Non-Trainability) The ability to make the time-constant non-trainable. apply_during_training is close, but setting this to False skips over the lowpass entirely during training. I still want the lowpass in the forward training pass; I just don't want its time-constants to be modified from the initial value that I've provided (a possible workaround is sketched after this list).

  2. (Homogeneity) The ability to learn only a single time-constant. Currently the initial tau is broadcast like so:

    shape=[1] + self.state_size.as_list(),
    initializer=tf.initializers.constant(np.ones(self.state_size) * self.tau),

    such that a different tau is learned for each dimension. Sometimes prior knowledge tells us that the time-constant should be the same across dimensions. This would also make trained lowpass filters compatible with NengoDL's converter (see nengo/nengo-dl#60 (comment)). But even independently of that, I've encountered a situation where I'd like to learn just a single time-constant and then change the shape of the data going into the layer at inference time (i.e., have a single lowpass that is broadcast across all of the dimensions, with the same initial value).
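A hedged sketch of a possible workaround for (1), assuming the time-constant is the layer's only trainable weight and that keras_spiking.Lowpass takes the initial tau as its first argument (as in the version discussed here):

import tensorflow as tf
import keras_spiking

inp = tf.keras.Input((None, 32))  # (batch, time, features)

lowpass = keras_spiking.Lowpass(0.01)  # initial tau; signature assumed

# Freezing the layer keeps the lowpass in the forward pass during training
# (unlike apply_during_training=False) while preventing tau from being
# updated away from its initial value.
lowpass.trainable = False

model = tf.keras.Model(inp, lowpass(inp))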

Cannot use keras_spiking.ModelEnergy on models that have a concatenation layer, such as DenseNet121

I am trying to get some energy values from a DenseNet architecture, and when I run the code below I get a ValueError (shown below). I narrowed the issue down and found that it first occurs at a layers.Concatenate call in the Keras implementation. Has anyone seen a similar issue? Keras' VGG16 implementation, which doesn't have a Concatenate layer, works fine.

from tensorflow.keras.applications import DenseNet121
import keras_spiking
import numpy as np

denseNet_model = DenseNet121(
    weights=None, include_top=True, input_shape=(224, 224, 3)
)

energy = keras_spiking.ModelEnergy(denseNet_model, example_data=np.ones((32, 224, 224, 3)) * 3)

ValueError: could not broadcast input array from shape (32,56,56,64) into shape (32,56,56)
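For comparison, a minimal sketch of the VGG16 case that the poster reports working (VGG16 has no Concatenate layers), assuming the same ModelEnergy API as above:

from tensorflow.keras.applications import VGG16
import keras_spiking
import numpy as np

# VGG16 contains no Concatenate layers, so ModelEnergy runs without the error.
vgg_model = VGG16(weights=None, include_top=True, input_shape=(224, 224, 3))
energy = keras_spiking.ModelEnergy(
    vgg_model, example_data=np.ones((32, 224, 224, 3))
)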
