
Comments (13)

jonahweissman commented on August 26, 2024

@rjpg Both examples use the ESN as a frequency generator. You give the model some kind of periodic time series, and it learns to match the waveform. Instead of this project, you might be better served by plain Keras. They have an example of LSTM sequence classification.
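A minimal sketch of that frequency-generator setup (pyESN-style API; the constant np.ones input only acts as a bias drive, and the target waveform reaches the reservoir through output feedback):

import numpy as np
from pyESN import ESN

N = 2000
data = np.sin(np.arange(N) * 2 * np.pi / 50)  # periodic target waveform

esn = ESN(n_inputs=1, n_outputs=1, n_reservoir=200,
          spectral_radius=0.95, random_state=42)

# Constant input; the network learns the waveform via teacher-forced feedback.
esn.fit(np.ones(N), data)
prediction = esn.predict(np.ones(200))  # free-run for 200 more steps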

cknd commented on August 26, 2024

Interesting idea. But I have a hunch that the cleanest way to do this would be to extend Keras' own SimpleRNN layer - after all, echo state networks are just well-initialized vanilla RNNs with training restricted to a readout layer. So if there's a way to define a custom initialiser for the SimpleRNN's recurrent weights and exempt them from optimisation, most of the work should already be done. Except for all the little things I'm forgetting now.
Hm. I guess the benefit of using an ESN layer over a normal RNN layer would be that it provides some memory & nonlinear feature expansion without adding too many new free parameters to train -- could be worth trying.
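A minimal sketch of that idea, assuming TensorFlow 2.x / tf.keras (the spectral-radius rescaling is a standard ESN recipe, not part of the Keras API):

import numpy as np
import tensorflow as tf
from tensorflow import keras

n_reservoir = 200
spectral_radius = 0.95

def reservoir_initializer(shape, dtype=None):
    # Random recurrent weights rescaled to the target spectral radius.
    w = np.random.randn(*shape)
    w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
    return tf.constant(w, dtype=dtype)

model = keras.Sequential([
    keras.Input(shape=(None, 1)),               # (timesteps, features)
    keras.layers.SimpleRNN(n_reservoir,         # fixed ESN-style reservoir
                           activation="tanh",
                           recurrent_initializer=reservoir_initializer,
                           trainable=False),    # exempt from optimisation
    keras.layers.Dense(1),                      # trainable linear readout
])
model.compile(optimizer="adam", loss="mse")

With trainable=False the input and recurrent weights keep their initial values and only the Dense readout is fit, which is exactly the ESN training restriction.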

cknd commented on August 26, 2024

(Less ambitiously, you could just use the existing pyESN as a data preprocessor - i.e. hack fit to give you the network response for your data (extended_states), and instead of fitting a linear readout on it, use it as the input data to some (likely non-time-aware) Keras model. I think there are some anecdotal success stories in the literature for this idea: 'large fixed reservoir + small feedforward network readout'.)
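A rough sketch of that preprocessing route; the state harvesting is hand-rolled here rather than hacked out of pyESN's fit, and harvest_extended_states is an illustrative name, not pyESN API:

import numpy as np

def harvest_extended_states(inputs, W, W_in):
    # Run a fixed reservoir over the sequence, collecting every state.
    states = np.zeros((len(inputs), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(inputs):
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states[t] = x
    # 'Extended' states: reservoir activations concatenated with the inputs.
    return np.hstack([states, np.reshape(inputs, (len(inputs), -1))])

The resulting (n_samples, n_reservoir + n_inputs) array can then be fed to an ordinary feedforward Keras model as if it were static input data.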

jonahweissman commented on August 26, 2024

I'll probably try to extend Keras' SimpleRNN layer. If I get it to work, I'll post here. Thanks!

cknd commented on August 26, 2024

cool, I'm curious what comes out of it

gurbain commented on August 26, 2024

Hi @jonahweissman!

I'm currently trying to implement the same ESN + Keras architecture (in order to train on a GPU and use different numbers of layers in combination with my ESN). Do you have any progress there, or did you find something interesting somewhere? :)

One important feature that I am not sure how to handle is integrating the feedback from the readout layer back into the reservoir.

Best,
Gabriel

jonahweissman commented on August 26, 2024

@gurbain I made a pull request to pyESN with the work I did. The basic idea is that instead of feeding the reservoir states into a linear readout, they are fed through a Keras model to produce the outputs.

> One important feature that I am not sure how to handle is integrating the feedback from the readout layer back into the reservoir.

I initially tried to create a Keras layer, and also really struggled with this issue. I found an example of this technique in lstm_seq2seq, but it was too complicated for me. I believe pyESN integrates feedback from the readout layer into the reservoir by default.
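Schematically, the state update with output feedback looks like this (pyESN takes this path when teacher_forcing=True; W_feedb names the feedback weight matrix):

# x: reservoir state, u: input, y: output
x_next = np.tanh(W @ x + W_in @ u_next + W_feedb @ y_prev)

During training y_prev is the teacher signal; in free-running prediction it is the model's own previous output, which is what closes the loop.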

gurbain commented on August 26, 2024

Hi @jonahweissman! Super nice, that is exactly what I started to implement yesterday!

However, I get the same issues I was already facing with a different implementation (and I am starting to suspect it is a theoretical problem rather than a bug): when I run the ESN + Keras readout freely and include the feedback, the full closed-loop system (ESN + Keras + feedback) starts diverging. I tested many different spectral radius, damping, noise, and readout-size values without success... Did you face the same problem? Do you have any idea why?

You can find a simple example of what I mean here (with the best result I could get; generally it diverges even faster): https://gist.github.com/gurbain/ba52af78d7be6eb2a23f48af15da2ce0

jonahweissman commented on August 26, 2024

@gurbain That's happened to me a few times. I don't really know how to fix it, but it's possible that your spectral radius of 1.4 is the problem. The literature on Echo State Networks tends to recommend a spectral radius of less than (but close to) 1. The idea is that it's stable, but just barely.
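For what it's worth, rescaling the recurrent matrix to a target spectral radius below 1 is a two-liner (a sketch; W is the raw random recurrent weight matrix):

rho = np.max(np.abs(np.linalg.eigvals(W)))  # current spectral radius
W = W * (0.95 / rho)                        # rescale to just below 1

pyESN does this internally via its spectral_radius parameter.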

gurbain commented on August 26, 2024

Well, I forgot to change it in the example, but the problem is different: it does not work for any spectral radius, nor for any task! The same setup works perfectly when I swap the readout layer for a simple RLS rule, so I think there is something I need to investigate in more detail there...
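For context, an online RLS readout update of the kind mentioned above looks roughly like this (FORCE-style learning; all names are illustrative):

import numpy as np

def rls_step(W_out, P, x, y_target, lambda_=0.999):
    # x: extended state vector, y_target: teacher output at this step
    k = P @ x / (lambda_ + x @ P @ x)       # gain vector
    err = y_target - W_out @ x              # a-priori output error
    W_out = W_out + np.outer(err, k)        # readout weight update
    P = (P - np.outer(k, x @ P)) / lambda_  # inverse-correlation update
    return W_out, P

P is typically initialized to np.eye(n_features) / delta with a small delta, and lambda_ is a forgetting factor close to 1.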

jonahweissman commented on August 26, 2024

Hmm, that's odd. Sorry I can't be more helpful; I don't really have a very firm grasp on the theory behind all of this.

rjpg commented on August 26, 2024

Hello, I am looking at the example and I do not understand why you use np.ones(...) on the X set for training:

pred_training = esn.fit(np.ones(trainlen),
                        data[:trainlen],
                        epochs=100,
                        batch_size=32,
                        verbose=0)


And if I want to do multivariate time-series classification with 8 variables and a 5-class output (e.g. strong down, weak down, neutral, weak up, strong up), can I change esn = ESN(n_inputs=1, ...) to esn = ESN(n_inputs=8, ...)?

and the output layer to:

model.add(Dense(units=5, activation='softmax'))

?

How do I define the number of input time steps for the input shape?

For example, when defining the input for an LSTM, there are three dimensions with shape (batch_size, timesteps, variables).

What is the input shape here?
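For reference, a sketch of how shapes map in the pyESN-style API used above: there are no batch or timestep axes; the whole series is one long sequence with time as the first axis (X_train and y_train_onehot are illustrative names):

# inputs:  shape (n_timesteps, n_inputs)  -- here (n_timesteps, 8)
# outputs: shape (n_timesteps, n_outputs) -- e.g. (n_timesteps, 5) one-hot
esn = ESN(n_inputs=8, n_outputs=5, n_reservoir=500)
esn.fit(X_train, y_train_onehot)

Treating each fixed-length window as a separate sample would instead mean harvesting one reservoir state per window and classifying those states; as far as I can tell, pyESN does not do that out of the box.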

rjpg commented on August 26, 2024

Hello,

I have my problem running with an LSTM and it works OK.

I would like to try an ESN on the same problem, so I was trying to see how to adapt the inputs and outputs of my problem to this implementation.

Here is my problem with LSTM :
https://github.com/rjpg/bftensor/blob/master/Autoencoder/src/LSTMNewData.ipynb

The base NN looks like this:

Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 128, 9)            0         
_________________________________________________________________
bidirectional_1 (Bidirection (None, 128, 200)          88000     
_________________________________________________________________
bidirectional_2 (Bidirection (None, 128, 100)          100400    
_________________________________________________________________
bidirectional_3 (Bidirection (None, 40)                19360     
_________________________________________________________________
dense_1 (Dense)              (None, 5)                 205       

As you can see, I have 9 time series, each with 128 time steps, and an output layer of 5 neurons with softmax to classify the multivariate time-series input into one of 5 classes.

I would like to see if an ESN brings some improvement.

PS: in my example, the baseline accuracy is 20% because I have 5 well-balanced classes (a model responding randomly would reach 20% accuracy). With the LSTM I reach 29%, 9 points above the baseline. It sounds poor, but for my problem it is already good...
