vict0rsch / deep_learning
Deep Learning Resources and Tutorials using Keras and Lasagne
License: GNU General Public License v2.0
I'm getting the error below while executing the program. Please help me solve it:
./recurrent_keras_power.py: line 8: syntax error near unexpected token `1234'
./recurrent_keras_power.py: line 8: `np.random.seed(1234)'
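This bash error usually means the script was run by the shell rather than by Python (e.g. ./recurrent_keras_power.py without a Python shebang as its first line), so bash trips over the Python syntax. A minimal sketch using a hypothetical demo.py:

```shell
# Create a small script; the shebang on the first line is what lets
# './demo.py' work (after chmod +x) without bash misreading it.
cat > demo.py <<'EOF'
#!/usr/bin/env python
import numpy as np
np.random.seed(1234)
print("ok")
EOF
# Invoking it explicitly with the interpreter always avoids the bash error:
python demo.py
```

Either add the shebang as the very first line of recurrent_keras_power.py, or run it as python recurrent_keras_power.py.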
The tutorial's output is the next single value, but how can I use the same time-series data to predict the next n steps? How should I design the model for that? I know TimeDistributed() can produce a 3-D output, but its output shape is the same as the input shape; I can only change the output_dim. So the problem is: how do I change the time_step dimension of the output?
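One common approach (sketched here with an illustrative helper; the function name and window sizes are my own, not from the repo) is to reshape the targets instead of the layer output: each training sample keeps n_in past values as input and the next n_out values as its target, so a final Dense(n_out) layer predicts n steps jointly.

```python
import numpy as np

def make_windows(series, n_in, n_out):
    """Slice a 1-D series into (X, y) pairs: X holds n_in past values,
    y holds the next n_out values to be predicted jointly."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    X = np.array(X)[:, :, None]   # (nb_samples, timesteps, features=1)
    y = np.array(y)               # (nb_samples, n_out)
    return X, y

X, y = make_windows(np.arange(10, dtype=float), n_in=4, n_out=2)
print(X.shape, y.shape)  # (5, 4, 1) (5, 2)
```

With targets shaped this way the LSTM can keep return_sequences=False and only the last Dense layer changes; TimeDistributed is only needed when you want one output per input time step.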
Thanks for making your LSTM time series code available; it reported MSE of 0.07. I tried to create a baseline for comparison, simply taking X_t as X_{t-1}. This also gives me MSE of 0.07. Is this odd, or maybe LSTM generalizes better, or my MSE computation is faulty?
Thanks,
import pandas as pd
import numpy as np

# Load the power-consumption series and drop missing readings (marked '?').
df = pd.read_csv('household_power_consumption.txt', sep=';')
df = df[['Global_active_power']]
df = df[df.Global_active_power != '?']

# Persistence baseline: predict each value with the previous one.
df['G2'] = df['Global_active_power'].shift(1)
print(df.head())
df = df.astype(float)
df['err'] = df['G2'] - df['Global_active_power']
df['err'] = np.power(df['err'], 2)
print(df.err.sum() / len(df))  # mean squared error of the baseline
#print(np.sqrt(df['err'].sum()) / len(df))
Did you notice that the prediction looks like a time shift of the original series? Is this an inherent limitation of applying this network to a time series?
Just a few basic questions:
I tried your code and I think it has very low accuracy. Let me try to explain.
Thanks, very well explained for a beginner like me.
Minor note: I get the warning
The show_accuracy argument is deprecated; instead you should pass the accuracy metric to the model at compile time: model.compile(optimizer, loss, metrics=["accuracy"])
I'm going through this and finding a few typos and broken links. Fixing them and including in this PR.
Hi, the code runs well, thanks.
I have one question that confuses me: why do you reshape X into 3 dimensions, X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))? Do these represent nb_samples, timesteps, and features?
Many thanks
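For reference, the three axes do correspond to (nb_samples, timesteps, features): Keras recurrent layers expect 3-D input, and with a single measured variable per time step the reshape just adds a features axis of size 1. A minimal numpy sketch with made-up sizes:

```python
import numpy as np

# Hypothetical windowed data: 100 samples, each holding 49 past values.
X_train = np.random.rand(100, 49)

# Keras recurrent layers expect input of shape
# (nb_samples, timesteps, features); with one variable per time step,
# features = 1, so the reshape only appends that last axis.
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
print(X_train.shape)  # (100, 49, 1)
```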
Hi Sir,
Can you please help me solve the error below in my Keras installation? I want to install Keras via Anaconda on Ubuntu.
UnsatisfiableError: The following specifications were found to be in conflict:
Thanks for your examples and explanation of using an RNN with an LSTM.
I have a robotics application that takes as input an image and a vector of IMU measurements. I'm wondering if you have an idea about how to incorporate a time sequence of images and IMU readings into a network that uses Conv2D layers and Dense layers as input?
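One common pattern (a sketch only, using the modern tf.keras functional API; all layer sizes and input shapes below are illustrative, not from the repo) is a two-branch model: a Conv2D branch for the image, a Dense branch for the IMU vector, merged before the output head.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Image branch: a small Conv2D feature extractor (sizes are made up).
image_in = layers.Input(shape=(64, 64, 3), name="image")
x = layers.Conv2D(16, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# IMU branch: plain dense layers on the measurement vector.
imu_in = layers.Input(shape=(6,), name="imu")  # e.g. 3-axis accel + gyro
v = layers.Dense(32, activation="relu")(imu_in)

# Merge both branches and add a regression head.
merged = layers.concatenate([x, v])
out = layers.Dense(1, name="prediction")(merged)

model = Model(inputs=[image_in, imu_in], outputs=out)
```

For the time-sequence aspect, one option is to wrap each per-frame branch in TimeDistributed and feed the merged per-step features into an LSTM; the sketch above shows only the single-step fusion.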
Could you please update your time-series README with a diagram of the architecture? You explained in great detail what return_sequences does, but a simple diagram would be more helpful.
Cheers
@vict0rsch Could you please delete this issue completely? I will post the question from my other account. 😃
Thank you!
I used 1,000 samples to test your example, but I get the following error. What does "too many indices for array" mean?
Compilation Time : 0.00799989700317
Train on 418 samples, validate on 23 samples
Epoch 1/1
418/418 [==============================] - 228s - loss: 0.6930 - val_loss: 0.5179
too many indices for array
Training duration (s) : 279.796000004
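For what it's worth, "too many indices for array" is NumPy's IndexError for indexing an array with more indices than it has dimensions; in this code it typically means some array ended up with fewer dimensions than the slicing expects (e.g. data that was never reshaped to 3-D). A minimal reproduction:

```python
import numpy as np

a = np.arange(5)        # a 1-D array
try:
    a[0, 0]             # two indices into a 1-D array raises the error
except IndexError as e:
    msg = str(e)
    print(msg)          # the message starts with "too many indices for array"
```

Printing the .shape of the arrays just before the failing line usually pinpoints which one is smaller than expected.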
I would love to recreate that example with the same data that was used.
However, the dataset is no longer available for download for me.
Is there any way to retrieve it, or could you upload it somewhere I can download it?
Thanks :)
Thanks, your tutorial and code explain LSTMs very well for a beginner like me. However, when I run it in PyCharm (Python 3.6, Keras 1.0.7), it fails as follows:
File "C:/Users/Guo/Desktop/household_power_consumption/predict.py", line 132, in
run_network()
File "C:/Users/Guo/Desktop/household_power_consumption/predict.py", line 105, in run_network
model = build_model()
File "C:/Users/Guo/Desktop/household_power_consumption/predict.py", line 67, in build_model
return_sequences=True))
TypeError: Expected int32, got <tf.Variable 'lstm_1_W_i:0' shape=(1, 50) dtype=float32_ref> of type 'Variable' instead.
I wonder if it's a version-compatibility issue. Would you give me some advice?
Really, thank you again.
Hi, your original post's code uses a 49-dimensional X to predict the 50th value. What if I want to use 48 dimensions to predict the 49th and 50th? In that case, do I just change the output_dim of the last output layer:
model.add(Dense(
output_dim=2))
model.add(Activation("linear"))
Is that right?
Sorry for bothering you again.
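For what it's worth, a Dense layer with 2 units does emit two values per sample; the other half of the change is that each training target must then be the pair of the next two values, i.e. y must have shape (nb_samples, 2) to match. A small numpy sketch with made-up window sizes:

```python
import numpy as np

series = np.arange(8, dtype=float)
n_in = 3    # past values fed to the network (48 in the question)
n_out = 2   # values predicted jointly (the 49th and 50th)

# Each row of y is the pair of values following one input window.
y = np.stack([series[i + n_in:i + n_in + n_out]
              for i in range(len(series) - n_in - n_out + 1)])
print(y.shape)  # (4, 2)
```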