zomux / deepy
A highly extensible deep learning framework
License: MIT License
I get the following output:
WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.
But I do have the Microsoft Python compiler pack installed, as well as Visual Studio 12.0, Visual Studio 11.0, ...
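For reference, the workaround the warning itself suggests (a sketch assuming a standard Theano setup on Windows; it silences the warning but keeps the slow Python fallbacks) is setting the cxx flag to an empty string in a .theanorc.txt file:

```ini
; %USERPROFILE%\.theanorc.txt -- hypothetical workaround, not a real fix:
; an empty cxx disables C compilation, so Theano falls back to pure Python.
[global]
cxx =
```

The real fix is making a supported C++ compiler (e.g. g++ from MinGW, or the Visual C++ compiler Theano is configured for) visible on the PATH.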
Hi Admin,
Do you have any plans to create documentation or a tutorial on how to use deepy?
I would like to contribute.
Please let me know.
Thank you.
File "/home/hadoop/deepy/deepy/trainers/trainers.py", line 327, in __init__
super(AdamTrainer, self).__init__(network, config, "ADAM")
File "/home/hadoop/deepy/deepy/trainers/trainers.py", line 273, in __init__
learning_updates = list(self.learning_updates())
File "/home/hadoop/deepy/deepy/trainers/trainers.py", line 289, in learning_updates
gradients = T.grad(self.cost, params)
File "/home/hadoop/lib/python2.7/site-packages/theano/gradient.py", line 528, in grad
grad_dict, wrt, cost_name)
File "/home/hadoop/lib/python2.7/site-packages/theano/gradient.py", line 1103, in _populate_grad_dict
rval = [access_grad_cache(elem) for elem in wrt]
File "/home/hadoop/lib/python2.7/site-packages/theano/gradient.py", line 1063, in access_grad_cache
term = access_term_cache(node)[idx]
File "/home/hadoop/lib/python2.7/site-packages/theano/gradient.py", line 924, in access_term_cache
input_grads = node.op.grad(inputs, new_output_grads)
File "/home/hadoop/lib/python2.7/site-packages/theano/scan_module/scan_op.py", line 1545, in grad
Xt_placeholder)
File "/home/hadoop/lib/python2.7/site-packages/theano/scan_module/scan_utils.py", line 1018, in forced_replace
to_replace = local_traverse(out, x)
File "/home/hadoop/lib/python2.7/site-packages/theano/scan_module/scan_utils.py", line 1016, in local_traverse
rval += local_traverse(inp, x)
File "/home/hadoop/lib/python2.7/site-packages/theano/scan_module/scan_utils.py", line 1016, in local_traverse
rval += local_traverse(inp, x)
File "/home/hadoop/lib/python2.7/site-packages/theano/scan_module/scan_utils.py", line 1016, in local_traverse
rval += local_traverse(inp, x)
File "/home/hadoop/lib/python2.7/site-packages/theano/scan_module/scan_utils.py", line 1016, in local_traverse
rval += local_traverse(inp, x)
File "/home/hadoop/lib/python2.7/site-packages/theano/scan_module/scan_utils.py", line 1016, in local_traverse
rval += local_traverse(inp, x)
File "/home/hadoop/lib/python2.7/site-packages/theano/scan_module/scan_utils.py", line 1016, in local_traverse
rval += local_traverse(inp, x)
File "/home/hadoop/lib/python2.7/site-packages/theano/scan_module/scan_utils.py", line 1016, in local_traverse
rval += local_traverse(inp, x)
File "/home/hadoop/lib/python2.7/site-packages/theano/scan_module/scan_utils.py", line 1016, in local_traverse
Hey there! First of all: thank you so much for releasing this, documenting things, putting it on pypi, etc. etc., really appreciate it :)
I've been trying to get a fun "search and rescue" example working where a drone with a search radius explores a map until it finds the objective. Right now I am having trouble getting the state
input properly... assuming I have the rest understood. It seems like I should keep calling DQNAgent.learn
over and over until I am satisfied? I was kind of confused by the DQN example with all the socketIO stuff; I wasn't sure how that was driving the learning.
# some pseudo code
import numpy as np

height = width = 40
map = np.zeros((height, width))
ACTIONS = ('up', 'down', 'left', 'right')
agent = DQNAgent(height * width, len(ACTIONS))
while True:
    state = map.copy()
    action = agent.get_action(state)
    reward = 0  # not sure what to set this to on the first "learn"
    drone.do_action(ACTIONS[action])
    # Get state after the action has changed it
    next_state = map.copy()
    reward = drone.get_current_reward()
    agent.learn(state, action, reward, next_state)
Wrong number of dimensions: expected 2, got 3 with shape (1, 40, 40).
If you're feeling crazy here's the actual source.
I may have this all backwards; apologies if this is a silly question.
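For what it's worth, an "expected 2, got 3" dimension error usually means the network wants a batch of flat feature vectors (matching input_dim = height * width here), not a 2-D grid. A minimal sketch of the reshape, assuming that input convention holds for DQNAgent:

```python
import numpy as np

height = width = 40
grid = np.zeros((height, width))

# Flatten the 2-D map into one feature vector of length height * width,
# then add a batch axis so the input is 2-D: (batch_size, features).
state = grid.copy().reshape(1, height * width)
print(state.shape)  # (1, 1600)
```

If the agent accepts single unbatched samples instead, `grid.flatten()` (shape `(1600,)`) would be the variant to try.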
In https://github.com/uaca/deepy/blob/master/deepy/layers/lstm.py#L111
an n_steps=self._steps argument is missing.
Cheers!
Pablo
Hi,
I am trying to execute the sample code given as an example, and when running the lines below:
model.stack(Dense(256, 'relu'),
Dropout(0.2),
Dense(256, 'relu'),
Dropout(0.2),
Dense(10, 'linear'),
Softmax())
I am getting this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'Dropout' is not defined
Am I making a mistake, or is there a problem with the library?
Keras now supports both Theano and TensorFlow as backends. Do you have any intention or interest in doing the same with Deepy?
Hi,
cool library with an impressive variety of models -- I will take a closer look!
I noticed some code in your DRAW model was cut-and-pasted from jbornschein/draw. I don't mind and rather feel honored, but would you mind attributing it correctly?
The license on that code is also technically GPL V3 -- I wanted to ensure that enhancements stay open source. But it's OK with me if those parts here are now under the MIT license.
ubgpu@ubgpu:~/github/deepy$ sudo python experiments/attention_models/baseline.py
[sudo] password for ubgpu:
Using gpu device 0: GeForce GTX 970
Traceback (most recent call last):
File "experiments/attention_models/baseline.py", line 8, in <module>
from baseline_model import get_network
File "/home/ubgpu/github/deepy/experiments/attention_models/baseline_model.py", line 14, in <module>
from deepy.networks import NeuralLayer
ImportError: cannot import name NeuralLayer
Hi, I have installed deepy on Windows 7 and can run tutorial1.py in experiments\tutorials, but when I run tutorial2.py I get an error like this:
C:\Users\Administrator\Desktop\deepy-master\experiments\tutorials>python tutorial2.py
Using gpu device 0: GeForce GT 630 (CNMeM is enabled with initial size: 75.0% of
memory, CuDNN not available)
D:\soft\python\lib\site-packages\theano\tensor\signal\downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
"downsample module has been moved to the theano.tensor.signal.pool module.")
INFO:deepy.networks.network:deepy version = 0.2.0
INFO:deepy.dataset.mnist:loading minst data
INFO:deepy.dataset.mnist:[mnist] training data size: 50000
INFO:deepy.dataset.mnist:[mnist] valid data size: 10000
INFO:deepy.dataset.mnist:[mnist] test data size: 10000
INFO:deepy.trainers.trainers:changing optimization method to 'MOMENTUM'
INFO:deepy.networks.network:network inputs: x
INFO:deepy.networks.network:network targets:
INFO:deepy.networks.network:network parameters:
INFO:deepy.networks.network:parameter count: 0
INFO:deepy.trainers.trainers:monitor list: J
INFO:deepy.trainers.trainers:compile evaluation function
INFO:deepy.trainers.trainers:compiling MomentumTrainer learning function
Traceback (most recent call last):
File "tutorial2.py", line 73, in <module>
trainer = MomentumTrainer(model, {'weight_l2': 0.0001})
File "D:\soft\python\lib\site-packages\deepy\trainers\trainers.py", line 436, in __init__
super(MomentumTrainer, self).__init__(network, config, "MOMENTUM")
File "D:\soft\python\lib\site-packages\deepy\trainers\trainers.py", line 354, in __init__
learning_updates = list(self.learning_updates())
File "D:\soft\python\lib\site-packages\deepy\trainers\trainers.py", line 381, in learning_updates
gradients = T.grad(self.cost, params)
File "D:\soft\python\lib\site-packages\theano\gradient.py", line 436, in grad
raise TypeError("cost must be a scalar.")
TypeError: cost must be a scalar.
Can you help me?
Thanks!
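A note on the error itself: T.grad requires its first argument to be a 0-d (scalar) variable, so a per-example loss vector must be reduced before differentiation. (The "parameter count: 0" log line above also suggests the layers never registered parameters, which would leave the cost graph degenerate.) The scalar constraint, illustrated with plain numpy rather than Theano:

```python
import numpy as np

# A batch of per-example losses is a vector...
per_example_loss = np.array([0.3, 0.1, 0.4])

# ...but a gradient target must be a single scalar, so reduce it first.
# This mirrors what T.grad(cost, params) demands of `cost`.
cost = per_example_loss.mean()
print(cost.ndim)  # 0
```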
Installing via pip (latest version - 1 day old) results in an error:
C:\Users\dd>pip install deepy
Collecting deepy
Downloading deepy-0.1.4.tar.gz (94kB)
100% |################################| 98kB 330kB/s
Complete output from command python setup.py egg_info:
Using gpu device 0: GeForce GTX 970M
Traceback (most recent call last):
File "<string>", line 20, in <module>
File "c:\users\ddofer\appdata\local\temp\pip-build-vtis_g\deepy\setup.py", line 12, in <module>
requirements = open(os.path.join(os.path.dirname(__file__), 'requirements.txt')).read().strip().split("\n")
IOError: [Errno 2] No such file or directory: 'c:\users\dd\appdata\local\temp\pip-build-vtis_g\deepy\requirements.txt'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in c:\users\ddofer\appdata\local\temp\pip-build-vtis_g\deepy
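The traceback points at requirements.txt being absent from the sdist (likely not listed in MANIFEST.in, so pip's build directory never receives it). A hypothetical defensive rewrite of the failing setup.py line, which degrades gracefully instead of crashing the install:

```python
import os

# Guarded variant of deepy's setup.py requirements read: if the file did
# not make it into the source distribution, fall back to an empty list
# rather than raising IOError during `pip install`.
here = os.path.dirname(os.path.abspath(globals().get("__file__", os.getcwd())))
req_path = os.path.join(here, "requirements.txt")
if os.path.exists(req_path):
    with open(req_path) as fh:
        requirements = fh.read().strip().split("\n")
else:
    requirements = []
```

The underlying packaging fix is adding `include requirements.txt` to MANIFEST.in so the file ships with the sdist.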
For all the examples I get this error. I tried to increase the recursion depth, but that did not work.
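For anyone hitting the same wall: raising the interpreter's recursion ceiling looks like this, though (as noted) it only postpones the failure when the graph traversal in scan_utils.local_traverse recurses proportionally to graph size:

```python
import sys

# Raise Python's recursion ceiling; historically recommended for deep
# Theano scan graphs. It buys headroom but does not fix pathological
# traversals.
sys.setrecursionlimit(50000)
print(sys.getrecursionlimit())  # 50000
```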
Hi,
I am doing my own experiments with the DQNAgent - the only difference is that I want it to go to the target location and stop.
In contrast, in the puckworld the agent is continuously looking for new "fruits" to gobble up. In my version, I want it to find one "fruit" in the search space via the most efficient path (set of actions!). The agent keeps running without converging on the point that provides the highest reward. Is there anything that I need to change to get this working that way?
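One common reason an agent never "stops" is that the goal state is not treated as terminal, so the bootstrapped target keeps promising future reward past the goal. A generic Q-learning sketch of the idea (not deepy's API; DQNAgent's internals are assumed):

```python
def q_target(reward, next_q_max, done, gamma=0.95):
    """Standard Q-learning target with terminal-state handling."""
    # At a terminal state there is no future to bootstrap from, so the
    # target collapses to the immediate reward: the agent can learn to stop.
    if done:
        return reward
    return reward + gamma * next_q_max

# Finding the fruit ends the episode:
print(q_target(10.0, 5.0, done=True))   # 10.0
print(q_target(0.0, 5.0, done=False))   # 4.75
```

If the environment instead respawns the fruit every episode step, the agent is being trained on a continuing task and has no incentive to converge on a single point.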
Hello! Thank you for your project, great job!
I have a problem with the encoding function in the RAE. I'm new to Theano, so I may be mistaken, but I think the code at lines 51-53 is not valid: x_var should be a matrix, and encode_func should take x_var as its argument, not x. So I suppose the correct code would be:
x_var = T.matrix()
self._encode_func = theano.function([x_var], self.layers[0].encode_func(x_var),
allow_input_downcast=True, mode=theano.Mode(linker=THEANO_LINKER))
Sorry if I'm mistaken.
Hello - is there a way to keep the state (the activations) of the RNN without resetting it, so it can be used in an online mode (i.e., train whenever a new example is given, while keeping all the past information) incrementally?
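The general pattern behind the question is storing the hidden state on the model object and reusing it across calls instead of re-initializing it per sequence. A minimal numpy sketch of that pattern (generic, not deepy's RNN API):

```python
import numpy as np

class OnlineRNN:
    """Toy stateful RNN cell: the hidden state persists between calls."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.RandomState(seed)
        self.W_in = rng.randn(n_in, n_hidden) * 0.1
        self.W_h = rng.randn(n_hidden, n_hidden) * 0.1
        self.h = np.zeros(n_hidden)  # persistent state, never reset

    def step(self, x):
        # Each call continues from the previous hidden state.
        self.h = np.tanh(x @ self.W_in + self.h @ self.W_h)
        return self.h

rnn = OnlineRNN(3, 5)
h1 = rnn.step(np.ones(3))
h2 = rnn.step(np.ones(3))  # continues from h1 rather than from zeros
```

In a framework built on theano.scan, the equivalent would be feeding the last hidden state back in as the initial state of the next call.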
When I run baseline_rnnlm.py in experiments/ I get "TypeError: 'NoneType' object is not callable".
File "/deepy/experiments/lm/deepy/utils/fake_generator.py", line 12, in __iter__
return getattr(self.dataset, self.method_name)()
TypeError: 'NoneType' object is not callable
Hi Raphael,
I tried to run the Visual Attention model but got an error when running this line:
from deepy.networks import NeuralLayer
cannot import name NeuralLayer
However, in deepy.networks, I did not find NeuralLayer...
Can you help resolve this?
Thanks!
I am running the provided plot notebook in the attention model. When I run these lines:
import experiments.attention_models.baseline_model
reload(experiments.attention_models.baseline_model)
from experiments.attention_models.baseline_model import get_network
model_path = os.path.join(ROOT, "experiments/attention_models/models/mnist_att_params2.gz")
network = get_network(model_path, disable_reinforce=True)
I get this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-7d813e2daac2> in <module>()
4
5 model_path = os.path.join(ROOT, "experiments/attention_models/models/mnist_att_params2.gz")
----> 6 network = get_network(model_path, disable_reinforce=True)
D:\Python Directory\winPython 2.7\deepy\experiments\attention_models\baseline_model.py in get_network(model, std, disable_reinforce, random_glimpse)
211 """
212 network = NeuralClassifier(input_dim=28 * 28)
--> 213 network.stack_layer(AttentionLayer(std=std, disable_reinforce=disable_reinforce, random_glimpse=random_glimpse))
214 if model and os.path.exists(model):
215 network.load_params(model)
D:\Python Directory\winPython 2.7\deepy\experiments\attention_models\baseline_model.py in __init__(self, activation, std, disable_reinforce, random_glimpse)
23 self.gaussian_std = std
24 #super(AttentionLayer, self).__init__(activation)
---> 25 super(AttentionLayer, self).__init__(10, activation)
26
27 def initialize(self, config, vars, x, input_n, id="UNKNOWN"):
TypeError: __init__() takes at most 2 arguments (3 given)
The error occurs when the AttentionLayer class initializes its parent class NeuralLayer:
super(AttentionLayer, self).__init__(10, activation)
I looked at the NeuralLayer implementation and found that it indeed takes only one argument besides self:
class NeuralLayer(object):
def __init__(self, name="unknown"):
Can you please look into this error?
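This looks like API drift between the example and the library. One hypothetical reconciliation (a sketch, not the project's actual fix) is matching the current one-argument parent signature and carrying the extra arguments as attributes:

```python
# Minimal stand-in for the library's current signature:
class NeuralLayer(object):
    def __init__(self, name="unknown"):
        self.name = name

class AttentionLayer(NeuralLayer):
    def __init__(self, activation="relu", std=0.1):
        self.gaussian_std = std
        self.activation = activation
        # Pass one positional argument, matching the parent's signature,
        # instead of the old (10, activation) pair from the example.
        super(AttentionLayer, self).__init__("attention")

layer = AttentionLayer()
print(layer.name)  # attention
```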
The requirements.txt file shows that deepy only supports Theano <= 0.7.0, but the latest Theano version is >= 0.9 and seems much faster than the old one. Are there any plans to update deepy to support the latest Theano?