clementfarabet / ipam-tutorials
IPAM Tutorials on Theano/Torch
I'll wait for the Lua version, and then port it to Python
The tutorial adds a softmax to prepare the network output for input into a ClassNLLCriterion. However, ClassNLLCriterion expects that the log of the probability outputs is already taken.
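A minimal sketch of the pairing this describes, with made-up layer sizes: ending the model with nn.LogSoftMax instead of nn.SoftMax so that ClassNLLCriterion receives log-probabilities.

require 'nn'

-- toy classifier; sizes are illustrative only
local model = nn.Sequential()
model:add(nn.Linear(100, 10))
model:add(nn.LogSoftMax())   -- outputs log-probabilities, as ClassNLLCriterion expects

local criterion = nn.ClassNLLCriterion()

local input  = torch.randn(100)
local target = 3             -- class index in 1..10
local loss   = criterion:forward(model:forward(input), target)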
Thank you very much for this very helpful contribution!
I'm now trying to go through the tutorial on my home Windows machine, so I'm running the Ubuntu virtual machine (Torch7.ova). The machine logs in automatically when it starts. However, after some time it locks itself, and I need the password for the user "torch" to log in again. The password would also be helpful for modifying the image.
BTW, I've noticed that the machine does not have luajit installed. Am I right?
Thank you,
Lior
Hi Clement,
I know this is starting to date, but the link code.cogbits.com/tutorials is broken, so if you have a new one I'd be happy to have it. Thanks!
Alice
Hi Clement, Yann,
I was going through the tutorials [WELL DONE, GREAT ONES!!!!]
I have a few questions I will email you, if you do not mind:
Q1: on https://github.com/clementfarabet/ipam-tutorials/tree/master/th_tutorials/1_supervised
Why do you have a random connection table on layer 1 here:
-- stage 1 : filter bank -> squashing -> L2 pooling -> normalization
model:add(nn.SpatialConvolutionMap(nn.tables.random(nfeats, nstates[1], fanin[1]), filtsize, filtsize))
I do not understand why you don't fully connect here. That way every filter could see all input planes and combine them into the outputs.
The way it is coded above only forces 2 filters to operate on random input planes to create the 2nd layer.
We would get a much more powerful network if the fan-in were 4, not 1 as in this case. Also, you are now using only 2 filters per plane... is that not TOO LOW?
I understand the code executes faster this way, since there are fewer convolutions over the input planes... but...
Am I missing something here?
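For comparison, a sketch of the two options this question contrasts (nfeats, nstates, fanin, filtsize are the tutorial's variables; their values are whatever the surrounding script defines):

-- (a) as in the tutorial: each output plane sees only fanin[1] randomly chosen input planes
model:add(nn.SpatialConvolutionMap(nn.tables.random(nfeats, nstates[1], fanin[1]), filtsize, filtsize))

-- (b) full connection: every output plane sees every input plane (more parameters, slower)
model:add(nn.SpatialConvolution(nfeats, nstates[1], filtsize, filtsize))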
Test that SVHN works in e.g. Day 1.
The batch costs are sums, and so are their derivatives. There is no need to compute over the whole data set at once; that wastes a huge amount of memory. What interface could we provide so users don't have to do this?
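One possible shape for such an interface, sketched in Torch terms with placeholder names (model, criterion, data, labels, batchSize are illustrative, not the tutorial's actual variables): accumulate the loss and gradient over mini-batches, which yields the same sums without holding the whole data set's activations in memory.

-- data is an N x D tensor, labels an N-vector of class indices
local params, gradParams = model:getParameters()

local function fullLossAndGradient()
   gradParams:zero()
   local loss = 0
   for i = 1, data:size(1), batchSize do
      local j       = math.min(i + batchSize - 1, data:size(1))
      local inputs  = data[{{i, j}}]
      local targets = labels[{{i, j}}]
      local outputs = model:forward(inputs)
      loss = loss + criterion:forward(outputs, targets)
      -- backward() accumulates into gradParams, so the result is the same sum
      model:backward(inputs, criterion:backward(outputs, targets))
   end
   return loss, gradParams
end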
Why is there a while true loop at the end of doall.lua to train and test? Like this:
while true do
   train()
   test()
end
which made it run all night without ever finishing.
I'm new to Lua and Torch, so perhaps there is some syntax trick for jumping out of the loop?
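One simple way out, sketched with a hypothetical maxEpochs variable that is not in the original doall.lua: count epochs and use Lua's break statement.

local maxEpochs = 10   -- hypothetical stopping point, not in the original script
local epoch = 0
while true do
   train()
   test()
   epoch = epoch + 1
   if epoch >= maxEpochs then break end
end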
The links in the examples are no longer valid. Because of this, the examples do not run, which is a big problem for beginners.
I don't really know how to do this, but there are people here to get tips from.
To save memory and reduce load times, use memmaps for all data sets.
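A rough sketch of what that could look like with Torch's file-backed storages, assuming the data has been dumped once to a raw float file (the file name and shape below are hypothetical):

-- 'train_x.bin' and its shape are made up for illustration
local nSamples, nInputs = 10000, 3 * 32 * 32
local storage = torch.FloatStorage('train_x.bin', true)   -- shared mapping; nothing copied into RAM
local trainX  = torch.FloatTensor(storage, 1, torch.LongStorage{nSamples, nInputs})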
Hi,
I've installed everything on my computer and tried to run the scripts from today's session. That works fine until L-BFGS, where I get the error below. This actually happens in the autodiff code; maybe I don't have the right version?
Starting L-BFGS
Traceback (most recent call last):
File "train_svm.py", line 72, in <module>
sys.exit(main())
File "train_svm.py", line 64, in main
m=lbfgs_m)
File "/usr/local/lib/python2.7/dist-packages/pyautodiff-0.0.1-py2.7.egg/autodiff/fmin_scipy.py", line 119, in fmin_l_bfgs_b
f_df, lvars = theano_f_df(fn, args, mode=theano_mode)
File "/usr/local/lib/python2.7/dist-packages/pyautodiff-0.0.1-py2.7.egg/autodiff/fmin_scipy.py", line 72, in theano_f_df
memo=dict(zip(orig_s_args, s_args)))
TypeError: clone_get_equiv() got an unexpected keyword argument 'memo'
confusion.totalValid is set to zero by confusion:zero() before it is logged in trainLogger. I've included the relevant code excerpt below.
-- print confusion matrix
print(confusion)
confusion:zero()
-- update logger/plot
trainLogger:add{['% mean class accuracy (train set)'] = confusion.totalValid * 100}
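A minimal reordering that would avoid this, keeping everything else as-is: log the accuracy while confusion.totalValid still holds this epoch's value, and only then reset the matrix.

-- print confusion matrix
print(confusion)

-- update logger/plot first, while totalValid still reflects this epoch
trainLogger:add{['% mean class accuracy (train set)'] = confusion.totalValid * 100}

-- then reset for the next epoch
confusion:zero()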
Day 2 is the supervised learning day. I think we have a good assignment idea (play with optimization of a convnet for cifar10 or google street signs (gss)), but we need code.
I put an idea about talking about Model Selection / Hyper-parameter optimization on the last day, but basically it was for lack of anything else.
Other ideas?