yaringal / dropoutuncertaintydemos
What My Deep Model Doesn't Know...
License: MIT License
Hi @yaringal,
Thanks for the wonderful post and the code. I was going through your blog post and found some inconsistencies in the way tau is calculated in the blog, the supporting Python code, and this repo at the following line:
As I understand it, you calculate tau_inv directly in the JS code. However, on the line above, you add tau_inv to the standard deviation of y instead of the variance of y, as described in the blog. Is there a specific reason for this?
I have tried to replicate your analysis using PyTorch at https://github.com/napsternxg/pytorch-practice/blob/master/Pytorch%2BUncertainity.ipynb, based on the method proposed in the blog post. However, I am not getting the same uncertainties: the uncertainty appears to be constant across all X, when in reality it should be high at the edges.
I would appreciate it if you could help me with this.
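To make the variance-vs-standard-deviation distinction above concrete, here is a minimal sketch in NumPy. The stochastic forward passes are simulated with placeholder Gaussian samples (hypothetical stand-ins, not the real model); the point is that the blog's formula adds tau**-1 to the sample *variance* before taking the square root, which in general gives a different number than adding tau**-1 to the standard deviation.

```python
import numpy as np

# Stand-in for T stochastic (dropout) forward passes at one input point.
# The samples and tau_inv value are illustrative placeholders.
rng = np.random.default_rng(42)
T = 50
samples = rng.normal(loc=0.0, scale=0.2, size=T)
tau_inv = 0.05

# Blog-post formula: add tau**-1 to the variance, then take the sqrt.
predictive_variance = samples.var() + tau_inv
predictive_std = np.sqrt(predictive_variance)

# The JS line in question instead adds tau**-1 to the std. dev. directly,
# which is not the same quantity.
std_plus_tau_inv = samples.std() + tau_inv

print(predictive_std, std_plus_tau_inv)
```

Since sqrt(var + tau_inv) != sqrt(var) + tau_inv except in degenerate cases, the two conventions produce different uncertainty bands.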
looking at a wall with 4 eyes while walking into it resulted in a positive reward;
http://mlg.eng.cam.ac.uk/yarin/blog_3d801aa532c1ce.html
That's the > 0.75 threshold for the forward reward. With a few eyes missing the walls, the overall proximity reading drops more than it does in most cases; if the agent can get a little forward bonus at that stage, it will take it.
Generally I've found the threshold still works okay; it takes tweaking, but it ends up as a "this is a doorway you'll accept" vs. "that's a little too risky" cutoff. I think the best bet would be to remove it and punish walls harder some other way, so the forward bonus can't win out against walls when multiplied by those last few decimal points of the proximity being fed in.
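The reward shaping being described can be sketched as follows. This is a hypothetical illustration, not the demo's actual code: the function name, constants, and the meaning of `proximity` (mean eye-sensor reading, 1.0 meaning no walls in view) are all assumptions made for the example.

```python
# Hypothetical sketch of the shaping discussed above: a forward bonus
# gated on a proximity threshold, alongside a wall-proximity penalty.
def shaped_reward(proximity, moved_forward, threshold=0.75, wall_weight=1.0):
    # proximity in [0, 1]; 1.0 = path fully clear.
    # Penalty is 0 when clear and grows as walls get closer.
    reward = wall_weight * (proximity - 1.0)
    if moved_forward and proximity > threshold:
        # Small forward bonus, granted only when the path looks clear
        # enough -- this is the "> 0.75" gate in question.
        reward += 0.1
    return reward

print(shaped_reward(0.95, True))  # clear path: bonus outweighs penalty
print(shaped_reward(0.5, True))   # near a wall: no bonus, penalty dominates
```

Raising `wall_weight` is the "punish harder on walls" alternative: with a steep enough penalty, the small forward bonus can never make walking into a wall net-positive, and the threshold gate becomes unnecessary.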
Hi @yaringal
Thanks for sharing this information.
Is the code you posted on your blog (http://mlg.eng.cam.ac.uk/yarin/blog_3d801aa532c1ce.html#uncertainty-sense) meant to be used as part of training, so that T equals the number of network training epochs?
probs = []
for _ in xrange(T):
probs += [model.output_probs(input_x)]
predictive_mean = numpy.mean(probs, axis=0)
predictive_variance = numpy.var(probs, axis=0)
tau = l**2 * (1 - model.p) / (2 * N * model.weight_decay)
predictive_variance += tau**-1
Since a single forward pass is used to make a prediction p on a trained network, when computing the uncertainty of p, does T become 1, so that
for _ in xrange(T):
probs += [model.output_probs(input_x)]
is just:
probs += [model.output_probs(input_x)]
?
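One way to see why T cannot be 1 here: the sample variance of a single forward pass is identically zero, so the predictive variance collapses to the constant tau**-1 everywhere. The sketch below uses a toy stand-in for `model.output_probs` (an assumed function with dropout-style noise, not the repo's model) to show the difference.

```python
import numpy as np

# Toy stand-in for model.output_probs: a deterministic function plus
# dropout-style noise. Purely illustrative; the real model applies
# dropout inside the network at test time.
rng = np.random.default_rng(0)
def stochastic_forward(x):
    return np.sin(x) + 0.1 * rng.standard_normal(x.shape)

x = np.linspace(-1, 1, 5)
tau_inv = 0.05  # placeholder value for tau**-1

# T Monte Carlo passes: the sample variance captures model uncertainty.
T = 100
probs = np.stack([stochastic_forward(x) for _ in range(T)])
var_T = probs.var(axis=0) + tau_inv

# T = 1: the variance of a single sample is zero, so the "uncertainty"
# degenerates to the constant tau_inv at every input point.
var_1 = np.stack([stochastic_forward(x)]).var(axis=0) + tau_inv

print(var_1)   # equal to tau_inv everywhere
print(var_T)   # varies point to point, reflecting the stochastic passes
```

This is also consistent with the symptom in the earlier issue: if the sampled variance contributes nothing, the reported uncertainty is flat across all X.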