Comments (4)
Hi,
- I've just updated the code to be compatible with the newest PyTorch, thanks for letting me know :)
- Yeah, I'm aware that my implementation is different; I'm running some tests right now to see if it results in better or worse performance
- the first difference is that their implementation uses Adam-like weight decay, but my code uses the corrected version from AdamW,
- the other difference (AFAIK) is that they average out the absolute value of the Hessian trace across the kernel dimensions. I don't see any performance gains (so far) from that, and it makes the code too specialized and not applicable to a wider range of NNs, so I don't support that feature. IMHO, if the Hessian needs to be smoother, that can be achieved solely by setting `beta_2` to a higher value, and there is no need for the spatial smoothing (both differences are sketched below).
Anyway, let me know if you spot any other differences. I've just started to play with this optimizer, so I'm not an expert by any means :)
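For concreteness, here's a rough sketch of those two differences; the function and variable names are mine for illustration, not from either codebase:

```python
import torch

# Difference 1: where weight decay enters the update.
def apply_weight_decay(p, grad, lr, weight_decay, decoupled=True):
    if decoupled:
        # AdamW-style ("corrected") decay: shrink the parameter directly,
        # independently of the gradient and the moment estimates.
        p.data.mul_(1 - lr * weight_decay)
        return grad
    # Adam-like decay (the original implementation): fold the penalty
    # into the gradient before the moments are updated.
    return grad + weight_decay * p.data

# Difference 2: spatial averaging of the Hessian-diagonal estimate.
def spatial_average(hess_est, enabled=False):
    # For a conv weight of shape (out, in, kH, kW), the original code
    # averages the absolute estimate over the kernel dimensions, so all
    # elements of one kernel share a single value; this repo keeps the
    # raw per-element estimate instead.
    if enabled and hess_est.dim() == 4:
        return hess_est.abs().mean(dim=[2, 3], keepdim=True).expand_as(hess_est)
    return hess_est
```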
For a small toy example, it seems that both implementations behave the same 👍.
I used both AdaHessian implementations ("New" is yours, "Orig" is the original one) and also SGD (as a comparison) to minimize a quadratic function in x: y = xᵀAx + bᵀx. See the plot for the results: both AdaHessian implementations follow exactly the same path for lr=1, betas=(0.1, 0.1), and the torch seed set to 0.
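If anyone wants to reproduce the setup, a minimal sketch might look like this (the `AdaHessian` import and the `create_graph=True` backward follow this repo's README; the concrete `A` and `b` are stand-ins, since the actual ones aren't shown here):

```python
import torch
from ada_hessian import AdaHessian

torch.manual_seed(0)

n = 2
M = torch.randn(n, n)
A = M.T @ M + n * torch.eye(n)  # positive definite, so the quadratic has a unique minimum
b = torch.randn(n)

x = torch.zeros(n, requires_grad=True)
optimizer = AdaHessian([x], lr=1.0, betas=(0.1, 0.1))

for _ in range(50):
    optimizer.zero_grad()
    y = x @ A @ x + b @ x          # y = xᵀAx + bᵀx
    y.backward(create_graph=True)  # keep the graph so step() can estimate the Hessian
    optimizer.step()

# Compare against the analytic minimizer x* = -A⁻¹b/2 (for symmetric A).
print(x.detach(), torch.linalg.solve(A, -b / 2))
```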
Thanks for the information about the differences. I will also start two runs of a segmentation model with identical parameters for both implementations. If I have the time, I will also have a look at a toy example (a minimalistic neural network, no random init) for which it might be easier to check whether things behave (more or less) the same. I'll keep you updated on this.
> I've just started to play with this optimizer, so I'm not an expert by any means :)
Same here, but I'm happy that there's finally a Newton-based method for neural networks that also works on real models, not just toy examples (my experiments so far show that it at least matches SGD results, without tuning any of AdaHessian's hyperparameters).
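For readers who haven't seen the method: the Newton-like ingredient is a Hutchinson estimate of the Hessian diagonal, which the optimizer then smooths with Adam-style moments. Roughly (a sketch, not the exact code of either implementation):

```python
import torch

def hutchinson_diag(loss, params, n_samples=1):
    """Estimate diag(H) as E[z * (Hz)] with Rademacher vectors z."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    diag = [torch.zeros_like(p) for p in params]
    for _ in range(n_samples):
        # Rademacher noise: each entry is -1 or +1 with equal probability.
        zs = [torch.randint_like(p, high=2) * 2 - 1 for p in params]
        # Hessian-vector product via a second backward pass.
        hzs = torch.autograd.grad(grads, params, grad_outputs=zs, retain_graph=True)
        for d, z, hz in zip(diag, zs, hzs):
            d.add_(z * hz / n_samples)
    return diag
```

AdaHessian then feeds this diagonal into the second-moment estimate where Adam would use the gradient.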
Thank you very much for your effort, I'm glad it works properly :)
Related Issues (6)
- torch.autograd.grad(grads, params, grad_outputs=z, only_inputs=True, retain_graph=False) local variable 'z' referenced before assignment HOT 2
- Accumulation of gradient and Hessian HOT 4
- AttributeError: 'Parameter' object has no attribute 'hess' HOT 3
- question: about create_graph HOT 1
- bug about generater HOT 3