Comments (7)
Hi @bratao,
Thanks for your interest. Both of these functions do the same thing, which is to compute the Hessian diagonal. The only difference is the shape of the model parameters. The first version is written for convolutional neural networks, whose parameters are four-dimensional (C_in × C_out × K × K), or two-dimensional as in the case of an FC layer.
The second version is specifically for transformers, in particular for the bias and LayerNorm parameters, which are one-dimensional, and for the attention layers.
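For context, both versions implement Hutchinson's estimator of the Hessian diagonal: for a random Rademacher vector v (entries ±1), E[v ⊙ Hv] = diag(H), and Hv is obtained by differentiating <gradsH, v>. A minimal standalone illustration of that identity (my own sketch, not the repo's code):

import torch

# Toy quadratic loss f(x) = 0.5 * x^T A x, whose Hessian is exactly A.
A = torch.tensor([[3.0, 1.0], [1.0, 2.0]])
x = torch.zeros(2, requires_grad=True)
loss = 0.5 * x @ A @ x
(grad,) = torch.autograd.grad(loss, x, create_graph=True)

estimate = torch.zeros(2)
n_samples = 1000
torch.manual_seed(0)
for _ in range(n_samples):
    v = 2.0 * torch.randint(0, 2, (2,)).float() - 1.0  # Rademacher: {-1, +1}
    (hv,) = torch.autograd.grad(grad, x, grad_outputs=v, retain_graph=True)
    estimate += v * hv / n_samples  # averages to diag(A) = [3., 2.]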
Could you please let me know for which layer type you have a parameter with three dimensions?
It is a CNN used to get a character-level representation of each token:
params = Params(
{
"embedding": {"embedding_dim": 16, "vocab_namespace": "token_characters"},
"encoder": {
"type": "cnn",
"embedding_dim": 16,
"num_filters": 128,
"ngram_filter_sizes": [3],
"conv_layer_activation": "relu",
},
}
)
I mixed the two versions in the function below:
def get_trace(self, gradsH):
    """
    Compute the Hessian diagonal estimate via Hutchinson's method: draw a
    random Rademacher vector v, compute the Hessian-vector product Hv as the
    gradient of <gradsH, v>, and return |v * Hv| (block-averaged for matrices).
    :param gradsH: a list of torch variables
    :return: a list of torch tensors
    """
    params = self.param_groups[0]["params"]
    params = list(filter(lambda x: x.requires_grad, params))

    # Rademacher vector: randint_like gives {0, 1}; map it to {-1, +1}
    v = [torch.randint_like(p, high=2, device=self.device) for p in params]
    for v_i in v:
        v_i[v_i < 0.5] = -1
        v_i[v_i >= 0.5] = 1

    # Hessian-vector products: differentiate <gradsH, v> w.r.t. the parameters
    hvs = torch.autograd.grad(
        gradsH, params, grad_outputs=v, only_inputs=True, retain_graph=True
    )

    hutchinson_trace = []
    for hv, vi in zip(hvs, v):
        param_size = hv.size()
        if len(param_size) <= 2:  # bias, LN, and FC weights: per-parameter estimate
            tmp_output = torch.abs(hv * vi)
            hutchinson_trace.append(tmp_output)
        else:  # matrix: average over blocks of self.block_length
            # assumes hv.numel() is divisible by self.block_length
            tmp_output1 = torch.abs(hv * vi).view(-1, self.block_length)
            tmp_output2 = torch.sum(tmp_output1, dim=[1]).view(-1) / float(
                self.block_length
            )
            tmp_output3 = tmp_output2.repeat_interleave(self.block_length).view(
                param_size
            )
            hutchinson_trace.append(tmp_output3)
    return hutchinson_trace
Apparently it works: my entire test suite passes. Although it converges more slowly than the Ranger optimizer, it does not require tuning the learning rate, which is a great trade-off.
I am happy to hear that you are finding good use for it. We are actually observing strong improvements on NLP tasks, where the average GLUE score increases significantly with AdaHessian. We will soon update the paper with these results. We would also like to hear more about the details of your use case, if you would like to share them.
Regarding the implementation, we perform the block averaging on the convolution dimensions; please see the linked code in the repo. Specifically, note that the averaging for convolution filters occurs across dim=[2, 3], the filter-size dimensions. You may get better performance by doing that here as well: for a 3x3 conv filter, for example, the block averaging happens across groups of 9 convolution parameters.
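Concretely, that conv branch could look like the following sketch (my adaptation of the snippet above; it assumes 4-D weights of shape (C_out, C_in, K_h, K_w) and is not the repo's exact code):

import torch

def conv_block_average(hv: torch.Tensor, vi: torch.Tensor) -> torch.Tensor:
    """Average |hv * vi| over the kernel dims of a 4-D conv weight, so that
    all K_h * K_w parameters of a filter share one curvature estimate.
    Illustrative sketch only."""
    curvature = torch.abs(hv * vi)  # per-parameter |v * Hv|
    return curvature.mean(dim=[2, 3], keepdim=True).expand_as(hv)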
Also, regarding speed, you may want to try the delayed Hutchinson step calculation, so that you compute the Hessian diagonal only every other iteration. But even though AdaHessian is a little slower, it gives a good trade-off through reduced hyperparameter tuning.
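A minimal sketch of that delayed step (a hypothetical method for the optimizer class; it assumes self._step_count and self._cached_trace are initialized in the constructor):

def maybe_get_trace(self, gradsH, update_each=2):
    # Refresh the Hessian-diagonal estimate every `update_each` steps
    # and reuse the cached estimate in between.
    if self._step_count % update_each == 0:
        self._cached_trace = self.get_trace(gradsH)
    self._step_count += 1
    return self._cached_trace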
I will try that, @amirgholami.
But I just got an error when I moved to my production cluster:
RuntimeError: derivative for _cudnn_rnn_backward is not implemented
Apparently I will need to run without cuDNN. Is that right? 😢
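(For reference, the standard global switch for running without cuDNN, shown only to illustrate the question; whether it resolves the second-derivative limitation is addressed in the replies below:)

import torch

torch.backends.cudnn.enabled = False  # fall back to PyTorch's native (non-cuDNN) kernels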
No, it does work with cuDNN. Is there a sample of the code that we can take a look at? I would like to see exactly how the convolution is being applied in your code. Based on the above snippet, the block averaging should happen across ngram_filter_sizes.
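For the encoder config above, the Conv1d weights are 3-D, with shape (num_filters, embedding_dim, kernel_size), which would explain the size-3 parameters; a hedged sketch of averaging across that last (ngram filter) dimension:

import torch

def conv1d_block_average(hv: torch.Tensor, vi: torch.Tensor) -> torch.Tensor:
    """Average |hv * vi| over the kernel dimension of a 3-D Conv1d weight of
    shape (num_filters, embedding_dim, kernel_size). Illustrative only."""
    return torch.abs(hv * vi).mean(dim=2, keepdim=True).expand_as(hv)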
@amirgholami
Here is a repo I made with a regular BiLSTM-CRF model on the CoNLL-2003 task.
https://github.com/bratao/ner_adahessian
It compares with the Ranger optimizer.
Hi Bratao,
We added instructions to support different types of kernels. Please let us know if this helps solve your problem.
BTW: currently, PyTorch does not support second-order derivatives for RNN-type layers (LSTM, GRU, vanilla RNN). Therefore, if you are asking how to use AdaHessian with those models, there is no solution yet.
Best,