Comments (5)
@uyekt This code combines techniques from three papers (including the one from this repository).
The continuous partial parameter controls how close the optimizer's behavior is to SGD (0.0) or Adam (1.0).
https://arxiv.org/abs/1806.06763
https://arxiv.org/abs/1711.05101
In my opinion it would be great if the partial parameter could also be updated on the fly, similar to what has been done with the learning rate here, since previous research also suggests switching from Adam to SGD during training to improve generalization: https://arxiv.org/abs/1712.07628
import math

import torch
from torch.optim import Optimizer


class AdamComb(Optimizer):
    """Adam with decoupled weight decay (AdamW), hypergradient learning-rate
    adaptation, and a partially adaptive denominator (Padam-style exponent)."""

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-5,
                 weight_decay=1e-5, hypergrad=1e-5, partial=0.5):
        defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay,
                        hypergrad=hypergrad, partial=partial)
        super().__init__(params, defaults)

    def step(self, closure=None):
        loss = None if closure is None else closure()
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad.data
                state = self.state[p]
                if len(state) == 0:
                    state['step'] = 0
                    state['exp_avg'] = torch.zeros_like(p.data)     # first moment
                    state['exp_avg_sq'] = torch.zeros_like(p.data)  # second moment
                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
                beta1, beta2 = group['betas']
                state['step'] += 1

                # Hypergradient descent on the learning rate: move lr along the
                # dot product of the current gradient and the previous
                # (bias-corrected) Adam update direction.
                if group['hypergrad'] > 0 and state['step'] > 1:
                    prev_bias_correction1 = 1 - beta1 ** (state['step'] - 1)
                    prev_bias_correction2 = 1 - beta2 ** (state['step'] - 1)
                    h = torch.dot(
                        grad.view(-1),
                        torch.div(exp_avg, exp_avg_sq.sqrt().add_(group['eps'])).view(-1)
                    ) * math.sqrt(prev_bias_correction2) / prev_bias_correction1
                    group['lr'] += group['hypergrad'] * h.item()

                # Standard Adam moment updates with bias correction
                exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
                exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
                denom = exp_avg_sq.sqrt().add_(group['eps'])
                bias_correction1 = 1 - beta1 ** state['step']
                bias_correction2 = 1 - beta2 ** state['step']
                step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1

                # Padam-style update: denom**partial interpolates between
                # SGD with momentum (partial=0) and Adam (partial=1);
                # weight decay is decoupled from the gradient step (AdamW).
                if group['weight_decay'] != 0:
                    decayed_weights = torch.mul(p.data, group['weight_decay'])
                    p.data.addcdiv_(exp_avg, denom ** group['partial'], value=-step_size)
                    p.data.sub_(decayed_weights)
                else:
                    p.data.addcdiv_(exp_avg, denom ** group['partial'], value=-step_size)
        return loss
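By the way, here is a rough sketch of what I mean by updating partial on the fly (not from the repository; model, criterion, train_loader and total_epochs are placeholders): since param_groups entries are plain dicts, the exponent can be rescheduled between epochs the same way the hypergradient already rewrites lr, e.g. annealing it from Adam-like (1.0) towards SGD-like (0.0):

    # Sketch only: anneal `partial` from Adam-like (1.0) towards SGD-like (0.0).
    # `model`, `criterion`, `train_loader` and `total_epochs` are placeholders.
    optimizer = AdamComb(model.parameters(), lr=1e-3, partial=1.0)

    for epoch in range(total_epochs):
        # param_groups entries are plain dicts, so the exponent can be rescheduled freely
        for group in optimizer.param_groups:
            group['partial'] = max(0.0, 1.0 - epoch / total_epochs)
        for inputs, targets in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()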
I did the combination; I can share it upon request.
I would be interested :-)
@akaniklaus @uyekt I've done some experiments with the hypergradient and, for me, it behaved pretty much like a simple linear lr decay with a small hypergrad_lr value. With large hypergrad_lr values it just added redundant stochasticity to the training process. Was your experience different, and if so, what problem/data/batch size/network architecture did you try it with?
@mpyrozhok Hello, it is quite normal that it works similarly to a decayed learning-rate schedule. I believe the main benefit is that you do not have to re-optimize the max and min learning rates of such a decaying schedule each time you change something, e.g. batch size, architecture, etc. This matters especially when you are hyperparameter tuning (one would otherwise need to re-optimize both the initial and minimum learning rate of a scheduler for each configuration, so updating it online saves a lot of resources). Furthermore, I generally make a few trial runs first to decide on an initial learning rate: I start with a low learning rate and, as it first increases, I use the peak point where it starts to drop again as the initial learning rate of my actual run. As for hypergrad_lr, the paper suggests 1e-5 and 1e-4; I found that the default value in the code (1e-8) is too low and makes adaptation too slow, while a higher value sometimes causes divergence away from the minimum (and even, occasionally, a negative learning rate!).
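If the learning rate does go negative (or explodes) because of the hypergradient update, one simple workaround I can think of is to clamp it after each step; just a sketch, min_lr and max_lr are arbitrary placeholder bounds, not values from the paper:

    # Sketch: keep the hypergradient-adapted learning rate within sane bounds.
    min_lr, max_lr = 1e-6, 1e-2  # placeholder bounds

    optimizer.step()
    for group in optimizer.param_groups:
        group['lr'] = float(min(max(group['lr'], min_lr), max_lr))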
@mpyrozhok Do you have any idea how we could change the code to use the gradient of the validation loss? Maybe warm restarts could also be implemented; what would be the best way to do that automatically?
Related Issues (8)
- StopIteration HOT 7
- LR Scheduler help HOT 1
- Add License HOT 1
- Getting Stop Iteration when running for training
- scheduler.batch_step() AttributeError: 'CosineLRWithRestarts' object has no attribute 'batch_increment' HOT 1
- Persisting CosineAnnealingLRWithRestarts HOT 2
- Lower/Upper Bound for LR and Upper Bound decay HOT 2