Comments (5)
Thanks for the feature! Fixed a few things; all tests passed and merged!
from lightning.
Yeah, that's a good approach.
Two options:
trainer = Trainer(...)
trainer.lr_scheduler = SomeScheduler()
Or:
def learning_rate_scheduler(self):
    return MyScheduler()
Option 2 is what you mentioned, and I think it probably makes the most sense. You might run multiple models with the same trainer, and each model might use a different learning rate scheduler. Option 2 makes supporting this case easier (otherwise the user has to decide which scheduler to use based on the model).
What do you think?
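To make the trade-off concrete, here is a minimal sketch of Option 2 in plain Python. All names below (ConstantScheduler, DecayScheduler, ModelA, ModelB, the Trainer stub) are hypothetical illustrations, not Lightning's API; the point is just that the trainer defers to the model, so each model can pair itself with its own scheduler:

```python
# Hypothetical stand-ins for two scheduler types.
class ConstantScheduler:
    def get_lr(self):
        return 0.1

class DecayScheduler:
    def __init__(self, factor):
        self.factor = factor
        self.lr = 0.1

    def get_lr(self):
        self.lr *= self.factor
        return self.lr

# Each model declares its own scheduler via the Option 2 hook.
class ModelA:
    def learning_rate_scheduler(self):
        return ConstantScheduler()

class ModelB:
    def learning_rate_scheduler(self):
        return DecayScheduler(0.5)

class Trainer:
    def fit(self, model):
        # The trainer never hard-codes a scheduler; it asks the model.
        self.lr_scheduler = model.learning_rate_scheduler()

trainer = Trainer()
trainer.fit(ModelA())
print(type(trainer.lr_scheduler).__name__)  # ConstantScheduler
trainer.fit(ModelB())
print(type(trainer.lr_scheduler).__name__)  # DecayScheduler
```

With Option 1, by contrast, the single `trainer.lr_scheduler` assignment would have to be swapped manually every time a different model is fitted.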
Or:
# In Lightning module
def configure_optimizers(self):
    adam = Adam()
    cosine_anneal = CosineAnneal(adam, ...)
    sgd = SGD()
    cyclic = CyclicalLR(sgd)
    return [adam, sgd], [cosine_anneal, cyclic]
.....
# In Trainer base class
self.optimizers, self.lr_schedulers = model.configure_optimizers()
Because each scheduler in PyTorch takes exactly one optimizer as an argument, it makes sense to group their definitions together.
If no scheduler is needed, the second list will be empty:
def configure_optimizers(self):
    adam = Adam()
    sgd = SGD()
    return [adam, sgd], []
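For reference, the two-list contract above can be written out against PyTorch's actual optimizer and scheduler classes (CosineAnnealingLR and CyclicLR standing in for the shorthand names in the sketch). The trainer-side unpacking at the end mirrors the proposed contract; it is an illustration, not Lightning's implementation:

```python
import torch
from torch import nn

# Two toy parameter groups, one per optimizer.
backbone = nn.Linear(4, 2)
head = nn.Linear(4, 2)

def configure_optimizers():
    # Each scheduler wraps exactly one of the optimizers returned with it.
    adam = torch.optim.Adam(backbone.parameters(), lr=1e-3)
    cosine_anneal = torch.optim.lr_scheduler.CosineAnnealingLR(adam, T_max=10)
    sgd = torch.optim.SGD(head.parameters(), lr=1e-2)
    cyclic = torch.optim.lr_scheduler.CyclicLR(sgd, base_lr=1e-4, max_lr=1e-2)
    return [adam, sgd], [cosine_anneal, cyclic]

# Trainer side: the lists stay aligned, so stepping is a simple zip.
optimizers, lr_schedulers = configure_optimizers()
for opt, sched in zip(optimizers, lr_schedulers):
    opt.step()
    sched.step()
```

Grouping the definitions this way keeps the optimizer-scheduler pairing visible in one place, which is exactly the argument for returning both lists from `configure_optimizers`.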
Perfect. Mind putting in a PR with these changes? 🙃
We also need to remove the lr scheduling options from the trainer and update the docs. I can help with any of these if you need.