Comments (10)
Hi Anil,
In a typical Continuous Learning setup, models are trained sequentially on different datasets, with the objective of retaining performance across all datasets throughout this process.
However, STU-Net follows a more traditional pretraining-finetuning approach. Initially, a pre-training phase is conducted on a large-scale dataset, after which the model is fine-tuned on different downstream tasks to improve performance on them. It's important to mention that catastrophic forgetting can occur during this process, leading to a potential loss of performance on the upstream tasks. This issue has not been addressed in my setup, as it's outside my current concerns.
Nonetheless, I believe this is a very worthwhile issue to investigate.
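If you do want to quantify the effect, a common continual-learning measure is simply the per-task drop in an upstream metric (e.g. Dice) after downstream fine-tuning. A minimal sketch; the task names and scores below are made up for illustration and are not real STU-Net results:

```python
def forgetting(before, after):
    """Per-task drop in an upstream metric measured before and after
    fine-tuning on a downstream dataset; positive values indicate
    catastrophic forgetting on that task."""
    return {task: round(before[task] - after[task], 4) for task in before}

# Illustrative numbers only:
upstream_before = {"organ_seg": 0.90, "tumor_seg": 0.82}
upstream_after = {"organ_seg": 0.75, "tumor_seg": 0.80}
print(forgetting(upstream_before, upstream_after))
# {'organ_seg': 0.15, 'tumor_seg': 0.02}
```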
I will make sure to upload the FLARE23 weights to Google Drive.
Best,
Ziyan
from stu-net.
Hello Ziyan,
Thank you very much for taking the time to clear up my doubt.
I will work on the TCIA NSCLC dataset for lung nodule detection, and I will update you with the results after I complete the training.
Kind Regards,
Anil
Hello, how can I use run_finetuning.py for downstream fine-tuning while still maintaining the original category capabilities? For example, I use run_finetuning.py on a downstream task that only has three classes. If I train it directly, the network will only segment these three classes. How should I configure it so that the network can segment both the original hundred-plus classes and the additional three classes?
Hello @Airliin . The current run_finetuning.py does not support maintaining the original category capabilities, and I can't provide much help in this area. However, you can refer to some continual learning papers for guidance.
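One workaround sometimes used in this situation (not something run_finetuning.py supports, and only a toy sketch that pretends the output head is a list of per-class weight rows): extend the output head with freshly initialised rows for the new classes, so the pretrained channels for the original classes are kept.

```python
import random

def expand_head(old_head_rows, n_new_classes, std=0.01, seed=0):
    """Append newly initialised per-class rows to a pretrained output head.
    The original rows (old classes) are kept untouched, so the network can
    still emit the pretrained classes while learning the new ones.
    `old_head_rows` is a toy stand-in for the head's weight matrix."""
    rng = random.Random(seed)
    width = len(old_head_rows[0])
    new_rows = [[rng.gauss(0.0, std) for _ in range(width)]
                for _ in range(n_new_classes)]
    return old_head_rows + new_rows

pretrained_head = [[0.2, -0.1], [0.05, 0.3]]  # two original classes, toy values
expanded = expand_head(pretrained_head, n_new_classes=3)
print(len(expanded))  # 5 output channels: 2 original + 3 new
```

Note that keeping the old output channels alone does not prevent forgetting in the shared backbone; for that you would still need continual-learning techniques such as replay or distillation, as mentioned above.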
@Ziyan-Huang, I am working on porting your model to nnUNet v2. May I open a pull request after the update? I feel it would be better for others to have a single repo instead of a separate one :)
Regards,
Anil
Hello @Ziyan-Huang. Thank you for your response. I noticed that in your article you mentioned reducing the learning rate of the pre-trained weights to one-tenth that of the 'seghead' when running run_finetuning.py, but I didn't see this setting in the code. Could you please advise where this can be modified?
Dear Anil,
@yerramasu, Thank you for your initiative! We truly appreciate your effort. We have indeed provided a basic version of the nnUNet v2 implementation at https://github.com/Ziyan-Huang/STU-Net/tree/main/nnUNet-2.2. If you find any areas of improvement or optimization, we would be more than happy to review and potentially incorporate your changes. Pull requests are very much welcome.
Best regards,
Ziyan Huang
Dear @airlin,
Thank you for pointing that out. Currently, our code does not explicitly provide this setting. Based on your suggestion, we are considering incorporating this feature. To be honest, though, a uniformly adjusted learning rate yields results similar to the differentiated 'seghead' setting you mentioned.
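For reference, a differentiated learning rate is usually implemented via optimizer parameter groups. The sketch below does the grouping in plain Python; the `seg_outputs` prefix is just an assumed head name, and the returned list has the same shape that PyTorch optimizers accept as their first argument:

```python
def build_param_groups(named_params, base_lr, head_prefix="seg_outputs",
                       backbone_scale=0.1):
    """Split (name, parameter) pairs into backbone and head groups, giving
    the pretrained backbone a learning rate `backbone_scale` times smaller
    than the freshly initialised segmentation head."""
    backbone = [p for name, p in named_params if not name.startswith(head_prefix)]
    head = [p for name, p in named_params if name.startswith(head_prefix)]
    return [
        {"params": backbone, "lr": base_lr * backbone_scale},  # pre-trained weights
        {"params": head, "lr": base_lr},                       # 'seghead'
    ]

# Toy parameter list; in PyTorch this would come from model.named_parameters(),
# and the result would be passed to e.g. torch.optim.SGD(groups, momentum=0.99).
params = [("encoder.conv1.weight", "w0"), ("seg_outputs.0.weight", "w1")]
groups = build_param_groups(params, base_lr=0.01)
```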
Best regards,
Ziyan Huang
Dear @Ziyan-Huang,
Thank you for the prompt response!
Best regards
Hello @Ziyan-Huang, thank you very much. I will close this issue as my query has been addressed, and open a separate issue to track nnUNet v2.
Thanks again for the great repo.
Regards,
Anil