
Ziyan-Huang commented on August 10, 2024

Hi Anil,

In a typical Continuous Learning setup, models are trained sequentially on different datasets, with the objective of retaining performance across all datasets throughout this process.

However, STU-Net follows the more traditional pretraining-finetuning approach: the model is first pre-trained on a large-scale dataset and then fine-tuned on different downstream tasks to improve performance on those tasks. It's worth noting that catastrophic forgetting can occur during fine-tuning, potentially degrading performance on the upstream tasks. My setup does not address this issue, as it is outside my current scope.
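To make the distinction concrete, here is a minimal, self-contained sketch of that pretrain-then-finetune flow. It uses a toy PyTorch model, not the actual STU-Net code (run_finetuning.py in the repository is the real entry point), and the class counts and layer names are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Toy stand-in for a segmentation network: a shared backbone plus a
# task-specific segmentation head. Names are illustrative only; the real
# model and fine-tuning logic live in the STU-Net repository.
class TinySegNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv3d(8, num_classes, 1)

    def forward(self, x):
        return self.seg_head(self.backbone(x))

# 1) Pretraining phase: train on a large upstream label set (say 105 classes).
pretrained = TinySegNet(num_classes=105)
torch.save(pretrained.state_dict(), "pretrained.pth")

# 2) Fine-tuning phase: rebuild with the downstream label set (3 classes)
#    and load only the backbone weights, since the head shapes differ.
model = TinySegNet(num_classes=3)
state = torch.load("pretrained.pth")
backbone_only = {k: v for k, v in state.items() if k.startswith("backbone")}
model.load_state_dict(backbone_only, strict=False)

# From here the model is optimized for the downstream task only, so its
# performance on the upstream classes can degrade (catastrophic forgetting).
```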

Nonetheless, I believe this is a very worthwhile issue to investigate.

I will make sure to upload the FLARE23 weights to Google Drive.

Best,

Ziyan


yerramasu commented on August 10, 2024

Hello Ziyan,

Thank you very much for taking the time to clear up my doubt.

I will be working on the TCIA NSCLC dataset for lung nodule detection and will update you with the results once I finish training.

Kind Regards,
Anil


Airliin commented on August 10, 2024

Hello, how can I use run_finetuning.py for downstream-task fine-tuning while still preserving the model's original class capabilities? For example, I am using run_finetuning.py on a downstream task that has only three classes. If I train it directly, the network will segment only those three classes. How should I configure it so that the network can segment both the original hundred-plus classes and the three additional ones?


Ziyan-Huang commented on August 10, 2024

Hello @Airliin. The current run_finetuning.py does not support retaining the original class capabilities, and I can't offer much help in this area. However, you can refer to continual-learning papers for guidance.
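For readers who land here: one generic direction from that literature is a learning-without-forgetting style distillation loss, sketched below under the assumption that you keep a frozen copy of the pretrained network as a teacher. This is not part of run_finetuning.py; it is only an illustration of the idea.

```python
import torch
import torch.nn.functional as F

# Learning-without-forgetting style loss (a sketch, not part of
# run_finetuning.py): a frozen copy of the pretrained network serves as a
# teacher, and a distillation term keeps the new network's predictions for
# the original classes close to the teacher's.
def lwf_loss(old_class_logits, teacher_logits, new_class_logits, new_targets,
             temperature=2.0, alpha=0.5):
    # Supervised loss on the new downstream classes.
    ce = F.cross_entropy(new_class_logits, new_targets)
    # Distillation loss on the original classes.
    kd = F.kl_div(
        F.log_softmax(old_class_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Trade off retaining old classes (kd) against learning new ones (ce).
    return alpha * kd + (1 - alpha) * ce
```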


yerramasu commented on August 10, 2024

@Ziyan-Huang, I am working on porting your model to nnUNet v2. May I open a pull request once the update is ready? I feel it would be better for others to have a single repo rather than a separate one :)

Regards,
Anil


Airliin commented on August 10, 2024

Hello @Ziyan-Huang, thank you for your response. I noticed that your article mentions reducing the learning rate of the pre-trained weights to one tenth of that used for the 'seghead' when running run_finetuning.py, but I didn't see this setting in the code. Could you please advise where it can be modified?


Ziyan-Huang commented on August 10, 2024

Dear Anil (@yerramasu),

Thank you for your initiative! We truly appreciate your effort. We have indeed provided a basic version of the nnUNet v2 implementation at https://github.com/Ziyan-Huang/STU-Net/tree/main/nnUNet-2.2. If you find any areas for improvement or optimization, we would be more than happy to review and potentially incorporate your changes. Pull requests are very much welcome.

Best regards,

Ziyan Huang


Ziyan-Huang commented on August 10, 2024

Dear @Airliin,

Thank you for pointing that out. Currently, our code does not explicitly provide this setting. Based on your suggestion, we are considering incorporating this feature. To be honest, though, using a uniformly adjusted learning rate yields results similar to the differentiated 'seghead' setting you mentioned.
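For anyone who still wants to try the differentiated setting, here is a minimal sketch of how it could be wired up with PyTorch optimizer parameter groups. The "seg_head" naming convention and the hyperparameter values are assumptions for illustration, not the actual STU-Net configuration:

```python
import torch

# Sketch: give the pretrained body a 10x smaller learning rate than the
# (re-initialized) segmentation head via optimizer parameter groups.
# The "seg_head" substring is an assumed naming convention.
def build_finetune_optimizer(model, base_lr=1e-2):
    head_params, body_params = [], []
    for name, param in model.named_parameters():
        (head_params if "seg_head" in name else body_params).append(param)
    return torch.optim.SGD(
        [
            {"params": body_params, "lr": base_lr * 0.1},  # pretrained weights
            {"params": head_params, "lr": base_lr},        # segmentation head
        ],
        momentum=0.99, nesterov=True, weight_decay=3e-5,
    )
```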

Best regards,

Ziyan Huang


Airliin commented on August 10, 2024

Dear @Ziyan-Huang,
Thank you for the prompt response!
Best regards


yerramasu commented on August 10, 2024

Hello @Ziyan-Huang, thank you very much. I will close this issue, as my query has been addressed, and open a separate issue to track the nnUNet v2 port.

Thanks again for the great repo.

Regards,
Anil

