haqishen / siim-isic-melanoma-classification-1st-place-solution
License: MIT License
Thank you very much for sharing the 1st place solution so quickly, congratulations on winning the competition, and thanks for this nice code sharing.
I just wanted to ask a few questions that came up while reading your code. I hope you'll find some time to answer them.
I see here https://github.com/haqishen/SIIM-ISIC-Melanoma-Classification-1st-Place-Solution/blob/master/train.py#L84-L86 that you only perform gradient clipping for some image sizes. Would you mind explaining why those two sizes (896 and 576)? Is it because they are bigger? Didn't you also train with 768x768 images? And in general, how do you choose the clipping value?
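For context, the clipping referred to above is gradient-norm clipping (in PyTorch, `torch.nn.utils.clip_grad_norm_`). A minimal pure-Python sketch of the same rule, applied to a flat list of gradient values, looks like this (the function name and inputs are illustrative, not from the repo):

```python
import math

def clip_grad_norm(grads, max_norm):
    """Scale a list of gradient values so their global L2 norm does not
    exceed max_norm (same rule as PyTorch's clip_grad_norm_)."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

# A gradient of norm 5.0 gets rescaled to norm 1.0; small gradients pass through.
clipped = clip_grad_norm([3.0, 4.0], max_norm=1.0)
```

Clipping matters most when occasional large-loss batches produce gradient spikes, which is one plausible reason to enable it only for the larger, less stable input resolutions.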
You are using the Swish activation https://github.com/haqishen/SIIM-ISIC-Melanoma-Classification-1st-Place-Solution/blob/master/models.py#L11-L26. Would you say that, in general, Swish is always a better choice than plain ReLU?
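For readers unfamiliar with it, Swish (also called SiLU) is simply x * sigmoid(x): smooth, non-monotonic near zero, and approximately the identity for large positive inputs. A one-line sketch:

```python
import math

def swish(x):
    """Swish / SiLU activation: x * sigmoid(x)."""
    return x / (1.0 + math.exp(-x))
```

Unlike ReLU it lets a small negative signal through (e.g. swish(-1) is about -0.27), which is often credited for its slightly better accuracy in deep networks, at some extra compute cost.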
I'd like to understand what you are doing here: https://github.com/haqishen/SIIM-ISIC-Melanoma-Classification-1st-Place-Solution/blob/master/models.py#L61-L66. It seems that you apply 5 different dropouts at the very end to the same linear layer and average them, which does not seem to be a very common approach. I see this as a way of making sure the last layer trains correctly, but is it really needed? I find the idea of self-ensembling a model with different random heads at the end interesting, but why do you use the same linear layer? Why not have 5 different heads with different random inputs during training? I'd really like to understand :) Have you ever tried the 5-linear-layer approach?
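The pattern described above is usually called multi-sample dropout: several independent dropout masks are applied to the same pooled feature vector, each masked copy goes through one shared linear head, and the resulting logits are averaged. A minimal pure-Python sketch of that forward pass (names and the single-output linear layer are illustrative, not the repo's exact code):

```python
import random

def multi_sample_dropout_head(features, weights, bias, p=0.5, n_samples=5, seed=0):
    """Apply n_samples independent dropout masks to the same feature
    vector, run each through ONE shared linear layer, and average the
    resulting logits."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        # inverted dropout: zero with prob p, scale survivors by 1/(1-p)
        dropped = [x * (0.0 if rng.random() < p else 1.0 / (1.0 - p))
                   for x in features]
        logit = sum(w * x for w, x in zip(weights, dropped)) + bias
        outputs.append(logit)
    return sum(outputs) / n_samples
```

Sharing one linear layer keeps the parameter count unchanged while averaging over masks reduces the variance of the gradient seen by that layer; five separate heads would instead train five distinct classifiers.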
Thanks in advance!
Cheers
I want to use my own downloaded data as the training and test sets. Is there a way to load local data?
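The repo's datasets are driven by CSV files that list image names and targets, so pointing it at local data mostly means producing a CSV in the same shape and a folder of images. A hedged sketch of building an (image_path, label) index from such a CSV (the `image_name`/`target` column names mirror the competition CSVs but are assumptions here):

```python
import csv
import io
import os

def load_local_index(csv_text, image_dir):
    """Build (image_path, label) pairs from a CSV with 'image_name' and
    'target' columns, resolving each name against a local image folder."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [(os.path.join(image_dir, r["image_name"] + ".jpg"), int(r["target"]))
            for r in rows]

# In-memory example standing in for a local train.csv
sample = "image_name,target\nISIC_001,0\nISIC_002,1\n"
pairs = load_local_index(sample, "./my_data/train")
```

A list like `pairs` is exactly what a PyTorch `Dataset.__getitem__` would then read images and labels from.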
Hello and thank you for sharing your work!
Would it be possible for you to share the last checkpoint of the pretrained model?
Thank you in advance, Lucia
Hi,
I saw that you concatenate the features produced by the CNN image model with the vector built from the metadata (data frame). Is this your own idea, or does it come from a paper or a tutorial?
If it is based on something, could you please share a link to it?
Thank you.
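The fusion pattern being asked about is straightforward: the tabular metadata goes through a small MLP, and its output is concatenated with the CNN's pooled image features before the final classifier. A minimal pure-Python sketch of one such fusion step (a one-layer metadata MLP with ReLU; all names and shapes are illustrative):

```python
def fuse_image_and_meta(image_feats, meta_feats, meta_w, meta_b):
    """Project the metadata vector through a one-layer MLP with ReLU,
    then concatenate it with the CNN image features; the fused vector
    would feed the final classifier."""
    meta_hidden = [max(0.0, sum(w * m for w, m in zip(row, meta_feats)) + b)
                   for row, b in zip(meta_w, meta_b)]
    return image_feats + meta_hidden  # list concat == feature concat
```

In the actual repo the metadata branch has a couple of layers with batch norm and dropout, but the joining operation is this same concatenation.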
Hi, first of all, thanks for your great work.
I have encountered issues running your code and would like to ask a few questions about it.
I am trying to run your example code:
python train.py --kernel-type 9c_meta_b3_768_512_ext_18ep --data-dir ./data/ --data-folder 768 --image-size 512 --enet-type efficientnet_b3 --use-meta --n-epochs 18 --use-amp --CUDA_VISIBLE_DEVICES 0,1
After training a single epoch, when the script tries to evaluate, I encountered the following error:
Based on the error above, the issue may be related to data parallelism, but I am not sure of the cause.
Could you give any suggestion for solving this problem?
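One common source of train-vs-evaluate mismatches with `nn.DataParallel` is the checkpoint key names: DataParallel prefixes every parameter with `module.`, so a state dict saved from the wrapped model will not load into a plain single-GPU model. A hedged sketch of the usual fix (pure dict manipulation, no PyTorch required; whether this matches the exact error above is an assumption since the traceback is not shown):

```python
def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that nn.DataParallel adds to parameter
    names, so a checkpoint saved from a parallel model loads into a
    plain (single-GPU) model."""
    return {k[len("module."):] if k.startswith("module.") else k: v
            for k, v in state_dict.items()}
```

The converse also works: wrapping the evaluation model in `nn.DataParallel` before calling `load_state_dict` keeps the prefixes consistent.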
Thanks,
Carrtesy.