tfboys-lzz / mpscl
This repository contains code for the paper "Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation", published at IEEE JBHI 2022.
Hello, thanks for your work and the code. May I ask how the initial prototypes are obtained: are they random, or computed from the dataset?
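(For readers with the same question: a common choice for prototype initialization, which may or may not match this repo exactly, is to set each class prototype to the mean feature of all source-domain pixels labeled with that class, then L2-normalize. A hypothetical NumPy sketch:)

```python
import numpy as np

def init_class_prototypes(features, labels, num_classes):
    """Initialize one prototype per class as the mean feature vector of all
    pixels with that label. `features` is (N, C) pixel features and `labels`
    is (N,) integer class ids. Function name and signature are illustrative,
    not taken from the repo."""
    prototypes = np.zeros((num_classes, features.shape[1]), dtype=np.float32)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            prototypes[c] = features[mask].mean(axis=0)
    # L2-normalize so dot products with pixel features equal cosine similarity
    norms = np.linalg.norm(prototypes, axis=1, keepdims=True)
    return prototypes / np.clip(norms, 1e-12, None)
```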
I found that using the provided yaml file directly only reproduces about half of the results in the paper. Could you share the exact settings needed to reproduce the reported results, e.g., the number of optimization iterations, batch size, etc.?
Hi, thanks for the work. I have a question about the training process. When I run the training, the first 25 iterations seem to work fine, but on the 26th iteration and afterwards, I am only getting nan for all of the numerical values (for loss_seg_src_aux, loss_dice_src_aux, etc.). This seems to be caused by the model itself predicting nan.
Also, the testing process works fine, so it is only the training process that I am having the issue with.
Do you have any idea what might be causing this issue or how I can resolve it? Thank you for the help.
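(NaN losses appearing after a few dozen iterations are often caused by exploding gradients, especially in adversarial training stages. As a generic mitigation, not something taken from this repo's code, one can monitor the global gradient norm and clip it each iteration. A minimal NumPy sketch of the clipping rule, equivalent in spirit to PyTorch's `clip_grad_norm_`:)

```python
import numpy as np

def clip_grad_norm(grads, max_norm):
    """Scale a list of gradient arrays so their global L2 norm is at most
    max_norm. Returns the (possibly rescaled) gradients and the original
    total norm, which is also useful for logging to spot the iteration
    where gradients blow up."""
    total = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
    if total > max_norm:
        scale = max_norm / (total + 1e-6)
        grads = [g * scale for g in grads]
    return grads, total
```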
Hi, I'm pretraining the deeplabv2 model to generate prototypes on another dataset, and I find that the prediction is down-sampled to 33.
I also notice that there is a label_downsample function in train_UDA.py, but it is only used in the update_class_center_iter function, and it does not affect the shape of the labels used for the BCE and dice loss calculations.
Thanks.
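(As background for this question: integer label maps are normally downsampled with nearest-neighbour indexing rather than interpolation, since averaging class ids produces meaningless values. A minimal sketch of that idea; this is not the repo's actual label_downsample implementation, whose details I have not verified:)

```python
import numpy as np

def label_downsample(labels, out_h, out_w):
    """Nearest-neighbour downsampling for a (H, W) integer label map.
    Picks source pixels by integer index so every output value is an
    existing class id, never an interpolated blend."""
    in_h, in_w = labels.shape
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return labels[rows[:, None], cols[None, :]]
```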
Hello, I'm very interested in your work. Could you explain how pre-training is conducted?
Hello, thanks for your code; I like your work.
In domain_adaptation/eval_UDA.py, line 68:
img_mean = np.array((104.00698793, 116.66876762, 122.67891434), dtype=np.float32)
If I want to test on my own dataset, should I replace img_mean with values from my dataset?
Is the value equal to the per-channel mean of my test dataset?
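(For reference: values like these are per-channel means subtracted during preprocessing, and the ones shown match the widely used Caffe-style BGR means, so if you recompute them for your own data, keep the channel order consistent with how your images are loaded. A small sketch of computing such a mean; the function name is illustrative:)

```python
import numpy as np

def per_channel_mean(images):
    """Per-channel mean over a batch of (H, W, 3) images, matching the
    shape of img_mean in eval_UDA.py. Accumulate in float64 for accuracy,
    return float32 to match the repo's dtype."""
    stacked = np.stack(images).astype(np.float64)
    return stacked.mean(axis=(0, 1, 2)).astype(np.float32)
```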
Hello, nice work. But I have a question about how to obtain the warmup model for CT2MR.
The pre-trained model achieves a ~54.1% Dice score on the target test_mr set. When I loaded the provided warmup model and continued the warmup stage, I obtained a ~63% Dice score on target test_mr, which is very close to the AdvEnt/AdaSeg results reported in the MPSCL paper.
But when I ran the CT2MR warmup stage from scratch with the default config.yml, I could only achieve ~30% Dice on target test_mr, which confuses me.
Could you please provide some advice? Many thanks.
In the paper, logits are defined as the cosine similarity between the pixel feature and the anchor. However, it seems the code uses "cosine = torch.matmul(anchor_feature, class_center_feas)" directly instead of cosine similarity. Maybe I do not fully understand the paper or code. Could you explain this issue? Thanks a lot.
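(A possible resolution of this question, offered as an assumption rather than a reading of the repo's code: if both anchor_feature and class_center_feas are L2-normalized along the feature dimension beforehand, a plain matrix product is exactly cosine similarity. A NumPy sketch of that equivalence:)

```python
import numpy as np

def cosine_logits(anchor_feature, class_center_feas):
    """Cosine-similarity logits via a plain matrix product.
    anchor_feature: (N, C) pixel features; class_center_feas: (C, K)
    class centers. After L2-normalizing rows of the anchors and columns
    of the centers, the matmul result is bounded in [-1, 1]."""
    a_norm = np.clip(np.linalg.norm(anchor_feature, axis=1, keepdims=True), 1e-12, None)
    c_norm = np.clip(np.linalg.norm(class_center_feas, axis=0, keepdims=True), 1e-12, None)
    return (anchor_feature / a_norm) @ (class_center_feas / c_norm)
```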
Dear author, I have a question. If I want to run train_UDA.py to train MPSCL, I need a model pretrained on the source domain, which you have provided, but you did not provide the code to pretrain the model on the source data. Could you release it?
Hi, it's me again.
I want to try your code on another dataset, the abdominal dataset reported in the SIFA [TMI] paper, but I cannot find the initialization code for computing the category centers. Could you please release the corresponding code?
Thanks!
Dear author, I want to train from scratch on another dataset. How should I do this? First train with warmup.yml for 4000 iterations, then use the trained parameters to train with mpscl.yml?
Hi! I am very interested in your work. I used your code for supervised training on cardiac CT images, and my best average Dice was only 86.6%, while the supervised result on cardiac CT reported in your paper is 90.40%. Did you change any hyperparameters for the supervised training?
Hi author! I have a question about using your code to train on another dataset. First, I need to pre-train a segmentation model on the source domain to obtain the initial category prototypes. Is it also necessary to pre-train the main and aux discriminators?