
dtc's People

Contributors

luoxd1996


dtc's Issues

Supervised Training

Hi authors,
Thanks for your great work!

I'm trying to understand your code, and I find the training process hard to follow. Can you help me pinpoint where the supervised training happens? Could you point me to the specific lines, please?
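For context, the supervised part of semi-supervised segmentation code in this style is typically the loss taken on the first `labeled_bs` samples of each mixed batch. A minimal sketch of that pattern (all names and shapes hypothetical, not the repository's actual code):

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: in a mixed batch, only the first `labeled_bs`
# samples have ground-truth masks, so the supervised loss indexes them out.
labeled_bs = 2
outputs = torch.randn(4, 2, 8, 8, 8, requires_grad=True)  # (B, C, x, y, z) logits
labels = torch.randint(0, 2, (labeled_bs, 8, 8, 8))       # masks for the labeled half only

loss_sup = F.cross_entropy(outputs[:labeled_bs], labels)  # supervised term
loss_sup.backward()
```

The unsupervised (consistency) terms, by contrast, are computed on the whole batch and need no labels.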

About output shape used for compute_sdf function?

Thank you for the nice work!

Regarding the compute_sdf function: the VNet's output shape is (batch_size, num_classes, x, y, z), isn't it?

If so, the channel index is set to 0 in outputs[:labeled_bs, 0, ...]. If my dataset has num_classes = 2, or say 5, should I still use index 0, or an arbitrary k with k < num_classes?

Thanks in advance.
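For reference, signed distance maps for multi-class segmentation are usually computed one channel per foreground class. A brute-force sketch of that convention (hypothetical names; negative inside the object, positive outside, scaled to [-1, 1]; real code would use scipy.ndimage.distance_transform_edt):

```python
import numpy as np

def signed_distance(mask):
    # Brute-force signed Euclidean distance: negative inside, positive outside.
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    sdf = np.zeros(mask.shape, dtype=np.float32)
    if fg.size == 0 or bg.size == 0:  # class absent or fills the whole image
        return sdf
    for p in np.ndindex(mask.shape):
        if mask[p]:
            sdf[p] = -np.sqrt(((bg - p) ** 2).sum(axis=1)).min()  # dist to background
        else:
            sdf[p] = np.sqrt(((fg - p) ** 2).sum(axis=1)).min()   # dist to object
    return sdf / np.abs(sdf).max()

def compute_sdf_per_class(seg, num_classes):
    # One SDF channel per foreground class; channel c-1 corresponds to class c.
    return np.stack([signed_distance(seg == c) for c in range(1, num_classes)])

seg = np.zeros((4, 4), dtype=np.int64)
seg[1:3, 1:3] = 1
sdf = compute_sdf_per_class(seg, num_classes=2)
print(sdf[0])  # negative inside the 2x2 square, positive outside
```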

Questions about the CT Pancreas dataset

Dear author,
In this repository, you have provided the training code and the LA dataset. Could you please also provide the CT Pancreas dataset mentioned in your paper?
Hope for your reply.

question about losses.dice_loss

Hello,
thanks for your nice work.
I got confused while reading the code in losses.dice_loss.
The dice at the end is a sum over all elements; when computing 1 - loss (why not n - loss?), the result comes out negative.
I did an experiment as follows.

def dice_loss(score, target):
    target = target.float()
    smooth = 1e-5
    intersect = torch.sum(score * target)
    y_sum = torch.sum(target * target)
    z_sum = torch.sum(score * score)
    loss = (2 * intersect + smooth) / (z_sum + y_sum + smooth)
    loss = 1 - loss
    return loss

a=np.array([[[[[0, 1], [1, 0]],[[0, 1],[1, 0]]]], [[[[0, 1], [1, 0]],[[0, 1],[1, 0]]]]])
b=np.array([[[[[0, 1], [1, 1]],[[0, 1],[1, 1]]]], [[[[0, 1], [1, 1]],[[0, 1],[1, 1]]]]])

a = torch.from_numpy(a).float()  # torch.autograd.Variable is deprecated; plain tensors suffice
b = torch.from_numpy(b).float()

t = dice_loss(a[:2, 0, :, :, :], b[:2] == 1)
print(t)

tensor(-0.6000)

That seems strange. Did I misunderstand something somewhere? Please let me know.
Best.
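For comparison, note that in the snippet above `b[:2] == 1` keeps the singleton channel axis while `a[:2, 0, :, :, :]` drops it, so the two tensors broadcast against each other and the intersection is over-counted; that appears to be where the negative value comes from. With identically shaped inputs, the same loss stays in [0, 1]:

```python
import torch

def dice_loss(score, target):
    # Soft Dice loss as above; assumes score and target share the same shape.
    target = target.float()
    smooth = 1e-5
    intersect = torch.sum(score * target)
    y_sum = torch.sum(target * target)
    z_sum = torch.sum(score * score)
    return 1 - (2 * intersect + smooth) / (z_sum + y_sum + smooth)

a = torch.tensor([[[[[0, 1], [1, 0]], [[0, 1], [1, 0]]]],
                  [[[[0, 1], [1, 0]], [[0, 1], [1, 0]]]]], dtype=torch.float32)
b = torch.tensor([[[[[0, 1], [1, 1]], [[0, 1], [1, 1]]]],
                  [[[[0, 1], [1, 1]], [[0, 1], [1, 1]]]]], dtype=torch.float32)

# Index the channel axis out of BOTH tensors so their shapes match exactly.
t = dice_loss(a[:, 0], (b[:, 0] == 1))
print(t)  # tensor(0.2000): a valid Dice loss in [0, 1]
```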

About the level-set function

Hi,
Thanks for your great work!

Can you tell me how to map the output of the LSF (level-set function) task to the space of the segmentation output for multiple classes? In your work, "dis_to_mask = torch.sigmoid(-1500 * outputs_tanh)" was used for the two-class case. How should I do the transfer for multiple classes?

Best.
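As a toy illustration of the quoted two-class transform: the steep sigmoid acts as a soft threshold at zero, mapping negative (inside-object) level-set values to 1 and positive ones to 0 (the values below are hypothetical):

```python
import torch

outputs_tanh = torch.tensor([-0.9, -0.1, 0.0, 0.1, 0.9])  # toy tanh-bounded LSF values
dis_to_mask = torch.sigmoid(-1500 * outputs_tanh)          # steep sigmoid = soft threshold
print(dis_to_mask)  # ~[1, 1, 0.5, 0, 0]: inside -> foreground, outside -> background
```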

Error in Running test_LA.py

Hi,
I tried to run the test_LA.py file and got the following error:
FileNotFoundError: [Errno 2] No such file or directory: '../model/DTC_16labels/best_model.pth'
I wonder if you can help me.
Thanks

Question about unlabeled data

Hi author, the paper is about semi-supervision, but I can't find anything involving unlabeled data.
I only see labeled data being used for training in the Dataloader. In the code, how does unlabeled data enter the network?
Please give me some help. Thanks.
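For context, semi-supervised loops of this kind commonly draw each batch from two streams, labeled and unlabeled indices, so the unlabeled images pass through the same forward pass as the labeled ones. A sketch of the idea (hypothetical counts, not the repository's actual sampler):

```python
import numpy as np

labeled_idxs = list(range(0, 16))     # e.g. 16 labeled scans
unlabeled_idxs = list(range(16, 80))  # remaining scans are unlabeled
batch_size, labeled_bs = 4, 2

def two_stream_batches(labeled_idxs, unlabeled_idxs, batch_size, labeled_bs, n_batches):
    # Each yielded batch puts `labeled_bs` labeled indices first, then fills
    # the rest with unlabeled indices; losses can then slice by position.
    rng = np.random.default_rng(0)
    for _ in range(n_batches):
        lab = rng.choice(labeled_idxs, labeled_bs, replace=False)
        unlab = rng.choice(unlabeled_idxs, batch_size - labeled_bs, replace=False)
        yield list(lab) + list(unlab)

batch = next(two_stream_batches(labeled_idxs, unlabeled_idxs, batch_size, labeled_bs, 1))
print(batch)  # first 2 entries < 16 (labeled), last 2 >= 16 (unlabeled)
```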

Multi-label Data

Hi,
Does your code work on multi-label data?
Thank you in advance :)

Code suitability

Can this code be used directly for the segmentation of 2D images?
I would be very grateful if you could reply.

liver training?

Hello, is it okay for me to train a liver dataset with this code? The loss drops during training, but the Dice barely improves when testing.

About the metrics

Hello!
The evaluation metric I reproduced is one point lower than the one reported in the paper;
do you know the reason?

Magic Number iter_num//150?

What does the 150 mean here? Did you find that value by trial and error, or does one epoch consist of 150 iterations? I assume the value should be specific to the dataset, and perhaps also to the model?

consistency_weight = get_current_consistency_weight(iter_num//150)
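For context, get_current_consistency_weight in this line of work (mean-teacher-style training) is commonly a sigmoid ramp-up: the consistency weight grows smoothly from near 0 to its final value over a fixed number of "epochs", and iter_num // 150 converts iterations into that epoch unit. A sketch under those assumptions (parameter values hypothetical):

```python
import numpy as np

def sigmoid_rampup(current, rampup_length):
    # Sigmoid-shaped ramp-up from the mean-teacher literature:
    # ~exp(-5) at the start, exactly 1.0 once `current` reaches `rampup_length`.
    if rampup_length == 0:
        return 1.0
    current = np.clip(current, 0.0, rampup_length)
    phase = 1.0 - current / rampup_length
    return float(np.exp(-5.0 * phase * phase))

def get_current_consistency_weight(epoch, consistency=0.1, rampup_length=40.0):
    return consistency * sigmoid_rampup(epoch, rampup_length)

print(get_current_consistency_weight(0))   # small, near zero
print(get_current_consistency_weight(40))  # full weight, 0.1
```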

results on pancreas dataset

Hi, thanks a lot for releasing the code. On the pancreas dataset, I followed the same preprocessing pipeline, including resampling, center crop, and normalization, but I could only get 40.86 Dice when training with 12 labeled scans and 60.43 Dice when training with all of the training data. Could you please specify what augmentation you used in your paper? After I removed the rotation augmentation and used random crop as the only on-the-fly augmentation, I got results similar to yours.

train with my own data

Hi, I want to train the model with my own dataset. How can I achieve this?

About the FC3DDiscriminator

there is "from networks.discriminator import FC3DDiscriminator" in the la_heart.py, but I do not see the discriminator file in the networks, and it seems that FC3DDiscriminator is not used in the training. So where should I use this? Looking forward to your reply.

An Inquiry about NIH-Pancreas dataset configuration for evaluation

While studying, I became very interested in your network, Dual-task Consistency, and I wondered about the dataset configuration used in training and testing.

My question is below:

  • Patient numbers of the 12 labeled training scans out of the 82 NIH-Pancreas cases (e.g. no. 1, no. 2, ...)

  • Patient numbers of the 20 test scans out of the 82 NIH-Pancreas cases (e.g. no. 62, no. 63, ...)

  • The training/validation split ratio used during network training

If you could answer the questions above, it would be of great help to my study.

Many thanks =)
