cbim-medical-image-segmentation's People

Contributors

yhygao

cbim-medical-image-segmentation's Issues

The Inference Code

Hi yhygao! Thank you for the code!

I have a dataset for inference that was pre-processed, but not post-processed, in the same way as my training dataset. Can I ask how to use the model to run inference on this dataset and output nii.gz files? It would be really helpful if you could update the code.

Thanks again!

Issue in image padding

Thanks for publishing this great work; I really appreciate your efforts. I have a very basic question about the image and label padding function. My image size is 24×73×65, and it varies throughout the dataset. When I use zero padding with, say, padd_size=(24, 96, 96), the zero-padding function produces a different size, e.g. 26×97×97.
Is there any suggestion for how to handle this issue? Once again, thanks.
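
For reference, here is a minimal sketch of a padding helper that pads evenly on both sides and lands exactly on the target size (my own illustration, not the repo's zero-padding function; the 24×73×65 shape is taken from the question above):

import numpy as np

def pad_to_size(arr, target):
    pads = []
    for s, t in zip(arr.shape, target):
        extra = max(t - s, 0)
        pads.append((extra // 2, extra - extra // 2))  # split the padding between both sides
    return np.pad(arr, pads, mode='constant', constant_values=0)

img = np.zeros((24, 73, 65))
print(pad_to_size(img, (24, 96, 96)).shape)  # (24, 96, 96)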

prediction.py

Hello @yhygao! I'm trying to run prediction.py. I have copied the preprocessing part from training into the preprocess function of prediction.py as you suggested, but I get an error saying "RuntimeError: Sizes of tensors must match except in dimension 1. Expected 170 but got size 169 for tensor number 2 in the list." Do you have any idea where it could come from?

thank you in advance!

Originally posted by @snaka99 in #18 (comment)
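
A likely cause (an assumption, not a confirmed diagnosis): this error usually appears when torch.cat joins a U-Net skip connection whose spatial size (169 here) is not divisible by the network's total downsampling factor, so the upsampled decoder feature (170) no longer matches the encoder feature. A minimal sketch of padding the input to a safe multiple before inference, with the factor 16 being an assumption:

import torch
import torch.nn.functional as F

def pad_to_multiple(x, multiple=16):
    # Pad the spatial dims (everything after N, C) up to the next multiple
    pads = []
    for s in reversed(x.shape[2:]):
        extra = (multiple - s % multiple) % multiple
        pads += [0, extra]  # pad only at the end of this dimension
    return F.pad(x, pads)

x = torch.zeros(1, 1, 169, 169)
print(pad_to_multiple(x).shape)  # torch.Size([1, 1, 176, 176])
# Crop the prediction back to the original size afterwards.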

Could you add inference code?

Hello, after training produces the best model, how do I load that best model and run inference to generate label files?

For a test set without labels, I would like to obtain its predictions (nii.gz). Is there any related code I can refer to?
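
For readers with the same question, here is a minimal sketch of loading a checkpoint and writing a nii.gz prediction (an illustration, not the repo's prediction.py; the preprocessing step is omitted and must match what was used at training time):

import numpy as np
import SimpleITK as sitk
import torch

def predict_volume(net, in_path, out_path, device='cuda'):
    # Read the image and keep its ITK metadata for the output
    itk_img = sitk.ReadImage(in_path)
    arr = sitk.GetArrayFromImage(itk_img).astype(np.float32)  # (D, H, W)
    # ... apply the same resampling/normalization used at training time ...
    x = torch.from_numpy(arr)[None, None].to(device)  # (1, 1, D, H, W)

    with torch.no_grad():
        pred = net(x).argmax(dim=1)[0].cpu().numpy().astype(np.uint8)

    itk_pred = sitk.GetImageFromArray(pred)
    itk_pred.CopyInformation(itk_img)  # keep spacing/origin/direction
    sitk.WriteImage(itk_pred, out_path)

# Usage: build the same architecture as in training, then
#   net.load_state_dict(torch.load('fold_0_best.pth')); net.eval()
#   predict_volume(net, 'case.nii.gz', 'case_pred.nii.gz')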

About prediction.py

Hello @yhygao,
Thank you for your work!
I used the liver dataset for prediction, but why did I get a blurry visualization result, as shown in the following figure? Note that I have already modified the code in the preprocessing function according to your method. This is what I modified:

np_img = np.clip(np_img, -17, 201)  # clip CT intensities to the window [-17, 201]
np_img = np_img - 99.40             # subtract the dataset mean
np_img = np_img / 39.39             # divide by the dataset std

I don't know what caused it. Please provide an answer. Thank you!
[Screenshot 2023-05-09 191335: blurry prediction visualization]

Save ensembled models

Hi yhygao!

I have trained several folds using MedFormer and used ensemble modelling for prediction. But every time, the ensembling has to be repeated for inference. Is it possible to save the ensembled model in a future branch? Thanks!
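
As a stopgap until such a feature exists, one can at least cache the fold checkpoints in a single file so they are read only once (a sketch under the assumption that the ensemble averages the per-fold softmax outputs; paths are hypothetical):

import torch

ckpts = ['exp/acdc/medformer_3d/fold_%d_best.pth' % i for i in range(5)]
state_dicts = [torch.load(p, map_location='cpu') for p in ckpts]
torch.save(state_dicts, 'exp/acdc/medformer_3d/ensemble.pth')  # one file

# At inference: load 'ensemble.pth', build one net per state_dict, and
# average torch.softmax(net(x), dim=1) over the nets.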

Some problems in running the code.

Hi Yunhe,

I ran your code with the ACDC 3D dataset. The data conversion is OK, but there is something wrong with the training.

python3 train.py --model utnetv2 --dimension 3d --dataset acdc --batch_size 3 --unique_name acdc_3dutnetv2 --gpu 0
Loading configurations from config/acdc/utnetv2_3d.yaml
Traceback (most recent call last):
File "train.py", line 212, in
net, ema_net = init_network(args)
File "train.py", line 186, in init_network
net = get_model(args, pretrain=args.pretrain)
File "CBIM-Medical-Image-Segmentation/model/utils.py", line 95, in get_model
return UTNetV2(args.in_chan, args.classes, args.base_chan, map_size=args.map_size, conv_block=args.conv_block, conv_num=args.conv_num, trans_num=args.trans_num, chan_num=args.chan_num, num_heads=args.num_heads, fusion_depth=args.fusion_depth, fusion_dim=args.fusion_dim, fusion_heads=args.fusion_heads, expansion=args.expansion, attn_drop=args.attn_drop, proj_drop=args.proj_drop, proj_type=args.proj_type, norm=args.norm, act=args.act, kernel_size=args.kernel_size, scale=args.down_scale)
AttributeError: 'Namespace' object has no attribute 'chan_num'

Questions about prediction.py file

Hi, I used a 3D UNet model with the 3D ACDC dataset, successfully completed the training phase, saved the weights, and got the desired results. Now I am trying to obtain the inference results and visualization maps on the ACDC testing set (which contains the patients' information as 4D .nii files), using the prediction.py settings mentioned below.


parser.add_argument('--dataset', type=str, default='acdc', help='dataset name')
parser.add_argument('--model', type=str, default='unet', help='model name')
parser.add_argument('--dimension', type=str, default='3d', help='2d model or 3d model')

parser.add_argument('--load', type=parse_model_list, default='C:/Users/lenovo/PycharmProjects/CBIMM/exp/acdc/acdc_3d_unet/fold_4_latest.pth', help='the path of trained model checkpoint. Use \',\' as the separator if load multiple checkpoints for ensemble')
parser.add_argument('--img_path', type=str, default='C:/Users/lenovo/PycharmProjects/CBIMM/database/testing/patient101/', help='the path of the directory of images to be predicted')
parser.add_argument('--save_path', type=str, default='C:/Users/lenovo/PycharmProjects/CBIMM/Prediction_results/', help='the path to save predicted label')
parser.add_argument('--target_spacing', type=parse_spacing_list, default='1.0,1.0,1.0', help='the spacing that used for training, in x,y,z order for 3d, and x,y order for 2d')

Every time I execute prediction.py with the aforementioned settings, I am met with this problem. Do we need any preprocessing for the testing set, as we did for the training set before the training phase?

C:\Users\lenovo\anaconda3\envs\cbm\python.exe C:\Users\lenovo\PycharmProjects\CBIMM\prediction.py
Loading configurations from config/acdc/unet_3d.yaml
Model loaded from C:/Users/lenovo/PycharmProjects/CBIMM/exp/acdc/acdc_3d_unet/fold_4_latest.pth
Traceback (most recent call last):
File "C:\Users\lenovo\PycharmProjects\CBIMM\prediction.py", line 280, in
tmp_itk_img.CopyInformation(itk_img)
File "C:\Users\lenovo\anaconda3\envs\cbm\lib\site-packages\SimpleITK\SimpleITK.py", line 3113, in CopyInformation
return _SimpleITK.Image_CopyInformation(self, srcImage)
RuntimeError: Exception thrown in SimpleITK Image_CopyInformation: D:\a\1\sitk\Code\Common\src\sitkImage.cxx:227:
sitk::ERROR: Source Image for information does not match this image's dimension.
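
A likely explanation (an assumption, not a confirmed diagnosis): the ACDC testing images are 4D (x, y, z, time), while prediction.py produces 3D volumes, which is exactly the dimension mismatch CopyInformation complains about. A minimal sketch of splitting each 4D image into 3D frames first, with hypothetical file names:

import SimpleITK as sitk

img_4d = sitk.ReadImage('patient101_4d.nii.gz')  # size (x, y, z, t)

for t in range(img_4d.GetSize()[3]):
    # Collapse the time axis: requesting size 0 drops that dimension
    img_3d = sitk.Extract(img_4d, list(img_4d.GetSize()[:3]) + [0], [0, 0, 0, t])
    sitk.WriteImage(img_3d, 'patient101_frame%02d.nii.gz' % t)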

Inconsistent results between your nnFormer and the original nnFormer repo

Hi, thanks for your excellent work. I have tried to run your models and also the other models you provide in this repo. But I found a large inconsistency between the results of your models and those of the original nnFormer repo, even with the same patch size, spacing, and other parameters. For your nnFormer, the average of 5-fold cross-validation is around 0.5 DSC, but for the original nnFormer it is 0.62. This makes me wonder: did you fine-tune only your MedFormer, while the results from the other models are not fine-tuned?

Thanks a lot.

Which parameters in data_conversion/acdc_3d.py need attention?

Excellent framework! When using my own dataset, which parameters in data_conversion/acdc_3d.py do I need to pay attention to?

def ResampleCMRImage(imImage, imLabel, save_path, patient_name, count, target_spacing=(1., 1., 1.)):

Following acdc_3d.py, I created a mydataset_3d.py for my own dataset, but I found that when the target_spacing passed to ResampleCMRImage is set to another value, e.g. (0.6481, 0.6481, 1), training fails with RuntimeError: CUDA error: device-side assert triggered. Other spacing values behave the same way, but the default (1.5625, 1.5625, 5) trains fine.

Also, I am using CT vessel segmentation data.

How should this spacing value be set? Or what else do I need to modify?
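
For context, a minimal sketch of spacing-based resampling with SimpleITK (an illustration, not the repo's ResampleCMRImage). One thing worth checking when a new spacing triggers a device-side assert: labels must be resampled with nearest-neighbour interpolation, otherwise interpolation can create out-of-range label values that later crash the loss:

import SimpleITK as sitk

def resample(img, target_spacing, is_label=False):
    orig_spacing, orig_size = img.GetSpacing(), img.GetSize()
    new_size = [int(round(sz * sp / tsp))
                for sz, sp, tsp in zip(orig_size, orig_spacing, target_spacing)]
    interp = sitk.sitkNearestNeighbor if is_label else sitk.sitkBSpline
    return sitk.Resample(img, new_size, sitk.Transform(), interp,
                         img.GetOrigin(), target_spacing, img.GetDirection(),
                         0, img.GetPixelID())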

VTUNET_3D causes dimension troubles

Hi,

I was trying out several models, and vtunet_3d is the one I couldn't get to run...

I tried different input sizes and dummy shapes, but I think the mismatch lies somewhere further down the line. Unfortunately, the original VT-UNet code is so different that I could not make a quick comparison of the two.

Maybe you have encountered a similar issue in the past? I just want to run their network on some test data.

Thanks in advance!

Below are the command and its output:

python train.py

__CUDNN VERSION: 8500
__Number CUDA Devices: 4
__CUDA Device Name: NVIDIA TITAN RTX
__CUDA Device Total Memory [GB]: 25.388515328
Device: cuda
Run: 07_7_2023
Loading configurations from config/aorta/vtunet_3d.yaml
SwinTransformerSys3D expand initial----depths:[2, 2, 2, 1];depths_decoder:[1, 2, 2, 2];drop_path_rate:0.1;num_classes:4;embed_dims:96;window:(7, 7, 7)
---final upsample expand_first---
(178, 64, 64, 40)
current lr: 1e-08
0it [00:00, ?it/s]> /scratch/gwolkerstorf/ASharon_retrain_UNet++/train_for_cluster.py(228)()
0it [00:03, ?it/s]
Traceback (most recent call last):
File "train_for_cluster.py", line 231, in
result = net(img)
File "/home/gwolkerstorf/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/gwolkerstorf/ASharon_retrain_UNet++/model/dim3/vtunet.py", line 96, in forward
logits = self.swin_unet(x)
File "/home/gwolkerstorf/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/gwolkerstorf/ASharon_retrain_UNet++/model/dim3/vtunet_utils.py", line 2187, in forward
x = self.forward_up_features(x, x_downsample, v_values_1, k_values_1, q_values_1, v_values_2, k_values_2, q_values_2)
File "/scratch/gwolkerstorf/ASharon_retrain_UNet++/model/dim3/vtunet_utils.py", line 1925, in forward_up_features
x = layer_up(x)
File "/home/gwolkerstorf/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/gwolkerstorf/ASharon_retrain_UNet++/model/dim3/vtunet_utils.py", line 979, in forward
x = x.view(B, D * 8, H, W, C)
RuntimeError: shape '[1, 32, 4, 4, 1536]' is invalid for input of size 3145728
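
A guess at the cause (an assumption, not a confirmed diagnosis): Swin-style 3D networks such as VT-UNet window-partition and repeatedly halve the feature maps, so each input dimension must be divisible by the patch size times 2^(number of stages); the failing x.view(B, D * 8, H, W, C) in the decoder is where a non-divisible size finally breaks. A quick check, with the factor 32 being an assumption:

def needed_size(shape, factor=32):
    # Round each spatial dim up to the next multiple of `factor`
    return [((s + factor - 1) // factor) * factor for s in shape]

print(needed_size((178, 64, 64)))  # -> [192, 64, 64]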

train_ddp.py hangs

When I run python train_ddp.py --model attention_unet --dimension 3d --dataset acdc --batch_size 32 --unique_name acdc_attention_unet_3d_ddp --gpu 0,1,2,3,4,5,6,7, the terminal prints [INFO] 2023-03-09 10:50:07 train_ddp.py:253 Use EMA model for evaluation and then no further logs appear. I found that it is stuck on the line mp.spawn(fn=main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, fold_idx, args, result_dict)). How can I solve this problem? Any reply is appreciated!

's'

pip install -r requirements.txt --> pip install -r requirement.txt (the requirements file is named without the trailing 's')

A few personal suggestions

1) os.mkdir does not create missing parent directories at all, so after half a day of training the results could not be saved and the run crashed (on Linux). Suggestion for train.py:

if not os.path.exists('exp/exp_%s' % args.dataset):
    os.mkdir('exp/exp_%s' % args.dataset)  -->  os.makedirs('exp/exp_%s' % args.dataset)

2) For fair K-fold cross-validation, the dataset split should be fixed; a random split is not quite fair. Suggestion for dataset_acdc.py (class CMRDataset(Dataset), def __init__...):

random.Random(seed).shuffle(img_name_list)  # should be commented out

3) When reloading an existing model, the load path can be generated dynamically like this:

def init_network(args, model_i):  # added a model_i parameter: the current K-fold index
    net = get_model(args, pretrain=args.pretrain)

    if args.load:
        model_path = '%s%s/%s/%d_best.pth' % (args.cp_path, args.dataset, args.unique_name, model_i)
        net.load_state_dict(torch.load(model_path))
        print('Model loaded from {}'.format(model_path))

Just my personal opinions, for reference only. (This framework is written quite well. I also really dislike frameworks like mmseg: the wrapping is too complex, and with many config files they are hard to debug, though engineering-wise that may be necessary.)

about segmentation map

Dear Yunhe,
I have tested your model on several segmentation tasks with excellent results. I would like to ask you a quick question about your segmentation map.

If I am not wrong, once you compute the semantic map as in Fig. 8, all the spatial information is lost and you get a global summary of the scene. The upper branch with the softmax selects "where" to look and the lower branch selects "what" to look at. This information is later refined by your network.

Interestingly, the bidirectional attention layers use 1x1 convolutions on the semantic maps, so information between adjacent pixels of the segmentation mask is never mixed. In fact, the w, h dimensions are never used for anything, and everything could be done equivalently without the reshape into the w and h dimensions. In the paper you say that convolutions are not used for the semantic maps, but I really think that convolutions shouldn't be used here because they make no sense.

Finally, I really think the proposed approach is very interesting: the semantic maps accumulate global information in each layer, and for this reason the algorithm works so well. But I would rewrite this interpretation of the h, w dimensions.
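
To make the point above concrete, a tiny sketch (an illustration, not the paper's code) showing that a 1x1 convolution on a (B, C, H, W) map is equivalent to a per-token linear layer on the flattened sequence, i.e. neighbouring positions are never mixed:

import torch
import torch.nn as nn

B, C, H, W = 2, 8, 4, 4
x = torch.randn(B, C, H, W)

conv = nn.Conv2d(C, C, kernel_size=1, bias=False)
lin = nn.Linear(C, C, bias=False)
lin.weight.data = conv.weight.data.view(C, C)  # share the same weights

y_conv = conv(x)
y_lin = lin(x.flatten(2).transpose(1, 2))  # (B, H*W, C) token sequence
print(torch.allclose(y_conv.flatten(2).transpose(1, 2), y_lin, atol=1e-5))  # True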

How to calculate the median spacing

Hi yhygao,

Thank you for the code! Can I ask how you calculate the median spacing when preprocessing a dataset? I found that the three datasets all have different median spacings.
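
For reference, a minimal sketch of the usual recipe (an assumption, in the style popularized by nnU-Net): the median spacing is the per-axis median of the voxel spacings over all training images. Paths are hypothetical:

import glob
import numpy as np
import SimpleITK as sitk

spacings = []
for path in glob.glob('dataset/*.nii.gz'):
    img = sitk.ReadImage(path)
    spacings.append(img.GetSpacing())  # (x, y, z) spacing in mm

median_spacing = np.median(np.array(spacings), axis=0)
print('median spacing (x, y, z):', median_spacing)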

3D UTNetV2 model

When training with input size (128, 128, 128) using the 3D UTNetV2 model, what is the default batch_size, and how much GPU memory is needed?

update inference code

Could you provide the full code for inference (saving the predicted maps, etc.)? Thanks.

Inference

Hello @yhygao,

Thank you for your work!

But I have a question about inference!
I would like to know if there will be code to run inference in the near future.
In addition, it would be really helpful to have documentation about how to run inference.

training my own data

Hello @yhygao,
I am trying to train my own cardiac data with your MedFormer 2D model, but I am having some trouble. I am using your medformer_2d.yaml from the ACDC data; should I change something in it? When I run it, I get this error:

Traceback (most recent call last):
File "/content/gdrive/.shortcut-targets-by-id/1CdjrP0uBrq3xcbNjQtST6Y_Mx7YdGinp/CBIM-Medical-Image-Segmentation/train.py", line 343, in
best_Dice, best_HD, best_ASD = train_net(net, args, ema_net, fold_idx=fold_idx)
File "/content/gdrive/.shortcut-targets-by-id/1CdjrP0uBrq3xcbNjQtST6Y_Mx7YdGinp/CBIM-Medical-Image-Segmentation/train.py", line 97, in train_net
train_epoch(trainLoader, net, ema_net, optimizer, epoch, writer, criterion, criterion_dl, scaler, args)
File "/content/gdrive/.shortcut-targets-by-id/1CdjrP0uBrq3xcbNjQtST6Y_Mx7YdGinp/CBIM-Medical-Image-Segmentation/train.py", line 209, in train_epoch
loss += args.aux_weight[j] * (criterion(result[j], label.squeeze(1)) + criterion_dl(result[j], label))
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/loss.py", line 1174, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 3029, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: CUDA error: device-side assert triggered
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Exception ignored in atexit callback: <function _MultiProcessingDataLoaderIter._clean_up_worker at 0x7f926feb7eb0>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1472, in _clean_up_worker
w.join(timeout=_utils.MP_STATUS_CHECK_INTERVAL)
File "/usr/lib/python3.10/multiprocessing/process.py", line 149, in join
res = self._popen.wait(timeout)
File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 40, in wait
if not wait([self.sentinel], timeout):
File "/usr/lib/python3.10/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/usr/lib/python3.10/selectors.py", line 416, in select
fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt:
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f927b10f4d7 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f927b0d936b in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7f927b1abb58 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10_cuda.so)
frame #3: + 0x12513e5 (0x7f927c4bd3e5 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
frame #4: + 0x4d5a16 (0x7f92e1b2aa16 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so)
frame #5: + 0x3ee77 (0x7f927b0f4e77 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #6: c10::TensorImpl::~TensorImpl() + 0x1be (0x7f927b0ed69e in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #7: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f927b0ed7b9 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #8: + 0x75afc8 (0x7f92e1daffc8 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so)
frame #9: THPVariable_subclass_dealloc(_object*) + 0x305 (0x7f92e1db0355 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so)
frame #10: python3() [0x622134]
frame #11: python3() [0x53e218]
frame #12: python3() [0x58d328]
frame #13: python3() [0x58d3ff]
frame #14: python3() [0x58d3ff]

frame #17: python3() [0x6c02ca]
frame #21: __libc_start_main + 0xf3 (0x7f9312e55083 in /lib/x86_64-linux-gnu/libc.so.6)
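
One common cause worth ruling out (an assumption, not a confirmed diagnosis): a device-side assert inside F.cross_entropy usually means a label value lies outside [0, num_classes - 1], e.g. when the config's classes does not match your own data. A minimal sketch that checks the converted labels on the CPU, with hypothetical paths:

import glob
import numpy as np

num_classes = 4  # 'classes' in the yaml config; 4 for ACDC (background + 3 structures)

for path in glob.glob('dataset/*_gt.npy'):  # hypothetical label files
    lab = np.load(path)
    assert lab.min() >= 0 and lab.max() < num_classes, \
        '%s: labels span [%d, %d], expected < %d' % (path, lab.min(), lab.max(), num_classes)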

UK Biobank data

Hi, in your paper you mention the UK Biobank data. I have just registered with the UK Biobank, but I cannot find how to download the data. Could you please tell us how to find the data once registered with the UK Biobank?

Thanks!

How the mean and standard deviation are computed in the code

with open('exp/exp_%s/%s.txt' % (args.dataset, args.unique_name), 'w') as f:
    f.write('Dice     HD    ASD\n')
    for i in range(args.k_fold):
        f.write(str(Dice_list[i]) + str(HD_list[i]) + str(ASD_list[i]) + '\n')

    total_Dice = np.vstack(Dice_list)
    total_HD = np.vstack(HD_list)
    total_ASD = np.vstack(ASD_list)

    f.write('avg Dice:' + str(np.mean(total_Dice, axis=0)) + ' std Dice:' + str(
        np.std(total_Dice, axis=0)) + ' mean:' + str(total_Dice.mean()) + ' std:' + str(
        np.mean(total_Dice, axis=1).std()) + '\n')
    f.write(
        'avg HD:' + str(np.mean(total_HD, axis=0)) + ' std HD:' + str(np.std(total_HD, axis=0)) + ' mean:' + str(
            total_HD.mean()) + ' std:' + str(np.mean(total_HD, axis=1).std()) + '\n')
    f.write('avg ASD:' + str(np.mean(total_ASD, axis=0)) + ' std ASD:' + str(
        np.std(total_ASD, axis=0)) + ' mean:' + str(total_ASD.mean()) + ' std:' + str(
        np.mean(total_ASD, axis=1).std()) + '\n')

--------------------------------- one result (ACDC, 3 classes, 5-fold cross validation) ---------------------------------
Dice HD ASD
[0.8989825 0.88934495 0.95528512][7.37562663 1.95534587 2.56416565][0.38804081 0.15725263 0.15678071]
[0.87789556 0.88299032 0.93068331][8.3865384 3.03664405 3.89976089][0.39955415 0.18514004 0.20012759]
[0.88114764 0.88519078 0.94220278][12.51135491 3.11411521 2.81922055][0.32309491 0.18445619 0.15274397]
[0.89216353 0.87782188 0.93736269][5.82227898 3.32463123 4.68759254][0.38424565 0.19442972 0.16922716]
[0.85803481 0.8611997 0.9060287 ][8.70519859 7.42175177 8.22716422][0.46205606 0.29103419 0.32277852]
avg Dice:[0.88164481 0.87930952 0.93431252] std Dice:[0.01402123 0.00978801 0.01627609] mean:0.898422283778588 std:0.012973181725322086
avg HD:[8.5601995 3.77049763 4.43958077] std HD:[2.21640401 1.88651046 2.04163919] mean:5.590092632207561 std:1.4514267131357348
avg ASD:[0.39139831 0.20246255 0.20033159] std ASD:[0.04424211 0.04599499 0.06343822] mean:0.2647308186074508 std:0.04898671094632212
The means here should be fine (they can be computed stepwise), but the standard deviation is not computed like this, is it? What is computed here is the standard deviation over the five per-fold means; the true standard deviation should keep the per-case metric of every validation sample and then be computed over all of those. Am I misunderstanding something?
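
To make the two quantities concrete, a small sketch with made-up numbers contrasting the std over the five fold means (what the code above computes) with the std over all per-case scores (what is proposed above):

import numpy as np

# per-fold arrays of per-case mean Dice (hypothetical values)
folds = [np.array([0.90, 0.88]), np.array([0.85, 0.93]),
         np.array([0.89, 0.91]), np.array([0.87, 0.90]),
         np.array([0.86, 0.92])]

fold_means = np.array([f.mean() for f in folds])
print('std over fold means:', fold_means.std())             # what the code computes
print('std over all cases:', np.concatenate(folds).std())   # what is proposed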

About the weights for cross entropy loss

Hi yhygao, thank you for the code. I saw that you used weight: [0.5, 1, 3] (the weight of each class in the loss function) when training on the LiTS dataset. I am now using my own dataset to segment lung tumors. Can I ask, from your experience, whether the weights would be appropriate if I set them to [0.5, 2] or [0.5, 3]? Thanks again.
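
For reference, a minimal sketch of class-weighted cross entropy in PyTorch (an illustration; it is an assumption that the yaml weight list is consumed like this, one weight per class with background first; [0.5, 2.0] is the asker's 2-class example):

import torch
import torch.nn as nn

class_weights = torch.tensor([0.5, 2.0])  # [background, tumor]
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(1, 2, 64, 64)         # (N, C, H, W) predictions
target = torch.randint(0, 2, (1, 64, 64))  # integer labels in [0, C-1]
loss = criterion(logits, target)
print(loss.item())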
