beckschen / 3d-transunet
This is the official repository for the paper "3D TransUNet: Advancing Medical Image Segmentation through Vision Transformers"
License: Apache License 2.0
Dear Authors of the 3D-TransUNet Model,
I am reaching out to seek guidance on reproducing the results presented in your recent publication, specifically those related to the Brats-MET dataset. In my work, I have used your model in my computational pipeline. The configuration parameters for the model were set as follows:
Model Configuration: I have focused on reproducing results for the encoder-only version of your model. The model's argument structure and the default values utilized are illustrated in the attached images.
[image: model's config]
[image: default values for the model]
Loss Function: The model employs a combined soft Dice and BCE loss, weighted equally at 0.5 each. This loss is computed individually for the three segmentation classes (WT, TC, ET) and then averaged: total_loss = (loss_wt + loss_tc + loss_et) / 3.0, where loss_{class_name} = 0.5 * Dice_loss + 0.5 * BCE_loss.
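To make the formulation above concrete, here is a minimal NumPy sketch of that loss (illustrative only, not the repo's implementation; the actual training code operates on logits and torch tensors):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - soft Dice between a probability map `pred` and a binary mask `target`."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy, clipping probabilities for numerical stability."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

def brats_region_loss(preds, targets):
    """Average the per-region 0.5*Dice + 0.5*BCE losses over WT, TC, ET."""
    losses = [0.5 * soft_dice_loss(preds[k], targets[k])
              + 0.5 * bce_loss(preds[k], targets[k])
              for k in ("wt", "tc", "et")]
    return sum(losses) / 3.0
```

A perfect prediction drives the total toward zero; predicting the exact complement of the mask maximizes both terms.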
Dataset and Augmentation: The Brats-MET dataset has been partitioned into 70% for training, 10% for validation, and 20% for testing. I adhered to the crop size of [128,128,128] and applied the same augmentation techniques as mentioned in your study for the training dataset. Additionally, I employed MONAI's sliding window approach with a 0.5 overlap for validation and testing.
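For reference, the window placement that a 0.5-overlap sliding window implies along one axis can be sketched as follows (a simplification; MONAI's sliding_window_inference additionally handles padding, batching, and Gaussian blending of overlapping predictions):

```python
def window_starts(image_len, roi_len, overlap=0.5):
    """Start indices of sliding windows along one axis.

    Windows of length `roi_len` advance by roi_len * (1 - overlap); a final
    window is appended flush with the image edge so every voxel is covered.
    Assumes image_len >= roi_len.
    """
    step = max(1, int(roi_len * (1 - overlap)))
    starts = list(range(0, image_len - roi_len + 1, step))
    if starts[-1] != image_len - roi_len:
        starts.append(image_len - roi_len)
    return starts
```

With a 128-voxel crop and 0.5 overlap, a 192-voxel axis yields windows at 0 and 64, and a 200-voxel axis adds an edge-aligned window at 72.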
Optimizer and Training: The AdamW optimizer was used, initialized with a learning rate of 3e-4 and employing the following scheduler:
[image: scheduler details]
The model training extended over 300 epochs, incorporating deep supervision with weight calculations as depicted:
[image: deep supervision weights]
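The actual weights are shown in the image above; for readers without it, the usual nnU-Net convention (which I assume this setup follows, though I have not verified it against this repo's trainer) halves the weight at each deeper output, zeroes the coarsest resolution, and normalizes to sum to 1:

```python
def deep_supervision_weights(num_outputs):
    """nnU-Net-style deep supervision weights: halve per deeper output,
    drop the coarsest resolution, then normalize to sum to 1.
    (Assumed convention, not verified against this repository.)"""
    w = [1.0 / (2 ** i) for i in range(num_outputs)]
    w[-1] = 0.0  # the coarsest resolution does not contribute
    total = sum(w)
    return [x / total for x in w]
```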
Issue with Deep Supervision: I encountered a challenge wherein training the model without deep supervision resulted in an 'out of index' error for the seg_outputs variable in the forward method.
Metrics: For performance evaluation, I utilized the metrics provided in the following repository, as recommended by the Challenge organization:
BraTS 2023 Metrics Repository
Despite adhering closely to the methodology outlined in your paper, the lesion-based Dice metrics obtained were significantly lower than expected:
Dice for ET: 15.34
Dice for TC: 17.28
Dice for WT: 15.90
I would greatly appreciate your insights or suggestions on what might be causing this discrepancy. Is there a particular aspect of the model configuration or training process that I might need to look into or implement correctly? Your guidance in this matter would be invaluable to my research.
Thank you for your time and assistance.
Best regards,
Yousef Sadegheih
Hi there,
I would like to ask whether it is sufficient to organize the training data according to the folder structure that nnUNet requires.
Thank you!
parser.add_argument('--config', default='', type=str, metavar='FILE',
help='YAML config file specifying default arguments')
Where is the default config file used by this code?
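For context on the question above: no default config ships with the repository, so default='' means the path must always be supplied on the command line; otherwise the later open(args.config) fails on an empty string. A guarded sketch (the helper name is hypothetical, not from the repo):

```python
import argparse

def parse_config_path(argv):
    """Parse --config and fail early when it is left at its empty default."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--config', default='', type=str, metavar='FILE',
                        help='YAML config file specifying default arguments')
    args = parser.parse_args(argv)
    if not args.config:
        # open('') would otherwise raise FileNotFoundError much later
        raise ValueError("no default config is bundled; pass --config <file>.yaml")
    return args.config
```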
Hi, I would like to train on my own dataset and have come across nnUNetV1 (I assume you are using it). I can find no information on how to set up the YAML configuration file, and I hope someone can provide me with some guidance.
Thank you!
Hi, I encountered problems with the distributed training. Can I train your model with a single GPU? Thanks a lot!
When using the LR scheduler, "from torch.optim.lr_scheduler import LambdaLR, _LRScheduler" fails because _LRScheduler is an internal class. What should I do?
Hi, is there any pre-trained model available?
I look forward to seeing a more detailed README file soon. Thank you very much!
In inference.py, the import "from flop_count.flop_count import flop_count" fails because the module could not be found.
Dear Author, thank you for making the codes available for this amazing work!
I am trying to reproduce the results on the Synapse dataset, and would appreciate it if you could share the preprocessed dataset that you used for training, and add any other needed information to the README file.
Best,
Mohamed
run on fold: 0
/train.py: line 14: import: command not found
/train.py: line 15: import: command not found
/train.py: line 16: import: command not found
/train.py: line 17: import: command not found
/train.py: line 18: import: command not found
/train.py: line 19: import: command not found
/train.py: line 20: from: command not found
/train.py: line 21: from: command not found
/train.py: line 22: from: command not found
/train.py: line 24: syntax error near unexpected token `('
/train.py: line 24: `def main():'
I have installed all the required packages. Why am I getting these errors, and how can I fix them?
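For what it's worth, "import: command not found" means the shell, not Python, is executing the script (e.g. via sh train.py, or ./train.py without a shebang line); invoking the interpreter explicitly avoids it. A small self-contained demonstration (the temp-file path is arbitrary):

```shell
# Write a tiny Python script to a temp file.
printf 'import sys\nprint("hello from python")\n' > /tmp/demo_train.py

# Wrong: the shell treats each line as a command, so line 1 yields
# "import: command not found", just as in the errors above.
sh /tmp/demo_train.py 2>&1 | head -n 1 || true

# Right: hand the file to the Python interpreter.
python3 /tmp/demo_train.py
```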
Dear,
In nn_transunet.networks.transunet3d_model, line 497, you have an internal import pointing to a nonexistent file:
from .mask2former_modeling.transformer_decoder.maskformer_transformer_decoder3d import StandardTransformerDecoder
I thought it might be a typo and that the correct module was .mask2former_modeling.transformer_decoder.mask2former_transformer_decoder3d, but that file has no StandardTransformerDecoder class, so it is not.
Thanks
Dear Authors of the 3D-TransUNet Model,
I have been following the work of your Transunet series and thank you for your selfless contributions in the field of medical image segmentation.
When I trained on the Synapse dataset using the encoder-only configuration, following the content and code in the paper, the results were very different from those reported, which I think may be caused by different data or data processing.
Is there a way to get the Synapse dataset you used, or the preprocessing you applied? The Synapse dataset I am currently working with was filtered following "TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation", and its 30 cases were preprocessed according to nnUNetV1. However, the labels of this dataset include eso, IVC, veins, RAG and LAG in addition to the organs in the paper; it is unknown whether this will affect the results.
Thank you for your time and assistance.
Hi. First of all, congratulations for the work and thanks for sharing it!
I created a model with the BraTS2021 database, using just T1ce and the enhancing tumour region. I have been trying to run inference but there is some problem with the code. Here is the command I run on fold 0:
nnunet_use_progress_bar=1 torchrun --nproc_per_node=1 /code/maincode/train.py --task="Task001_BraTS_T1captante" --fold=0 --config="/code/data/nnUNet_raw_data_base/nnUNet_raw/nnUNet_raw_data/Task001_BraTS_T1captante/encoder_plus_decoder.yaml" --network="3d_fullres" --resume='' --local-rank=0 --optim_name="adam" --valbest --val_final --npz
Everything works fine until the final validation step (the model finishes the training phase). I tried to run the validation on fold 0 to get the predictions:
nnunet_use_progress_bar=1 torchrun --nproc_per_node=1 /code/maincode/train.py --task="Task001_BraTS_T1captante" --fold=0 --config="/code/data/nnUNet_raw_data_base/nnUNet_raw/nnUNet_raw_data/Task001_BraTS_T1captante/encoder_plus_decoder.yaml" --network="3d_fullres" --resume='' --local-rank=0 --optim_name="adam" --val_final --validation_only
The error is the following:
computing Gaussian
run?
Traceback (most recent call last):
  File "/code/maincode/train.py", line 321, in <module>
    main()
  File "/code/maincode/train.py", line 307, in main
    trainer.validate(save_softmax=args.npz, validation_folder_name=val_folder,
  File "/code/maincode/nn_transunet/trainer/nnUNetTrainerV2_DDP.py", line 1188, in validate
    softmax_pred = self.predict_preprocessed_data_return_seg_and_softmax(data[:-1],
  File "/code/maincode/nn_transunet/trainer/nnUNetTrainerV2_DDP.py", line 1313, in predict_preprocessed_data_return_seg_and_softmax
    return ret
UnboundLocalError: local variable 'ret' referenced before assignment
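For anyone debugging this: the traceback pattern means ret is assigned only inside some branch of predict_preprocessed_data_return_seg_and_softmax that this configuration never enters. A minimal reproduction of the pattern (names hypothetical, not the repo's code):

```python
def predict(network_is_supported):
    # `ret` is bound only in one branch; the bare `return ret` then raises
    # UnboundLocalError whenever that branch is skipped.
    if network_is_supported:
        ret = "softmax prediction"
    return ret

try:
    predict(False)
except UnboundLocalError as exc:
    print(f"reproduced: {exc}")
```

Checking which condition guards the assignment at line 1313 (and why your config fails it) is usually the fastest way to the root cause.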
I also tried to run the inference.py script:
python3 /code/maincode/inference.py --config="/code/data/nnUNet_raw_data_base/nnUNet_raw/nnUNet_raw_data/Task001_BraTS_T1captante/encoder_plus_decoder.yaml" --fold=0 --raw_data_dir="/code/data/nnUNet_raw_data_base/nnUNet_raw/nnUNet_raw_data/Task001_BraTS_T1captante/imagesTs" --raw_data_folder="imagesTs" --save_folder="/code/data/nnUNet_results/UNet_IN_NANFang/Task001_BraTS_T1captante/nnUNetTrainerV2_DDP__nnUNetPlansv2.1/GeTU500_3DTransUNet_encoder_plus_decoder/fold_0/predicts" --local_rank=0 --num_examples=250 --n_class=1
The execution works without errors, but the network does not return any files.
Can anybody help me? Thanks!
In the transunet3d_model.py file at line 497, the import "from .mask2former_modeling.transformer_decoder.maskformer_transformer_decoder3d import StandardTransformerDecoder" fails because maskformer_transformer_decoder3d cannot be found. I did find mask2former_transformer_decoder3d in that directory, but I still can't find StandardTransformerDecoder in that module.
I installed all the packages and tried to run the code. However, I get an error when the code calls default_configuration.py.
I processed the data using the nnUNet guide. In this step, nnUNet usually processes the data and the output consists of several folders such as nnUNetPlans_2d or nnUNetPlans_3d_fullres and inside these folders, we can find many files with .pkl. Did you use another way of processing the data?
File ~\anaconda3\envs\nnunet\lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
    exec(code, globals, locals)
File c:\users\chris\onedrive\桌面\project\3d-transunet\train.py:321
    main()
File c:\users\chris\onedrive\桌面\project\3d-transunet\train.py:151 in main
    with open(args_config.config, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: ''
Is there any way to solve this?
Hi there,
I'd like to ask what "my codebase" refers to in the code.
Thank you!
Hello,
I am trying to train your model. I already ran the preprocessing from nnUNet. From the preprocessing, we get a folder with gt_segmentations, nnUNetPlans_2d, nnUNetPlans_3d_fullres, and so on. Additionally, we also get dataset_fingerprint.json, dataset.json, nnUNetPlans.json, and splits_final.json. How can I generate the pickle file you need? There isn't much documentation from which I can see how to do it.
Thank you in advance.
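On the pickle question above: a v1-style trainer loads its plans with Python's pickle module, whereas the files listed (nnUNetPlans.json etc.) are nnUNet v2's JSON output. A minimal sketch of writing and reading such a pickle; the keys below are purely illustrative, not the real plans schema:

```python
import os
import pickle
import tempfile

# Illustrative stand-in for a v1-style plans file; the real plans pickle
# contains many more keys (normalization scheme, network topology, ...).
plans = {"plans_per_stage": [{"patch_size": [128, 128, 128]}],
         "num_modalities": 4}

path = os.path.join(tempfile.mkdtemp(), "plans_3D.pkl")
with open(path, "wb") as f:
    pickle.dump(plans, f)

with open(path, "rb") as f:
    loaded = pickle.load(f)
```

Running nnUNet v1's own planning/preprocessing (rather than v2's) is likely the intended way to obtain the real file.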