
recursive-cascaded-networks's Introduction

Recursive Cascaded Networks for Unsupervised Medical Image Registration (ICCV 2019)

By Shengyu Zhao, Yue Dong, Eric I-Chao Chang, Yan Xu.

Paper link: [arXiv]

Introduction

We propose recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in a way that every cascade learns to perform a progressive deformation for the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. Shared-weight techniques are developed in addition to the recursive architecture. We achieve state-of-the-art performance on both liver CT and brain MRI datasets for 3D medical image registration. For more details, please refer to our paper.

[Figure: cascade example — the moving image is progressively warped by each cascade]

[Figure: cascade architecture — the recursive cascaded network]

This repository includes:

  • Training and testing scripts using Python and TensorFlow;
  • Pretrained models using either VTN or VoxelMorph as the base network; and
  • Preprocessed training and evaluation datasets for both liver CT scans and brain MRIs.

Code has been tested with Python 3.6 and TensorFlow 1.4.

If you use the code, the models, or our data in your research, please cite:

@inproceedings{zhao2019recursive,
  author = {Zhao, Shengyu and Dong, Yue and Chang, Eric I-Chao and Xu, Yan},
  title = {Recursive Cascaded Networks for Unsupervised Medical Image Registration},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  year = {2019}
}
@article{zhao2019unsupervised,
  title = {Unsupervised 3D End-to-End Medical Image Registration with Volume Tweening Network},
  author = {Zhao, Shengyu and Lau, Tingfung and Luo, Ji and Chang, Eric I and Xu, Yan},
  journal = {IEEE Journal of Biomedical and Health Informatics},
  year = {2019},
  doi = {10.1109/JBHI.2019.2951024}
}

Datasets

Our preprocessed evaluation datasets can be downloaded here:

If you wish to replicate our results, please also download our preprocessed training datasets:

Please unzip the downloaded files into the "datasets" folder. Details about the datasets and the preprocessing stage can be found in the paper.

Pretrained Models

You may download the following pretrained models and unzip them into the "weights" folder.

For liver CT scans,

For brain MRIs,

Evaluation

If you wish, for example, to evaluate the pretrained 10-cascade VTN (for liver), please first make sure that the liver CT evaluation datasets (SLIVER, LiTS, and LSPIG) have been placed into the "datasets" folder. For evaluation on the SLIVER dataset (20 * 19 pairs in total), please run:

python eval.py -c weights/VTN-10-liver -g YOUR_GPU_DEVICES

For evaluation on the LiTS dataset (131 * 130 pairs in total, which might be quite slow), please run:

python eval.py -c weights/VTN-10-liver -g YOUR_GPU_DEVICES -v lits

For pairwise evaluation on the LSPIG dataset (34 pairs in total), please run:

python eval.py -c weights/VTN-10-liver -g YOUR_GPU_DEVICES -v lspig --paired

YOUR_GPU_DEVICES specifies the GPU ids to use (default: 0), comma-separated for multi-GPU support, or -1 for CPU only. Make sure that the number of GPUs specified evenly divides BATCH_SIZE, which can be set with --batch BATCH_SIZE (default: 4). The proposed shared-weight cascading technique can be tested using -r TIMES_OF_SHARED_WEIGHT_CASCADES (default: 1).
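
For example, a run on two GPUs that applies the shared-weight cascading twice (the GPU ids and batch size here are only illustrative):

python eval.py -c weights/VTN-10-liver -g 0,1 --batch 4 -r 2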

When the script finishes, you can find the results in "evaluate/*.txt".

Similarly, to evaluate the pretrained 10-cascade VTN (for brain) on the LPBA dataset:

python eval.py -c weights/VTN-10-brain -g YOUR_GPU_DEVICES

Please refer to our paper for details about the evaluation metrics and our experimental settings.

Training

The following script is for training:

python train.py -b BASE_NETWORK -n NUMBER_OF_CASCADES -d DATASET -g YOUR_GPU_DEVICES

BASE_NETWORK specifies the base network (default: VTN; VoxelMorph is also supported). NUMBER_OF_CASCADES specifies the number of cascades to train, not including the affine cascade (default: 1). DATASET specifies the data config (default: datasets/liver.json; datasets/brain.json is also available). YOUR_GPU_DEVICES specifies the GPU ids to use (default: 0), comma-separated for multi-GPU support, or -1 for CPU only. Make sure that the number of GPUs specified evenly divides BATCH_SIZE, which can be set with --batch BATCH_SIZE (default: 4). Specify -c CHECKPOINT to resume from a previous checkpoint.
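
For example, a run that trains 3 cascades of VoxelMorph on the brain config (values illustrative):

python train.py -b VoxelMorph -n 3 -d datasets/brain.json -g 0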

Demo

We provide a demo that directly takes raw CT scans as inputs (only DICOM series supported), preprocesses them into liver crops, and generates the outputs:

python demo.py -c CHECKPOINT -f FIXED_IMAGE -m MOVING_IMAGE -o OUTPUT_DIRECTORY -g YOUR_GPU_DEVICES

Note that the preprocessing stage crops the liver area using a threshold-based algorithm, which takes a couple of minutes, and its correctness is not guaranteed.

Built-In Warping Operation with TensorFlow

If you wish to reduce GPU memory usage, we implemented a memory-efficient warping operation built with TensorFlow 1.4. A pre-built installer for Windows x64 can be found here, which can be installed with pip install tensorflow_gpu-1.4.1-cp36-cp36m-win_amd64.whl. Then specify --fast_reconstruction in the training and testing scripts to enable this feature. Otherwise, the code falls back to an alternative version of the warping operation provided by VoxelMorph.
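
For example, to evaluate with the custom op enabled (assuming the wheel above has been installed):

python eval.py -c weights/VTN-10-liver -g 0 --fast_reconstruction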

Acknowledgement

We would like to acknowledge Tingfung Lau and Ji Luo for the initial implementation of VTN.


recursive-cascaded-networks's Issues

LPBA Dice score

Hello, the code that calculates Dice simply uses "seg1 > 128 and seg2 > 128" (in the function 'mask_metrics'), but the LPBA dataset has 56 segmented anatomical structures. How can I calculate the Dice score for LPBA?
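
For reference, a minimal per-label Dice sketch in NumPy (not the repository's code; the label ids are whatever LPBA defines):

    import numpy as np

    def multi_label_dice(seg1, seg2, labels):
        # Dice per anatomical label, then averaged, instead of a single
        # "seg > 128" binarization.
        dices = []
        for k in labels:
            m1, m2 = (seg1 == k), (seg2 == k)
            denom = m1.sum() + m2.sum()
            if denom > 0:
                dices.append(2.0 * np.logical_and(m1, m2).sum() / denom)
        return float(np.mean(dices))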

Jacc score

I am confused about the Jacc score; what does it mean?
The Jacc (Jaccard) score is
$\mathrm{Jacc}(seg_1, seg_2) = |seg_1 \cap seg_2| \,/\, |seg_1 \cup seg_2|$
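
In code, for binary masks, this is one line of NumPy (a sketch, assuming boolean arrays seg1 and seg2):

    import numpy as np
    jacc = np.logical_and(seg1, seg2).sum() / np.logical_or(seg1, seg2).sum()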

Get flow output

Hello @zsyzzsoft !!

Thank you for your help on the previous issue. Now, while running demo.py, I also want to output the flow field. What changes do I have to make in demo.py for that? As you suggested, I added real_flow to the keys between lines 90-100. I did that as well, see below:

    sess = tf.Session()

    saver = tf.train.Saver(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES))
    checkpoint = args.checkpoint
    saver.restore(sess, checkpoint)
    tflearn.is_training(False, session=sess)

    keys = sum([['real_flow_{}'.format(i), 'warped_moving_{}'.format(i)]
                for i in range(len(framework.network.stems))], ['real_flow'])
    gen = [{'id1': np.ones((1,)), 'id2': np.ones((1,)),
            'voxel1': np.reshape(img_fixed, [1, 280, 280, 84, 1]),
            'voxel2': np.reshape(img_moving, [1, 280, 280, 84, 1])}]
    results = framework.validate(sess, gen, keys=keys, summary=False)

It seems these lines of code aren't executed. I get the processed moving and fixed images in the 'output' folder, but I also get the following warning and error while running the code:

WARNING:tensorflow:From demo.py:90: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.

Traceback (most recent call last):
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\client\session.py", line 1365, in _do_call
    return fn(*args)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\client\session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\client\session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: Assign requires shapes of both tensors to match. lhs shape= [5,5,2,512,9] rhs shape= [2,2,2,512,9]
         [[{{node save/Assign_20}}]]
         [[GroupCrossDeviceControlEdges_0/save/restore_all/_2]]
  (1) Invalid argument: Assign requires shapes of both tensors to match. lhs shape= [5,5,2,512,9] rhs shape= [2,2,2,512,9]
         [[{{node save/Assign_20}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 1290, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
    run_metadata_ptr)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
    run_metadata)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: Assign requires shapes of both tensors to match. lhs shape= [5,5,2,512,9] rhs shape= [2,2,2,512,9]
         [[node save/Assign_20 (defined at C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\framework\ops.py:1762) ]]
         [[GroupCrossDeviceControlEdges_0/save/restore_all/_2]]
  (1) Invalid argument: Assign requires shapes of both tensors to match. lhs shape= [5,5,2,512,9] rhs shape= [2,2,2,512,9]
         [[node save/Assign_20 (defined at C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\framework\ops.py:1762) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'save/Assign_20':
  File "demo.py", line 287, in <module>
    main()
  File "demo.py", line 90, in main
    tf.GraphKeys.GLOBAL_VARIABLES))
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 828, in __init__
    self.build()
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 840, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 878, in _build
    build_restore=build_restore)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 508, in _build_internal
    restore_sequentially, reshape)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 350, in _AddRestoreOps
    assign_ops.append(saveable.restore(saveable_tensors, shapes))
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saving\saveable_object_util.py", line 73, in restore
    self.op.get_shape().is_fully_defined())
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\ops\state_ops.py", line 227, in assign
    validate_shape=validate_shape)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\ops\gen_state_ops.py", line 69, in assign
    use_locking=use_locking, name=name)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3371, in create_op
    attrs, op_def, compute_device)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3440, in _create_op_internal
    op_def=op_def)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1762, in __init__
    self._traceback = tf_stack.extract_stack()


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "demo.py", line 287, in <module>
    main()
  File "demo.py", line 92, in main
    saver.restore(sess, checkpoint)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 1326, in restore
    err, "a mismatch between the current graph and the graph")
tensorflow.python.framework.errors_impl.InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

2 root error(s) found.
  (0) Invalid argument: Assign requires shapes of both tensors to match. lhs shape= [5,5,2,512,9] rhs shape= [2,2,2,512,9]
         [[node save/Assign_20 (defined at C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\framework\ops.py:1762) ]]
         [[GroupCrossDeviceControlEdges_0/save/restore_all/_2]]
  (1) Invalid argument: Assign requires shapes of both tensors to match. lhs shape= [5,5,2,512,9] rhs shape= [2,2,2,512,9]
         [[node save/Assign_20 (defined at C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\framework\ops.py:1762) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'save/Assign_20':
  File "demo.py", line 287, in <module>
    main()
  File "demo.py", line 90, in main
    tf.GraphKeys.GLOBAL_VARIABLES))
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 828, in __init__
    self.build()
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 840, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 878, in _build
    build_restore=build_restore)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 508, in _build_internal
    restore_sequentially, reshape)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 350, in _AddRestoreOps
    assign_ops.append(saveable.restore(saveable_tensors, shapes))
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saving\saveable_object_util.py", line 73, in restore
    self.op.get_shape().is_fully_defined())
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\ops\state_ops.py", line 227, in assign
    validate_shape=validate_shape)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\ops\gen_state_ops.py", line 69, in assign
    use_locking=use_locking, name=name)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3371, in create_op
    attrs, op_def, compute_device)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3440, in _create_op_internal
    op_def=op_def)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1762, in __init__
    self._traceback = tf_stack.extract_stack()

Please help!!

How to continue the training on the saved model?

According to my computer's hardware configuration, a full training session of 20,000 rounds takes a long time. How do you build on a model that you trained before? For example, I have trained 10,000 rounds and saved the checkpoint; how can I continue training from the saved model?
Do you have any suggestions? Looking forward to your reply!
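
As documented in the Training section above, -c CHECKPOINT starts from a previous checkpoint; a resumed run might look like this (paths illustrative):

python train.py -b VTN -n 10 -d datasets/liver.json -g 0 -c weights/MY_SAVED_RUN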

KeyError: 'volume'

When I run python eval.py -c weights/VTN-10-liver -g 0, I get an error like:

File "C:\Users\Administrator\Desktop\Recursive-Cascaded-Networks\data_util\liver.py", line 163, in generator
    ret['voxel1'][i, ..., 0], ret['voxel2'][i, ..., 0] = d1['volume'], d2['volume']
KeyError: 'volume'

Who can solve this? Thanks
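
A quick way to inspect which keys a dataset file actually contains (a sketch assuming the datasets are plain HDF5 files; the path is illustrative):

    import h5py

    with h5py.File('datasets/liver.h5', 'r') as f:
        f.visit(print)  # each subject should expose a 'volume' dataset per the generator above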

Run code on data of other 'image_size', not (128,128,128)

Thanks for sharing your work in this field.

I ran the provided code on MRI data of size (160, 192, 224), after carefully modifying the corresponding image_size. It fails with:

"Assign requires shapes of both tensors to match. lhs shape= [3,3,4,512,9] rhs shape= [2,2,2,512,9]
[[{{node save/Assign_20}} = Assign[T=DT_FLOAT, _class=["loc:@gaffdfrm/affine_stem/conv7_W/W"]"

I wonder whether the data size has to remain (128, 128, 128) to run the code. If so, could you provide some suggestions on how to resample my data (volume and segmentation map) to (128, 128, 128) properly?
P.S. I have tried resampling with the scipy.ndimage.zoom() function, but it results in poor Dice scores.

Thanks a lot.
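
For reference, a minimal resampling sketch with scipy.ndimage.zoom; one common cause of poor Dice after zooming is interpolating the segmentation map, so use nearest-neighbour (order=0) there to keep the label values intact:

    import numpy as np
    from scipy.ndimage import zoom

    def resample_to(volume, seg, target=(128, 128, 128)):
        factors = [t / s for t, s in zip(target, volume.shape)]
        vol_r = zoom(volume, factors, order=1)  # linear interpolation for intensities
        seg_r = zoom(seg, factors, order=0)     # nearest-neighbour preserves labels
        return vol_r, seg_r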

Questions about loss function

Hi, I have been studying the Dice index and loss functions recently, and I have had some questions for a long time.
I want to add a dice_loss term to the loss. It is defined as follows:
def dice_loss(self, seg1, warped_seg2): — the function refers to your code def mask_metrics(seg1, seg2):

  1. First, I passed in seg1 and warped_seg2 (around line 474). I found that the dice_loss does not converge; dice_loss = 1 means that dice_score = 0. [screenshot omitted] So I wanted to verify whether my loss function code has problems.

  2. Second, I passed in img1 and warped_img2 (around line 475). It's OK; the dice_loss converges. This means my code has no problems. [screenshot omitted]

Through these comparative experiments, the only difference found is the input data:
test1: seg1 and self.reconstruction([seg2, stem_result['agg_flow']]) --------------- failure
test2: img1 and stem_result['warped'] ----------------------------------------------- success

I think seg1 is fine; self.reconstruction([seg2, stem_result['agg_flow']]) goes wrong.
In fact, self.reconstruction([seg2, stem_result['agg_flow']]) equals warped_seg_moving. Is it not involved in the backpropagation at all?
Whenever the input refers to warped_moving, the loss function converges; all variants involving warped_seg_moving do not converge.
The above is my experiment. Could you give me some suggestions? Is there something wrong with warped_seg_moving?

Looking forward to your reply. Thank you!
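
For reference, a differentiable (soft) Dice loss in TensorFlow 1.x commonly looks like this (a sketch, not this repository's code; it assumes segmentations in [0, 1] of shape [batch, D, H, W, 1]):

    import tensorflow as tf

    def soft_dice_loss(seg1, warped_seg2, eps=1e-5):
        # No hard thresholding, so gradients can flow back through the warp.
        axes = [1, 2, 3, 4]
        inter = tf.reduce_sum(seg1 * warped_seg2, axis=axes)
        sums = tf.reduce_sum(seg1, axis=axes) + tf.reduce_sum(warped_seg2, axis=axes)
        return tf.reduce_mean(1.0 - 2.0 * inter / (sums + eps))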

confused about landmark handling

I have some questions about landmark handling; your response will be greatly helpful to me. Thanks in advance.

  1. In the data util of brain.json, the point1 and point2 inputs are always -1.0. How can the landmark distance be calculated?

  2. in the VTN paper,

"For Landmark Distance, we perform the inverse of the linear
part of the affine transform (which aligns the fixed image to the
atlas) to the difference vector between warped landmark and
landmark in the (aligned) fixed image, so that the length goes
back to the coordinate defined by the original fixed image"

Does the following implementation align with the above statement?
[screenshot omitted]

It is quite hard for me to understand (flow as im, pt1 as offset?):

self.trilinear_sampler([flow, pt1])

Why can't we apply the flow to point2?

dataset split

In this project, the dataset is only split into train/val. If I understand correctly, the val split is also involved in gradient computation during training. Is there any reason we should not have a test split? Thanks.

In liver.py, I see the following code:

        while True:
            ret = dict()
            ret['voxel1'] = np.zeros(
                (batch_size, 128, 128, 128, 1), dtype=np.float32)
            ret['voxel2'] = np.zeros(
                (batch_size, 128, 128, 128, 1), dtype=np.float32)
            ret['seg1'] = np.zeros(
                (batch_size, 128, 128, 128, 1), dtype=np.float32)
            ret['seg2'] = np.zeros(
                (batch_size, 128, 128, 128, 1), dtype=np.float32)
            ret['point1'] = np.ones(
                (batch_size, np.sum(valid_mask), 3), dtype=np.float32) * (-1)
            ret['point2'] = np.ones(
                (batch_size, np.sum(valid_mask), 3), dtype=np.float32) * (-1)
            ret['id1'] = np.empty((batch_size), dtype='<U40')
            ret['id2'] = np.empty((batch_size), dtype='<U40')

What do ret['voxel1'], ret['seg1'], and ret['point1'] mean? Is the training data the voxel arrays?

Application for large histology images

Hello, I came across this nice project and it seems quite powerful.
Have you also tried it on large histology images, i.e. images with 10k pixels and more along the diagonal? Would you be interested in participating in the AHNIR challenge or including your method in the BIRL framework?

what's the format of the deformation field?

Thanks for kindly sharing your code. It is very nice work and it can be executed directly.

I get the deformation field (a 128×128×128×3 matrix) by adding "real_flow" to the variable "keys" (in eval.py). However, I cannot get the right warped image using the moving image and the deformation field (I used the SimpleITK.Warp function). I wonder whether you could add some explanation of the deformation field.

how can I train a single affine network?

I want to train a single affine network. The inputs are src and tgt, but I don't know what the output should be: a flow field, or W + b? And which loss should be used: the CC loss, or det_loss and ortho_loss? I am really confused.

Convert to h5 format

Hello, @zsyzzsoft

Thank you so much for this wonderful work on image registration.
I am new to this field. I was able to successfully run your model/code on your dataset. Actually, I have my own dataset: a folder containing image pairs, i.e. moving and fixed, in .nii format, e.g. fix_img.nii.gz and mov_img.nii.gz.

Can you please guide me on how to convert them into the h5 format so that I can train your model on my dataset?

Thanks in advance!!
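
For reference, a minimal conversion sketch with nibabel and h5py; the 'volume' key matches the generator quoted in the KeyError issue above, but the group layout (and whether a 'segmentation' key is also needed) is an assumption, so check data_util for the exact schema:

    import h5py
    import nibabel as nib

    with h5py.File('datasets/my_liver.h5', 'w') as f:
        for name, path in [('fix_img', 'fix_img.nii.gz'),
                           ('mov_img', 'mov_img.nii.gz')]:
            vol = nib.load(path).get_fdata()  # resample to 128x128x128 beforehand
            f.create_group(name).create_dataset('volume', data=vol)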

my training result is not as good as the paper's

thanks for the excellent paper and work. I tried to re-train the project after migrating it to TensorFlow 2.0. Only the API was migrated; the network, losses, structure, and data preprocessing were unchanged.

The following are TensorBoard screenshots from training on the liver dataset:
[screenshots omitted]

and the evaluation result:
[screenshot omitted]

Any idea what might have gone wrong? Thanks.

ModuleNotFoundError: No module named 'tensorflow.compat'

Hi Guys,

I tried to run the registration demo.
My setup is Python 3.6, tensorflow=1.4, and CUDA 8.0.
But I keep getting an error:

[tqdm progress output omitted]
140 180
160 200
Traceback (most recent call last):
  File "demo.py", line 285, in <module>
    main()
  File "demo.py", line 55, in main
    import tflearn
  File "/.local/lib/python3.6/site-packages/tflearn/__init__.py", line 4, in <module>
    import tensorflow.compat.v1 as tf
ModuleNotFoundError: No module named 'tensorflow.compat'

Am I missing something, or should I try another version?

I would appreciate any help, please.

Best,
Agata

GPU memory needed for training

Thanks for sharing your code! I want to know how much GPU memory is needed during training, because I keep running into resource-exhausted errors. If I set the batch size to 1 when training the liver case, is a 12GB GPU enough to train the VTN-10 model?

Requirements

Could you provide details regarding dependencies and versions, preferably in a requirements.txt file from pip freeze?

about liver dataset

Thanks for sharing your work in this field!
I have some questions about these words in the paper
"Raw scans are resampled into 128 × 128 × 128 voxels after cropping unnecessary area around the target object. For liver CT scans, a simple threshold-based algorithm is applied to find a rough liver bounding box for cropping."

1. What should I do to crop and resample? Sorry, I am not familiar with image processing; what kind of tools should I use? (See the sketch after this list.)
2. During training, how do you select the two input images (fixed image and moving image) from the dataset (e.g. LiTS: 130 CT scans, 512x512x75)?
3. If I were to use another liver dataset, what preprocessing steps would I need to perform?
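
For reference, a rough illustration of threshold-based cropping plus resampling (NOT the authors' exact preprocessing; the threshold value is arbitrary):

    import numpy as np
    from scipy.ndimage import zoom

    def crop_and_resample(volume, threshold=100, target=(128, 128, 128)):
        mask = volume > threshold                  # rough bright-tissue mask
        nz = np.argwhere(mask)
        lo, hi = nz.min(axis=0), nz.max(axis=0) + 1
        crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        factors = [t / s for t, s in zip(target, crop.shape)]
        return zoom(crop, factors, order=1)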

Some problems encountered while running the GPU version

Dear author,
I am a graduate student from Suzhou, China. Thank you for your contributions. I was able to successfully run your work on the CPU, but there is a problem when running on the GPU that I couldn't solve. Based on the error, I guess there is something wrong with the versions of TensorFlow and Keras.
Could you tell me what versions of tensorflow/tensorflow-gpu/keras/cuda/cudnn are supported/expected? Sincerely looking forward to hearing from you soon.
My settings: Python 3.6, tensorflow=1.13.1, tensorflow-gpu=1.4.0, cuda=10.0, cudnn=7.4.1, keras=2.3.1, NVIDIA GeForce RTX 2080 Ti; but it failed.
ERRORS:
[screenshots omitted]

I used CHAOS CT liver dataset to get bad results

The following describes the data after processing.
The training dataset is 35 CT scans; the validation dataset is 5.
The final Dice score is 0.5.

Is there any problem? Is the intensity difference between the images too large?
I also want to register CT to MR; what preprocessing should I do with the MR images?

[screenshots omitted]

How to get the moving image

Thank you for your patient reply before; your advice was helpful. I have successfully obtained the real_flow, warped_moving, and image_fixed pictures by adding keys = ['real_flow', 'image_fixed', 'warped_moving'].
To get the moving image, I followed the previous method and attempted to add the following keys: ['image_moving'] / ['img_moving'] / ['moving']. But they all failed, prompting a keyword error for ['img_moving']. [screenshot omitted]
1. Can you tell me which key corresponds to the moving image?
2. Or how do you get moving images? Can you give me some ideas?
Looking forward to your reply!

Hello, hoping for a reply (translated from Chinese)

Hi, I have recently been doing research on image registration and had the pleasure of reading your paper "Recursive Cascaded Networks for Unsupervised Medical Image Registration". I would like to ask: when using the cascading method, do you divide the regularization loss term by the number of cascades?

some questions about the paper

After reading the paper, I have some questions. I would appreciate it if you could spend some time on them:

  1. What do you mean by "shared-weight cascading"? In my understanding, 'shared-weight cascading' may mean that the weights of all the subnetworks are the same, and those weights are trained. However, the paper says: "The reason we do not use shared-weight cascading in training is that shared-weight cascades consume extra GPU memory ....", so the weights of different subnetworks are different? It really confuses me.

  2. I am confused about the difference between VTN and this paper; they both use a cascade method.

Differences between the VTN implementation in this repo and the VTN paper, and retraining results

Thanks a lot for making your code and dataset publicly available, and congratulations to the authors, because the recursive cascaded strategy of base networks really does work well for image registration.

Recently, I downloaded your dataset and tried to retrain the model with TensorFlow 1.5 and CUDA 9.1. I have the following questions:

  1. According to this repo, a checkpoint is saved every 6 hours and the 99500 ckpt is saved for evaluation. Was this setting also used to report the final results in your paper? (The paper does not say whether the listed results are the top results or something else.)

  2. The implementation of VTN in this repo differs from the original VTN paper "Unsupervised 3D End-to-End Medical Image Registration with Volume Tweening Network". The architecture shown in the VTN paper's Fig. 3 and Fig. 4 does not have conv3_1 (see basenet.py, line 68 & line 210) in the affine and deformable subnetworks. Also, as mentioned in another issue, the decoder uses 5 additional upsampling branches (see basenet.py, line 95 etc.), which is likewise not mentioned by VTN. (It is also worth noting that, interestingly, these upsampling branches do not use LeakyReLU activations after the conv and deconv layers.)

  3. Maybe due to the above, when I retrained your "improved VTN" on liver CT scans, the 1-cascade VTN result was Dice 0.915 and landmark distance 13.1 when evaluating the 99500 ckpt, close to your results in "Recursive Cascaded Networks for Unsupervised Medical Image Registration". But when I retrained the original VTN (removing conv3_1 and the 5 upsampling branches), the 1-cascade VTN result was Dice 0.909 and landmark distance 13.2. A relatively obvious difference exists in the Dice scores.

These findings do not affect the success of the recursive cascaded strategy, but could you address the difference between the "improved VTN" and the "original VTN"? Or do I misunderstand anything?

Again, thanks a lot.

Question: which checkpoint to use in the demo?

Hello,

I would like to try your solution to register 2 CT DICOM series (abdominal scans).
I tried the demo.py code, but I have an issue with the checkpoint: which checkpoint should I use? I would like to test on my own data.

python demo.py -c ./weights/VTN-3-liver -f ./datasets/fixed -m ./datasets/moving -o output
python demo.py -c ???? -f ./datasets/fixed -m ./datasets/moving -o output

What should I do? Until now I cannot run the code properly.

I would appreciate any help, please.

how to run 'demo.py'

Thanks for your contributions in this field.

I have successfully run your 'train.py' and 'eval.py' code, but their results don't include any pictures. I really want to get the images and the flow field after registration. Can I get them from 'demo.py'? And how can I obtain the -f FIXED_IMAGE and -m MOVING_IMAGE inputs to run 'demo.py'?

Thanks a lot. Looking forward to your reply.

Problem with training on my own dataset

@zsyzzsoft
I am trying to run your code on my own liver dataset of moving/fixed image pairs, but I have kept the resolution at 288×288×96 rather than 128×128×128 as in your code. I have made the necessary changes as well, but I am getting the following error:

Traceback (most recent call last):
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow\python\framework\ops.py", line 1853, in _create_c_op
    c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 1 in both shapes must be equal, but are 9 and 10. Shapes are [?,9,9,3] and [?,10,10,4]. for '{{node gaffdfrm/deform_stem_0/concat5}} = ConcatV2[N=3, T=DT_FLOAT, Tidx=DT_INT32](gaffdfrm/deform_stem_0/conv5_1_leakilyrectified, gaffdfrm/deform_stem_0/deconv5_rectified, gaffdfrm/deform_stem_0/upsamp6to5/conv3d_transpose, gaffdfrm/deform_stem_0/concat5/axis)' with input shapes: [?,9,9,3,256], [?,10,10,4,256], [?,10,10,4,3], [] and with computed input tensors: input[3] = <4>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 244, in <module>
    main()
  File "train.py", line 78, in main
    framework = Framework(devices=gpus, image_size=image_size, segmentation_class_value=cfg.get('segmentation_class_value', None), fast_reconstruction = args.fast_reconstruction)
  File "C:\Users\shubh\Desktop\mir_test2\network\framework.py", line 93, in __init__
    self.predictions = self.network(*net_pls)
  File "C:\Users\shubh\Desktop\mir_test2\network\utils.py", line 111, in __call__
    return self.build(*args, **kwargs)
  File "C:\Users\shubh\Desktop\mir_test2\network\recursive_cascaded_networks.py", line 87, in build
    stem_result = stem(img1, stem_results[-1]['warped'])
  File "C:\Users\shubh\Desktop\mir_test2\network\utils.py", line 111, in __call__
    return self.build(*args, **kwargs)
  File "C:\Users\shubh\Desktop\mir_test2\network\base_networks.py", line 92, in build
    concat5 = tf.concat([conv5_1, deconv5, upsamp6to5], 4, 'concat5')
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1677, in concat
    return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 1206, in concat_v2
    _, _, _op, _outputs = _op_def_library._apply_op_helper(
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 748, in _apply_op_helper
    op = g._create_op_internal(op_type_name, inputs, dtypes=None,
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow\python\framework\ops.py", line 3528, in _create_op_internal
    ret = Operation(
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow\python\framework\ops.py", line 2015, in __init__
    self._c_op = _create_c_op(self._graph, node_def, inputs,
  File "C:\Users\shubh\anaconda3\envs\myenv\lib\site-packages\tensorflow\python\framework\ops.py", line 1856, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimension 1 in both shapes must be equal, but are 9 and 10. Shapes are [?,9,9,3] and [?,10,10,4]. for '{{node gaffdfrm/deform_stem_0/concat5}} = ConcatV2[N=3, T=DT_FLOAT, Tidx=DT_INT32](gaffdfrm/deform_stem_0/conv5_1_leakilyrectified, gaffdfrm/deform_stem_0/deconv5_rectified, gaffdfrm/deform_stem_0/upsamp6to5/conv3d_transpose, gaffdfrm/deform_stem_0/concat5/axis)' with input shapes: [?,9,9,3,256], [?,10,10,4,256], [?,10,10,4,3], [] and with computed input tensors: input[3] = <4>.

Please help me on this.

LPBA segmentation ground truth

Thanks for your work. I'm using your dataset to train my model and want to use LPBA for evaluation, but I can't find the LPBA ground truth. Would you provide the LPBA segmentation ground truth?

About img augmentation

There is image augmentation for the moving image (screenshot below). Is it OK to apply the same augmentation to the fixed image? I am asking because in my case the dataset is much smaller; if there is no concern, I will try to add augmentation for both images.

[screenshot omitted]

about VTN network structure

Hi, I have some questions about the VTN code:

        pred6 = convolve('pred6', conv6_1, dims, 3, 1)
        upsamp6to5 = upconvolve('upsamp6to5', pred6, dims, 4, 2, shape5[1:4])
        deconv5 = upconvolveLeakyReLU(
            'deconv5', conv6_1, shape5[4], 4, 2, shape5[1:4])
        concat5 = tf.concat([conv5_1, deconv5, upsamp6to5], 4, 'concat5')

In the VTN code, every upsampling stage has a pred branch; this structure is not mentioned in the paper.
What is the purpose of this operation?

msd_dataset.h5 youyi_liver.h5 not found

When I run eval.py, it shows:

curses is not supported on this machine (please install/reinstall curses for an optimal experience)
Using TensorFlow backend.
weights/VTN-10-liver\model-99500
{'base_network': 'VTN', 'n_cascades': 10, 'rep': 1, 'gpu': '0,1,2,3', 'checkpoint': None, 'dataset': 'datasets/liver.json', 'batch': 4, 'round': 20000, 'epochs': 5, 'debug': False, 'val_steps': 100, 'net_args': '', 'data_args': '', 'lr': 0.0001, 'clear_steps': False, 'finetune': None, 'name': None, 'logs': ''}
WARNING:tensorflow:From D:\Recursive-Cascaded-Networks-master\network\framework.py:64: calling reduce_all (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
[(<network.base_networks.VTNAffineStem object at 0x000001DA6E9E2668>, {'raw_weight': 0, 'reg_weight': 0, 'weight': 1}), (<network.base_networks.VTN object at 0x000001DA6E9B0DA0>, {'raw_weight': 0, 'weight': 1, 'reg_weight': 1}), (<network.base_networks.VTN object at 0x000001DA6E9B0C88>, {'raw_weight': 0, 'weight': 1, 'reg_weight': 1}), (<network.base_networks.VTN object at 0x000001DA6E9B0E80>, {'raw_weight': 0, 'weight': 1, 'reg_weight': 1}), (<network.base_networks.VTN object at 0x000001DA6E5B7C88>, {'raw_weight': 0, 'weight': 1, 'reg_weight': 1}), (<network.base_networks.VTN object at 0x000001DA6FB4D898>, {'raw_weight': 0, 'weight': 1, 'reg_weight': 1}), (<network.base_networks.VTN object at 0x000001DA6FADDE80>, {'raw_weight': 0, 'weight': 1, 'reg_weight': 1}), (<network.base_networks.VTN object at 0x000001DA6E5B9278>, {'raw_weight': 0, 'weight': 1, 'reg_weight': 1}), (<network.base_networks.VTN object at 0x000001DA6E5A6F98>, {'raw_weight': 0, 'weight': 1, 'reg_weight': 1}), (<network.base_networks.VTN object at 0x000001DA6E5A6C18>, {'raw_weight': 0, 'weight': 1, 'reg_weight': 1}), (<network.base_networks.VTN object at 0x000001DA6E5A6F60>, {'raw_weight': 1, 'weight': 1, 'reg_weight': 1})]
WARNING:tensorflow:From C:\Users\pf\Anaconda3\lib\site-packages\tflearn\initializations.py:119: UniformUnitScaling.init (from tensorflow.python.ops.init_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.initializers.variance_scaling instead with distribution=uniform to get equivalent behavior.
Graph built
datasets/msd_dataset.h5 not found!
datasets/youyi_liver.h5 not found!
Number of data in combine-train is 1025
Number of data in sliver-val is 19
Number of data in lits-val is 131
Number of data in lspig-val is 34
2019-11-21 10:27:09.182592: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-11-21 10:27:09.385214: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1405] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.607
pciBusID: 0000:01:00.0
totalMemory: 11.00GiB freeMemory: 9.08GiB
2019-11-21 10:27:09.390234: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1484] Adding visible gpu devices: 0
2019-11-21 10:27:09.954466: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-21 10:27:09.957150: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0
2019-11-21 10:27:09.958786: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:984] 0: N
2019-11-21 10:27:09.960476: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8783 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
Validation subset 2
2019-11-21 10:27:24.280417: I T:\src\github\tensorflow\tensorflow\core\kernels\cuda_solvers.cc:159] Creating CudaSolver handles for stream 000001DA1F494840

I downloaded liver_val.zip and the pretrained weights files; the datasets are missing the two files above.

Can't Visualise results

@zsyzzsoft Thank you so much for all your help till now.

Actually, I also wanted to visualize the results: moving images, warped moving images, optical flow, etc. For that I ran eval.py and added the keys you mentioned, like keys = ['pt_mask', 'landmark_dists', 'jaccs', 'dices', 'jacobian_det', 'real_flow', 'image_fixed', 'warped_moving'].

But in the end I couldn't see any images anywhere, just the .txt file of evaluation results in the evaluate folder. Could you please let me know what other changes I have to make to visualize the results?

Questions about 'flow'

Hello, I have a question about the code in network.py: 'flow': pred0 * 20 * self.flow_multiplier. Why is pred0 multiplied by 20? Looking forward to hearing from you soon, thank you.

Questions about Datasets

These are some questions about your datasets; I believe the answers will help many people. Thank you for your work, and I am looking forward to your reply. I am a loyal fan of your work.

  1. Q1: SLIVER
     [Figure 1: the SLIVER test files, omitted]
     Could you please tell me whether yan_x10 and yan_x11 are from the same person's liver or from different people's livers?
     In other words, does yan_x represent the liver of one person at different times, or the livers of different people?

  2. Q2: LiTS
     [Figure 2: the LiTS files, omitted]
     Similar to the first question: in Figure 2, are lits/0, lits/1, lits/2, ... images of different people's livers?
     You wrote in your paper that LiTS contains 131 scans; does this mean 131 livers of different people, rather than 131 images of the same person at different moments?

  3. Q3: LSPIG
     From reading your paper, I understand that the LSPIG dataset contains liver images from 17 different pigs, registered in pairs before and during surgery. Do I understand correctly?

  4. Q4: Thank you for reading this far; the last question.
     The training set MSD contains several kinds of lesion images, such as hepatic vessel and pancreas tumours. When you select two images for training, do you pick both from the same lesion category, or can a pair mix two different categories?
     In other words, how do you select a pair of registration images from the MSD dataset for training?

Determinant of Jacobian

I have noticed that the Jacobian Det. is supplied in your code.
    def jacobian_det(self, flow):
        _, var = tf.nn.moments(tf.linalg.det(tf.stack([
            flow[:, 1:, :-1, :-1] - flow[:, :-1, :-1, :-1] +
                tf.constant([1, 0, 0], dtype=tf.float32),
            flow[:, :-1, 1:, :-1] - flow[:, :-1, :-1, :-1] +
                tf.constant([0, 1, 0], dtype=tf.float32),
            flow[:, :-1, :-1, 1:] - flow[:, :-1, :-1, :-1] +
                tf.constant([0, 0, 1], dtype=tf.float32)
        ], axis=-1)), axes=[1, 2, 3])
        return tf.sqrt(var)

My question is why tf.sqrt(var) was chosen to represent the determinant of the Jacobian. Could you explain why you compute tf.nn.moments? The Jacobian determinant should be a map rather than a single value for a given deformation flow.

Is the warping operation available for Ubuntu?

Using the fast_reconstruction option in Ubuntu's TensorFlow 1.4.1 environment, I get the following error:
AttributeError: module 'tensorflow.python.user_ops.user_ops' has no attribute 'reconstruction'

Thank you very much for your help

The flow field output

Thanks for your work in this field.

I have tried your code, and it gives results as reported. However, the flow field is not output, and the flow field matters more to me than the warped image or segmentation map. Could you point out the best way to inspect the flow field in your code? How can I save the deformation fields in your eval.py?

Thanks a lot.
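
Based on the 'real_flow' key mentioned in other issues, one way to dump the fields is roughly as follows (a sketch only; it assumes framework.validate returns a dict containing the requested keys, and the actual structure may differ):

    import numpy as np

    keys = ['dices', 'real_flow']  # request the aggregated flow field
    results = framework.validate(sess, gen, keys=keys, summary=False)
    np.save('evaluate/real_flow.npy', np.asarray(results['real_flow']))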

Orthogonality Loss

I can't understand the sentence "Since the loss is a symmetric function of those eigenvalues, it can be rewritten as a fraction w.r.t. the coefficients of the characteristic polynomial of $(I + A)^T (I + A)$ by Viète's theorem." in your VTN paper. It's a math question; I have thought about it a lot but can't find the answer. Could you explain the reasoning?
