medpy's Introduction

Badges: PyPI version · Anaconda version · Python versions · License: GPL v3 · Downloads · DOI

GitHub | Documentation | Tutorials | Issue tracker

medpy - Medical Image Processing in Python

MedPy is an image processing library and collection of scripts targeted towards medical (i.e. high dimensional) image processing.

Stable releases

Development version

  • Download (development version): https://github.com/loli/medpy
  • HTML documentation and installation instructions (development version): can be created from the doc/ folder by following the instructions in the contained README file

Contribute

  • Clone the master branch from GitHub
  • Install the pre-commit hooks, or install with the [dev,test] extras
  • Submit your change as a pull request (PR)

Python 2 version

Python 2 is no longer supported, but you can still use the older releases (<= 0.3.0).

Other links

medpy's People

Contributors

blue-atom, cancan101, erotemic, fabianisensee, iamahern, jingnan-jia, loli, mamrehn, nicolascedilnik, overdrivr, raamana, stellarstorm, vincentxwd


medpy's Issues

TypeError with medpy.metric.binary

When using 'medpy.metric.binary.hd' or 'medpy.metric.binary.asd', I get the following error:

TypeError: numpy boolean subtract, the - operator, is deprecated, use the bitwise_xor, the ^ operator, or the logical_xor function instead.

With the stack trace pointing to line 1169 in medpy/metric/binary.pyc:

result_border = result - binary_erosion(result, structure=footprint, iterations=1)

Does the operator need any update? (I'm using MedPy 0.3.0)
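
A workaround reported in a later issue in this list is to replace the boolean subtraction with an XOR when computing the object surface. A minimal, self-contained sketch of that workaround (the toy mask is made up for illustration):

import numpy as np
from scipy.ndimage import binary_erosion, generate_binary_structure

mask = np.zeros((5, 5, 5), dtype=bool)
mask[1:4, 1:4, 1:4] = True

footprint = generate_binary_structure(mask.ndim, 1)
# on recent numpy, boolean '-' raises the TypeError above; '^' (XOR) yields the surface voxels
border = mask ^ binary_erosion(mask, structure=footprint, iterations=1)
print(border.sum())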

save numpy as mhd

Hello,
I have installed all modules needed to read and save mhd files (medpy, the ITK wrapper, the ITK NumPy bridge and vtk). For installing the ITK wrapper I followed the instructions on this webpage on Ubuntu 14: http://pythonhosted.org/MedPy/installation/itkwrapper4.7.html
I just did the same for ITK wrapper 4.10.
Now I can read an mhd file and convert it to numpy, but when I want to save the numpy array as mhd I get the following error:
"medpy.io.save(image_orig,directory+str(n+1).zfill(3)+"/image_resize.mhd",hdr=image_header_orig)
File "build/bdist.linux-x86_64/egg/medpy/io/save.py", line 192, in save
medpy.core.exceptions.ImageSavingError: Failed to save image /home/user/Downloads/Datasets/LA-challenge2013/train-mri/a001/image_resize.mhd as type Itk/Vtk MetaImage (.mhd, .mha/.raw). Reason signaled by third-party module: Cannot get an instance of NumPy array."

Also, this is the piece of code used to read and resize the mhd:
image_orig, image_header_orig = medpy.io.load(lstFiles_orig[n])
image_seg, image_header_seg = medpy.io.load(lstFiles_gt[n])
image_orig=transform.resize(image_orig, (320,320,110), order=3, mode='constant', cval=0)

I would highly appreciate it if anybody could help me with this issue. Thanks.
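
One thing worth checking, as an assumption rather than a confirmed fix: transform.resize returns a float64 array that may not be contiguous or of a dtype the ITK wrapper accepts. Forcing a contiguous array with an explicit dtype before saving sometimes helps. A minimal sketch with placeholder file names:

import numpy as np
from medpy.io import load, save

image_orig, image_header_orig = load('image.mhd')  # placeholder file name
# ... resize / process image_orig here ...

# ensure a contiguous array with an ITK-supported dtype before saving
image_out = np.ascontiguousarray(image_orig, dtype=np.float32)
save(image_out, 'image_resize.mhd', hdr=image_header_orig)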

Relative bin deviation real metric

The relative bin deviation implemented in medpy.metric.histogram.relative_bin_deviation is a real metric, not a semi-metric, as it fulfils the triangle inequality.

SingleIntensityAccumulationError:

Hi, I am trying to use the IntensityRangeStandardization function, however I am running into the following issue:

medpy.filter.IntensityRangeStandardization.SingleIntensityAccumulationError: Image no.0 shows an unusual single-intensity accumulation that leads to a situation where two percentile values are equal. This situation is usually caused, when the background has not been removed from the image. Another possibility would be to reduce the number of landmark percentiles landmarkp or to change their distribution.

I am trying to understand what exactly you mean by "background has not been removed from the image".

For instance in the following code I get the same error:

import numpy
from medpy.filter import IntensityRangeStandardization
base_image = numpy.asarray([[0,0,0],[3,5,4],[7,8,9],[2,4,8]])
good_trainingset = [base_image + x for x in range(10)]

print base_image.dtype
print type(good_trainingset)
print good_trainingset

irs = IntensityRangeStandardization(cutoffp=(1, 99), landmarkp=[10, 20, 30, 40, 50, 60, 70, 80, 90], stdrange='auto')
irs.train_transform(good_trainingset,surpress_mapping_check=True)

print irs

Output

int64
<type 'list'>
[array([[0, 0, 0],
       [3, 5, 4],
       [7, 8, 9],
       [2, 4, 8]]), array([[ 1,  1,  1],
       [ 4,  6,  5],
       [ 8,  9, 10],
       [ 3,  5,  9]]), array([[ 2,  2,  2],
       [ 5,  7,  6],
       [ 9, 10, 11],
       [ 4,  6, 10]]), array([[ 3,  3,  3],
       [ 6,  8,  7],
       [10, 11, 12],
       [ 5,  7, 11]]), array([[ 4,  4,  4],
       [ 7,  9,  8],
       [11, 12, 13],
       [ 6,  8, 12]]), array([[ 5,  5,  5],
       [ 8, 10,  9],
       [12, 13, 14],
       [ 7,  9, 13]]), array([[ 6,  6,  6],
       [ 9, 11, 10],
       [13, 14, 15],
       [ 8, 10, 14]]), array([[ 7,  7,  7],
       [10, 12, 11],
       [14, 15, 16],
       [ 9, 11, 15]]), array([[ 8,  8,  8],
       [11, 13, 12],
       [15, 16, 17],
       [10, 12, 16]]), array([[ 9,  9,  9],
       [12, 14, 13],
       [16, 17, 18],
       [11, 13, 17]])]
Traceback (most recent call last):
  File "testdelet.py", line 32, in <module>
    irs.train_transform(good_trainingset,surpress_mapping_check=True)
  File "build/bdist.linux-x86_64/egg/medpy/filter/IntensityRangeStandardization.py", line 345, in train_transform
  File "build/bdist.linux-x86_64/egg/medpy/filter/IntensityRangeStandardization.py", line 260, in train
  File "build/bdist.linux-x86_64/egg/medpy/filter/IntensityRangeStandardization.py", line 436, in __compute_stdrange
medpy.filter.IntensityRangeStandardization.SingleIntensityAccumulationError: Image no.0 shows an unusual single-intensity accumulation that leads to a situation where two percentile values are equal. This situation is usually caused, when the background has not been removed from the image. Another possibility would be to reduce the number of landmark percentiles landmarkp or to change their distribution.

The same happens when I set:

base_image = numpy.asarray([[1,1,1],[3,5,4],[7,8,9],[2,4,8]])

Thank you.
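
For context, the error is raised when two of the requested percentile values (cutoff and landmark percentiles taken together) fall on the same intensity. With only 12 voxels, three of which are the background value 0, the 1st and the 10th percentile both evaluate to 0. A minimal sketch illustrating the mechanism with plain numpy (not medpy's internal code):

import numpy

base_image = numpy.asarray([[0, 0, 0], [3, 5, 4], [7, 8, 9], [2, 4, 8]])
# cutoff percentiles (1, 99) plus the landmark percentiles 10..90
percentiles = [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 99]
# the 1st and 10th percentile both land on 0, which triggers the error
print(numpy.percentile(base_image, percentiles))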

'LazyITKModule' object has no attribute 'AnalyzeImageIO'

When I try to load an image, this error troubles me a lot.
I run the following lines:
from medpy.io import load
image_data, image_header = load(test_data)
and the results are:
ImageLoadingError Traceback (most recent call last)
in ()
----> 1 image_data, image_header = load(test_data)

/home/lx/anaconda2/lib/python2.7/site-packages/medpy/io/load.pyc in load(image)
199 logger.debug('Module {} signaled error: {}.'.format(loader, e))
200
--> 201 raise err
202
203 def __load_nibabel(image):

ImageLoadingError: Failes to load image /home/lx/Personal/source code/lasc-master/data/mri/testing/b002/image.mhd as Itk/Vtk MetaImage (.mhd, .mha/.raw). Reason signaled by third-party module: 'LazyITKModule' object has no attribute 'AnalyzeImageIO'

dicom data cannot be read

/share/data_bert1/mwilms/Projects/RTUKE/Patient01/4DCT$ medpy_join_xd_to_xplus1d.py ~/combined.nii.gz 01.dcm 02.dcm 03.dcm 04.dcm 05.dcm 06.dcm 07.dcm 08.dcm 09.dcm 10.dcm -s0.2 -v
26.03.2014 13:48:48 [INFO ] Loading image 01.dcm...
26.03.2014 13:48:50 [INFO ] Loading image 02.dcm...
26.03.2014 13:48:50 [INFO ] Loading image 03.dcm...
26.03.2014 13:48:51 [INFO ] Loading image 04.dcm...
26.03.2014 13:48:52 [INFO ] Loading image 05.dcm...
26.03.2014 13:48:52 [INFO ] Loading image 06.dcm...
26.03.2014 13:48:53 [INFO ] Loading image 07.dcm...
26.03.2014 13:48:54 [INFO ] Loading image 08.dcm...
26.03.2014 13:48:54 [INFO ] Loading image 09.dcm...
26.03.2014 13:48:55 [INFO ] Loading image 10.dcm...
Traceback (most recent call last):
File "/usr/local/bin/medpy_join_xd_to_xplus1d.py", line 7, in
execfile(file)
File "/data_kruemel1/mastmeyer/medpy/bin/medpy_join_xd_to_xplus1d.py", line 126, in
main()
File "/data_kruemel1/mastmeyer/medpy/bin/medpy_join_xd_to_xplus1d.py", line 100, in main
update_header_from_array_nibabel(example_header, output_data)
File "/data_kruemel1/mastmeyer/medpy/medpy/io/header.py", line 305, in __update_header_from_array_nibabel
hdr.get_header().set_data_shape(arr.shape)
File "/usr/lib/python2.7/dist-packages/dicom/dataset.py", line 253, in __getattr

raise AttributeError, "Dataset does not have attribute '%s'." % name
AttributeError: Dataset does not have attribute 'get_header'.

Need DOI to cite this package

Hi, I think you should get a DOI for this package, so that others like me can cite it properly. Let me know once you get one from a place like Zenodo.

Comment in medpy.metric.binary.hd regarding connectivity is not correct

"connectivity : int
The neighbourhood/connectivity considered when determining the surface of the binary objects. This value is passed to scipy.ndimage.morphology.generate_binary_structure and should usually be >1>1. Presumably does not influence the result in the case of the Hausdorff distance."
I have tested this on 3D objects and it actually has a big influence on the result.

Need new PyPI release

The current release on PyPI doesn't work for Python 3 due to a single syntax error (already fixed by a merged pull request) that raises a runtime error. Since it's been 3 years, it's time to release another version with all the bug fixes so far. Thanks.

Clutter files

A .idea folder was added to the root directory by an insufficiently reviewed merge. Remove it and check the whole merge that caused it.

IntensityRangeStandardization produces negative intensity values

I was using MICCAI BRATS 2015 training dataset. I took a list of images and their respective masks.

import numpy as np
import SimpleITK as sitk
from medpy.filter import IntensityRangeStandardization

# f = list of filenames for the t2 sequence
im_ar = [sitk.GetArrayFromImage(sitk.ReadImage(i)) for i in f]
im_ar = np.array(im_ar)
im_msk = im_ar > 0
n = IntensityRangeStandardization()
# train and transform
r, out = n.train_transform([i[m] for i, m in zip(im_ar, im_msk)])
for i, m, o in zip(im_ar, im_msk, out):
    i[m] = o

When I checked some generic output o:

[array([-276.6272986 , -264.94980174, -266.89605121, ..., -266.89605121,
        -272.73479964, -272.73479964]),
 [-256.55019873, -145.10180764,  -40.20920427, ...,   18.69724008,
         -53.32077969, -276.21756186],
 array([-242.07590339, -180.49569523, -160.90199263, ..., -256.07140525,
        -261.66960599, -278.46420821]),
 array([-263.61638514, -270.51243865, -261.89237176, ..., -138.62541528,
        -142.93544873, -268.78842527]),
 array([-289.90758914, -190.99232162, -271.58994701, ..., -125.04880994,
          -2.28223993, -125.04880994]),
 array([-271.58994701, -244.11348381, -264.26289016, ..., -262.43112594,
        -247.77701223, -267.92641858]),
 array([-255.26526434, -237.32862916, -247.29342648, ..., -251.27934541,
        -217.39903452, -257.2582238 ]),
 array([-266.97390119, -261.99150253, -266.97390119, ..., -247.04430655,
        -197.22031995, -222.13231325]),
 array([-267.50772962, -262.16944534, -272.8460139 , ..., -239.03688013,
        -247.9340206 , -265.72830153]),
 array([-260.52822525, -241.17910424, -233.92318386, ..., -262.94686538,
        -279.87734627, -282.29598639]),
 array([-268.32900412, -265.59142244, -264.2226316 , ...,   22.02947516,
        -257.3786774 , -265.59142244]),
 array([-278.91700386, -267.92641858, -267.92641858, ..., -289.90758914,
        -286.24406072, -286.24406072]),
 array([-273.54950873, -276.44625213, -279.34299554, ...,  180.66066588,
         121.10979817, -288.03322576]),
 array([-281.79748804, -201.92697517, -146.77828771, ..., -281.79748804,
        -283.69916692, -283.69916692]),
 array([-274.22631793, -253.27230487, -248.61585753, ..., -269.56987059,
        -269.56987059, -255.60052855]),
 array([-263.2371022 , -258.25470354, -248.28990621, ..., -243.30750755,
        -263.2371022 , -248.28990621]),
 array([-260.39001725, -253.27230487, -256.83116106, ..., -260.39001725,
        -256.83116106, -260.39001725]),
 array([-268.21950086, -273.20189952, -273.20189952, ..., -263.2371022 ,
        -270.71070019, -270.71070019]),
 array([-272.64829967, -220.05631381, -214.52031529, ..., -275.41629892,
        -275.41629892, -278.18429818]),
 array([-271.95629985, -271.95629985, -271.95629985, ..., -271.95629985,
        -268.84230069, -271.95629985])]

Remove spaces in `extras_require`

Right now the extras_require is:

          extras_require = {
            'Additional image formats' : ["itk >= 3.16.0"]
          },

which contains a space in the key, something I have never seen before. This causes problems when trying to pip install. Perhaps there is a way to escape it, but this seems non-intuitive:

$ pip install "medpy[Additional image formats]"
...
pip._vendor.packaging.requirements.InvalidRequirement: Invalid requirement, parse error at "'[Additio'"

I suggest offering a string without a space.
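
A possible shape for that, mirroring the fragment above (the key name itk is only an illustrative suggestion, not the project's actual choice):

          extras_require = {
            'itk' : ["itk >= 3.16.0"]
          },

which could then be installed via pip install "medpy[itk]".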

How to obtain the normalized images?

Thanks for sharing your great work!
I have 20 training subjects with image and label sizes of 112x128x256. I stored the images and their labels in lists imgs and masks. After performing the histogram normalization as a preprocessing step, I have:

irs = IntensityRangeStandardization()
trained_model, transformed_images = irs.train_transform([i[m] for i, m in zip(imgs, masks)],
                                                            surpress_mapping_check="ignore")

I want to extract the results for all 20 images after transforming. How can I obtain them? I tried the code below, but it only gives me the last image's result (the 20th image):

 for image, m, o in zip(imgs, masks, transformed_images):
     image[m] = o
print (image.shape)  #112x128x256

Thanks!
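
For what it's worth, a minimal sketch of keeping all transformed volumes instead of only printing the last loop variable, assuming imgs, masks and transformed_images as defined above:

# write each transformed intensity vector back into a copy of its image
# and collect all of them, rather than relying on the last loop variable
normalized = []
for image, m, o in zip(imgs, masks, transformed_images):
    out = image.copy()
    out[m] = o
    normalized.append(out)

print(len(normalized))       # 20
print(normalized[0].shape)   # (112, 128, 256)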

Python 3 issue: GLIBCXX not found

I installed MedPy in development mode. Also, libboost-python-dev is built with no error.

In python, I can do

from medpy.io import load

But when I try from medpy.graphcut import graphcut_from_voxels

I got

ImportError: /anaconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.58.0)

Do you have any idea about this error?

Thank you!

nosetests and python version in virtual environements

If nosetests was installed globally, it uses the globally set Python version instead of the local one. This can be circumvented by running the tests with python3 -m "nose".

Adapt the README in tests/ to replace the old recommendation to call the nosetests command with python3 -m "nose".

MedPy GraphCut support broken for Mac / Home Brew users

When installing MedPy on Mac OS, the GraphCut support fails due to an issue linking to the Boost Python library.

After some investigation, it turns out that the resolution to #20 caused the regression due to how the libraries are named on the Mac.

I have submitted a patch in #64.

Error importing medpy.io's load

I have installed using pip3:

from medpy.io import load
Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python3.4/dist-packages/medpy/io/init.py", line 59, in
from .load import load
File "/usr/local/lib/python3.4/dist-packages/medpy/io/load.py", line 28, in
from . import header
File "/usr/local/lib/python3.4/dist-packages/medpy/io/header.py", line 27, in
from ..core import Logger
File "/usr/local/lib/python3.4/dist-packages/medpy/core/init.py", line 55, in
from .logger import Logger
File "/usr/local/lib/python3.4/dist-packages/medpy/core/logger.py", line 94
raise RuntimeError, 'Only one instance of Logger is allowed!'

Image loading fails with ITK 4.X

For a description, see comments in #6 .

Error:
medpy.core.exceptions.ImageLoadingError: Failes to load image 1/1_Prim.mha as Itk/Vtk MetaImage (.mhd, .mha/.raw). Reason signaled by third-party module: 'LazyITKModule' object has no attribute 'AnalyzeImageIO'.

medpy.filter.utilities.pad() creates 0 line when mode = 'mirror'

When the pad function is called with mode='mirror' and a size larger than image.shape, the returned array starts with rows of zeros.

Example:

In [1]: import numpy

In [2]: from medpy.filter.utilities import pad

In [3]: test = numpy.ones([3,3])

In [4]: pad(test, size=3,mode='mirror')
Out[4]: 
array([[ 1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.]])

In [5]: pad(test, size=7,mode='mirror')
Out[5]: 
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.]])

In [6]: pad(test, size=9,mode='mirror')
Out[6]: 
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
       [ 0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.]])
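
For comparison, numpy's own reflective padding keeps the expected values for the same input, which suggests the zero rows come from medpy's pad rather than from the reflection concept itself (a minimal sketch, not a drop-in replacement for medpy.filter.utilities.pad):

import numpy

test = numpy.ones([3, 3])
# pad by two rows/columns on each side by reflection; no zero rows appear
print(numpy.pad(test, pad_width=2, mode='reflect'))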

graphcut unittests

tests/graphcut_/energy_label.py and cut.py are still trying to call the graph-cut functions according to the old system.

Error importing medpy.filter, medpy.metrics ....

Hi Oskar,

I am using Python 2.7 (Python(x,y)).

import medpy.filter
Traceback (most recent call last):
File "", line 1, in
File "build\bdist.win32\egg\medpy\filter__

File "build\bdist.win32\egg\medpy\filter\bi
File "C:\Python27\lib\site-packages\scipy\n
module>
from .filters import *
File "C:\Python27\lib\site-packages\scipy\n
dule>
from . import _ni_support
ImportError: cannot import name _ni_support

import medpy.metric

Traceback (most recent call last):
File "", line 1, in
File "build\bdist.win32\egg\medpy\metric__init__.py", line 104, in

File "build\bdist.win32\egg\medpy\metric\binary.py", line 25, in
File "C:\Python27\lib\site-packages\scipy\ndimage__init__.py", line 172, in <
module>
from .filters import *
File "C:\Python27\lib\site-packages\scipy\ndimage\filters.py", line 37, in
from scipy.misc import doccer
File "C:\Python27\lib\site-packages\scipy\misc__init__.py", line 47, in
from scipy.special import comb, factorial, factorial2, factorialk
File "C:\Python27\lib\site-packages\scipy\special__init__.py", line 546, in <
module>
from ._ufuncs import *
ImportError: DLL load failed: Die angegebene Prozedur wurde nicht gefunden.

import medpy.features

Traceback (most recent call last):
File "", line 1, in
File "build\bdist.win32\egg\medpy\features__init__.py", line 155, in

File "build\bdist.win32\egg\medpy\features\histogram.py", line 25, in

File "C:\Python27\lib\site-packages\scipy\stats__init__.py", line 334, in
from .stats import *
File "C:\Python27\lib\site-packages\scipy\stats\stats.py", line 181, in
import scipy.special as special
File "C:\Python27\lib\site-packages\scipy\special__init__.py", line 546, in <
module>
from ._ufuncs import *
ImportError: DLL load failed: Die angegebene Prozedur wurde nicht gefunden.

import medpy.graphcut
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named graphcut

import medpy.itkvtk
Traceback (most recent call last):
File "", line 1, in
File "build\bdist.win32\egg\medpy\itkvtk__init__.py", line 71, in

File "build\bdist.win32\egg\medpy\itkvtk\filter__init__.py", line 17, in

File "build\bdist.win32\egg\medpy\itkvtk\filter\image.py", line 24, in <module

# See the README file for information on usage and redistribution.

ImportError: No module named itk

The import of the following module parts works:

import medpy.io
import medpy.core
import medpy.utilities

install_requires required order depends on setuptools-version

In the current ordering, with setuptools 15.00, the installation of scipy fails.

It seems that the packages are installed in the order
medpy
scipy
numpy

But scipy requires numpy and throws an error during install. Hence we are left with medpy and numpy installed, but no scipy.

This might be related to
pypa/pip#2478

This should be fixed soon!

header.get_offset() produces wrong values

It seems that header.get_offset() produces wrong offset values when supplied with a NIfTI header. I assume that the sign of the main diagonal elements of the qform resp. sform matrices has to be taken into account, i.e.

[1 0 0 10]
[0 1 0 10]
[0 0 1 10]

should produce the offset (10, 10, 10), while

[-1 0 0 10]
[ 0 1 0 10]
[ 0 0 1 10]

has to result in (-10, 10, 10).

But I am not sure about it yet.

What about cases where the matrix looks like this:

[-0.5 0.5 0 10]
[ 0 1 0 10]
[ 0 0 1 10]
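
A minimal numpy sketch of the sign rule proposed above for the diagonal case (purely an illustration of the suggestion, not medpy's actual get_offset implementation, and it leaves the non-diagonal case open):

import numpy

def proposed_offset(affine):
    # apply the sign of each main diagonal element to the translation column
    signs = numpy.sign(numpy.diag(affine)[:3])
    return tuple(signs * affine[:3, 3])

affine = numpy.array([[-1., 0., 0., 10.],
                      [ 0., 1., 0., 10.],
                      [ 0., 0., 1., 10.],
                      [ 0., 0., 0.,  1.]])
print(proposed_offset(affine))  # (-10.0, 10.0, 10.0)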

Core dump when saving image using ITK bindings.

Occurs when saving an image as .mhd (therefore probably when using the ITK bindings) at a location where an image of the same name already exists. Correct behaviour would be an error message stating that the target image already exists.

Does not appear when the -f flag is set.

Using nibabel as third party lib, the error did not occur.

Observed while calling:
medpy_intensity_range_standardization.py adc.mhd --load-model model.pkl --save-images=tmp/ -vd

The issue could be reproduced with medpy_convert.py, but this, like most scripts, performs a prior internal check whether the target image exists and is therefore usually not affected by the bug (except in race conditions).

What is 'voxelspacing' ?

I've been seeing this term everywhere. Couldn't figure it out, thought nothing of it.
I'm rewriting the Tamura textural coarseness feature (for my own repo, but I would like to merge it here) and would like to support whatever you understand under the term voxelspacing.

Any ideas ?
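
For context, voxelspacing in medpy's metrics denotes the physical size of a voxel along each axis, so that distances are measured in real-world units (e.g. mm) rather than in voxels. A minimal usage sketch with made-up toy segmentations:

import numpy as np
from medpy.metric.binary import hd

# two toy binary segmentations (values made up for illustration)
result = np.zeros((10, 10, 10), dtype=bool)
reference = np.zeros((10, 10, 10), dtype=bool)
result[2:5, 2:5, 2:5] = True
reference[3:6, 3:6, 3:6] = True

# voxelspacing = physical extent of one voxel per axis, e.g. in mm;
# with (1.0, 1.0, 2.5) a step along the last axis counts as 2.5 mm
print(hd(result, reference, voxelspacing=(1.0, 1.0, 2.5)))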

no module named medpy.graphcut.maxflow

I just finished installing medpy (Release-0.3.0) in my Python library, but I'm running into a few issues.

What works:
medpy's io module (load, header)

What gives me problems:
medpy.graphcut

When I try to import medpy.graphcut, I run into the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/<myusername>/medpy/medpy/graphcut/__init__.py", line 200, in <module>
    from .maxflow import GraphDouble, GraphFloat, GraphInt # this always triggers an error in Eclipse, but is right
ImportError: No module named 'medpy.graphcut.maxflow'

However, there's one major thing wrong with this error: the directory it references is where I cloned the package using git, NOT the package library where the package was supposed to be installed. If I drop the maxflow.py file into that folder (along with the .so file, which it subsequently requests after an error from running the same import again), I get another error saying the following:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/<myusername>/medpy/medpy/graphcut/__init__.py", line 206, in <module>
    import energy_label
ImportError: No module named 'energy_label'

Could you please assist me in this matter?

Thanks!

Allow Passing in Function to Anisotropic Diffusion Filtering

Right now the anisotropic diffusion filtering takes an option {1,2,3}, which limits the set of functions that can be used. I suggest allowing the user to pass in a function.

The signature should be: f(delta, spacing, dim_idx) (see the sketch after the list below).

This will allow:

  1. different conductance param by dimension
  2. other scaling forms for the existing 3 functions
  3. new functions (e.g. Huber)
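
A minimal sketch of what a user-supplied function could look like under the proposed signature (the Huber-style weighting and the threshold parameter are illustrative assumptions, not an existing medpy API):

import numpy as np

def huber_conduction(delta, spacing, dim_idx, threshold=30.0):
    # delta: intensity difference along axis dim_idx; spacing: voxel spacing of that axis
    grad = delta / spacing
    # Huber-style weighting: full conduction near zero, damped beyond the threshold
    return np.where(np.abs(grad) <= threshold,
                    np.ones_like(grad),
                    threshold / np.maximum(np.abs(grad), 1e-12))

print(huber_conduction(np.array([5.0, 80.0]), spacing=1.0, dim_idx=0))  # [1.    0.375]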

Python 3 compatibility for anisotropic_diffusion

I've installed medpy from the master branch; however, I'm still getting a Python 3 incompatibility: the xrange function was removed in Python 3 and replaced by range. For short lists, using range should not make a difference in Python 2 either.

   [...]\src\medpy\medpy\filter\smoothing.py", line 135, in anisotropic_diffusion
    deltas = [numpy.zeros_like(out) for _ in xrange(out.ndim)]
NameError: name 'xrange' is not defined

I can make the changes and open a PR if you're interested, let me know.
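
The offending line in medpy/filter/smoothing.py would then simply read (the same statement from the traceback, with xrange replaced by range):

deltas = [numpy.zeros_like(out) for _ in range(out.ndim)]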

extension file names in python3

Since python 3, the compiled extensions are no longer named by their module name (e.g., maxflow.so), but a system identifier is appended (e.g., maxflow.cpython-35m-x86_64-linux-gnu.so). This in itself does not represent a problem, as the system dynamically loads the correct version. But the fact should be reflected somewhere in the documentation.
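
For reference, the suffix that Python appends can be queried at runtime (a minimal sketch):

import sysconfig

# e.g. '.cpython-35m-x86_64-linux-gnu.so' on CPython 3.5 under Linux
print(sysconfig.get_config_var('EXT_SUFFIX'))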

Import issues with Python 3

I see that there is an old branch dedicated to Python3 support. Do you still plan to support Py3 in the future?

I can contribute to it if you need a hand. If you set up a Windows-based CI like AppVeyor, it should be pretty straightforward to achieve.

EDIT: First thing to fix is import issues that make testing impossible for now.

Tests for histogram metrics

First of all, great work in producing a nice and useful package. I am trying to use it to build one of my packages: hiwenet.

While trying to write unit tests for my own package, I looked into the tests for this package, and tests for medpy.metric.histogram seem to be missing. Not sure if they were misplaced or you are yet to get to them. So this is just to learn more about the amount of testing that has already been done on the functions implemented in medpy.metric.histogram.

I forked this package and am going to try writing a few tests myself, and will send a PR when I am done. Let me know if you have any good resources (tests in other packages, great implementations elsewhere, etc.).

'LazyITKModule' object has no attribute 'AnalyzeImageIO'

from medpy.io import load
import SimpleITK
import vtk

image_data, image_header = load('/Users/N01-T2.mha')
print image_data.shape

Traceback (most recent call last):
File "", line 1, in
File "/Users/wuzhenglin/anaconda/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 880, in runfile
execfile(filename, namespace)
File "/Users/wuzhenglin/anaconda/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
builtins.execfile(filename, *where)
File "/Users/wuzhenglin/Python_nice/SAL_LUNG/test.py", line 140, in
changeage()
File "/Users/wuzhenglin/Python_nice/SAL_LUNG/test.py", line 42, in changeage
image_data, image_header = load('/Users/wuzhenglin/Python_nice/SAL_BRAIN/brain_healthy_dataset/Normal001-T2.mha')
File "/Users/wuzhenglin/anaconda/lib/python2.7/site-packages/medpy/io/load.py", line 201, in load
raise err
medpy.core.exceptions.ImageLoadingError: Failes to load image /Users/wuzhenglin/Python_nice/SAL_BRAIN/brain_healthy_dataset/Normal001-T2.mha as
Itk/Vtk MetaImage (.mhd, .mha/.raw). Reason signaled by third-party module:
'LazyITKModule' object has no attribute 'AnalyzeImageIO'

Module has no attribute swig

I want to save a numpy array as a .mha or .mhd file.

Here is the dummy code I tried:

data, header = medpy.io.load('/home/sumathipalaya/Desktop/ERCNoERCNoBG/T2WI/39812763482/0.dcm')
medpy.io.save(data, 'xxx.mhd', header)

Gives this error:

medpy.io.save(data, 'xxx.mhd')
Traceback (most recent call last):

File "", line 1, in
medpy.io.save(data, 'xxx.mhd')

File "/home/sumathipalaya/anaconda2/lib/python2.7/site-packages/medpy/io/save.py", line 192, in save
raise ImageSavingError('Failed to save image {} as type {}. Reason signaled by third-party module: {}'.format(filename, type_to_string[image_type], e))

ImageSavingError: Failed to save image xxx.mhd as type Itk/Vtk MetaImage (.mhd, .mha/.raw). Reason signaled by third-party module: 'module' object has no attribute 'swig'

I am using medpy version 0.3.0, installed via pip on Python 2; no errors or warnings during installation. The dependencies were installed using Anaconda2, as I use this package manager. What am I doing wrong?

As an aside, is the header argument to medpy.io.save truly optional? If so, what does medpy impute for the spacing, origin, etc.?

TypeError: numpy boolean subtract, the `-` operator, is deprecated, use the bitwise_xor, the `^` operator, or the logical_xor function instead.

result_border = result - binary_erosion(result, structure=footprint, iterations=1)

when I used these two pieces of code:
evaluate.py : https://paste.ubuntu.com/p/87yBYMhctC/
surface.py: https://paste.ubuntu.com/p/Q72rb2PH7j/

I got an error:
Traceback (most recent call last):
  File "evaluate.py", line 79, in <module>
    outpath='117_baseline.csv')
  File "evaluate.py", line 42, in evaluate
    loaded_label.header.get_zooms()[:3])
  File "evaluate.py", line 26, in get_scores
    volscores['msd'] = metric.hd(label, pred, voxelspacing=vxlspacing)
  File "/xwd/envs/python27/lib/python2.7/site-packages/medpy/metric/binary.py", line 348, in hd
    hd1 = __surface_distances(result, reference, voxelspacing, connectivity).max()
  File "/xwd/envs/python27/lib/python2.7/site-packages/medpy/metric/binary.py", line 1169, in __surface_distances
    result_border = result - binary_erosion(result, structure=footprint, iterations=1)
TypeError: numpy boolean subtract, the `-` operator, is deprecated, use the bitwise_xor, the `^` operator, or the logical_xor function instead.

And I noticed they use medpy. The bug occurs in these two lines:

result_border = result - binary_erosion(result, structure=footprint, iterations=1)
reference_border = reference - binary_erosion(reference, structure=footprint, iterations=1)

I changed these two lines to:

result_border = result ^ binary_erosion(result, structure=footprint, iterations=1)
reference_border = reference ^ binary_erosion(reference, structure=footprint, iterations=1)

and it works. So please tell me why. Thanks.

3D Patch extraction from 3D input

I want to extract 3D patches with shape 32x32x32 from a 3D input. Below is an example that loads images from a directory and gets the shapes of the image axes. Please let me know how to extract 3D patch samples from this input.

from medpy.io import load
import numpy as np
import os
import h5py

data_path = ../....
for i in range(10):

    subject_name = 'subject-%d-' % i
    f = os.path.join(data_path, subject_name + 'C.hdr')
    img, header = load(f)
    inputs = img.astype(np.float32)
      
    A = inputs.shape[0]     #142
    B = inputs.shape[1]     # 176
    C = inputs.shape[2]     # 181
    D = np.arange(A*B*C).reshape(A,B,C)
    print (D.shape)

How could I create patches of size 32x32x32 from this input? Please reply. Thanks.
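
A minimal sketch of one way to cut non-overlapping 32x32x32 patches out of such a volume with plain numpy slicing (the file name is a placeholder and this is not a built-in medpy function):

import numpy as np
from medpy.io import load

img, header = load('subject-0-C.hdr')  # placeholder file name
volume = img.astype(np.float32)

patch_size = 32
patches = []
# step over the volume in non-overlapping 32-voxel blocks, dropping incomplete edges
for x in range(0, volume.shape[0] - patch_size + 1, patch_size):
    for y in range(0, volume.shape[1] - patch_size + 1, patch_size):
        for z in range(0, volume.shape[2] - patch_size + 1, patch_size):
            patches.append(volume[x:x + patch_size, y:y + patch_size, z:z + patch_size])

patches = np.stack(patches)  # shape: (n_patches, 32, 32, 32)
print(patches.shape)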
