seung-lab / connected-components-3d

Connected components on discrete and continuous multilabel 3D & 2D images. Handles 26, 18, and 6 connected variants; periodic boundaries (4, 8, & 6)

License: GNU Lesser General Public License v3.0

Makefile 0.14% C++ 40.27% Python 30.16% Dockerfile 4.09% Shell 0.44% Cython 24.90%
connected-components python numpy biomedical-image-processing cpp union-find image-processing cython 3d decision-tree

connected-components-3d's Issues

Additional metrics support

Hi,
I am benefiting from this amazing library in my deep learning research, namely detecting and measuring liver tumours in CT scans. It would be great if we could have features allowing unitless longest dimension and surface area metrics to be computed as well. Getting volume is easy since the functions already return voxel counts for labels.
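
For the volume request above, here is a minimal sketch of deriving physical volumes from per-label voxel counts. It assumes cc3d.statistics (as used in other issues below) and an isotropic voxel size, neither of which appears in the original request.

import numpy as np
import cc3d

# Toy segmentation with one "lesion".
seg = np.zeros((64, 64, 64), dtype=np.uint8)
seg[10:20, 10:20, 10:20] = 1

labels, N = cc3d.connected_components(seg, return_N=True)
stats = cc3d.statistics(labels)

res = 0.5  # assumed isotropic voxel edge length in mm
volumes_mm3 = stats['voxel_counts'] * (res ** 3)  # index 0 is the background label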

1D Array of 4 Elements Incorrect

An error occurs when using an array of 4 elements

import numpy as np
import cc3d
labels, seg_count = cc3d.connected_components(np.array([[[1,1,1,1]]]), return_N=True)
labels

output:

[1,0,0,0]

expected output:

[1,1,1,1]

IndexError is generated when using cc3d.connected_components

The following index error is generated when using connected_components:

IndexError: index 3 is out of bounds for axis 0 with size 3
Exception ignored in: 'cc3d.epl_special_row'
Traceback (most recent call last):
File "****", line 9, in <module>
new = cc3d.connected_components(test)
IndexError: index 3 is out of bounds for axis 0 with size 3

This can be reproduced with the following code:

import cc3d
import numpy as np

c = 5

test = np.zeros((3, 4, 4,), dtype='int')
test[0, 0, 1] = c
test[0, 0, 2] = c
new = cc3d.connected_components(test)
print(np.count_nonzero(new) == 0)

In addition, the array returned is all zeros, which I don't think should be the case. Surprisingly, if we initialize to ones, everything runs as expected, as in:

test = np.ones((3, 4, 4,), dtype='int')

I may be missing something, but I don't think this is expected behavior.

About the largest_k function

Hello, William,
When I use largest_k, I get some errors. This is the code:

import numpy as np
import cc3d
import cv2
labels_in = np.ones((512, 512, 512), dtype=np.int32)
labels_out, N = cc3d.largest_k(
labels_in, k=10,
connectivity=26,
return_N=True,
)

and I got: TypeError: Argument 'delta' has incorrect type (expected int, got float)
What can I do to resolve this error?
Looking forward to your answer.
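
Not part of the original report, but a hedged workaround: since largest_k accepts a delta keyword (used explicitly in a later issue below), passing it as a Python int may sidestep the type mismatch.

import numpy as np
import cc3d

labels_in = np.ones((128, 128, 128), dtype=np.int32)

# Passing delta explicitly as an int (0 keeps the default exact-match behavior)
# may avoid the "expected int, got float" TypeError reported above.
labels_out, N = cc3d.largest_k(
    labels_in, k=10,
    connectivity=26,
    delta=0,
    return_N=True,
)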

largest_k fails for transposed arrays

Hi!
I ran into a problem when using largest_k with a transposed array: the result is nonsense. While experimenting with other functions (I tried connected_components) I did not see this problem. My cc3d version is 3.12.1.

My minimal code:

import numpy as np
import cc3d
import matplotlib.pyplot as plt

strange = np.load('strange.npy')

plt.imshow(cc3d.largest_k(strange.transpose(2, 1, 0), 1)[100])
plt.imshow(cc3d.largest_k(strange, 1)[:, :, 100])
plt.imshow(cc3d.connected_components(strange)[:, :, 100])

max_labels questions

I'm a bit confused by the max_labels argument. If I run a connected_components call without the argument and then do np.max(labels_out), I should get the number of components (in versions after the recent 1.2.0 release). However, if I now use this number, with some margin, to set max_labels, the procedure fails with an exception:

Connected Components Error: Label 60000 cannot be mapped to union-find array of length 60000.
terminate called after throwing an instance of 'char const*'

It seems that internally, the union-find algorithm requires a higher number, but it is not clear to me how to estimate this number.
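
A minimal sketch of the pattern described above. The assumption (not confirmed in this thread) is that max_labels must cover provisional labels created during scanning, which can exceed the final component count, so a generous margin is safer than a tight one.

import numpy as np
import cc3d

img = np.random.randint(0, 3, size=(128, 128, 128), dtype=np.uint8)

# First pass without max_labels to learn the final component count.
labels_out = cc3d.connected_components(img)
n_final = int(np.max(labels_out))

# A tight bound such as n_final + 1 can still trigger the union-find error above,
# so leave substantial headroom.
labels_out = cc3d.connected_components(img, max_labels=4 * n_final)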

It would be great to find a way to reduce the peak memory footprint of this very nice package. :)

cc3d.statistics['bounding_boxes'] contains floats instead of ints as slice positions

Hi, the slices returned by the statistics method of cc3d contain float entries but should be ints. Here is a minimal reproducible example:

import numpy as np
import cc3d

tt = np.zeros((3, 3, 3))
tt[0, 1] = 2
tt[1, 1] = 2

cc_dusted, N = cc3d.connected_components(tt, return_N=True)
stats_dusted = cc3d.statistics(cc_dusted)
bboxes = stats_dusted['bounding_boxes']

print(bboxes)

The above snippet returns:

[(slice(0, 541.0, None), slice(0, 530.0, None), slice(0, 530.0, None)), (slice(116, 272.0, None), slice(125, 215.0, None), slice(247, 336.0, None)), (slice(130, 270.0, None), slice(327, 421.0, None), slice(230, 344.0, None))]

As you can see, there are float entries.

Not Compiling on Windows 10

Hi, thank you for providing this essential package, but I couldn't install it on my machine. I am using Python 3.6 through Anaconda on Windows 10. While installing the package, it says: "Could not find a version that satisfies the requirement connected-component-3d (from versions: )
No matching distribution found for connected-component-3d". Does this package not support Windows OS? Please let me know about this issue.

Thank you

Not actually 26 connected?

I may have made an oversight when I restricted the forward mask to 9 neighbors. It seems I neglected to add loc + 1 - sx to the mask.

This is being addressed in #8, which will be faster anyway...
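
For illustration only (not taken from the repository source): a 26-connected raster scan should consult 13 previously visited neighbors, not 9; the sketch below enumerates them and marks the offset named above.

# Hypothetical (dz, dy, dx) offsets of the full forward mask for 26-connectivity,
# assuming a raster scan where z is the slowest axis and x the fastest.
forward_mask = (
    [(-1, dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # 9 in the previous z-slice
    + [(0, -1, dx) for dx in (-1, 0, 1)]                      # 3 in the previous row
    + [(0, 0, -1)]                                            # 1 previous voxel in the row
)
assert len(forward_mask) == 13

# The neglected neighbor, loc + 1 - sx, corresponds to (0, -1, +1):
# one row back and one column forward within the current slice.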

Better Algorithm Mk. II

Discussed this some in #6

There are several faster algorithms than Wu et al's 2005 decision tree.

  • He et al 2007 describes a way of using a simpler data structure than union-find, but it requires 3x as much memory (3 arrays) for what is probably a 10-20% improvement. They don't compare with Wu et al in their paper.
  • Chang's contour tracing algorithm is very fast and probably shouldn't be ignored. It might be the best for our number of labels (see figure below from Grana).
  • Grana's 2009 paper on 2x2 block based evaluation is promising. It's complex in 2D, and extending it to 3D requires a very complicated tree, though if the chip can handle it efficiently, there are a lot of efficiencies to be gained, maybe even more than in 2D.

Figure from Grana, Borghesani, Cucchiara. "FAST BLOCK BASED CONNECTED COMPONENTS LABELING". IEEE 2009. doi: 10.1109/ICIP.2009.5413731

warning from numpy

This is not a big issue, but it might create problems in the future.
I just want to leave a note here; nothing urgent to fix.

Processing connected-components-3d-3.2.0.tar.gz
Writing /var/folders/gc/b6s140td6xdbfxdrsmkjtww80001lg/T/easy_install-wc89qksb/connected-components-3d-3.2.0/setup.cfg
Running connected-components-3d-3.2.0/setup.py -q bdist_egg --dist-dir /var/folders/gc/b6s140td6xdbfxdrsmkjtww80001lg/T/easy_install-wc89qksb/connected-components-3d-3.2.0/egg-dist-tmp-c5t0i0lp
In file included from cc3d.cpp:645:
In file included from /Users/jwu/opt/anaconda3/envs/wasp/lib/python3.7/site-packages/numpy/core/include/numpy/arrayobject.h:4:
In file included from /Users/jwu/opt/anaconda3/envs/wasp/lib/python3.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:12:
In file included from /Users/jwu/opt/anaconda3/envs/wasp/lib/python3.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1944:
/Users/jwu/opt/anaconda3/envs/wasp/lib/python3.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: "Using deprecated NumPy API, disable it with "
      "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings]
#warning "Using deprecated NumPy API, disable it with " \
 ^
1 warning generated.

Cannot find reference 'dust' in 'cc3d.py'

I installed cc3d on Windows 10 for Python 3.6.12:

Requirement already satisfied: connected-components-3d in c:\users\max\miniconda3\lib\site-packages (3.10.3)
Requirement already satisfied: numpy in c:\users\max\miniconda3\lib\site-packages (from connected-components-3d) (1.18.5)

When I try to use dust() as shown in the Readme, I get the following error:
AttributeError: module 'cc3d' has no attribute 'dust'

Am I missing something?

What's the meaning of index list returned?

Hi, thank you for your great contribution first!
But I wonder about the label list that the function connected_components(data) returns: why is the list not contiguous, such as [0, 1, 2, 3]? In fact it returns a list like [0, 210, 213, 220].

Also, for the multi-label case, such as [0, 1, 2], it returns connected component labels like [0, 210, 213, 220]. How can we tell which connected components belong to which original label?

Lastly, does this repository have any function to determine whether connected components are neighbors?

Apply mask prior to segmentation?

Hi, I'm a new user of the package and was wondering if there is functionality to apply a mask before running connected components, or if masks must be applied after processing the whole volume? If possible, this could be helpful to speed up processing for large data volumes (1000s of pixels on edge) where the regions of interest are a small subset of the sample volume.

I don't know the details of your implementation, so could envision this actually slowing down the process instead of speeding it up depending on the specifics of the algorithm, especially if this means you need to keep a whole duplicate array for the mask in memory at the same time.
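
As far as this thread shows there is no dedicated mask argument, but a hedged workaround is to zero out everything outside the mask before labeling, since zero-valued voxels are treated as background and never labeled.

import numpy as np
import cc3d

img = np.random.randint(0, 4, size=(256, 256, 256), dtype=np.uint8)

# Assumed region of interest; everything outside it becomes background (0).
mask = np.zeros(img.shape, dtype=bool)
mask[64:192, 64:192, 64:192] = True

labels = cc3d.connected_components(np.where(mask, img, 0))

Note that this allocates an extra masked array, which echoes the memory concern raised above.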

How to access/extract the connected components using this package ?

Thank you for making this package Windows compatible. I segmented the region of interest from lung CT scans. My segmented output has size 512×512×130, where 130 is the total number of cross-sectional images in the lung CT scan and 512×512 is the size of each cross-sectional image.
I use the following code to find the connected components (connected tissue clusters) across the cross sections of the CT scan:
import numpy as np
from cc3d import connected_components
nod_arr=np.load('nodule_arr.npy')
nod_3d = connected_components(nod_arr)

It doesn't throw any error, but I am not able to extract any label information from nod_3d. How can I extract the tissue clusters that are connected across the cross sections of the CT scan? For example, if one tissue cluster exists from slice 40 to 43 (4 cross-sectional images), I need to extract that tissue cluster separately.
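
A minimal sketch of two common ways to pull a single cluster out of the labeled volume, using only calls that appear elsewhere in these issues (boolean masking per label id, and cc3d.each):

import numpy as np
import cc3d

nod_arr = np.load('nodule_arr.npy')
nod_3d, N = cc3d.connected_components(nod_arr, return_N=True)

# Option 1: boolean mask for one component id.
component_3 = (nod_3d == 3)                  # voxels of component 3 only
zs = np.unique(np.nonzero(component_3)[2])   # the z-slices this cluster spans

# Option 2: iterate over every component as a read-only binary image.
for label, image in cc3d.each(nod_3d, binary=True, in_place=True):
    pass  # inspect `image` for the cluster carrying this `label`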

Reduce Peak Memory Consumption

This package is currently several times more memory intensive than SciPy. There are a few avenues for reducing memory consumption:

  • Strip out union-by-size from union-find. Not sure what the theoretical justification is (union-by-size is supposed to be faster!), but removing it seems to be more performant and lower memory. It's possible that union-by-size is more useful for arbitrary graphs than for the structured graphs implied by images.
  • Allow use of uint16_t or uint8_t for output images. It's pretty rare that more than 65k labels are used in typical images. We would need a good way to estimate when this is okay, or allow users to specify uint16.
  • As in #11, we could use std::unordered_map in union-find, which for images that sparsely utilize the union-find array would result in large memory reductions. However, for images that densely use it, it would use more. It also supports labels larger than the maximum index of the image. However, it is slower than the array implementation, so we should allow the user to choose which implementation is right for them. (Whenever I try this it's embarrassingly slow; this one would be too slow.)
  • Allocate the output array in the pyx file and pass it by reference to avoid a copy.
  • Is it possible to do this in-place? Might be restricted to data types uint16 or bigger. (No, you need to be able to check the original labels.)
  • Allow binary images as input and represent them as bit-packed using vector<bool>.
  • Limit memory used for binary images based on the maximum possible number of prospective labels.
  • Estimate the number of provisional labels before allocating.

Pictured Example

Will,
Thank you for this amazing work. I am new to Python and the idea of CCL. The 'python use' code snippet helps me understand how the code is to be used, but I would appreciate it if you could provide an example of a 3D object with multiple labels being run through your code.
The reason I ask is that I am currently working on a segmentation process for 3D objects, and my binary matrix is 64×64×64, obtained using binvox. I tried to read the object and then pass the 64×64×64 binary matrix into your code, but I just received 1 output label. I want to verify whether I am doing something wrong and whether there is more pre-processing that I need to do.
Also, the region adjacency graph is not working; there seems to be no output at all.
With so many issues that I am facing, I am sure there is something I am doing wrong. Your help would be appreciated.
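
Not from the original thread, but a small worked example of a multi-label 3D volume may help. Note that a binary input in which everything touches collapses into a single component, which could explain the single output label reported above.

import numpy as np
import cc3d

vol = np.zeros((64, 64, 64), dtype=np.uint8)
vol[5:15, 5:15, 5:15] = 1     # one blob
vol[30:40, 30:40, 30:40] = 1  # a second, disconnected blob with the same value
vol[50:60, 5:15, 5:15] = 2    # a blob with a different label value

labels_out, N = cc3d.connected_components(vol, connectivity=26, return_N=True)
print(N)  # expected: 3 components, even though the input only uses values 1 and 2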

Add Remove Dust Function

This is probably one of the most common uses of cc3d and it's easy to screw it up and make it run slow.

Probably need to support three modes (see the usage sketch after this list):

  • remove components with fewer than this number of voxels
  • remove components smaller than this percentage of the largest component
  • remove components smaller than this percentage of the image size (can be done using option 1 and some math)
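
For reference, the voxel-count mode (first bullet) matches how cc3d.dust is invoked in other issues in this list; a minimal usage sketch under that assumption:

import numpy as np
import cc3d

labels = np.random.randint(0, 2, size=(128, 128, 128), dtype=np.uint8)

# Remove connected components smaller than 100 voxels (mode 1 above).
dusted = cc3d.dust(labels, threshold=100, connectivity=26, in_place=False)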

the limit of union-find array is 65535?

I have a big dataset (1024×1024×1024) to process with connected-components-3d and got the error: Label 65535 cannot be mapped to union-find array of length 65535.
Is this because of a limit on the union-find array?
And will you add a feature to remove the union-find limit?

Does cc3d also work with memmory-mapped numpy arrays and array-like data?

Hey,

Thanks for this really cool package. By now, I am using it quite frequently and it is simply awesome!

I often work with larger-than-memory 3D images and the only solution is often to memory-map them as a numpy/zarr array in order to process them.

Is cc3d capable of working with memory-mapped numpy arrays or even array-like data (Zarr, Dask, Tensor, Xarray, ...)? Or will it simply throw an exception or convert it internally to an in-memory numpy array?

If it is not possible are you aware of other libraries or approaches that could perform operations such as connected component analysis on larger-than-memory data (either memory-mapped or via patchification)?

Best,
Karol

14 Connected

I'm not sure who would use this, but 14 connected is faces + corners but no edges. Someone mentioned it to me in conversation, but in such a way that it sounded like it's something people use. It was unclear to me why it would be useful.

Region Graph Function

In #13 it was requested that we have a way to return the region graph of a labeled volume. This is a pretty good idea and is probably useful in this lab and in others.

Question on comparing individual lesions between two masks based on the cc3d.statistics output.

Hello,

First of all thank you for cc3d, I am very new to the field and I found it much easier to use compared to other implementations of connected components for 3D images.

I used the 'stats' function to get the voxel sizes of individual lesions and their bounding boxes from a mask, which if I understand correctly represent the position of each lesion in the mask.

What I would like to do is count how many lesions in the ground truth my model correctly identified, regardless of whether their volumes match (or whether they are precisely delineated). I guess this would be done by comparing the bounding boxes between the mask my model predicted and the ground truth mask. Is there a straightforward way to do this using cc3d or another package?

Best and thank you for your time!
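
Not a built-in cc3d feature as far as this thread shows, but a hedged sketch of the bounding-box comparison described above, using the (slice, slice, slice) tuples that cc3d.statistics returns. The file names are placeholders.

import numpy as np
import cc3d

def boxes_overlap(box_a, box_b):
    # True if two (slice, slice, slice) bounding boxes intersect.
    return all(a.start < b.stop and b.start < a.stop for a, b in zip(box_a, box_b))

pred = np.load('prediction_mask.npy')      # placeholder file
truth = np.load('ground_truth_mask.npy')   # placeholder file

pred_stats = cc3d.statistics(cc3d.connected_components(pred))
truth_stats = cc3d.statistics(cc3d.connected_components(truth))

# Count ground-truth lesions whose bounding box is hit by any predicted lesion.
# Index 0 is the background label, so skip it on both sides.
hits = sum(
    any(boxes_overlap(t_box, p_box) for p_box in pred_stats['bounding_boxes'][1:])
    for t_box in truth_stats['bounding_boxes'][1:]
)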

Visualization

Hello,
I am trying to use your package, and I want to visualize the 3D plot. Here is a sample from my code:

import numpy as np
import matplotlib.pyplot as plt
import skimage.io

# 3D Plot
fig = plt.figure()
ax = plt.axes(projection="3d")

depth_image = skimage.io.imread("/host/datasets/dataset1/depth_20201030T171046.png")
image = depth_image
# filter out points that are too far
idx = np.where((image != 0) & (image <= 230))

# visualize
x_points = idx[1]
y_points = 10 * image[idx]
z_points = -idx[0]
col = np.arange(30)  # unused

ax.scatter3D(x_points, y_points, z_points, c=z_points, cmap='viridis')
plt.show()

Currently what I have is a depth image, with a separate RGB image.
I want to do some 3D clustering, and I have been reading a lot until I found your repo for 3D connected components.

Is there a way for me to visualize the results from the code you have in your main README?

Applying Dust and largest_k dtype output option

Hi,
I apply dust before largest_k; is this the right order? Or, performance-wise, should largest_k be applied first?
My input is a boolean array.

Do you consider casting the largest_k output to the relevant dtype based on the value of k?
If k in largest_k is less than 65535, there is no need for label_out to be uint32; uint16 will be sufficient. And for k < 255, label_out can be uint8.
Can this be considered to reduce memory requirements?

Dimitris
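
A minimal downcasting sketch following the suggestion above; it is not a cc3d option (as far as this thread shows) but an explicit cast after the fact, keyed off the largest surviving label rather than k to stay safe.

import numpy as np
import cc3d

binary = np.random.rand(128, 128, 128) > 0.9

labels_out, N = cc3d.largest_k(binary, k=10, connectivity=26, return_N=True)

# At most k labels survive, so a narrower dtype is usually enough.
max_label = int(labels_out.max())
if max_label < np.iinfo(np.uint8).max:
    labels_out = labels_out.astype(np.uint8)
elif max_label < np.iinfo(np.uint16).max:
    labels_out = labels_out.astype(np.uint16)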

6-connected for DNS

Is it possible to calculate 6-connectivity, or 18 instead of 26? A bit of a departure from brain imaging, but this feature would be very useful for finite volume solvers.
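
For reference, both variants are already exposed through the connectivity argument used throughout these issues (the project description lists 26, 18, and 6); a minimal sketch:

import numpy as np
import cc3d

labels_in = np.random.randint(0, 2, size=(64, 64, 64), dtype=np.uint8)

labels_6 = cc3d.connected_components(labels_in, connectivity=6)    # faces only
labels_18 = cc3d.connected_components(labels_in, connectivity=18)  # faces + edges
labels_26 = cc3d.connected_components(labels_in, connectivity=26)  # faces + edges + corners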

voxel_connectivity_graph and contacts can not be applied in 2D label

Nice job! I want to find the neighborhood of each label in a 2D segmentation, and I think contacts is the function for that. However, I find that these functions cannot be applied to a 2D segmentation and report "TypeError: No matching signature found".

Could you please help fix this bug?

Statistics output

In this example, I expect just one centroid, near [149, 149, 149]. Could you let me know where the [64, 64, 64] comes from? Also, why are the pixel indices (~149) not the same in all three dimensions with this input?

import numpy as np
import cc3d

labels_in = np.zeros((512, 512, 512), dtype=np.int32) 
labels_in[100:200, 100:200, 100:200] = 1
labels_out, N = cc3d.largest_k(labels_in, k = 1, connectivity = 26, delta=0, return_N = True)

a = cc3d.statistics(labels_out)
a['centroids']

This is what I get:

array([[ 64.480415,  64.480415,  64.480415],
       [149.26369 , 149.49011 , 149.0991  ]], dtype=float32)

Thank you for sharing this software.

List of label indices?

Is there a way to return a list of tuples/arrays that contains the indices of each label within the 3D array? Other approaches I've implemented using numpy or pandas can get quite memory intensive and are slow even in vectorized form. I was hoping there might already be such a list accessible through this module, created while the labelling algorithm is being executed. Your cc3d labelling has no issue labelling a volume on the order of 3.5 billion voxels with ~40 million unique labels, but trying to get the indices of each label to perform subsequent operations has proved challenging.
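
There is no such list built in as far as this thread shows, but a hedged sketch that keeps memory down is to restrict the per-label search to the label's bounding box from cc3d.statistics. The input file name is a placeholder.

import numpy as np
import cc3d

labels = cc3d.connected_components(np.load('volume.npy'))  # placeholder input
stats = cc3d.statistics(labels)

def indices_of(label):
    # Array-order indices of one label, searched only inside its bounding box.
    bbox = tuple(
        slice(int(s.start), int(s.stop)) for s in stats['bounding_boxes'][label]
    )
    local = np.argwhere(labels[bbox] == label)
    offset = np.array([s.start for s in bbox])
    return local + offset

idx = indices_of(1)  # (n, 3) array of indices belonging to component 1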

Make a PyPI Package

Do you think it's worthwhile? Upvote if yes. It'd be interesting to hear your use case. Please contribute it below.

Add support to release linux aarch64 wheels

Problem

On aarch64, pip install connected-components-3d builds the wheels from source code and then installs them. It requires the user to have a development environment installed on their system. Also, it takes more time to build the wheels than to download and extract them from PyPI.

Resolution

On aarch64, pip install connected-components-3d should download the wheels from PyPI.

@william-silversmith, Please let me know your interest in releasing aarch64 wheels. I can help with this.

Build Equivalence Table Two Z Slices at a Time

For very large volumes, it might be helpful to be able to provide a facility for processing Z slices in sequential order. This would allow the user to manage memory efficiently on their end. The interface would look something like:

import cc3d
import numpy as np

# Proposed interface (not yet implemented); `img` is the user's large volume.
builder = cc3d.connected_components_builder()
for z in range(128):
    builder.add_z_slice(img[:, :, z])
for z in range(128):
    img[:, :, z] = builder.relabel(img[:, :, z])

cc3d.dust fails

Working in Ubuntu 22.04
python 3.8
connected-components-3d==3.10.2
numpy==1.23.2

Trying

labels_out = cc3d.dust(
            source_cube[i_range, x_range, s_slice], threshold=100,
            connectivity=26, in_place=False
        )

results in the error

    labels_out = cc3d.dust(
  File "cc3d.pyx", line 1070, in cc3d.dust
  File "cc3d.pyx", line 969, in cc3d.erase
  File "cc3d.pyx", line 937, in cc3d.draw
  File "cc3d.pyx", line 939, in cc3d.__pyx_fused_cpdef
TypeError: No matching signature found

Any idea?

Better Algorithm

Parallel operation (#6) is the rich dumb person's way of getting more performance. At minimum it would be good to apply Wu's or Grana's "World's Fastest" algorithms in this case.

ValueError: numpy.ufunc size changed, may indicate binary incompatibility

Hi!

I have successfully installed connected-components-3d using pip.
But when I import cc3d it gives the following error:

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "__init__.pxd", line 918, in init cc3d
ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject

Can you please help?

connected_components does not label single pixels in 2D

Dear all,

Having used cc3d in Python for a while, I just noticed that single pixels are not labeled. At first I thought cc3d did not catch isolated pixels, but the behavior is a little stranger:

    import cc3d
    import numpy as np

    # NOT WORKING
    binary_img = np.zeros((3, 3), dtype=np.uint8)
    binary_img[1, 1] = 1
    labels = cc3d.connected_components(binary_img, connectivity=4)
    print(f'{binary_img}\n => \n', labels)
    
    binary_img = np.zeros((5, 5), dtype=np.uint8)
    binary_img[1, 1] = 1
    labels = cc3d.connected_components(binary_img, connectivity=4)
    print(f'{binary_img}\n => \n', labels)
    
    # WORKING
    binary_img = np.zeros((5, 5), dtype=np.uint8)
    binary_img[1, 1] = 1
    binary_img[3, 3] = 1
    labels = cc3d.connected_components(binary_img, connectivity=4)
    print(f'{binary_img}\n => \n', labels)

Corresponding outputs

    [[0 0 0]
     [0 1 0]
     [0 0 0]]
     => 
     [[0 0 0]
     [0 0 0]
     [0 0 0]]
    
    [[0 0 0 0 0]
     [0 1 0 0 0]
     [0 0 0 0 0]
     [0 0 0 0 0]
     [0 0 0 0 0]]
     => 
     [[0 0 0 0 0]
     [0 0 0 0 0]
     [0 0 0 0 0]
     [0 0 0 0 0]
     [0 0 0 0 0]]
    
    [[0 0 0 0 0]
     [0 1 0 0 0]
     [0 0 0 0 0]
     [0 0 0 1 0]
     [0 0 0 0 0]]
     => 
     [[0 0 0 0 0]
     [0 1 0 0 0]
     [0 0 0 0 0]
     [0 0 0 2 0]
     [0 0 0 0 0]]

This issue may look minor in most cases, but I sometimes use cc3d with very small 2D patches, where this situation does happen.

Note that this issue seems not to be reproducible in 3D when the third dimension is bigger than 1.

Best regards

What is process?

The README.md file says to call the process function:


# You can extract individual components using numpy operators
# This approach is slow, but makes a mutable copy.
for segid in range(1, N+1):
  extracted_image = labels_out * (labels_out == segid)
  process(extracted_image)

# If a read-only image is ok, this approach is MUCH faster
# if the image has many contiguous regions. A random image 
# can be slower. binary=True yields binary images instead
# of numbered images.
for label, image in cc3d.each(labels_out, binary=False, in_place=True):
  process(image)

What is this process function? I can't find it in standard Python or in cc3d.
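
For what it's worth, process in that README snippet appears to be a placeholder for whatever the user wants to do with each component, not a cc3d or built-in function; a minimal hedged stand-in:

import numpy as np
import cc3d

def process(image):
    # Placeholder for user code, e.g. record the component's voxel count.
    print(image.dtype, np.count_nonzero(image))

labels_out, N = cc3d.connected_components(np.random.rand(64, 64, 64) > 0.95, return_N=True)

for label, image in cc3d.each(labels_out, binary=True, in_place=True):
    process(image)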

Massive memory Leak

Unfortunately, this fantastic package has a massive memory leak.

I suggest wrapping it in the following function to avoid memory leaks:


import cc3d
import concurrent.futures

def connected_components(binary_array, return_N=True):
    # Run cc3d in a separate process so its memory is reclaimed when the process exits.
    with concurrent.futures.ProcessPoolExecutor() as executor:
        f = executor.submit(cc3d.connected_components, binary_array, return_N=return_N)
        ret = f.result()

    return ret

dust signature

The dust signature does not contain the connectivity parameter, although it is an accepted one.
Consider adding it to allow pre-commit to work.
Thanks!

def dust(img, threshold, in_place=False): # real signature unknown; restored from __doc__
    """
    dust(img, threshold, in_place=False) -> np.ndarray
    
      Remove from the input image connected components
      smaller than threshold ("dust"). The name of the function
      can be read as a verb "to dust" the image.
    
      img: 2D or 3D image
      threshold: discard components smaller than this in voxels
      connectivity: cc3d connectivity to use
      in_place: whether to modify the input image or perform
        dust 
    
      Returns: dusted image
    """
    pass

Label Statistics

Would be good to have something like cv2.connectedComponentsWithStats which provides:

  • Bounding box for each label
  • Centroids
  • Voxel Count / Volume (Vol = res * Ct)

It would be even more interesting if this could be done as a one-pass algorithm too. However, it's pretty easy to add this as an additional pass.

Voxel count is pretty easily handled by https://github.com/seung-lab/fastremap but because we know the image statistics here, we can get some minor efficiencies. Centroids are also easy to compute (a 2d version is here: https://github.com/seung-lab/kimimaro/blob/master/ext/skeletontricks/skeletontricks.pyx#L463).
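
For reference, this appears in other issues above as cc3d.statistics; a minimal usage sketch, assuming the keys shown there:

import numpy as np
import cc3d

labels = cc3d.connected_components(np.random.rand(64, 64, 64) > 0.9)
stats = cc3d.statistics(labels)

stats['voxel_counts']    # per-label voxel count (index 0 is background)
stats['bounding_boxes']  # per-label (slice, slice, slice) bounding box
stats['centroids']       # per-label centroid, float32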
