
stpls3d's People

Contributors: huguesthomas, meidachen, qingyonghu, rockyatasu


stpls3d's Issues

HAIS: one point assigned to multiple instance proposals

Hello there,

As described in the original HAIS paper, "one point is only clustered into a single instance in point aggregation, resulting in no overlap among instance predictions."

However, during testing, I found some points being assigned to multiple instances. For example, I put an assert after outlier filtering and got an AssertionError, as shown in the screenshot below.

[screenshot: AssertionError traceback]
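
The assert I added is essentially the following (a minimal sketch; proposals_idx is assumed to be the HAIS-style (N, 2) array of (proposal_id, point_id) pairs, so the names may not match the code exactly):

    import numpy as np

    def assert_no_overlap(proposals_idx: np.ndarray) -> None:
        # If no point belongs to two proposals, every point_id in the
        # second column appears exactly once.
        point_ids = proposals_idx[:, 1]
        assert len(np.unique(point_ids)) == len(point_ids), \
            'some points are assigned to multiple instance proposals'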

Looking forward to your reply.

Regards

Evaluation script for the instance segmentation competition

Hi, thanks again for the great work! For instance segmentation, you set up the HAIS baseline and provide the evaluation script to calculate AP. However, this evaluation assumes that we separate the ground truth and prediction into individual blocks. For the competition, the evaluation server accepts the submission of a single .predict file for the whole point cloud (e.g. area 26). Could you also provide the evaluation script used on the evaluation server? Since the output of my current method is a prediction for the whole point cloud instead of individual blocks, that script would make the evaluation easier. Thanks in advance!

Results visualization

Hi! Sorry for bothering you again)
Regarding mergeResults.py -- do you form the test folder manually?

I manually placed into test the 100 result files from exp/Synthetic_v3_InstanceSegmentation/hais/hais_run_stpls3d/result/val/ related to one area (as I assumed) with index 5 (for example, the files 5points_GTv300_inst_nostuff.npy ... 5points_GTv399_inst_nostuff.npy from coords_offsets, and the corresponding files from semantic and predicted_masks).
As test/coordShift.json I took the coordShift.json file from dataset/Synthetic_v3_InstanceSegmentation/val/.
The resulting point cloud out.txt, visualized in CloudCompare:

[screenshot pcl5: merged point cloud in CloudCompare with visibly misaligned blocks]

It looks like some shift values are wrong. Could you tell me how to fix it? (My understanding of how the shift is applied is sketched below.)
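
This is the sketch of my understanding (assumptions: coordShift.json maps some block key to an [x, y, z] offset that is added back to the block coordinates; the key used below is hypothetical and the real format may differ):

    import json
    import numpy as np

    with open('test/coordShift.json') as f:
        coord_shift = json.load(f)

    # Per-block coordinates saved by the HAIS test step.
    coords = np.load('test/5points_GTv300_inst_nostuff.npy')
    # Hypothetical key -- the real key format in coordShift.json may differ.
    shift = np.asarray(coord_shift['5points_GTv300'])
    global_xyz = coords[:, :3] + shift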

mAP (Intersection) calculation in the evaluation script

Hi! I'm looking into STPLS3DInstanceSegmentationChallenge_Codalab_Evaluate.py.
Could you please tell me how the intersection is calculated for two instances?

intersection = np.count_nonzero(np.logical_and(gt_ids == gt_inst['instance_id'], pred_mask))

Here gt_ids are the ground-truth [semantic_label + instance_id] codes for all points;
gt_inst['instance_id'] is the [semantic_label + instance_id] code of the selected ground-truth instance;
pred_mask is a binary mask of a single predicted instance.

Do we just count the number of points that carry the matching code inside the two instances? How do we know they actually intersect, if a single object can have different instance ids in the ground truth and the prediction, and we don't look at x, y, z as is usually done with bounding boxes in object detection tasks?
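
To make my question concrete, here is a tiny self-contained example of how I read that line (both arrays are assumed to be aligned per point, i.e. index i in gt_ids and pred_mask refers to the same 3D point, and the gt code is assumed to be semantic_label * 1000 + instance_id -- the exact encoding may differ):

    import numpy as np

    # Five points of a toy scene.
    gt_ids    = np.array([2001, 2001, 2002, 2001, 3001])    # gt codes per point
    pred_mask = np.array([True, True, True, False, False])  # one predicted instance

    gt_instance_code = 2001  # the gt instance we compare against
    intersection = np.count_nonzero(np.logical_and(gt_ids == gt_instance_code, pred_mask))
    print(intersection)  # 2 -> points that are in BOTH the gt instance and the prediction

If the arrays really are per-point aligned, the logical_and is a set intersection over point indices, which would explain why x, y, z are never needed -- but I'd like to confirm this.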

SoftGroup for STPLS3D

Hi @meidachen,

Thank you for sharing an interesting new outdoor dataset for instance segmentation. Based on your shared code, we have implemented our SoftGroup for STPLS3D, which achieves AP/AP50/AP25 of 46.4/62.0/69.5 on the val set, significantly higher than the existing baselines. I hope this can serve as a solid baseline for the Urban3D@ECCV2022 challenge. Feel free to check https://github.com/thangvubk/SoftGroup.

Train, validation, test split

Hi! Thank you for your great work on STPLS3D dataset.
I’m looking into HAIS performance on STPLS3D. Could you please clarify the following:

  1. The official HAIS repo has tidy data folders: dataset/scannetv2/ train | test | val.
    I can see the trainSplit and valSplit point-cloud numbers hardcoded in prepare_data_inst_instance_stpls3d.py -- is this the only place I need to modify to change the train/val split?

  2. If I want to run test.py on a specific point cloud, should I place it into the valSplit in prepare_data_inst_instance_stpls3d.py?

  3. I unzipped all 25 point clouds into the Synthetic_v3_InstanceSegmentation folder and ran prepare_data_statistic_stpls3d.py to check the statistics parameters, and they turn out to be different from the defaults, which is suspicious. My output is:
    Using the printed list in hierarchical_aggregation.cpp for class_numpoint_mean_dict:
    [1.0, 4121.0, 53.0, 115.0, 1013.0, 141.0, 363.0, 748.0, 374.0, 23.0, 40.0, 56.0, 35.0, 100.0, 605.0]
    Using the printed list in hierarchical_aggregation.cu for class_radius_mean:
    [1.0, 12.87, 1.99, 2.66, 7.46, 2.73, 3.88, 7.7, 4.2, 2.17, 1.5, 5.2, 2.5, 2.23, 10.28]
    Using the printed list in hais_run_stpls3d.yaml for class_weight:
    [1.0, 1.0, 51.87, 28.43, 1.39, 20.51, 25.37, 48.86, 57.42, 82.29, 85.26, 64.34, 73.92, 13.56, 17.37]

    Is it OK that they differ from the defaults? (My understanding of how these statistics are computed is sketched below.)
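
Roughly, this is how I understand such statistics being computed (my own sketch, not the repo's exact script; coords, semantic_labels and instance_labels are assumed to be per-point arrays, and negative instance ids mean unannotated/ground):

    import numpy as np

    def per_class_instance_stats(coords, semantic_labels, instance_labels, num_classes=15):
        """Mean points-per-instance and mean radius per class (my own sketch)."""
        counts = [[] for _ in range(num_classes)]
        radii = [[] for _ in range(num_classes)]
        for inst_id in np.unique(instance_labels):
            if inst_id < 0:  # e.g. -100 for unannotated/ground points
                continue
            mask = instance_labels == inst_id
            cls = int(semantic_labels[mask][0])
            counts[cls].append(mask.sum())
            pts = coords[mask]
            center = pts.mean(axis=0)
            radii[cls].append(np.linalg.norm(pts - center, axis=1).max())
        mean_counts = [float(np.mean(c)) if c else 0.0 for c in counts]
        mean_radii = [float(np.mean(r)) if r else 0.0 for r in radii]
        return mean_counts, mean_radii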

The dataloader of SensatUrban with point-transformer?

Hi @meidachen

How should I set up the dataloader for the SensatUrban dataset with point-transformer? Since SensatUrban and STPLS3D are both large-scale datasets, could I adapt generate_blocks.py and stpls.py to SensatUrban with the point-transformer code?

Thanks.

What exactly is done in HAIS/data/stpls3d_inst.py?

Hi!
I have changed the input data and am trying to run train.py.
When it comes to trainMerge(), I get the following error:

ValueError: Caught ValueError in DataLoader worker process 1.
Original Traceback (most recent call last):
  File "envs/hais/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "envs/hais/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "STPLS3D/HAIS/data/stpls3d_inst.py", line 231, in trainMerge
    instance_label = self.getCroppedInstLabel(instance_label, valid_idxs)
  File "STPLS3D/HAIS/data/stpls3d_inst.py", line 186, in getCroppedInstLabel
    while (j < instance_label.max()):
  File "envs/hais/lib/python3.7/site-packages/numpy/core/_methods.py", line 40, in _amax
    return umr_maximum(a, axis, None, out, keepdims, initial, where)
ValueError: zero-size array to reduction operation maximum which has no identity

Running stpls3d_inst.py line by line showed that for some input data files, the function

    def getCroppedInstLabel(instance_label, valid_idxs):
        instance_label = instance_label[valid_idxs]
        j = 0
        while (j < instance_label.max()):
            if (len(np.where(instance_label == j)[0]) == 0):
                instance_label[instance_label == instance_label.max()] = j
            j += 1
        return instance_label

raises an error.

So, could you please clarify:

  1. Why do we crop the data in stpls3d_inst.py -- wasn't it already cropped by prepare_data_inst_instance_stpls3d.py? What are the valid_idxs returned by crop()?
  2. What does getCroppedInstLabel() function do?

UPD: For some files, the crop() function outputs valid_idxs that are all False. What could be the problem? (My annotated reading of the function is below.)
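
For reference, the annotated version (the empty-crop guard is my own addition, not the repo's code):

    import numpy as np

    def getCroppedInstLabel(instance_label, valid_idxs):
        # Keep only the points that survived crop().
        instance_label = instance_label[valid_idxs]
        # If the crop kept nothing (valid_idxs all False), .max() on an
        # empty array raises the ValueError above -- this guard is my own addition.
        if instance_label.size == 0:
            return instance_label
        # Compact the instance ids: whenever id j no longer occurs after
        # cropping, relabel the points carrying the current maximum id to j,
        # so the ids stay contiguous in [0, n_instances).
        j = 0
        while j < instance_label.max():
            if len(np.where(instance_label == j)[0]) == 0:
                instance_label[instance_label == instance_label.max()] = j
            j += 1
        return instance_label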

Error while setting up the environment for Instance Segmentation on Ubuntu 20.04

Hi @meidachen

Thanks for the great work.

I am setting up the environment for instance segmentation as per your instructions. I am using Ubuntu 20.04 with CUDA 11.6.

Before running the python setup.py bdist_wheel command, I exported CUDACXX=/usr/local/cuda-11.6/bin/nvcc.

(hais) spartan@spartan-NUC11BTMi9:~/GitHubCode/STPLS3D/HAIS/lib/spconv$ python setup.py bdist_wheel
running bdist_wheel
running build
running build_py
running build_ext
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/build/lib.linux-x86_64-cpython-37 Release
-- Caffe2: CUDA detected: 11.6
-- Caffe2: CUDA nvcc is: /usr/local/cuda-11.6/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda-11.6
-- Caffe2: Header version is: 11.6
CMake Warning (dev) at /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/cmake/data/share/cmake-3.26/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to find_package_handle_standard_args (CUDNN) does not match the name of the calling package (Caffe2). This can lead to problems in calling code that expects find_package result variables (e.g., _FOUND) to follow a certain pattern.
Call Stack (most recent call first):
/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/share/cmake/Caffe2/public/cuda.cmake:107 (find_package_handle_standard_args)
/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:88 (include)
/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:40 (find_package)
CMakeLists.txt:23 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.

-- Found cuDNN: v8.8.1 (include: /usr/local/cuda-11.6/include, library: /usr/local/cuda-11.6/lib64/libcudnn.so)
-- Autodetected CUDA architecture(s): 8.6
-- Added CUDA NVCC flags for: -gencode;arch=compute_86,code=sm_86
CMake Warning (dev) at /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/cmake/data/share/cmake-3.26/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to find_package_handle_standard_args (torch) does
not match the name of the calling package (Torch). This can lead to
problems in calling code that expects find_package result variables
(e.g., _FOUND) to follow a certain pattern.
Call Stack (most recent call first):
/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:93 (find_package_handle_standard_args)
CMakeLists.txt:23 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.

-- Found torch: /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/lib/libtorch.so
-- Found PythonInterp: /home/spartan/anaconda3/envs/hais/bin/python3.7 (found suitable version "3.7.16", minimum required is "3.7")
-- Found PythonLibs: /home/spartan/anaconda3/envs/hais/lib/libpython3.7m.so
-- Performing Test HAS_CPP14_FLAG
-- Performing Test HAS_CPP14_FLAG - Success
-- pybind11 v2.3.dev0
-- Configuring done (0.6s)
CMake Warning (dev) in src/spconv/CMakeLists.txt:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
empty CUDA_ARCHITECTURES not allowed. Run "cmake --help-policy CMP0104"
for policy details. Use the cmake_policy command to set the policy and
suppress this warning.

CUDA_ARCHITECTURES is empty for target "spconv".
This warning is for project developers. Use -Wno-dev to suppress it.

CMake Warning (dev) in src/spconv/CMakeLists.txt:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
empty CUDA_ARCHITECTURES not allowed. Run "cmake --help-policy CMP0104"
for policy details. Use the cmake_policy command to set the policy and
suppress this warning.

CUDA_ARCHITECTURES is empty for target "spconv".
This warning is for project developers. Use -Wno-dev to suppress it.

CMake Warning at src/spconv/CMakeLists.txt:1 (add_library):
Cannot generate a safe runtime search path for target spconv because files
in some directories may conflict with libraries in implicit directories:

runtime library [libnvToolsExt.so.1] in /usr/lib/x86_64-linux-gnu may be hidden by files in:
  /usr/local/cuda-11.6/lib64

Some of these libraries may not be found correctly.

CMake Warning (dev) in src/utils/CMakeLists.txt:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
empty CUDA_ARCHITECTURES not allowed. Run "cmake --help-policy CMP0104"
for policy details. Use the cmake_policy command to set the policy and
suppress this warning.

CUDA_ARCHITECTURES is empty for target "spconv_nms".
This warning is for project developers. Use -Wno-dev to suppress it.

CMake Warning (dev) in src/utils/CMakeLists.txt:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
empty CUDA_ARCHITECTURES not allowed. Run "cmake --help-policy CMP0104"
for policy details. Use the cmake_policy command to set the policy and
suppress this warning.

CUDA_ARCHITECTURES is empty for target "spconv_nms".
This warning is for project developers. Use -Wno-dev to suppress it.

-- Generating done (0.0s)
-- Build files have been written to: /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/build/temp.linux-x86_64-cpython-37
[ 7%] Building CUDA object src/utils/CMakeFiles/spconv_nms.dir/nms.cu.o
[ 15%] Building CXX object src/spconv/CMakeFiles/spconv.dir/all.cc.o
[ 23%] Building CUDA object src/spconv/CMakeFiles/spconv.dir/indice.cu.o
[ 30%] Building CXX object src/spconv/CMakeFiles/spconv.dir/indice.cc.o
[ 38%] Linking CUDA shared library /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/build/lib.linux-x86_64-cpython-37/spconv/libspconv_nms.so
[ 38%] Built target spconv_nms
[ 46%] Building CXX object src/spconv/CMakeFiles/spconv.dir/reordering.cc.o
In file included from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/pool_ops.h:21:0,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/src/spconv/all.cc:16:
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h: In function ‘void tv::check_torch_dtype(const at::Tensor&)’:
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:32:29: error: ‘remove_const_t’ is not a member of ‘std’
auto val = std::is_same<std::remove_const_t, double>::value;
^
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:32:29: note: suggested alternative:
In file included from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/util/ArrayRef.h:19:0,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/core/ScalarType.h:3,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/core/Scalar.h:9,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/core/Type.h:8,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/Type.h:2,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/torch/script.h:3,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/pool_ops.h:20,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/src/spconv/all.cc:16:
/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/util/C++17.h:39:77: note: ‘c10::guts::remove_const_t’
template using remove_const_t = typename std::remove_const::type;
^
In file included from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/pool_ops.h:21:0,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/src/spconv/all.cc:16:
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:32:29: error: ‘remove_const_t’ is not a member of ‘std’
auto val = std::is_same<std::remove_const_t, double>::value;
^
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:32:29: note: suggested alternative:
In file included from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/util/ArrayRef.h:19:0,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/core/ScalarType.h:3,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/core/Scalar.h:9,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/core/Type.h:8,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/Type.h:2,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/torch/script.h:3,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/pool_ops.h:20,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/src/spconv/all.cc:16:
/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/util/C++17.h:39:77: note: ‘c10::guts::remove_const_t’
template using remove_const_t = typename std::remove_const::type;
^
In file included from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/pool_ops.h:21:0,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/src/spconv/all.cc:16:
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:32:50: error: wrong number of template arguments (1, should be 2)
auto val = std::is_same<std::remove_const_t, double>::value;
^
In file included from /home/spartan/anaconda3/envs/hais/gcc/include/c++/bits/move.h:57:0,
from /home/spartan/anaconda3/envs/hais/gcc/include/c++/bits/stl_pair.h:59,
from /home/spartan/anaconda3/envs/hais/gcc/include/c++/utility:70,
from /home/spartan/anaconda3/envs/hais/gcc/include/c++/algorithm:60,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/tensorview/tensorview.h:16,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/maxpool.h:17,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/pool_ops.h:19,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/src/spconv/all.cc:16:
/home/spartan/anaconda3/envs/hais/gcc/include/c++/type_traits:958:12: note: provided for ‘template<class, class> struct std::is_same’
struct is_same;
^
In file included from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/pool_ops.h:21:0,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/src/spconv/all.cc:16:
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:37:29: error: ‘remove_const_t’ is not a member of ‘std’
auto val = std::is_same<std::remove_const_t, float>::value;
^
In file included from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/spconv_ops.h:22:0,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/src/spconv/indice.cc:17:
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h: In function ‘tv::TensorView tv::torch2tv(const at::Tensor&)’:
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:64:40: error: ‘remove_const_t’ is not a member of ‘std’
return tv::TensorView(tensor.data<std::remove_const_t>(), shape);
^
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:64:40: note: suggested alternative:
In file included from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/util/ArrayRef.h:19:0,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/core/ScalarType.h:3,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/core/Scalar.h:9,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/core/Type.h:8,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/Type.h:2,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/torch/script.h:3,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/spconv_ops.h:21,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/src/spconv/indice.cc:17:
/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/util/C++17.h:39:77: note: ‘c10::guts::remove_const_t’
template using remove_const_t = typename std::remove_const::type;
^
In file included from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/spconv_ops.h:22:0,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/src/spconv/indice.cc:17:
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:64:40: error: ‘remove_const_t’ is not a member of ‘std’
return tv::TensorView(tensor.data<std::remove_const_t>(), shape);
^
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:64:40: note: suggested alternative:
In file included from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/util/ArrayRef.h:19:0,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/core/ScalarType.h:3,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/core/Scalar.h:9,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/core/Type.h:8,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/Type.h:2,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/torch/script.h:3,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/spconv_ops.h:21,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/src/spconv/indice.cc:17:
/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/torch/include/c10/util/C++17.h:39:77: note: ‘c10::guts::remove_const_t’
template using remove_const_t = typename std::remove_const::type;
^
In file included from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/spconv/spconv_ops.h:22:0,
from /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/src/spconv/indice.cc:17:
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:64:35: error: parse error in template argument list
return tv::TensorView(tensor.data<std::remove_const_t>(), shape);
^
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:64:27: error: expected primary-expression before ‘(’ token
return tv::TensorView(tensor.data<std::remove_const_t>(), shape);
^
/home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/include/torch_utils.h:64:64: error: expected primary-expression before ‘)’ token
return tv::TensorView(tensor.data<std::remove_const_t>(), shape);
^
make[2]: *** [src/spconv/CMakeFiles/spconv.dir/build.make:90: src/spconv/CMakeFiles/spconv.dir/indice.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 53%] Building CXX object src/utils/CMakeFiles/spconv_utils.dir/all.cc.o
make[2]: *** [src/spconv/CMakeFiles/spconv.dir/build.make:76: src/spconv/CMakeFiles/spconv.dir/all.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:136: src/spconv/CMakeFiles/spconv.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 61%] Linking CXX shared library /home/spartan/GitHubCode/STPLS3D/HAIS/lib/spconv/build/lib.linux-x86_64-cpython-37/spconv/spconv_utils.cpython-37m-x86_64-linux-gnu.so
[ 61%] Built target spconv_utils
make: *** [Makefile:136: all] Error 2
Traceback (most recent call last):
File "setup.py", line 86, in
zip_safe=False,
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/init.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 325, in run
self.run_command("build")
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/_distutils/command/build.py", line 132, in run
self.run_command(cmd_name)
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "setup.py", line 39, in run
self.build_extension(ext)
File "setup.py", line 70, in build_extension
subprocess.check_call(['cmake', '--build', '.'] + build_args, cwd=self.build_temp)
File "/home/spartan/anaconda3/envs/hais/lib/python3.7/subprocess.py", line 363, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'Release', '--', '-j4']' returned non-zero exit status 2.
Is my environment missing some packages? The errors above seem related to C++14/C++17 settings. I have GCC 9.4.0 installed on Ubuntu and GCC 5.4.0 installed in the hais conda environment.

Thanks for the help.

Training with RandLA-Net

Hi, I would like to use your RandLA-Net code. In main_STPLS3D.py you set RealWorldData as the data_folder and ResidentialArea_GT as the test_area. Which part of the dataset is ResidentialArea_GT?
I tried to train with the four RealWorldData regions, but the input_0.300 and original_ply folders I get after preprocessing are empty.
Can you answer me? Thanks a lot!

About RandLA-Net

Hello, when training this dataset with the RandLA-Net code, is only the single RealWorldData area usable, or is there other code that can be modified to train with all four areas?

KPConv-STPLS3D Data Problem

First of all, thanks.
I am using KPConv with the STPLS3D dataset for training. When training progresses to epoch 130, the following error appears:
'It seems this dataset only containes empty input spheres'

Currently I skip these abnormal epochs. Is this a data problem, and is there a better solution?
Part of the code:

        # Number collected
        n = input_inds.shape[0]

        # Safe check for empty spheres  
        if n < 2:
            failed_attempts += 1
            if failed_attempts > 100 * self.config.batch_num:
                # raise ValueError('It seems this dataset only containes empty input spheres')
                print('It seems this dataset only containes empty input spheres')
                return []
            t += [time.time()]
            t += [time.time()]
            continue

        # Collect labels and colors
        input_points = (points[input_inds] - center_point).astype(np.float32)

Data augmentation for instance segmentation

Hi, thanks for your great work! I am checking the data preparation script for instance segmentation: prepare_data_inst_instance_stpls3d.py. In the code, it seems that you only apply data augmentation to the classes [0, 2, 3, 7, 8, 9, 12, 13]. Is there any motivation behind that? Besides, why do we even apply random rotation augmentation to class 0 (i.e. ground)? (My understanding of the rotation augmentation is sketched below.)
Thanks in advance!
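
For context, my understanding is that the augmentation creates extra copies of instances from those classes with a random rotation about the vertical axis, roughly like this sketch (my own paraphrase, not the script's exact code):

    import numpy as np

    def random_z_rotation(points: np.ndarray) -> np.ndarray:
        """Rotate an instance's points about its centroid around the z axis."""
        theta = np.random.uniform(0, 2 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s, 0.0],
                        [s,  c, 0.0],
                        [0.0, 0.0, 1.0]])
        center = points.mean(axis=0)
        return (points - center) @ rot.T + center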

Data format of STPLS3D txt point clouds for Instance Segmentation

Hi,

Maybe I did not search enough, but I cannot find the header information for the .txt files in which the instance segmentation point clouds are released. After downloading the dataset, I checked that all .txt files have 8 columns, and I already know the meaning of the first 6 (x, y, z, r, g, b), but the last two are unknown to me.

As far as I could see, taking the '1_points_GTv3.txt' file as an example, the 7th column only has values between 0 and 14, which makes it reasonable to think this is the label column. However, the 8th column has 1655 distinct values in the range [0, 2736], plus -100. The documentation for instance segmentation says that all ground points are labelled with -100, but this value appears only in the 8th column and not in the 7th. This, together with the range of values in the 8th column, confuses me.

Sorry for opening this issue, but this should be clearer. (The quick inspection I ran is below.)
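
For reference, this is how I inspected the columns (assuming the files are comma-separated, which is what I see locally):

    import numpy as np

    data = np.loadtxt('1_points_GTv3.txt', delimiter=',')
    print(data.shape)              # (n_points, 8): x, y, z, r, g, b, col7, col8
    print(np.unique(data[:, 6]))   # 7th column: values in [0, 14] -> semantic label?
    col8 = data[:, 7]
    print(col8.min(), col8.max())  # 8th column: -100 plus values up to 2736
    print(len(np.unique(col8)))    # 1655 distinct values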

Scores in the evaluation script

Hi!
In your evaluation script you fill the confidence with dummy 1.0 values:

# conf value doesn't really matter
preConf = np.full((len(preSem),1),1.0)

  1. However, when plotting the precision/recall curve, the unique confidence values are used for sorting the points and for deciding at how many points to sample the precision/recall values:

    # unique thresholds
    (thresholds, unique_indices) = np.unique(y_score_sorted, return_index=True)

    This results in the number of unique recall values always being 1. Whatever the number of instances with a given semantic label, the precision-recall curve always has 2 points (one of them artificial); see the snippet after this list.
    Is this the correct way to calculate the area under the precision-recall curve?

  2. Moreover, when searching for ground truth - prediction matches, the first match with overlap>threshold is considered true positive, while all the later matches are automatically marked as false positives:

    if overlap > overlap_th:
        confidence = pred['confidence']
        # if already have a prediction for this gt,
        # the prediction with the lower score is automatically a false positive

    -- this assumes the predictions were sorted by confidence; otherwise a prediction without the highest confidence could become the true positive. Do you think this can happen?
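
To illustrate point 1: with a constant dummy confidence, np.unique collapses everything to a single threshold, so the curve is effectively sampled at one real point:

    import numpy as np

    y_score_sorted = np.full(10, 1.0)  # 10 predictions, all with dummy confidence 1.0
    thresholds, unique_indices = np.unique(y_score_sorted, return_index=True)
    print(thresholds)       # [1.] -> a single threshold
    print(unique_indices)   # [0]  -> precision/recall sampled at one point only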

Overfitting observed on the Synthetic v3 region

I used Pointct, a network similar to Point Transformer. When I used Synthetic v3 as the test area, the training loss keeps declining while the validation loss rises, which looks like overfitting. When I used Synthetic v1 as the test area, everything seems normal. Is this a problem with the code or with the dataset?

Thanks!

How to handle instance segmentation results separated at the edges of different blocks?

Thanks for providing such a useful dataset! Below is my instance segmentation prediction result. The prediction is fine when an instance lies entirely inside a block, but at the edges of different blocks an instance is predicted as several separate ones (for example, the building). How should this be handled? Or can I input the whole point cloud file at once instead of dividing it into separate blocks?

thanks!

[screenshot 2022-06-13_15-46: instance predictions split at block boundaries]

Interested in the projection code

Hi,

I would like to apply your projection method to my own photogrammetry dataset, where we have already created the meshes from the RGB imagery but would like to add a layer with the semantic labels. Is it possible to release that code, or could you let me know where I can find it? Thank you!

I want to make my own dataset

I have an apple tree point cloud dataset with two semantic classes, trunk and branch. One line of my annotation data is as follows:
0.013332, 0.111240, 1.698000, 112.000000, 109.000000, 102.000000, 0.000000, 0.000000

There are 8 columns in total: x, y, z, R, G, B, semantic label (trunk is 0 / branch is 1), instance label (trunk is 0, branch1 is 1, branch2 is 2, ...).
I'm not sure if this is right and need your help. I use CloudCompare for annotation. (The quick sanity check I would run is sketched below.)
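
The sanity check (a sketch under my own assumptions: comma-separated values, and the STPLS3D convention of -100 for ground/unannotated instance ids; the file name is hypothetical):

    import numpy as np

    data = np.loadtxt('apple_tree_01.txt', delimiter=',')  # hypothetical file name
    assert data.shape[1] == 8, 'expected x, y, z, R, G, B, semantic, instance'
    sem, inst = data[:, 6], data[:, 7]
    assert set(np.unique(sem)) <= {0.0, 1.0}, 'semantic labels should be 0 (trunk) or 1 (branch)'
    # Instance ids should be non-negative integers, or -100 if following
    # the STPLS3D convention for ground/unannotated points.
    print(np.unique(inst))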

Many thanks.

Download fails from Google Drive

I am downloading the source images. Google Drive refreshes the download URL after one hour, which breaks the download. Could you provide other download URLs as an alternative to Google Drive?

Multi-GPU training

I saw that "import torch.distributed as dist" in HAIS's train.py code.
Can I use a multi gpu when I train a HAIS model??

If not, could give me any advise how to execute multi gpu possible?
Thank you.
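
For what it's worth, here is the generic PyTorch pattern for multi-GPU training with DistributedDataParallel (a minimal sketch of standard PyTorch usage, not HAIS's own launcher; whether train.py wires this up end to end is exactly my question):

    import os
    import torch
    import torch.distributed as dist
    import torch.multiprocessing as mp
    from torch.nn.parallel import DistributedDataParallel as DDP

    def worker(rank, world_size):
        os.environ.setdefault('MASTER_ADDR', 'localhost')
        os.environ.setdefault('MASTER_PORT', '29500')
        dist.init_process_group('nccl', rank=rank, world_size=world_size)
        torch.cuda.set_device(rank)

        model = torch.nn.Linear(16, 4).cuda(rank)  # stand-in for the HAIS model
        model = DDP(model, device_ids=[rank])
        # ... build a DistributedSampler-backed dataloader and train as usual ...
        dist.destroy_process_group()

    if __name__ == '__main__':
        n_gpus = torch.cuda.device_count()
        mp.spawn(worker, args=(n_gpus,), nprocs=n_gpus)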

Which area is used as the testing area for STPLS3D semantic segmentation?

Hi @meidachen

In the codes of RandLA-Net, SCF-Net, and KPConv, following Table 3 and Table 6 of the paper, I want to know clearly which area of the RealWorldData is used as the testing area. WMSC_split? Or which one? I want to have a fair comparison.

The STPLS3D dataset is only split into training and validation, right? Not training, validation, and testing.

In addition, following data_preparation_STPLS3D.py in RandLA-Net and SCF-Net, even when using the .txt files, the script points to using RealWorldData for training and testing, instead of the Synthetic dataset. If one wants to train on both RealWorldData and Synthetic data and test on an area of RealWorldData, the dataset preparation may follow data_preparation_STPLS3D.py in KPConv.

Thanks.

What does instance label 0 mean?

Hi, @meidachen
Thank you for sharing an interesting new outdoor dataset for instance segmentation. I have a small problem that I haven't understood for a long time: in the HAIS instance results, what does it mean when the instance label equals 0? I found that instance label 0 appears in different blocks. I'm confused. Looking forward to your reply!

CUDA kernel failed : an illegal memory access was encountered

Hi @meidachen, I am currently running the HAIS model for instance segmentation. After the training process, I get an error when running python test.py --config config/hais_run_stpls3d.yaml --pretrain exp/Synthetic_v3_InstanceSegmentation/hais/hais_run_stpls3d/hais_run_stpls3d-000000500.pth:

CUDA kernel failed : an illegal memory access was encountered

I traced the problem and found that the code breaks around the following line:

proposals_idx, proposals_offset = hais_ops.hierarchical_aggregation(

Do you have any suggestions? Looking forward to your reply! (A debugging step I would try first is sketched below.)
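
In case it helps others, the first debugging step I would try is forcing synchronous kernel launches, so the traceback points at the kernel that actually faulted (generic CUDA debugging advice, not a HAIS-specific fix):

    import os
    # Must be set before any CUDA context is created, e.g. at the very
    # top of test.py, or exported in the shell before running it.
    os.environ['CUDA_LAUNCH_BLOCKING'] = '1'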

train / validation split

Hi, thanks for the great work! For instance segmentation, I downloaded the training point clouds (1-25). Are we allowed to use all the data to train the model, or must we split it into a training set (all except 5, 10, 15, 20, 25) and a validation set (5, 10, 15, 20, 25)?
