
gcnet's Issues

Test and visualize on custom data

Hi, thanks for sharing your work. Could you give some guidance on how to test/evaluate the pre-trained models on two individually given point clouds, and possibly on custom data?
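Not an official answer, but as a starting point, here is a minimal sketch of how two custom point clouds could be loaded and preprocessed the same way this repository's 3DMatch loader does (normals estimated with Open3D and oriented towards the origin). The file names and the radius/max_nn values are assumptions taken from the normal() helper quoted in a later issue:

import numpy as np
import open3d as o3d

# Hypothetical input files; replace with your own scans.
src_pcd = o3d.io.read_point_cloud('my_src.ply')
tgt_pcd = o3d.io.read_point_cloud('my_tgt.ply')

# Same normal estimation as the repository's normal() helper (parameters assumed).
for pcd in (src_pcd, tgt_pcd):
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    pcd.orient_normals_towards_camera_location((0, 0, 0))

src_points = np.asarray(src_pcd.points).astype(np.float32)
tgt_points = np.asarray(tgt_pcd.points).astype(np.float32)
# Feeding these arrays (plus normals) to the model would follow the evaluation
# scripts' input construction, which is repository-specific.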

train the net

Hi! Thank you so much for sharing!
1: How many epochs was this trained for? I didn't find any indication in the source code:

for epoch in range(config.max_epoch):
    print('=' * 20, epoch, '=' * 20)
    train_step, val_step = 0, 0
    for inputs in tqdm(train_dataloader):
        # move every input tensor (or list of tensors) to the GPU
        for k, v in inputs.items():
            if isinstance(v, list):
                for i in range(len(v)):
                    inputs[k][i] = inputs[k][i].cuda()
            else:
                inputs[k] = inputs[k].cuda()

2: Why is training so slow? My graphics card is a Titan.
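Not a diagnosis of this particular repository, just a generic sketch of the two standard DataLoader settings worth checking when data loading, rather than the GPU, limits throughput; the dataset and collate names are placeholders:

from torch.utils.data import DataLoader

# Generic sketch, not the repository's exact configuration: multiple worker
# processes and pinned host memory usually reduce per-iteration data time.
train_dataloader = DataLoader(
    train_dataset,          # placeholder: the dataset built in train.py
    batch_size=1,
    shuffle=True,
    num_workers=8,          # preprocess samples in parallel worker processes
    pin_memory=True,        # allows faster, asynchronous host-to-GPU copies
    collate_fn=collate_fn,  # placeholder: the repository's collate function
)

# With pinned memory the transfer inside the loop can be made non-blocking:
# inputs[k] = inputs[k].cuda(non_blocking=True)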

About creating my own data set

I want to add my own dataset based on the 3DMatch dataset. How do I create it? In the 3DMatch data directory, the training set and the test set each consist of a .txt file and a .pth file. What is the matrix in the .txt file? I would also like to know where to store the transformation matrix between the source point cloud and the target point cloud (for point cloud registration training).
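This is not the author's documented format, but judging from the keys the 3DMatch loader prints in a later issue (rot, trans, src, tgt, overlap, each a list of equal length), a metadata file for custom pairs might be assembled roughly like this; the file name, the pickle container and the path layout are assumptions:

import pickle
import numpy as np

# Hypothetical sketch: one dict of parallel lists, one entry per (src, tgt) pair.
info = {'rot': [], 'trans': [], 'src': [], 'tgt': [], 'overlap': []}

info['src'].append('custom/scene_0/cloud_bin_0.pth')  # path to source fragment
info['tgt'].append('custom/scene_0/cloud_bin_1.pth')  # path to target fragment
info['rot'].append(np.eye(3))                         # 3x3 rotation, src -> tgt
info['trans'].append(np.zeros((3, 1)))                # 3x1 translation, src -> tgt
info['overlap'].append(0.65)                          # estimated overlap ratio in [0, 1]

with open('custom_train_info.pkl', 'wb') as f:
    pickle.dump(info, f)

Whether the transform maps source to target or the reverse, and how the per-fragment point files are stored, would need to be checked against the repository's ThreeDMatch dataset class.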

ERROR: Unexpected segmentation fault encountered in worker.

Any idea how to fix the following error:

eval_mvp_rg.py --data_root datasets/mvp_rg --checkpoint weights/mvp_rg.pth --vis
[20 31 34]
  0%|          | 0/1200 [00:00<?, ?it/s]ERROR: Unexpected segmentation fault encountered in worker.
 ERROR: Unexpected segmentation fault encountered in worker.
 ERROR: Unexpected segmentation fault encountered in worker.
 ERROR: Unexpected segmentation fault encountered in worker.
  0%|          | 0/1200 [00:17<?, ?it/s]
Traceback (most recent call last):
  File "/home/ttsesm/Development/NgeNet/venv_ngenet/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 986, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/usr/lib/python3.9/multiprocessing/queues.py", line 113, in get
    if not self._poll(timeout):
  File "/usr/lib/python3.9/multiprocessing/connection.py", line 262, in poll
    return self._poll(timeout)
  File "/usr/lib/python3.9/multiprocessing/connection.py", line 429, in _poll
    r = wait([self], timeout)
  File "/usr/lib/python3.9/multiprocessing/connection.py", line 936, in wait
    ready = selector.select(timeout)
  File "/usr/lib/python3.9/selectors.py", line 416, in select
    fd_event_list = self._selector.poll(timeout)
  File "/home/ttsesm/Development/NgeNet/venv_ngenet/lib/python3.9/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 2845706) is killed by signal: Segmentation fault. 

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ttsesm/Development/NgeNet/eval_mvp_rg.py", line 174, in <module>
    main(args)
  File "/home/ttsesm/Development/NgeNet/eval_mvp_rg.py", line 53, in main
    for pair_ind, inputs in enumerate(tqdm(test_dataloader)):
  File "/home/ttsesm/Development/NgeNet/venv_ngenet/lib/python3.9/site-packages/tqdm/std.py", line 1180, in __iter__
    for obj in iterable:
  File "/home/ttsesm/Development/NgeNet/venv_ngenet/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/ttsesm/Development/NgeNet/venv_ngenet/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1182, in _next_data
    idx, data = self._get_data()
  File "/home/ttsesm/Development/NgeNet/venv_ngenet/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1148, in _get_data
    success, data = self._try_get_data()
  File "/home/ttsesm/Development/NgeNet/venv_ngenet/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 999, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 2845706) exited unexpectedly

Process finished with exit code 1

Thanks.
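Not a confirmed fix, but a common way to localize this kind of worker segfault is to run the dataset in the main process, so the crash produces a direct traceback instead of being reported by a dead worker. A minimal sketch, where the dataset and collate names are placeholders for whatever eval_mvp_rg.py builds:

from torch.utils.data import DataLoader

# Debugging sketch: with num_workers=0 every __getitem__ call runs in the
# main process, so a native crash (e.g. in a C++ extension) surfaces at the
# exact call that triggers it instead of as a worker segfault.
test_dataloader = DataLoader(
    test_dataset,           # placeholder: dataset constructed in eval_mvp_rg.py
    batch_size=1,
    shuffle=False,
    num_workers=0,          # load data in the main process while debugging
    collate_fn=collate_fn,  # placeholder: the script's collate function
)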

Performance

Hello, thank you for your work. I ran into a problem: testing with the pre-trained model and the eval code you provided, I could not reproduce the results reported in your paper on the MVP_RG dataset. Are there any details that need to be changed for testing?

A question about the loss function

Hi, thanks for sharing the codebase for your work. I am trying to train the network on custom data, but I got the following error:

/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [46,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [47,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [48,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [49,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [50,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [51,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [52,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [53,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [54,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [55,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [56,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [57,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [58,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [59,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [60,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [61,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [62,0,0] Assertion input_val >= zero && input_val <= one failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [1,0,0], thread: [63,0,0] Assertion input_val >= zero && input_val <= one failed.
0%| | 1/15356 [00:01<5:05:31, 1.19s/it]
Traceback (most recent call last):
  File "/media/home/C/NgeNet/train.py", line 224, in <module>
    main()
  File "/media/home/C/NgeNet/train.py", line 124, in main
    loss_dict = model_loss(coords_src=coords_src,
  File "/home/anaconda3/envs/Negnet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/media/home/C/NgeNet/losses/loss.py", line 130, in forward
    overlap_loss_v = 0.5 * self.overlap_loss(ol_scores_src, ol_gt_src) +
  File "/media/home/C/NgeNet/losses/loss.py", line 65, in overlap_loss
    weights[ol_gt > 0.5] = 1 - ratio
RuntimeError: CUDA error: device-side assert triggered

Process finished with exit code 1

It seems that during training the weights grow too large, which makes the variable 'q_feats_local' too large, so the leaky_relu output becomes NaN and the backward pass of the loss fails. Have you ever encountered this situation in your experiments?
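For reference, that CUDA assertion comes from binary cross entropy, which requires its input probabilities to lie in [0, 1]; NaN scores trip it as well. A small sanity check (the tensor name is taken from the traceback, the placement inside the loss is an assumption) can confirm where the values blow up before the loss is computed:

import torch

def check_scores(name, scores):
    # Fail early if the predicted overlap scores are not valid probabilities;
    # this pinpoints the batch/tensor that breaks the binary cross entropy.
    assert torch.isfinite(scores).all(), f'{name} contains NaN or Inf'
    assert scores.min() >= 0 and scores.max() <= 1, f'{name} is outside [0, 1]'

# Hypothetical placement, just before the overlap loss in losses/loss.py:
# check_scores('ol_scores_src', ol_scores_src)

If the scores really are NaN because activations explode, gradient clipping (torch.nn.utils.clip_grad_norm_) or a lower learning rate are the usual generic mitigations.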

Segmentation fault when evaluating 3DMatch

I hit a segmentation fault when testing on the 3DMatch dataset. (I originally wrote this report in Chinese because it was easier to describe; apologies.)

Environment:
Ubuntu 18.04.5 LTS
cuda 10.2
torch 1.8.1
open3d 0.15.2

Command:
python eval_3dmatch.py --benchmark 3DMatch --data_root ./data/indoor/ --checkpoint NgeNet_weights/3dmatch.pth --saved_path work_dirs/3dmatch --no_cuda

The problem is located at line 74 of ThreeDMatch.py, where normal is executed:
src_pcd, tgt_pcd = normal(npy2pcd(src_points)), normal(npy2pcd(tgt_points))

I instrumented the normal function with print statements. The calls run fine while the dataloader is being built, but in the test stage it crashes while executing pcd.estimate_normals; the output is pasted below.

def normal(pcd, radius=0.1, max_nn=30, loc=(0, 0, 0)):
    print("before estimate_normals")
    pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=max_nn),
                         fast_normal_computation=False)
    print("after estimate_normals")
    pcd.orient_normals_towards_camera_location(loc)
    return pcd
rot 1623 <class 'list'>
trans 1623 <class 'list'>
src 1623 <class 'list'>
tgt 1623 <class 'list'>
overlap 1623 <class 'list'>
before estimate_normals
after estimate_normals
before estimate_normals
after estimate_normals
before estimate_normals
after estimate_normals
before estimate_normals
after estimate_normals
before estimate_normals
after estimate_normals
before estimate_normals
after estimate_normals
[36 35 36 38]
  0%|                                                                                                  | 0/1623 [00:00<?, ?it/s]
before estimate_normals  ## normals starts executing here; it crashes in CPU mode and hangs here in GPU mode
ERROR: Unexpected segmentation fault encountered in worker.
  0%|                                                                                                  | 0/1623 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 986, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/usr/lib/python3.6/multiprocessing/queues.py", line 104, in get
    if not self._poll(timeout):
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 257, in poll
    return self._poll(timeout)
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 414, in _poll
    r = wait([self], timeout)
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 911, in wait
    ready = selector.select(timeout)
  File "/usr/lib/python3.6/selectors.py", line 376, in select
    fd_event_list = self._poll.poll(timeout)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 22309) is killed by signal: Segmentation fault.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "eval_3dmatch.py", line 275, in <module>
    main(args)
  File "eval_3dmatch.py", line 89, in main
    for pair_ind, inputs in enumerate(tqdm(test_dataloader)):
  File "/usr/local/lib/python3.6/dist-packages/tqdm/std.py", line 1178, in __iter__
    for obj in iterable:
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1182, in _next_data
    idx, data = self._get_data()
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1148, in _get_data
    success, data = self._try_get_data()
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 999, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 22309) exited unexpectedly
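Not a confirmed fix either, but since the crash happens inside Open3D's estimate_normals only when it runs in a DataLoader worker, two generic things worth trying are num_workers=0 (as in the MVP_RG issue above) or spawn-started workers, which avoid re-using fork-copied native library state. A sketch with placeholder loader arguments:

from torch.utils.data import DataLoader

# Sketch under assumptions: native libraries such as Open3D can misbehave in
# fork-started workers; 'spawn' gives each worker a fresh interpreter state.
test_dataloader = DataLoader(
    test_dataset,                     # placeholder: dataset built in eval_3dmatch.py
    batch_size=1,
    shuffle=False,
    num_workers=4,
    multiprocessing_context='spawn',  # start workers with spawn instead of fork
    collate_fn=collate_fn,            # placeholder collate function
)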

How to understand the meaning of this comparison in the paper?

Thanks for your outstanding work!
I have a question about the comparison with PREDATOR as mentioned in your paper.

However, different from PREDATOR (Huang et al., 2021) that focuses on the point sampling(i.e overlap and saliency scores), NgeNet pay more attention to the encoding of point features.

As far as I understand, the overlap score is about location information, while the saliency scores in PREDATOR seem to have the same meaning as the point features you mention, so I am not sure whether another meaning is intended.
Best regards.
