
p2b's People

Contributors

haozheqi, iliiliiliili

p2b's Issues

Some questions about point cloud calibration

def getPCandBBfromPandas(self, box, calib):
    center = [box["x"], box["y"] - box["height"] / 2, box["z"]]
    size = [box["width"], box["length"], box["height"]]
    orientation = Quaternion(
        axis=[0, 1, 0], radians=box["rotation_y"]) * Quaternion(
        axis=[1, 0, 0], radians=np.pi / 2)
This function converts point cloud coordinates into camera coordinates; the orientation is used to move the cropped point cloud to the origin of the camera coordinate system and rotate it into a canonical direction. I think orientation = Quaternion(axis=[0, 1, 0], radians=box["rotation_y"]) alone would be enough, so why do you additionally multiply by Quaternion(axis=[1, 0, 0], radians=np.pi / 2)?
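For reference, here is a minimal sketch (not taken from the repository) of how the two rotations compose under pyquaternion, which appears in the project's dependency list; the yaw value and the test point are made up for illustration.

import numpy as np
from pyquaternion import Quaternion

rotation_y = 0.3  # illustrative yaw angle standing in for box["rotation_y"]
yaw_only = Quaternion(axis=[0, 1, 0], radians=rotation_y)
composed = yaw_only * Quaternion(axis=[1, 0, 0], radians=np.pi / 2)

p = np.array([1.0, 0.0, 0.0])  # an arbitrary point to rotate
print(yaw_only.rotate(p))   # yaw about the camera y-axis only
print(composed.rotate(p))   # the extra quarter turn about x is applied first, then the yaw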

training time and machines

Hello, thanks for your good work.
I noticed that you mentioned the running speed on a single NVIDIA 1080Ti GPU. Can you share the training time and the machines that were used?

Question about losses

Hi @HaozheQi, thanks for your great work! I have a few questions after reading your code carefully.

  1. Why did you randomly sample 128 points through FPS when selecting seeds, but then use the first 128 points of the input point cloud as ground truth when computing the loss? I know this question already appears in issue #1, but I am still confused about it (see the sketch after this list for the alignment I expected).

  2. How do you define the ground truth of the heading angle (the rotation in the x-y plane)? It cannot be found clearly in your code but is mentioned in your paper.
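As a sketch of the alignment the first question assumes (illustrative only, with made-up tensor shapes and names, not the repository's actual pipeline): the seed labels would be gathered at the indices chosen by furthest point sampling instead of slicing label[:, 0:128].

import torch

def gather_seed_labels(label, seed_idx):
    # label: (B, N) per-point targetness mask; seed_idx: (B, 128) indices
    # returned by furthest point sampling. Gathers the labels of the sampled
    # seeds so that the labels stay aligned with the seeds themselves.
    return torch.gather(label, 1, seed_idx.long())

label = torch.randint(0, 2, (2, 1024)).float()
seed_idx = torch.randint(0, 1024, (2, 128))
seed_label = gather_seed_labels(label, seed_idx)  # shape (2, 128)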

Looking forward to your reply!

python setup.py build_ext --inplace

Hi, thanks for your great work!
When I execute this command, I get the following error:
RuntimeError: Error compiling objects for extension

Cuda 11.1
Pytorch 1.8.0
RTX3090

Error when installing

ERROR: Command errored out with exit status 128: git clone -q git://github.com/erikwijmans/etw_pytorch_utils.git /tmp/pip-install-5c0vh27l/etw-pytorch-utils_ee2fcee72ccb48a3b07621aa7802776e Check the logs for full command output.

Some questions about the figures

Hello, thanks for your good work.
The figures in the paper show the effect of P2B very well (e.g., Fig. 7, 9, and 10). What software do you use to visualize these point clouds and boxes (mayavi? meshlab?)

Training problem: FloatingType error

When I run the code, I meet a problem:
RuntimeError: Expected isFloatingType(grads[i].scalar_type()) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.) (validate_outputs at ..\torch\csrc\autograd\engine.cpp:476)
(no backtrace available)
I don't know how to solve it. I would appreciate it if you could help me.

seed targetness label

label = label[0:128]

Hello, thanks for your good work. I have a question about this line in your code:

Why can we just take the first 128 entries of the target mask as the seed targetness label?

build_ext

pointnet2/_ext-src/include/utils.h:23:5: error: ‘AT_CHECK’ was not declared in this scope
AT_CHECK(x.scalar_type() == at::ScalarType::Float, \

pointnet2/_ext-src/src/sampling.cpp:82:5: error: ‘AT_CHECK’ was not declared in this scope
AT_CHECK(false, "CPU not supported");

error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
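A possible workaround, offered as an assumption rather than a confirmed fix: AT_CHECK was removed from recent PyTorch releases in favor of TORCH_CHECK, so substituting the macro in the extension sources before rebuilding often resolves this error. A quick in-place substitution (back up the sources first):

import pathlib

# Replace the removed AT_CHECK macro with TORCH_CHECK in the extension sources.
for path in pathlib.Path("pointnet2/_ext-src").rglob("*"):
    if path.suffix in {".cpp", ".cu", ".h"}:
        text = path.read_text()
        path.write_text(text.replace("AT_CHECK", "TORCH_CHECK"))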

Expected isFloatingType(grads[i].type().scalarType())

Dear @HaozheQi ,

Thanks for your excellent work! I am trying to reproduce the results with the code you provided, but I got this error:

Traceback (most recent call last):
File "/home/gjt/.pycharm_helpers/pydev/pydevd.py", line 1668, in
main()
File "/home/gjt/.pycharm_helpers/pydev/pydevd.py", line 1662, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/gjt/.pycharm_helpers/pydev/pydevd.py", line 1072, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/gjt/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/gjt/P2B/train_tracking.py", line 165, in
loss.backward()
File "/home/gjt/anaconda3/envs/P2B/lib/python3.6/site-packages/torch/tensor.py", line 166, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/gjt/anaconda3/envs/P2B/lib/python3.6/site-packages/torch/autograd/init.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag

RuntimeError: Expected isFloatingType(grads[i].type().scalarType()) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)

My environment is:
python 3.6.9
pytorch 1.3.1
torchvision 0.4.2
cudatoolkit 10.0.30
cudnn 7.6.5
h5py 2.10.0
numpy 1.17.4
pprint 0.1
enum34 1.1.10
future 0.18.2
pandas 0.25.3
shapely 1.7b1
matplotlib 3.1.2
pomegranate 0.13.3
ipykernel 5.1.3.0
jupyter 1.0.0
imageio 2.6.1
pyquaternion 0.9.5

Do you know what's wrong with it? Looking forward to hearing from you. Thanks for your excellent work again!
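A hedged debugging sketch (the variable names are hypothetical, not from the repository): this error usually means some tensor reaching loss.backward() has a non-floating dtype, so checking the dtype of each loss term can help locate which one needs an explicit .float() cast.

import torch

def check_loss_terms(terms):
    # terms: dict mapping an arbitrary name to a loss tensor.
    for name, term in terms.items():
        print(name, term.dtype, term.requires_grad)
        if not torch.is_floating_point(term):
            print("  ->", name, "is not floating point; try casting it with .float()")

# Dummy example reproducing the symptom: an integer-typed term in the total loss.
good = torch.randn(1, requires_grad=True).sum()
bad = torch.tensor(3)  # integer dtype
check_loss_terms({"good": good, "bad": bad})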

Some questions about your netR_36.model

Hello, thanks for your work.
I want to know whether the model (netR_36.model) was trained on two GTX 1080Ti GPUs. If so, can it still work when I test it on a machine with a single GTX 1080Ti? When I test it, I get an error in my terminal.
I modified the following lines in test_tracking.py for testing:
parser.add_argument('--ngpu', type=int, default=1, help='# GPUs')
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
The error is:
Traceback (most recent call last):
File "test_tracking.py", line 177, in
netR.load_state_dict(torch.load(os.path.join(args.save_root_dir, args.model)))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 845, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Pointnet_Tracking:

I would appreciate it if you could reply. Thank you!

distanceBB_Gaussian

Hello, I have a doubt about the implementation of distanceBB_Gaussian
def distanceBB_Gaussian(box1, box2, sigma=1):
    off1 = np.array([
        box1.center[0], box1.center[2],
        Quaternion(matrix=box1.rotation_matrix).degrees
    ])
    off2 = np.array([
        box2.center[0], box2.center[2],
        Quaternion(matrix=box2.rotation_matrix).degrees
    ])
    dist = np.linalg.norm(off1 - off2)
    score = np.exp(-0.5 * dist / (sigma * sigma))
    return score
Why do you use box1.center[0] and box1.center[2], that is, x and z?
Why not box1.center[0] and box1.center[1], that is, x and y?
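A small usage sketch, assuming the distanceBB_Gaussian definition quoted above is in scope; SimpleBox is only a stand-in for the dataset's Box class and exposes the two attributes the function reads. In the KITTI camera frame y points downward, so center[0] and center[2] span the ground plane, which is presumably why the vertical component is left out.

import numpy as np
from pyquaternion import Quaternion

class SimpleBox:
    # Minimal stand-in exposing only what distanceBB_Gaussian needs.
    def __init__(self, center, yaw_deg):
        self.center = np.array(center, dtype=float)
        self.rotation_matrix = Quaternion(axis=[0, 1, 0], degrees=yaw_deg).rotation_matrix

box_a = SimpleBox([0.0, 1.7, 10.0], yaw_deg=0.0)
box_b = SimpleBox([0.5, 1.7, 10.5], yaw_deg=2.0)
print(distanceBB_Gaussian(box_a, box_b))  # score decays toward 0 as x, z, or yaw differences grow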

[RuntimeError] train_tracking.py

Hello @HaozheQi, thanks for your great work!
I'm running your code out of my own interest. I successfully used the model you provided (netR_36.pth) to reproduce the results of the paper. Then I tried to train the model by myself, but I ran into some problems when running train_tracking.py. The problems are as follows:

/home/zhuhu/anaconda3/envs/P2B/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:122: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
======>>>>> Online epoch: #0, lr=0.001000 <<<<<======
0%| | 0/21 [00:02<?, ?it/s]
Traceback (most recent call last):
File "train_tracking.py", line 181, in
loss.backward()
File "/home/zhuhu/anaconda3/envs/P2B/lib/python3.6/site-packages/torch/tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/zhuhu/anaconda3/envs/P2B/lib/python3.6/site-packages/torch/autograd/init.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Expected isFloatingType(grads[i].type().scalarType()) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)

I'm not sure how to solve this problem; I hope to get your reply!

testing problem

Hi! When I run test_tracking.py, the following error occurred:

Namespace(IoU_Space=3, category_name='Car', data_dir='./data/training', model='netR_36.pth', model_fusion='pointcloud', ngpu=1, reference_BB='previous_result', save_root_dir='./model/car_model/', shape_aggregation='firstandprevious')
Traceback (most recent call last):
File "/home/wangkai/P2B/test_tracking.py", line 176, in
netR.load_state_dict(torch.load(os.path.join(args.save_root_dir, args.model)))
File "/home/wangkai/anaconda3/envs/py1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 777, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Pointnet_Tracking:
Missing key(s) in state_dict: "backbone_net.SA_modules.0.mlps.0.layer0.conv.weight", "backbone_net.SA_modules.0.mlps.0.layer0.normlayer.bn.weight", "backbone_net.SA_modules.0.mlps.0.layer0.normlayer.bn.bias", "backbone_net.SA_modules.0.mlps.0.layer0.normlayer.bn.running_mean", "backbone_net.SA_modules.0.mlps.0.layer0.normlayer.bn.running_var", "backbone_net.SA_modules.0.mlps.0.layer1.conv.weight", "backbone_net.SA_modules.0.mlps.0.layer1.normlayer.bn.weight", "backbone_net.SA_modules.0.mlps.0.layer1.normlayer.bn.bias", "backbone_net.SA_modules.0.mlps.0.layer1.normlayer.bn.running_mean", "backbone_net.SA_modules.0.mlps.0.layer1.normlayer.bn.running_var", "backbone_net.SA_modules.0.mlps.0.layer2.conv.weight", "backbone_net.SA_modules.0.mlps.0.layer2.normlayer.bn.weight", "backbone_net.SA_modules.0.mlps.0.layer2.normlayer.bn.bias", "backbone_net.SA_modules.0.mlps.0.layer2.normlayer.bn.running_mean", "backbone_net.SA_modules.0.mlps.0.layer2.normlayer.bn.running_var", "backbone_net.SA_modules.1.mlps.0.layer0.conv.weight", "backbone_net.SA_modules.1.mlps.0.layer0.normlayer.bn.weight", "backbone_net.SA_modules.1.mlps.0.layer0.normlayer.bn.bias", "backbone_net.SA_modules.1.mlps.0.layer0.normlayer.bn.running_mean", "backbone_net.SA_modules.1.mlps.0.layer0.normlayer.bn.running_var", "backbone_net.SA_modules.1.mlps.0.layer1.conv.weight", "backbone_net.SA_modules.1.mlps.0.layer1.normlayer.bn.weight", "backbone_net.SA_modules.1.mlps.0.layer1.normlayer.bn.bias", "backbone_net.SA_modules.1.mlps.0.layer1.normlayer.bn.running_mean", "backbone_net.SA_modules.1.mlps.0.layer1.normlayer.bn.running_var", "backbone_net.SA_modules.1.mlps.0.layer2.conv.weight", "backbone_net.SA_modules.1.mlps.0.layer2.normlayer.bn.weight", "backbone_net.SA_modules.1.mlps.0.layer2.normlayer.bn.bias", "backbone_net.SA_modules.1.mlps.0.layer2.normlayer.bn.running_mean", "backbone_net.SA_modules.1.mlps.0.layer2.normlayer.bn.running_var", "backbone_net.SA_modules.2.mlps.0.layer0.conv.weight", "backbone_net.SA_modules.2.mlps.0.layer0.normlayer.bn.weight", "backbone_net.SA_modules.2.mlps.0.layer0.normlayer.bn.bias", "backbone_net.SA_modules.2.mlps.0.layer0.normlayer.bn.running_mean", "backbone_net.SA_modules.2.mlps.0.layer0.normlayer.bn.running_var", "backbone_net.SA_modules.2.mlps.0.layer1.conv.weight", "backbone_net.SA_modules.2.mlps.0.layer1.normlayer.bn.weight", "backbone_net.SA_modules.2.mlps.0.layer1.normlayer.bn.bias", "backbone_net.SA_modules.2.mlps.0.layer1.normlayer.bn.running_mean", "backbone_net.SA_modules.2.mlps.0.layer1.normlayer.bn.running_var", "backbone_net.SA_modules.2.mlps.0.layer2.conv.weight", "backbone_net.SA_modules.2.mlps.0.layer2.normlayer.bn.weight", "backbone_net.SA_modules.2.mlps.0.layer2.normlayer.bn.bias", "backbone_net.SA_modules.2.mlps.0.layer2.normlayer.bn.running_mean", "backbone_net.SA_modules.2.mlps.0.layer2.normlayer.bn.running_var", "backbone_net.cov_final.weight", "backbone_net.cov_final.bias", "mlp.layer0.conv.weight", "mlp.layer0.normlayer.bn.weight", "mlp.layer0.normlayer.bn.bias", "mlp.layer0.normlayer.bn.running_mean", "mlp.layer0.normlayer.bn.running_var", "mlp.layer1.conv.weight", "mlp.layer1.normlayer.bn.weight", "mlp.layer1.normlayer.bn.bias", "mlp.layer1.normlayer.bn.running_mean", "mlp.layer1.normlayer.bn.running_var", "mlp.layer2.conv.weight", "mlp.layer2.normlayer.bn.weight", "mlp.layer2.normlayer.bn.bias", "mlp.layer2.normlayer.bn.running_mean", "mlp.layer2.normlayer.bn.running_var", "FC_layer_cla.0.conv.weight", "FC_layer_cla.0.normlayer.bn.weight", "FC_layer_cla.0.normlayer.bn.bias", 
"FC_layer_cla.0.normlayer.bn.running_mean", "FC_layer_cla.0.normlayer.bn.running_var", "FC_layer_cla.1.conv.weight", "FC_layer_cla.1.normlayer.bn.weight", "FC_layer_cla.1.normlayer.bn.bias", "FC_layer_cla.1.normlayer.bn.running_mean", "FC_layer_cla.1.normlayer.bn.running_var", "FC_layer_cla.2.conv.weight", "FC_layer_cla.2.conv.bias", "fea_layer.0.conv.weight", "fea_layer.0.normlayer.bn.weight", "fea_layer.0.normlayer.bn.bias", "fea_layer.0.normlayer.bn.running_mean", "fea_layer.0.normlayer.bn.running_var", "fea_layer.1.conv.weight", "fea_layer.1.conv.bias", "vote_layer.0.conv.weight", "vote_layer.0.normlayer.bn.weight", "vote_layer.0.normlayer.bn.bias", "vote_layer.0.normlayer.bn.running_mean", "vote_layer.0.normlayer.bn.running_var", "vote_layer.1.conv.weight", "vote_layer.1.normlayer.bn.weight", "vote_layer.1.normlayer.bn.bias", "vote_layer.1.normlayer.bn.running_mean", "vote_layer.1.normlayer.bn.running_var", "vote_layer.2.conv.weight", "vote_layer.2.conv.bias", "vote_aggregation.mlps.0.layer0.conv.weight", "vote_aggregation.mlps.0.layer0.normlayer.bn.weight", "vote_aggregation.mlps.0.layer0.normlayer.bn.bias", "vote_aggregation.mlps.0.layer0.normlayer.bn.running_mean", "vote_aggregation.mlps.0.layer0.normlayer.bn.running_var", "vote_aggregation.mlps.0.layer1.conv.weight", "vote_aggregation.mlps.0.layer1.normlayer.bn.weight", "vote_aggregation.mlps.0.layer1.normlayer.bn.bias", "vote_aggregation.mlps.0.layer1.normlayer.bn.running_mean", "vote_aggregation.mlps.0.layer1.normlayer.bn.running_var", "vote_aggregation.mlps.0.layer2.conv.weight", "vote_aggregation.mlps.0.layer2.normlayer.bn.weight", "vote_aggregation.mlps.0.layer2.normlayer.bn.bias", "vote_aggregation.mlps.0.layer2.normlayer.bn.running_mean", "vote_aggregation.mlps.0.layer2.normlayer.bn.running_var", "FC_proposal.0.conv.weight", "FC_proposal.0.normlayer.bn.weight", "FC_proposal.0.normlayer.bn.bias", "FC_proposal.0.normlayer.bn.running_mean", "FC_proposal.0.normlayer.bn.running_var", "FC_proposal.1.conv.weight", "FC_proposal.1.normlayer.bn.weight", "FC_proposal.1.normlayer.bn.bias", "FC_proposal.1.normlayer.bn.running_mean", "FC_proposal.1.normlayer.bn.running_var", "FC_proposal.2.conv.weight", "FC_proposal.2.conv.bias".

Unexpected key(s) in state_dict: "module.backbone_net.SA_modules.0.mlps.0.layer0.conv.weight", "module.backbone_net.SA_modules.0.mlps.0.layer0.normlayer.bn.weight", "module.backbone_net.SA_modules.0.mlps.0.layer0.normlayer.bn.bias", "module.backbone_net.SA_modules.0.mlps.0.layer0.normlayer.bn.running_mean", "module.backbone_net.SA_modules.0.mlps.0.layer0.normlayer.bn.running_var", "module.backbone_net.SA_modules.0.mlps.0.layer0.normlayer.bn.num_batches_tracked", "module.backbone_net.SA_modules.0.mlps.0.layer1.conv.weight", "module.backbone_net.SA_modules.0.mlps.0.layer1.normlayer.bn.weight", "module.backbone_net.SA_modules.0.mlps.0.layer1.normlayer.bn.bias", "module.backbone_net.SA_modules.0.mlps.0.layer1.normlayer.bn.running_mean", "module.backbone_net.SA_modules.0.mlps.0.layer1.normlayer.bn.running_var", "module.backbone_net.SA_modules.0.mlps.0.layer1.normlayer.bn.num_batches_tracked", "module.backbone_net.SA_modules.0.mlps.0.layer2.conv.weight", "module.backbone_net.SA_modules.0.mlps.0.layer2.normlayer.bn.weight", "module.backbone_net.SA_modules.0.mlps.0.layer2.normlayer.bn.bias", "module.backbone_net.SA_modules.0.mlps.0.layer2.normlayer.bn.running_mean", "module.backbone_net.SA_modules.0.mlps.0.layer2.normlayer.bn.running_var", "module.backbone_net.SA_modules.0.mlps.0.layer2.normlayer.bn.num_batches_tracked", "module.backbone_net.SA_modules.1.mlps.0.layer0.conv.weight", "module.backbone_net.SA_modules.1.mlps.0.layer0.normlayer.bn.weight", "module.backbone_net.SA_modules.1.mlps.0.layer0.normlayer.bn.bias", "module.backbone_net.SA_modules.1.mlps.0.layer0.normlayer.bn.running_mean", "module.backbone_net.SA_modules.1.mlps.0.layer0.normlayer.bn.running_var", "module.backbone_net.SA_modules.1.mlps.0.layer0.normlayer.bn.num_batches_tracked", "module.backbone_net.SA_modules.1.mlps.0.layer1.conv.weight", "module.backbone_net.SA_modules.1.mlps.0.layer1.normlayer.bn.weight", "module.backbone_net.SA_modules.1.mlps.0.layer1.normlayer.bn.bias", "module.backbone_net.SA_modules.1.mlps.0.layer1.normlayer.bn.running_mean", "module.backbone_net.SA_modules.1.mlps.0.layer1.normlayer.bn.running_var", "module.backbone_net.SA_modules.1.mlps.0.layer1.normlayer.bn.num_batches_tracked", "module.backbone_net.SA_modules.1.mlps.0.layer2.conv.weight", "module.backbone_net.SA_modules.1.mlps.0.layer2.normlayer.bn.weight", "module.backbone_net.SA_modules.1.mlps.0.layer2.normlayer.bn.bias", "module.backbone_net.SA_modules.1.mlps.0.layer2.normlayer.bn.running_mean", "module.backbone_net.SA_modules.1.mlps.0.layer2.normlayer.bn.running_var", "module.backbone_net.SA_modules.1.mlps.0.layer2.normlayer.bn.num_batches_tracked", "module.backbone_net.SA_modules.2.mlps.0.layer0.conv.weight", "module.backbone_net.SA_modules.2.mlps.0.layer0.normlayer.bn.weight", "module.backbone_net.SA_modules.2.mlps.0.layer0.normlayer.bn.bias", "module.backbone_net.SA_modules.2.mlps.0.layer0.normlayer.bn.running_mean", "module.backbone_net.SA_modules.2.mlps.0.layer0.normlayer.bn.running_var", "module.backbone_net.SA_modules.2.mlps.0.layer0.normlayer.bn.num_batches_tracked", "module.backbone_net.SA_modules.2.mlps.0.layer1.conv.weight", "module.backbone_net.SA_modules.2.mlps.0.layer1.normlayer.bn.weight", "module.backbone_net.SA_modules.2.mlps.0.layer1.normlayer.bn.bias", "module.backbone_net.SA_modules.2.mlps.0.layer1.normlayer.bn.running_mean", "module.backbone_net.SA_modules.2.mlps.0.layer1.normlayer.bn.running_var", "module.backbone_net.SA_modules.2.mlps.0.layer1.normlayer.bn.num_batches_tracked", 
"module.backbone_net.SA_modules.2.mlps.0.layer2.conv.weight", "module.backbone_net.SA_modules.2.mlps.0.layer2.normlayer.bn.weight", "module.backbone_net.SA_modules.2.mlps.0.layer2.normlayer.bn.bias", "module.backbone_net.SA_modules.2.mlps.0.layer2.normlayer.bn.running_mean", "module.backbone_net.SA_modules.2.mlps.0.layer2.normlayer.bn.running_var", "module.backbone_net.SA_modules.2.mlps.0.layer2.normlayer.bn.num_batches_tracked", "module.backbone_net.cov_final.weight", "module.backbone_net.cov_final.bias", "module.mlp.layer0.conv.weight", "module.mlp.layer0.normlayer.bn.weight", "module.mlp.layer0.normlayer.bn.bias", "module.mlp.layer0.normlayer.bn.running_mean", "module.mlp.layer0.normlayer.bn.running_var", "module.mlp.layer0.normlayer.bn.num_batches_tracked", "module.mlp.layer1.conv.weight", "module.mlp.layer1.normlayer.bn.weight", "module.mlp.layer1.normlayer.bn.bias", "module.mlp.layer1.normlayer.bn.running_mean", "module.mlp.layer1.normlayer.bn.running_var", "module.mlp.layer1.normlayer.bn.num_batches_tracked", "module.mlp.layer2.conv.weight", "module.mlp.layer2.normlayer.bn.weight", "module.mlp.layer2.normlayer.bn.bias", "module.mlp.layer2.normlayer.bn.running_mean", "module.mlp.layer2.normlayer.bn.running_var", "module.mlp.layer2.normlayer.bn.num_batches_tracked", "module.FC_layer_cla.0.conv.weight", "module.FC_layer_cla.0.normlayer.bn.weight", "module.FC_layer_cla.0.normlayer.bn.bias", "module.FC_layer_cla.0.normlayer.bn.running_mean", "module.FC_layer_cla.0.normlayer.bn.running_var", "module.FC_layer_cla.0.normlayer.bn.num_batches_tracked", "module.FC_layer_cla.1.conv.weight", "module.FC_layer_cla.1.normlayer.bn.weight", "module.FC_layer_cla.1.normlayer.bn.bias", "module.FC_layer_cla.1.normlayer.bn.running_mean", "module.FC_layer_cla.1.normlayer.bn.running_var", "module.FC_layer_cla.1.normlayer.bn.num_batches_tracked", "module.FC_layer_cla.2.conv.weight", "module.FC_layer_cla.2.conv.bias", "module.fea_layer.0.conv.weight", "module.fea_layer.0.normlayer.bn.weight", "module.fea_layer.0.normlayer.bn.bias", "module.fea_layer.0.normlayer.bn.running_mean", "module.fea_layer.0.normlayer.bn.running_var", "module.fea_layer.0.normlayer.bn.num_batches_tracked", "module.fea_layer.1.conv.weight", "module.fea_layer.1.conv.bias", "module.vote_layer.0.conv.weight", "module.vote_layer.0.normlayer.bn.weight", "module.vote_layer.0.normlayer.bn.bias", "module.vote_layer.0.normlayer.bn.running_mean", "module.vote_layer.0.normlayer.bn.running_var", "module.vote_layer.0.normlayer.bn.num_batches_tracked", "module.vote_layer.1.conv.weight", "module.vote_layer.1.normlayer.bn.weight", "module.vote_layer.1.normlayer.bn.bias", "module.vote_layer.1.normlayer.bn.running_mean", "module.vote_layer.1.normlayer.bn.running_var", "module.vote_layer.1.normlayer.bn.num_batches_tracked", "module.vote_layer.2.conv.weight", "module.vote_layer.2.conv.bias", "module.vote_aggregation.mlps.0.layer0.conv.weight", "module.vote_aggregation.mlps.0.layer0.normlayer.bn.weight", "module.vote_aggregation.mlps.0.layer0.normlayer.bn.bias", "module.vote_aggregation.mlps.0.layer0.normlayer.bn.running_mean", "module.vote_aggregation.mlps.0.layer0.normlayer.bn.running_var", "module.vote_aggregation.mlps.0.layer0.normlayer.bn.num_batches_tracked", "module.vote_aggregation.mlps.0.layer1.conv.weight", "module.vote_aggregation.mlps.0.layer1.normlayer.bn.weight", "module.vote_aggregation.mlps.0.layer1.normlayer.bn.bias", "module.vote_aggregation.mlps.0.layer1.normlayer.bn.running_mean", 
"module.vote_aggregation.mlps.0.layer1.normlayer.bn.running_var", "module.vote_aggregation.mlps.0.layer1.normlayer.bn.num_batches_tracked", "module.vote_aggregation.mlps.0.layer2.conv.weight", "module.vote_aggregation.mlps.0.layer2.normlayer.bn.weight", "module.vote_aggregation.mlps.0.layer2.normlayer.bn.bias", "module.vote_aggregation.mlps.0.layer2.normlayer.bn.running_mean", "module.vote_aggregation.mlps.0.layer2.normlayer.bn.running_var", "module.vote_aggregation.mlps.0.layer2.normlayer.bn.num_batches_tracked", "module.FC_proposal.0.conv.weight", "module.FC_proposal.0.normlayer.bn.weight", "module.FC_proposal.0.normlayer.bn.bias", "module.FC_proposal.0.normlayer.bn.running_mean", "module.FC_proposal.0.normlayer.bn.running_var", "module.FC_proposal.0.normlayer.bn.num_batches_tracked", "module.FC_proposal.1.conv.weight", "module.FC_proposal.1.normlayer.bn.weight", "module.FC_proposal.1.normlayer.bn.bias", "module.FC_proposal.1.normlayer.bn.running_mean", "module.FC_proposal.1.normlayer.bn.running_var", "module.FC_proposal.1.normlayer.bn.num_batches_tracked", "module.FC_proposal.2.conv.weight", "module.FC_proposal.2.conv.bias".

Process finished with exit code 1

I would appreciate it if you could reply. Thank you!
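A hedged workaround (also applicable to the similar loading error reported above), not an official fix: the module. prefixes in the unexpected keys suggest the checkpoint was saved from a torch.nn.DataParallel wrapper, so stripping the prefix before load_state_dict, or wrapping netR in DataParallel before loading, usually resolves the mismatch. The netR and args names below follow the test_tracking.py snippets quoted in these issues.

import os
import torch

state_dict = torch.load(os.path.join(args.save_root_dir, args.model), map_location="cuda:0")
# Drop the leading "module." that torch.nn.DataParallel prepends to every key.
state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}
netR.load_state_dict(state_dict)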

Bugs when building the _ext module

pointnet2/_ext-src/src/group_points.cpp:56:40: error: ‘AT_CHECK’ was not declared in this scope
     AT_CHECK(false, "CPU not supported");
                                        ^
error: command 'gcc' failed with exit status 1

etw_pytorch_utils not found

Hi, thanks for your work!

When I install the dependencies, etw_pytorch_utils is not found. Do you know how to install it?

About visualization code

Hello HaozheQi, thanks for your wonderful work.

You mentioned in this issue that you used mayavi to visualize the impressive figures in your paper.

Do you have any plan to open-source the visualization code? Or could you give us some clues about it, such as reference projects?

I want to visualize the inputs and outputs of P2B frame by frame, so I can learn the process in detail.

Some questions about the cls loss for ss (the vote score loss)

Hi Qi,
I have a question about how you get the label for the cls loss on the vote score:
Why is label[0:128] used as the labels in your function kitty_utils.py/regularizePCwithlabel(PC, label, reg, input_size, istrain=True)?

In your paper you say "Those search area seeds located on the surface of ground-truth target are regarded as positives, and the extra as negatives.", but in the code it is the first 128 labels.

Timing details for SC3D

Dear @HaozheQi,

I tried to contact you via email, but my email could not reach you (it was rejected by your server). Congratulations on your paper being accepted as an oral at CVPR'20; merging a 3D Siamese network with VoteNet is a nice idea.

I went through the details of your paper and wondered how you estimated the "Running speed" in section 4.5. You claim that "SC3D in default setting ran with 1.8 FPS on the same platform." However, the original SC3D paper states the following: "Our model takes on average 1.8ms to evaluate 147 candidates." (Section 5 on Timing). Can you enlighten me as to where the 1.8 FPS originates from? I am afraid it might be a typo.

Thank you,

Silvio Giancola
