Dear authors,
Thank you for making the code public. I am very interested in your work, and I have some questions about the code:
- I found that in PRNet's code, the transformations of the test data are fixed like this:
```python
if self.partition != 'train':
    np.random.seed(item)
```
But I could not find the equivalent in your code. Are the transformations of the test data fixed as well? (A sketch of what I mean is below.)
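For reference, here is a minimal sketch of what I understand PRNet's per-item seeding to do. This is my own paraphrase, not PRNet's actual code; the dataset fields and sampling ranges are illustrative:

```python
import numpy as np
from scipy.spatial.transform import Rotation
from torch.utils.data import Dataset

class RegistrationDataset(Dataset):
    """Illustrative sketch, not PRNet's actual code: seeding NumPy with
    the item index makes the sampled test transformation deterministic."""

    def __init__(self, data, partition='test', max_angle=45.0):
        self.data = data            # (N, num_points, 3) point clouds
        self.partition = partition
        self.max_angle = max_angle  # degrees

    def __len__(self):
        return len(self.data)

    def __getitem__(self, item):
        src = self.data[item]
        if self.partition != 'train':
            # Same item index -> same seed -> same sampled transformation
            # in every run, so all evaluations see identical test pairs.
            np.random.seed(item)
        euler = np.random.uniform(-self.max_angle, self.max_angle, size=3)
        t = np.random.uniform(-0.5, 0.5, size=3)
        R = Rotation.from_euler('zyx', euler, degrees=True).as_matrix()
        tgt = src @ R.T + t
        return src.astype('float32'), tgt.astype('float32'), R, t
```

Without this seeding, every evaluation run draws different transformations, so metrics would not be directly comparable across runs.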
- You said that you use a GNN as the feature extractor, but the code shows it is different from the GNN used in PRNet. In your code, the GNN looks like this:
```python
class GNN(nn.Module):
    def __init__(self, emb_dims=64):
        super(GNN, self).__init__()
        self.propogate1 = Propagate(3, 64)
        self.propogate2 = Propagate(64, 64)
        self.propogate3 = Propagate(64, 64)
        self.propogate4 = Propagate(64, 64)
        self.propogate5 = Propagate(64, emb_dims)

    def forward(self, x):
        nn_idx = knn(x, k=12)
        x = self.propogate1(x, nn_idx)
        x = self.propogate2(x, nn_idx)
        x = self.propogate3(x, nn_idx)
        x = self.propogate4(x, nn_idx)
        x = self.propogate5(x, nn_idx)
        return x
```
But this way, the neighbor indices are computed once from the input and never updated, and they are based on the Euclidean distance between point coordinates rather than on the L1 or L2 distance between features. Since the graph never follows the learned features, I think this is more like a variant of PointNet than a GNN.
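For comparison, here is a rough sketch of the dynamic-graph behavior I would expect (in the style of DGCNN, which PRNet builds on). This is only a sketch under my own assumptions, not your code: I do not know the real implementation of your `Propagate` and `knn`, so I include minimal placeholder versions with what I assume are matching interfaces:

```python
import torch
import torch.nn as nn

def knn(x, k):
    """Placeholder k-NN, assumed interface: x is (batch, dims, num_points),
    returns (batch, num_points, k) indices of nearest neighbors."""
    inner = -2 * torch.matmul(x.transpose(2, 1), x)
    xx = torch.sum(x ** 2, dim=1, keepdim=True)
    dist = -xx - inner - xx.transpose(2, 1)   # negative squared distances
    return dist.topk(k=k, dim=-1)[1]

class Propagate(nn.Module):
    """Placeholder with an assumed interface matching the repo's Propagate:
    gathers neighbor features, applies a shared MLP, max-pools over k."""
    def __init__(self, in_dims, out_dims):
        super(Propagate, self).__init__()
        self.conv = nn.Conv2d(in_dims, out_dims, kernel_size=1)

    def forward(self, x, nn_idx):
        batch, dims, n = x.shape
        idx = nn_idx.reshape(batch, -1)                    # (batch, n*k)
        neigh = torch.gather(
            x, 2, idx.unsqueeze(1).expand(-1, dims, -1))   # (batch, dims, n*k)
        neigh = neigh.view(batch, dims, n, -1)             # (batch, dims, n, k)
        return self.conv(neigh).max(dim=-1)[0]             # (batch, out, n)

class DynamicGNN(nn.Module):
    """Sketch only, not the authors' code: the k-NN graph is rebuilt from
    the current features before every layer, instead of being fixed once
    from the input coordinates."""
    def __init__(self, emb_dims=64, k=12):
        super(DynamicGNN, self).__init__()
        self.k = k
        self.layers = nn.ModuleList([
            Propagate(3, 64),
            Propagate(64, 64),
            Propagate(64, 64),
            Propagate(64, 64),
            Propagate(64, emb_dims),
        ])

    def forward(self, x):
        for layer in self.layers:
            # Recompute neighbors in the current feature space, so the
            # graph changes from layer to layer (the "dynamic" in DGCNN).
            nn_idx = knn(x, k=self.k)
            x = layer(x, nn_idx)
        return x
```

Since the indices here depend on the evolving features, the graph changes across layers, which is the usual sense in which such feature extractors are graph networks rather than per-point MLPs over a fixed neighborhood.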
I would really appreciate it if you could answer these questions.