cattaneod / pointnetvlad-pytorch
PyTorch implementation of PointNetVLAD
Hi, thanks for your PyTorch implementation of PointNetVLAD.
I'm a new learner. While reading the code, I'm confused by the NetVLAD layer:
PointNetVlad-Pytorch/models/PointNetVlad.py
Lines 45 to 81 in ff00ff0
From my understanding:
activation at line 59 represents the distribution of each point n over the clusters k (the soft-assignment weights).
a at line 62 represents the learned clusters.
What does vlad at line 68 mean? Where are the residuals, and their sum over the points n? Why not the following:
```python
def forward(self, x):
    x = x.transpose(1, 3).contiguous()
    x = x.view((-1, self.max_samples, self.feature_size))  # [B,N,C]
    activation = torch.matmul(x, self.cluster_weights)
    if self.add_batch_norm:
        # activation = activation.transpose(1, 2).contiguous()
        activation = activation.view(-1, self.cluster_size)
        activation = self.bn1(activation)
        activation = activation.view(-1, self.max_samples, self.cluster_size)
        # activation = activation.transpose(1, 2).contiguous()
    else:
        activation = activation + self.cluster_biases
    activation = self.softmax(activation)
    activation = activation.view((-1, self.max_samples, self.cluster_size))
    a_sum = activation.sum(-2, keepdim=True)
    a = a_sum * self.cluster_weights2  # [B,1,C,k] after broadcasting

    # ----------------- different -------------------
    N = x.shape[1]
    residual = x.unsqueeze(-1).repeat(1, 1, 1, self.cluster_size) - \
        a.unsqueeze(1).repeat(1, N, 1, 1)  # [B,N,C,k]
    vlad = activation.unsqueeze(2) * residual
    vlad = torch.sum(vlad, dim=1)  # [B,C,k]
    # ----------------- different -------------------

    # intra-normalization and L2 normalization
    vlad = F.normalize(vlad, dim=1, p=2)
    vlad = vlad.reshape((-1, self.cluster_size * self.feature_size))
    vlad = F.normalize(vlad, dim=1, p=2)
    # compress into a compact output
    vlad = torch.matmul(vlad, self.hidden1_weights)
    vlad = self.bn2(vlad)
    if self.gating:
        vlad = self.context_gating(vlad)
    return vlad
```
Is there anything wrong with my understanding?
Thanks in advance!
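For context on the question above: the repo's decomposed formulation and the explicit residual form proposed here are algebraically identical, since sum_n a_nk (x_n - c_k) = sum_n a_nk x_n - (sum_n a_nk) c_k. A minimal sketch checking this numerically (the names x, a, c are illustrative stand-ins, not the repo's attributes):

```python
import torch

B, N, C, K = 2, 5, 3, 4                          # batch, points, features, clusters
x = torch.randn(B, N, C)                         # point features
a = torch.softmax(torch.randn(B, N, K), dim=-1)  # soft assignments
c = torch.randn(C, K)                            # cluster centers

# Explicit residual form (the version proposed in the question):
residual = x.unsqueeze(-1) - c.unsqueeze(0).unsqueeze(0)  # [B,N,C,K]
vlad_explicit = (a.unsqueeze(2) * residual).sum(dim=1)    # [B,C,K]

# Decomposed form (the style used in the repo's NetVLAD layer):
a_sum = a.sum(dim=1)                                      # [B,K]
vlad_fast = torch.einsum('bnc,bnk->bck', x, a) - a_sum.unsqueeze(1) * c

print(torch.allclose(vlad_explicit, vlad_fast, atol=1e-5))  # True
```

The decomposed form avoids materializing the [B,N,C,K] residual tensor, which is why implementations usually prefer it.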
Hi, it is great work. I want to run a test — can you provide a pre-trained model? Thank you!
Thanks for your wonderful work, but I find that in the file "train_pointnetvlad.py" the global variable HARD_NEGATIVES = {} is never updated, which means it is useless. Is something wrong here?
Why is the batch size 44, and how can I make it smaller? If you could answer, I would greatly appreciate it.
Thanks for your great work. The recall@1% on Oxford, trained with the default config in your repository, is higher than that reported in the paper. Is this normal?
The import:
import config as cfg
is missing in the generate_training_tuples_baseline.py
script, causing an error when the dataset directory name is defined at line 10.
Hello, the default training file is oxford. How can I use the University section of the in-house dataset for training? Thank you very much for your reply!
Hi, I'm a PyTorch user and I'm really grateful that you open-sourced this great work. However, I have some questions about the code: what are the two global variables HARD_NEGATIVES and TRAINING_LATENT_VECTORS in train_pointnetvlad.py? I notice that before epoch 5 they aren't used, since they are both empty, but after epoch 5 latent vectors are generated via get_latent_vectors. So I wonder whether they are something that helps the model converge better: judging by the name HARD_NEGATIVES, I guess you select harder negative samples from the training set. In other words, in the early stage of training you select samples randomly, and after epoch 5 you select the most difficult samples so the model trains more efficiently. Am I correct?
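The strategy described in the question — after a warm-up period, rank candidate negatives by their distance to the query in latent space and keep the closest (hardest) ones — can be sketched as follows. All names here are illustrative, not taken from train_pointnetvlad.py:

```python
import torch

def mine_hard_negatives(query_vec, neg_vecs, num_hard):
    """Return indices of the `num_hard` negatives closest to the query.

    Smaller latent distance = harder negative, because the model
    currently confuses it with the query the most.
    """
    dists = torch.norm(neg_vecs - query_vec.unsqueeze(0), dim=1)
    return torch.argsort(dists)[:num_hard]

query = torch.tensor([0.0, 0.0])
negs = torch.tensor([[3.0, 0.0],   # index 0: farthest (easiest)
                     [1.0, 0.0],   # index 1: closest (hardest)
                     [2.0, 0.0]])  # index 2
print(mine_hard_negatives(query, negs, 2).tolist())  # [1, 2]
```

During the first epochs no latent vectors exist yet, so random negatives are the only option; once embeddings are cached, mining like this focuses training on the most confusable samples.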
In process_data, line 361:
ob[i] = torch.roll(ob[i], seed, dims=1)
raises:
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
Can you suggest why this error is occurring?
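This error occurs because `ob` is a leaf tensor with requires_grad=True, and assigning into `ob[i]` is an in-place operation on a view of that leaf, which autograd forbids. A minimal reproduction and one possible workaround, assuming the roll is a data augmentation step that does not need gradients (variable names are illustrative, not from the repo):

```python
import torch

# A leaf tensor that requires grad; ob[0, 0] starts as [0, 1, 2, 3].
ob = torch.arange(24.0).reshape(2, 3, 4).requires_grad_()
seed = 1

# This line reproduces the reported error:
# ob[0] = torch.roll(ob[0], seed, dims=1)
# RuntimeError: a view of a leaf Variable that requires grad
# is being used in an in-place operation.

# Workaround: perform the in-place roll under no_grad, since no
# gradient needs to flow through an input-shuffling step.
with torch.no_grad():
    for i in range(ob.shape[0]):
        ob[i] = torch.roll(ob[i], seed, dims=1)

print(ob[0, 0].tolist())  # [3.0, 0.0, 1.0, 2.0]
```

An alternative is to detach and clone the tensor first (`ob = ob.detach().clone()`) if gradients are not needed for `ob` at all.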