✧ Homepage: https://yanx27.github.io/
✧ Google Scholar: https://scholar.google.com.hk/citations?hl=zh-CN&user=TK4Ty0gAAAAJ
3D Graph Neural Networks for RGBD Semantic Segmentation
License: MIT License
My accuracy is slightly higher than the 0.45 reported in the paper, reaching above 0.7, and some of my result images look better than those shown in the paper.
I notice that there are 14 classes, and I'd like to know the colors corresponding to those classes, because I want to convert the output to an image to check whether it works. Or do you already have a solution for this that I somehow missed? Thanks a lot!
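A minimal sketch of the kind of visualization asked about here: mapping each of the 14 class indices to an RGB color. The palette below is an assumption for illustration (the repo's actual class-color mapping may differ), so replace it with the real one.

```python
# Colorize a predicted (H, W) label map for visual inspection.
# PALETTE is an assumed 14-entry color table, NOT the repo's official one.
import numpy as np

PALETTE = np.array([
    [0, 0, 0],                                           # 0: void (assumed)
    [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128],
    [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0],
    [192, 0, 0], [64, 128, 0], [192, 128, 0], [64, 0, 128],
    [192, 0, 128],
], dtype=np.uint8)  # 14 rows, one RGB triple per class

def colorize(label_map: np.ndarray) -> np.ndarray:
    """Map an (H, W) integer label image to an (H, W, 3) RGB image."""
    return PALETTE[label_map]

pred = np.random.randint(0, 14, size=(480, 640))
rgb = colorize(pred)  # (480, 640, 3) uint8, ready to save with PIL etc.
```

The fancy-indexing trick `PALETTE[label_map]` broadcasts the lookup over every pixel at once, so no Python loop is needed.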
Could you please give me the pretrained weight of this network?
I wonder if you have encountered this problem? I did as you suggested and then found that it limits the size of the object. The problem occurs in reduction.py, line 68, in dump.
I see this comment in the model code: "# do this for every sample in batch, not nice, but I don't know how to use index_select batchwise". Why does the GNN part need to compute neighbor features batch-wise? Thanks a lot.
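For what it's worth, the per-sample `index_select` loop that comment apologizes for can be replaced with a single `torch.gather` call. This is a sketch under the assumption that `h` is `(N, H*W, C)` and `knn` holds per-sample indices of shape `(N, H*W, K)`:

```python
# Batch-wise neighbor fetch with torch.gather, avoiding a per-sample loop.
# Shapes are assumptions: h is (N, HW, C), knn is (N, HW, K) with indices
# local to each sample (not flattened across the batch as in the repo).
import torch

N, HW, C, K = 2, 6, 4, 3
h = torch.randn(N, HW, C)
knn = torch.randint(0, HW, (N, HW, K))

# Expand indices to (N, HW*K, C) so gather picks out full feature rows.
idx = knn.reshape(N, HW * K, 1).expand(-1, -1, C)
neighbor_f = torch.gather(h, 1, idx).view(N, HW, K, C)

# Reference: the per-sample loop the original comment describes.
ref = torch.stack([h[b][knn[b]] for b in range(N)])
assert torch.equal(neighbor_f, ref)
```

`gather` runs on the GPU as one kernel, so this is also faster than looping over the batch in Python.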
In the code, why is the same batch of data used as both the training set and the validation set? I split the training and test sets according to the method in the paper, and the mIoU dropped a lot, to only about 30%. Can someone help me resolve my doubts?
Why are the 3D points obtained this way instead of using the camera parameters and the depth data?
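For reference, the standard alternative this question alludes to is pinhole back-projection: each pixel `(u, v)` with depth `d` maps to camera-space coordinates using the intrinsics `fx, fy, cx, cy`. A minimal sketch (the intrinsic values below are illustrative, not the dataset's actual calibration):

```python
# Back-project an (H, W) depth map to (H, W, 3) camera-space 3D points
# via the pinhole model: x = (u - cx) * d / fx, y = (v - cy) * d / fy, z = d.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: (H, W) array of metric depths -> (H, W, 3) points."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinate grids
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

depth = np.ones((480, 640), dtype=np.float32)
# Example intrinsics (assumed, roughly Kinect-like); use the real calibration.
pts = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

Whether this gives better neighbors than the repo's construction depends on how accurate the calibration is for the dataset in question.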
The link you provided for downloading the dataset is no longer valid; could you please upload it again? Thanks a lot!
Have you tested the 40-category setting? How are the hyperparameters set? When I tested with labels40, the mIoU was only about 0.25.
First of all, thanks for your code...
(See line 411 in commit b5e0188.) Can we use convolution operations (e.g. 1x1) to implement this part? I tried the following:
```python
self.g_rnn_conv = nn.Sequential(
    nn.Conv2d(2048 * self.k, 2048 * self.k, 1),
    nn.BatchNorm2d(2048 * self.k),
    nn.PReLU()
)
...
# loop over timestamps to unroll
for i in range(self.gnn_iternum):
    # fetch features from nearest neighbors for every sample in the batch
    h = h.view(N * (H * W), C)  # NHW C
    neighbor_f = torch.index_select(h, 0, knn).view(N, H, W, K * C)
    neighbor_f = neighbor_f.permute(0, 3, 1, 2)  # N KC H W
    neighbor_f = self.g_rnn_conv(neighbor_f)
    neighbor_f = neighbor_f.permute(0, 2, 3, 1).contiguous()  # N H W KC
    neighbor_f = neighbor_f.view(N, H * W, K, C)
    # aggregate messages in m; keep the original CNN features h for later
    m = torch.mean(neighbor_f, dim=2)
    h = h.view(N, (H * W), C)
    # concatenate current state with messages
    concat = torch.cat((h, m), 2)  # N HW 2C
    # get new features by running MLP q and the activation function
    h = self.q_rnn_actf(self.q_rnn_layer(concat))  # N HW C
```
And another try:
```python
self.g_rnn_conv = nn.Sequential(
    nn.Conv2d(2048 * self.k, 2048 * self.k, 1),
    nn.BatchNorm2d(2048 * self.k),
    nn.PReLU()
)
self.q_rnn_conv = nn.Sequential(
    nn.Conv2d(4096, 2048, 1),
    nn.BatchNorm2d(2048),
    nn.PReLU()
)
...
# get k nearest neighbors
knn = self.__get_knn_indices(proj_3d)  # N HW K
knn = knn.view(N * H * W * K).long()  # NHWK
# prepare CNN encoded features for RNN
h = cnn_encoder_output  # N C H W
# after permuting dimensions, contiguous() is generally required before view()
h = h.permute(0, 2, 3, 1).contiguous()  # N H W C
# loop over timestamps to unroll
for i in range(self.gnn_iternum):
    # fetch features from nearest neighbors for every sample in the batch
    h = h.view(N * (H * W), C)  # NHW C
    neighbor_f = torch.index_select(h, 0, knn).view(N, H, W, K * C)
    neighbor_f = neighbor_f.permute(0, 3, 1, 2)  # N KC H W
    neighbor_f = self.g_rnn_conv(neighbor_f)
    neighbor_f = neighbor_f.permute(0, 2, 3, 1).contiguous()  # N H W KC
    neighbor_f = neighbor_f.view(N, H * W, K, C)
    # aggregate messages in m; keep the original CNN features h for later
    m = torch.mean(neighbor_f, dim=2)
    h = h.view(N, (H * W), C)
    # concatenate current state with messages
    concat = torch.cat((h, m), 2).view(N, H, W, 2 * C)  # N H W 2C
    concat = concat.permute(0, 3, 1, 2)  # N 2C H W
    # get new features by running the 1x1 conv q and the activation function
    h = self.q_rnn_conv(concat)  # N C H W
    h = h.permute(0, 2, 3, 1).contiguous()  # N H W C
# format RNN activations back to image, concatenate original CNN embedding, return
h = h.view(N, H, W, C).permute(0, 3, 1, 2).contiguous()  # N C H W
output = self.output_conv(torch.cat((cnn_encoder_output, h), 1))  # N 2C H W
return output
```
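Replacing the MLP `q` with a 1x1 convolution, as attempted above, is mathematically sound: a 1x1 `Conv2d` applies the same linear map independently at every spatial position, so it is exactly an `nn.Linear` over the channel dimension. A minimal check of that equivalence (shapes and layer sizes here are illustrative, not the repo's):

```python
# Verify that a 1x1 Conv2d equals an nn.Linear applied per pixel.
import torch
import torch.nn as nn

C_in, C_out, N, H, W = 8, 4, 2, 5, 5
linear = nn.Linear(C_in, C_out)
conv = nn.Conv2d(C_in, C_out, kernel_size=1)

# Copy the linear weights into the conv kernel so both compute the same map.
with torch.no_grad():
    conv.weight.copy_(linear.weight.view(C_out, C_in, 1, 1))
    conv.bias.copy_(linear.bias)

x = torch.randn(N, C_in, H, W)
y_conv = conv(x)                                          # N C_out H W
y_lin = linear(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)  # same, via Linear
assert torch.allclose(y_conv, y_lin, atol=1e-5)
```

So the main practical difference between the two attempts is layout (and the added BatchNorm/PReLU), not expressiveness.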
Hello author! When I ran this program, the console printed:
0%| | 0/363 [00:00<?, ?it/s]
and then there was no further response. After setting a breakpoint, I found a dead loop caused by a timeout. Have you encountered this situation?
Hello, I have been learning about GNNs recently and want to reproduce your program, but when I run it I encounter this problem:
```
Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\conda\conda\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 501, in __iter__
    return _DataLoaderIter(self)
  File "C:\Users\Administrator\AppData\Local\conda\conda\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 289, in __init__
    w.start()
  File "C:\Users\Administrator\AppData\Local\conda\conda\envs\pytorch\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\Administrator\AppData\Local\conda\conda\envs\pytorch\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Administrator\AppData\Local\conda\conda\envs\pytorch\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\Administrator\AppData\Local\conda\conda\envs\pytorch\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Administrator\AppData\Local\conda\conda\envs\pytorch\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
OverflowError: cannot serialize a string larger than 4GiB
```
The environment configuration is: torch 0.4.1, CUDA 9.0, Windows 10, Python 3.6.1.
Is it a problem with the pickle module? Can you help me?
Thanks a lot!
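One plausible reading of that traceback: on Windows, `DataLoader` workers are started with `spawn`, which pickles the whole `Dataset` object for each worker, and pickle (protocol < 4 behavior here) cannot serialize objects over 4 GiB. If the dataset holds all samples in memory, a common workaround is `num_workers=0`, which keeps loading in the main process and skips pickling entirely. Sketch (with a stand-in dataset, not the repo's actual class):

```python
# Workaround sketch for "OverflowError: cannot serialize a string larger
# than 4GiB" on Windows: use num_workers=0 so no worker process needs to
# receive a pickled copy of the Dataset. DummyDataset is a hypothetical
# stand-in for the repo's dataset class.
import torch
from torch.utils.data import DataLoader, Dataset

class DummyDataset(Dataset):
    def __len__(self):
        return 8
    def __getitem__(self, i):
        return torch.zeros(3, 4, 4), 0  # (image, label) placeholder

loader = DataLoader(DummyDataset(), batch_size=2, num_workers=0)
batches = list(loader)  # loads in the main process; nothing is pickled
```

The alternative is to make the dataset load samples lazily from disk in `__getitem__` instead of preloading everything, so the object being pickled stays small.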
Hi, I could not find another way to contact you, so I opened this issue. I was about to write a PyTorch version of the GNN when I found this. In the code I found the class EnetGnn. Is this class a complete PyTorch version of the GNN at https://github.com/xjqicuhk/3DGNN? I would like to do something with GNNs, and I would appreciate it if your code could help.