valeoai / 3DGenZ
Public repository of the 3DV 2021 paper "Generative Zero-Shot Learning for Semantic Segmentation of 3D Point Clouds"
License: Other
Very nice work! Could you provide the code and pre-trained models for the S3DIS dataset? Thank you so much!
In Tables 10, 12, 13, and 14 of the 3DGenZ paper, there is a mIoU value for each class.
I'm curious how this is computed. Is it the IoU for each class accumulated over the whole dataset,
as 3DGenZ/genz3d/fkaconv/examples/scannet/eval.py does?
Or is it the average of the per-point-cloud IoUs for that class?
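To make the distinction concrete, here is a minimal sketch of the two candidate computations (this is an illustration of the two options asked about, not the repository's actual eval code):

```python
import numpy as np

def dataset_iou(preds, gts, cls):
    """IoU for one class, with intersection/union accumulated over all clouds."""
    inter = union = 0
    for p, g in zip(preds, gts):
        inter += np.sum((p == cls) & (g == cls))
        union += np.sum((p == cls) | (g == cls))
    return inter / union if union else float("nan")

def per_cloud_mean_iou(preds, gts, cls):
    """IoU computed separately per point cloud, then averaged."""
    ious = []
    for p, g in zip(preds, gts):
        inter = np.sum((p == cls) & (g == cls))
        union = np.sum((p == cls) | (g == cls))
        if union:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else float("nan")
```

The two differ whenever point clouds have different sizes or difficulty: the dataset-level version weights every point equally, while the per-cloud average weights every cloud equally.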
Hi Bjoern,
How do you split the S3DIS dataset for the evaluation in Table 10?
Are you using the data from PointNet? For example, here: https://github.com/charlesq34/pointnet/tree/master/sem_seg
When I tried to run train_point_sk.py to train the model, I got the following error:

Traceback (most recent call last):
  File "train_point_sk.py", line 9, in <module>
    from genz3d.kpconv.train_SemanticKitti import SemanticKittiConfig
ModuleNotFoundError: No module named 'genz3d.kpconv.train_SemanticKitti'

I found that there is no 'train_SemanticKitti' file in the kpconv folder. Could anyone help me solve this problem?
Hi,
In the paper, semantic segmentation is evaluated only in the generalized zero-shot (GZSL) setting. I wonder how to evaluate standard zero-shot segmentation performance, i.e., testing only the segmentation of novel classes on the test point clouds. One potential way I imagined: train the final classifier only on generated unseen-class features, so that it can only differentiate between the novel classes. However, the test point clouds contain both seen and unseen classes. How can the classifier then separate seen classes from unseen classes?
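One common way this is handled in the ZSL literature (an assumption about the protocol, not necessarily what this paper does) is to restrict the argmax at test time to the unseen classes only, so that seen points are simply misassigned and only unseen-class ground truth is scored. A minimal sketch, with a hypothetical seen/unseen class split:

```python
import numpy as np

# Hypothetical split: which class indices are seen vs. unseen.
SEEN = [0, 1, 2]
UNSEEN = [3, 4]

def zsl_predict(logits, unseen=UNSEEN):
    """Standard-ZSL prediction: mask out seen-class logits so every
    point is assigned to its best-scoring unseen class; metrics are
    then computed only on points whose ground truth is unseen."""
    masked = np.full_like(logits, -np.inf)
    masked[:, unseen] = logits[:, unseen]
    return masked.argmax(axis=1)
```

With this masking, the classifier never has to tell seen from unseen classes; the evaluation side-steps the problem by ignoring seen-class points.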
Thank you so much for making your code public! We are very interested in your work. Could you release the code for S3DIS and ModelNet? In addition, the SN_eval and KP-conv_eval files are missing.
Hi Bjoern,
I'm curious how you compute the Acc for multiple classes, such as in Tables 10 and 11.
Is it TP/(TP+FP) for each class? That would be true_positive_classes[i]/positive_classes[i] for class i, using your code's variable names.
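For reference, TP/(TP+FP) is usually called precision, while per-class accuracy in segmentation papers is often recall, TP/(TP+FN). A minimal sketch of both, assuming a confusion matrix whose rows are ground truth and columns are predictions (that layout is an assumption, not taken from the repo):

```python
import numpy as np

def per_class_stats(conf):
    """conf[i, j] = number of points of ground-truth class i predicted
    as class j (assumed layout). Returns per-class recall TP/(TP+FN)
    and precision TP/(TP+FP)."""
    tp = np.diag(conf).astype(float)
    fn = conf.sum(axis=1) - tp   # class-i points predicted as something else
    fp = conf.sum(axis=0) - tp   # other points wrongly predicted as class i
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return recall, precision
```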
Thank you for publishing the code of such an interesting work. I am new to ZSL/G-ZSL, so I am a little bit confused about the source of the word representations (Word2Vec/GloVe) of the SemanticKITTI dataset that you provide in this repo. Did you manually build up a corpus for the SemanticKITTI dataset and run Word2Vec/GloVe to get these word representations, or are they publicly available?
Thank you so much!!
Best regards,
RUOYU GENG
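For context on the question above: pretrained GloVe vectors are distributed as plain text files (one word and its vector per line), and class-name embeddings are typically just looked up in such a file. A minimal sketch using a toy inline snippet in that format (the words and values here are invented for illustration; in practice one would load a public file such as glove.6B.300d.txt):

```python
import io
import numpy as np

# Toy GloVe-format snippet: "<word> <v1> <v2> ... <vd>" per line.
GLOVE_TXT = """car 0.1 0.2 0.3
road 0.4 0.5 0.6
building 0.7 0.8 0.9
"""

def load_glove(fh):
    """Parse GloVe-format text into a {word: vector} dict."""
    vecs = {}
    for line in fh:
        word, *vals = line.split()
        vecs[word] = np.array(vals, dtype=np.float32)
    return vecs

vecs = load_glove(io.StringIO(GLOVE_TXT))
class_embedding = vecs["car"]  # multi-word class names are often averaged
```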
Please, where is the code for classification? Could you provide the code for classification on ModelNet40? Thank you very much.
I see that the GMMN generator takes quite a long time to train for one epoch. Does that make sense?
If so, could you please recommend values for the batch-size and number-of-epochs hyper-parameters?
Thank you for your work.
I installed all the packages as you guided.
When I try to run 'debug_sn_weight50.sh', there is an error about loading 'nearest_neighbors'. Could you please help me see what is wrong?

Traceback (most recent call last):
  File "/media/SSD/liyang/projects/3DGenZ/3DGenZ/genz3d/seg/train_point_sn.py", line 10, in <module>
    from genz3d.fkaconv.examples.scannet.train import get_data
  File "/media/SSD/liyang/projects/3DGenZ/3DGenZ/genz3d/fkaconv/examples/scannet/train.py", line 23, in <module>
    import genz3d.convpoint.convpoint.knn.lib.python.nearest_neighbors as nearest_neighbors
ModuleNotFoundError: No module named 'genz3d.convpoint.convpoint.knn.lib.python.nearest_neighbors'
I'm sorry, but at this time I can't see how your approach differs from DeViSe-3DSeg, mentioned as an adaptation of DeViSe-Seg to 3D point clouds. Your framework and theirs seem similar. Could you please point out the differences?
Thanks for your great work. I have a question: could your approach be applied to a vanilla semantic segmentation task if the dataset were available? I see that in the paper you say "we consider semantic segmentation only in the GZSL setting".
Thank you for your work! Could you provide the code for backbone training under your ZSL setting? The code released here only supports testing with the downloaded pre-trained backbone models (such as FKAConv).