hansen7 / OcCo
[ICCV '21] "Unsupervised Point Cloud Pre-training via Occlusion Completion"
Home Page: https://hansen7.github.io/OcCo/
License: MIT License
Hi Hanchen,
Thanks for sharing your work!
I have two questions, and it would be great if you could help me figure them out!
Why do you generate incomplete point clouds by masking points occluded by a camera view? What's the difference if you just use a usual incomplete point cloud?
What's the difference between this work and PCN? I noticed you directly use the PCN network architecture as your model, and in my understanding you essentially apply PCN to some downstream tasks. If you directly used the PCN pre-trained weights, would they also give improvements on these downstream tasks?
Hi Hansen,
Thanks for your great work. I have one quick question here.
May I ask what the detailed experimental setting is for fine-tuning on ModelNet40? Could you please provide the command for this task?
Now I'm using:
python train_cls.py --model=dgcnn_cls --dataset=modelnet40 --log_dir=dgcnn_occo --num_point=2048 --restore --restore_path=./path/to/dgcnn_occ_cls.pth
If I made any mistakes, please let me know! Thank you!
Hi @hansen7,
When I run on my own dataset, the following error occurs. What should I do?
Traceback (most recent call last):
  File "./OcCo_Torch/train_completion.py", line 253, in <module>
    main(args)
  File "./OcCo_Torch/train_completion.py", line 71, in main
    lmdb_train, args.batch_size, args.input_pts, args.gt_pts, is_training=True)
  File "/workspace/OcCo_Torch/OcCo_Torch/LMDB_DataFlow.py", line 77, in lmdb_dataflow
    df = dataflow.LMDBSerializer.load(lmdb_path, shuffle=False)
  File "/opt/conda/lib/python3.6/site-packages/tensorpack/dataflow/serialize.py", line 106, in load
    df = LMDBData(path, shuffle=shuffle)
  File "/opt/conda/lib/python3.6/site-packages/tensorpack/dataflow/format.py", line 91, in __init__
    self._set_keys(keys)
  File "/opt/conda/lib/python3.6/site-packages/tensorpack/dataflow/format.py", line 111, in _set_keys
    self.keys = loads(self.keys)
  File "/opt/conda/lib/python3.6/site-packages/tensorpack/utils/serialize.py", line 43, in loads_msgpack
    max_str_len=MAX_MSGPACK_LEN)
  File "/opt/conda/lib/python3.6/site-packages/msgpack_numpy.py", line 287, in unpackb
    return _unpackb(packed, **kwargs)
  File "msgpack/_unpacker.pyx", line 201, in msgpack._cmsgpack.unpackb
msgpack.exceptions.ExtraData: unpack(b) received extra data.
Hi,
I'm trying to load the lmdb data you shared using the lmdb_dataflow function, but I got an error:
_pickle.UnpicklingError: invalid load key, '\xdc'.
Do you know how to fix this? Thanks!
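For what it's worth, the load key '\xdc' is the msgpack "array 16" type byte — the shared LMDB appears to be serialized with msgpack (which is what tensorpack's LMDBSerializer uses, per the traceback above), so a pickle-based loader rejects it on the very first byte. A minimal stdlib-only reproduction of the error:

```python
import pickle

# b'\xdc' is the msgpack "array 16" type byte; it is not a valid pickle
# opcode, so pickle fails immediately with exactly the reported error.
msgpack_header = b"\xdc\x00\x02\x01\x02"  # a msgpack array holding two ints
try:
    pickle.loads(msgpack_header)
    raised = None
except pickle.UnpicklingError as err:
    raised = str(err)
print(raised)  # invalid load key, '\xdc'.
```

So the fix is to load the file with tensorpack's msgpack-based loader (as in LMDB_DataFlow.py) rather than with pickle.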
Hi @hansen7
I did not see SensatUrban in your data setup instructions:
https://github.com/hansen7/OcCo/blob/master/OcCo_Torch/data/readme.md
Best regards
Hello, when I reproduced the pre-training experiment on your ShapeNet car dataset, I found that training does not converge very well. I think it is related to the piece-wise_constant parameter. Could you provide the hyperparameter settings?
Basically consistent with the updated ICCV submission version:
General Interpretability:
- learned features
- Shapley values
- influential instances
General Pre-Training:
Ideas from Contrastive Learning:
Point Cloud Specific:
Extensions:
Hi @hansen7
Firstly, thank you for your code.
Secondly, I have a question about your OcCo pre-training code. Reading the completion-task code (OcCo), I see that the encoders have the same architecture as the encoders of the DGCNN and PointNet classification models. These pre-trained weights are therefore suitable for initializing the downstream classification task (since the architectures match). However, in other downstream tasks such as part segmentation or semantic segmentation, the encoder is not the same as the pre-trained one. So will some components of the encoder be randomly initialized?
Thank you!
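For context, the pattern I'd expect here (a standard PyTorch sketch, not necessarily what this repo actually does — the checkpoint contents and module names below are made up for illustration) is to copy only the encoder weights whose names and shapes match, and leave everything else randomly initialized:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a pre-trained encoder checkpoint, and a downstream
# segmentation model whose parameters only partially overlap with it.
pretrained = {"encoder.weight": torch.zeros(8, 3), "encoder.bias": torch.zeros(8)}

model = nn.ModuleDict({
    "encoder": nn.Linear(3, 8),      # matches the checkpoint -> gets loaded
    "seg_head": nn.Linear(8, 4),     # no pre-trained counterpart -> stays random
})

model_dict = model.state_dict()
# Keep only checkpoint entries whose name and shape match the downstream model.
matched = {k: v for k, v in pretrained.items()
           if k in model_dict and v.shape == model_dict[k].shape}
model_dict.update(matched)
model.load_state_dict(model_dict)
print(sorted(matched))  # ['encoder.bias', 'encoder.weight']
```

Everything not in `matched` (here, the segmentation head) keeps its random initialization, which I believe is the situation the question describes.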
Hello and thanks for the terrific code.
I have one question: I am trying to use the OcCo pretrained network on the semantic segmentation task to extract features and perform dense matching of 3D points (with either PointNet or PCN).
I have saved and am loading the same h5 file to avoid alterations due to different sampling. However, if I run the encoder twice (with the same tensor as input points), I do not get deterministic results; the returned feature tensors are different, without changing anything. I am in model.eval() mode so that dropout is fixed. Could you elaborate, please?
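For reference, on CPU a model in eval mode under no_grad should give bitwise-identical outputs for identical inputs — a minimal sketch (the small encoder below is a stand-in for illustration, not the repo's actual network):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Hypothetical tiny encoder standing in for the PointNet/PCN feature extractor.
encoder = nn.Sequential(
    nn.Linear(3, 16), nn.ReLU(), nn.Dropout(0.5), nn.Linear(16, 8))
encoder.eval()  # disables dropout, so repeated passes should match

x = torch.randn(4, 3)
with torch.no_grad():
    f1 = encoder(x)
    f2 = encoder(x)
print(torch.equal(f1, f2))  # True: identical outputs on CPU in eval mode
```

If the outputs only differ on GPU, cuDNN's non-deterministic kernels or autotuning may be the cause; setting torch.backends.cudnn.deterministic = True or calling torch.use_deterministic_algorithms(True) is worth trying.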
Thank you for your impressive job!
I am working on few shot learning scenario. After reading your paper and code, I have some questions:
Thank you for organizing the code so well. I have a question about dataset rendering. The code for processing the dataset in the 'Render' folder looks like it targets ModelNet40, and the 'PC_Normalisation.py' file seems to convert '.ply' files to '.obj'. However, the data in the original ModelNet40 is not in '.ply' format. Can you provide the unpreprocessed ModelNet40 dataset? Or do I need to convert the files in ModelNet40 to '.ply' format?
Hi!
Would you share the weights of the DGCNN model? Thanks so much!
I am also wondering why I can't pre-train the model on the ScanObjectNN dataset.
Hi @hansen7 ,
Thanks for sharing this great work with us.
I have a question regarding the output log during pretraining. The detail is here.
It seems that the loss did not drop much; it varies between 0.03 and 0.04. Did this happen in your pretraining as well?
Visually, though, I can see some improvements in the plots folder.
Best regards
Hi,
On ModelNet40, I was trying to reproduce the experimental results in Table 9 of the supplementary material, but got lower numbers:
PointNet with Jigsaw task: 82% vs. 87.5% in the paper
PointNet with OcCo task : 85.49% vs. 88.7% in the paper
Similar case for the DGCNN backbone
The scripts to learn the embeddings are copied from provided bash template, e.g:
python train_completion.py --gpu 0 --dataset modelnet --model pointnet_occo --log_dir modelnet_pointnet_vanilla
for PointNet with OcCo task.
For linear SVM training and testing, the scripts are also copied from the bash templates, e.g.:
python train_svm.py --gpu 0 --model pointnet_util --dataset modelnet40 --restore_path log/completion/modelnet_pointnet_vanilla/checkpoints/best_model.pth
So I'm wondering whether some additional hyper-parameter tuning is needed that is not provided in the templates.
Thanks!
Hi @hansen7
I saw you have released two types of weights, *_cls.pth and *_seg.pth. My guess is that they share the same encoder, but the seg one has a randomly initialized segmentation head appended to the pre-trained encoder.
I'm not sure whether my guess is correct, so I'd appreciate your confirmation.
Best regards.
Some of Vincent Sitzmann's work:
Point Cloud Completion
Hi!
Thanks for the great work!
I have a question regarding the baseline results of few-shot classification. How do you get the results of "PointNet rand" in Table 2?
Did you train PointNet from scratch using cross entropy loss on the sampled K-way N-shot training data?
Look forward to your reply! Thanks!
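For reference, the K-way N-shot sampling I have in mind looks like the following — a hypothetical sketch under my own assumptions (the `sample_episode` helper and data layout are not the paper's actual code):

```python
import random

def sample_episode(dataset_by_class, k_way, n_shot, seed=0):
    """Sample one K-way N-shot training episode.

    dataset_by_class: dict mapping class label -> list of samples.
    Hypothetical helper for illustration only.
    """
    rng = random.Random(seed)
    # pick K classes, then N training samples per chosen class
    classes = rng.sample(sorted(dataset_by_class), k_way)
    return {c: rng.sample(dataset_by_class[c], n_shot) for c in classes}

# toy data: 10 classes with 20 samples each
data = {f"class_{i}": list(range(20)) for i in range(10)}
episode = sample_episode(data, k_way=5, n_shot=10)
print(len(episode))                         # 5 classes
print({len(v) for v in episode.values()})   # {10} shots per class
```

My question is essentially whether "PointNet rand" means training from scratch with cross-entropy on episodes sampled this way.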
Hi Hansen,
Thanks for your work!
I'm trying to reproduce your result in Table 10. It says that with the DGCNN backbone, OcCo reaches 89.2% accuracy. But I loaded your pre-trained model dgcnn_occo_cls.pth and trained the SVM with the following command:
python train_svm.py --gpu=0 --model=dgcnn_util --dataset=modelnet40 --restore_path=./data/pretrained/dgcnn_occo_cls.pth
It only reaches 88.4% accuracy. May I ask whether I did anything wrong?
Hi,
Would you please provide the occluded point cloud data used in training for the completion task (train_completion.py)? I was looking for it based on the documentation here, but in vain.
Thanks!
Chao
Thanks for your awesome work. Do you use the code in the render directory to generate the complete and partial point clouds from 3D models for training the point cloud completion?
Hi,
As the PartNormalDataset dataloader for the part-seg task is not provided in the repository, I'm assuming it is similar, if not identical, to the one here. But according to the training script (train_partseg.py), both the input and output of the dataloader seem to differ slightly from the original one.
So could you please clarify the difference or provide the modified PartNormalDataset dataloader?
Thanks