
occo's People

Contributors

dependabot[bot], hansen7


occo's Issues

Why mask point cloud by camera view?

Hi Hanchen,

Thanks for sharing your work!

I have two questions, and it would be great if you could help me figure them out!

  1. Why do you generate incomplete point clouds by masking out points occluded from a camera view? What would be different if you just used an ordinary incomplete point cloud?

  2. What is the difference between this work and PCN? I noticed you use the PCN network architecture directly as your model, so in my understanding you just apply PCN to some downstream tasks. If you used the PCN pre-trained weights directly, would they also yield improvements on these downstream tasks?

Experiment setting of fine-tuning on ModelNet40

Hi Hansen,

Thanks for your great work. I have one quick question here.

May I ask what's the detailed experiment setting of fine-tuning on ModelNet40? Could you please provide the command of this task?

Now I'm using:

python train_cls.py --model=dgcnn_cls --dataset=modelnet40 --log_dir=dgcnn_occo --num_point=2048 --restore --restore_path=./path/to/dgcnn_occ_cls.pth

If I made any mistakes, please let me know! Thank you!

msgpack.exceptions.ExtraData: unpack(b) received extra data.

Hi @hansen7,
When I run on my own dataset, the following error occurs. What should I do?
Traceback (most recent call last):
  File "./OcCo_Torch/train_completion.py", line 253, in <module>
    main(args)
  File "./OcCo_Torch/train_completion.py", line 71, in main
    lmdb_train, args.batch_size, args.input_pts, args.gt_pts, is_training=True)
  File "/workspace/OcCo_Torch/OcCo_Torch/LMDB_DataFlow.py", line 77, in lmdb_dataflow
    df = dataflow.LMDBSerializer.load(lmdb_path, shuffle=False)
  File "/opt/conda/lib/python3.6/site-packages/tensorpack/dataflow/serialize.py", line 106, in load
    df = LMDBData(path, shuffle=shuffle)
  File "/opt/conda/lib/python3.6/site-packages/tensorpack/dataflow/format.py", line 91, in __init__
    self._set_keys(keys)
  File "/opt/conda/lib/python3.6/site-packages/tensorpack/dataflow/format.py", line 111, in _set_keys
    self.keys = loads(self.keys)
  File "/opt/conda/lib/python3.6/site-packages/tensorpack/utils/serialize.py", line 43, in loads_msgpack
    max_str_len=MAX_MSGPACK_LEN)
  File "/opt/conda/lib/python3.6/site-packages/msgpack_numpy.py", line 287, in unpackb
    return _unpackb(packed, **kwargs)
  File "msgpack/_unpacker.pyx", line 201, in msgpack._cmsgpack.unpackb
msgpack.exceptions.ExtraData: unpack(b) received extra data.

error when loading data

Hi,

I'm trying to load the lmdb data you shared using the lmdb_dataflow function, but I got an error:

_pickle.UnpicklingError: invalid load key, '\xdc'.

Do you know how to fix this? Thanks
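For what it's worth, the '\xdc' load key is itself a clue: 0xdc is msgpack's array16 header byte, which suggests the LMDB values were written with msgpack (the format tensorpack's LMDBSerializer uses) and should be read with tensorpack's loader rather than pickle. A tiny stdlib-only sniffer (a hypothetical helper, heuristic only) illustrates the distinction:

```python
def sniff_serializer(first_byte: int) -> str:
    """Guess how a blob was serialised from its first byte (heuristic only)."""
    if first_byte == 0x80:
        # pickle protocol 2+ starts with the PROTO opcode \x80
        # (0x80 is also a msgpack fixmap header, hence "heuristic")
        return "pickle"
    if first_byte in (0xDC, 0xDD) or 0x90 <= first_byte <= 0x9F:
        # msgpack array16 / array32 / fixarray headers
        return "msgpack"
    return "unknown"

# The '\xdc' from the error message points at msgpack-serialised data.
guess = sniff_serializer(0xDC)
```

Running it on the first byte of the offending value (e.g. `sniff_serializer(blob[0])`) gives a quick sanity check before swapping deserialisers.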

Hyperparameters for the ShapeNet car dataset

Hello, when I reproduce the pre-training experiment on your ShapeNet car dataset, I find that training does not converge well. I suspect it is related to the piece-wise_constant parameter (the learning-rate schedule). Could you provide the hyperparameter settings?

Contents for next PR

Basically consistent with the updated ICCV submission version:

  1. Few-shot learning
  2. Network dissection and adjusted mutual information
  3. Object-Level Contrastive Learning
  4. Readme and website

What Makes a Good Pre-Training on Point Clouds?

General Interpretability:

  • the Interpretable ML book, specifically the sections on learned features, Shapley values, and influential instances
  • "Network dissection: Quantifying interpretability of deep visual representations", CVPR 2017
  • "Feature Visualisation", Olah, et al., Distill 2017.
  • Bolei's Portfolio
  • Chiyuan's Portfolio (also, transfer learning)

General Pre-Training:

  • "Rethinking ImageNet Pre-training", ICCV 2019
  • "Rethinking Pre-training and Self-training", NeurIPS 2020
  • "What is being transferred in transfer learning?", NeurIPS 2020
  • "What Makes Instance Discrimination Good for Transfer Learning?", ICLR 2021 Sub

Ideas from Contrastive Learning:

Point Cloud Specific:

  • "Rotation Invariant Convolutions for 3D Point Clouds Deep Learning", 3DV 2019
  • "Quaternion Equivariant Capsule Networks for 3D Point Clouds", ECCV 2020
  • "Label-Efficient Learning on Point Clouds using Approximate Convex Decompositions", ECCV 2020
  • "On the Universality of Rotation Equivariant Point Cloud Networks", ICLR 2021 Sub

Extensions:

  • "Neural Similarity Learning", NeurIPS 2019

Pre-training OcCo

Hi @hansen7,
Firstly, thank you for your code.
Secondly, I have a question about the OcCo pre-training code. Reading the completion-task (OcCo) code, the encoders have the same architecture as the encoders of the DGCNN or PointNet classification networks, so the pre-trained weights are suitable for initialising the downstream classification task. However, in other downstream tasks such as part segmentation or semantic segmentation, the encoder is not identical to the pre-trained one. Does that mean some components of the encoder will be randomly initialised?

Thank you!
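When the downstream encoder only partially matches the pre-trained one, the usual approach is to copy just the parameters whose names and shapes agree and leave the rest randomly initialised; in PyTorch this is typically done with model.load_state_dict(filtered, strict=False). A minimal sketch of the filtering step, using plain dicts of parameter shapes in place of real state_dicts (the parameter names below are hypothetical):

```python
def filter_transferable(pretrained, model):
    """Keep only parameters whose name and shape match the target model.

    Both arguments map parameter names to shapes, mimicking PyTorch
    state_dicts; with real tensors the final call would be
    model.load_state_dict(filtered, strict=False).
    """
    return {k: v for k, v in pretrained.items() if k in model and model[k] == v}

# Hypothetical parameter names and shapes, for illustration only.
pretrained_cls = {"conv1.weight": (64, 3, 1), "fc.weight": (40, 1024)}
seg_model      = {"conv1.weight": (64, 3, 1), "seg_head.weight": (50, 1088)}

transferred = filter_transferable(pretrained_cls, seg_model)
# Parameters not covered by the checkpoint stay randomly initialised.
still_random = sorted(set(seg_model) - set(transferred))
```

Printing `still_random` after loading is a quick way to confirm exactly which layers start from scratch.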

Different features for each run?

Hello and thanks for the terrific code.

I have one question. I am trying to use the OcCo pre-trained network for the semantic segmentation task to extract features and perform dense matching of 3D points (with either PointNet or PCN).

I have saved, and am loading, the same h5 file to avoid alterations due to different sampling. However, if I run the encoder twice (with the same tensor as input), I do not get deterministic results: the returned feature tensors differ without my changing anything. The model is in model.eval() mode, so dropout should be fixed. Could you elaborate, please?
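Beyond model.eval(), non-determinism can also come from unseeded preprocessing or non-deterministic GPU kernels (in PyTorch one would additionally fix torch.manual_seed and the cuDNN determinism flags). A framework-agnostic way to verify reproducibility is to hash the extracted features from two runs; a stdlib-only sketch with a toy stand-in encoder:

```python
import hashlib
import random

def feature_digest(features):
    # Hash a flat list of floats, rounded so tiny FP noise doesn't flip the digest.
    h = hashlib.sha256()
    for x in features:
        h.update(f"{x:.6f}".encode())
    return h.hexdigest()

def toy_encoder(points, seed):
    # Stand-in for a stochastic network: a fixed seed makes it repeatable.
    rng = random.Random(seed)
    return [p * rng.random() for p in points]

points = [1.0, 2.0, 3.0]
run_a = feature_digest(toy_encoder(points, seed=0))
run_b = feature_digest(toy_encoder(points, seed=0))
```

If the digests of the real features differ between runs, some stochastic component (sampling, dropout, or a non-deterministic kernel) is still active.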

About the experiments in few shot learning scenario

Thank you for your impressive work!
I am working on the few-shot learning scenario. After reading your paper and code, I have some questions:

  1. Could you please upload the few-shot data splits?
  2. Is the few-shot training setup different from CoverTree's? It seems that CoverTree does both the pre-training and the downstream-task training in the few-shot scenario, whereas your work pre-trains on the whole ModelNet40 dataset and then trains the downstream task in the few-shot scenario?

A question about "Dataset Render".

Thank you for organizing this nice codebase. I have a question about the dataset rendering. The dataset-processing code in the 'Render' folder appears to target ModelNet40, and the 'PC_Normalisation.py' file seems to convert '.ply' files to '.obj'. However, the original ModelNet40 data is not in '.ply' format. Could you provide the unpreprocessed ModelNet40 dataset? Or do I need to convert the ModelNet40 files to '.ply' format myself?
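For context, the original ModelNet40 distribution ships meshes in the '.off' (Object File Format), so a conversion step is generally needed before any '.ply'-based pipeline; real pipelines often use trimesh or Open3D for this. A minimal stdlib-only sketch of the conversion (hypothetical helper, ASCII meshes only):

```python
def off_to_ply(off_text: str) -> str:
    """Convert an ASCII OFF mesh to an ASCII PLY string (minimal sketch)."""
    lines = [l for l in off_text.splitlines() if l.strip() and not l.startswith("#")]
    header = lines[0].strip()
    if header == "OFF":
        counts, body = lines[1], lines[2:]
    else:
        # Some ModelNet files fuse the counts onto the magic line: "OFF490 518 0"
        counts, body = header[3:], lines[1:]
    n_verts, n_faces, _ = (int(x) for x in counts.split())
    verts = body[:n_verts]
    faces = body[n_verts:n_verts + n_faces]
    ply = ["ply", "format ascii 1.0",
           f"element vertex {n_verts}",
           "property float x", "property float y", "property float z",
           f"element face {n_faces}",
           "property list uchar int vertex_indices",
           "end_header"]
    ply += [v.strip() for v in verts]
    ply += [f.strip() for f in faces]
    return "\n".join(ply)

# A tetrahedron as a smoke test.
off = "OFF\n4 4 0\n0 0 0\n1 0 0\n0 1 0\n0 0 1\n3 0 1 2\n3 0 1 3\n3 0 2 3\n3 1 2 3\n"
ply = off_to_ply(off)
```

OFF face lines already start with the vertex count per face, which happens to match PLY's list-property layout, so they can be copied through unchanged.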

Pre-trained weights for DGCNN

Hi!

Would you share the weights of the DGCNN model? Thanks so much!

I am also wondering why I can't pre-train the model on the ScanObjectNN dataset.

the log of train_completion.py

Hi @hansen7 ,
Thanks for sharing this great work with us.
I have a question regarding the output log during pre-training. The detail is here.
The question is that the loss does not seem to drop much: it varies between roughly 0.03 and 0.04. Did this happen in your pre-training as well?
Visually, though, I can see some improvement in the plots folder.
Best regards

script generates lower numbers than those in Table.9 (linear SVM on learned embeddings)

Hi,

On ModelNet40, I was trying to reproduce the experiment results in Table.9 in the supplementary, but got lower numbers:
PointNet with Jigsaw task: 82% vs. 87.5% in the paper
PointNet with OcCo task : 85.49% vs. 88.7% in the paper

A similar gap appears for the DGCNN backbone.

The scripts to learn the embeddings are copied from the provided bash template, e.g.:
python train_completion.py --gpu 0 --dataset modelnet --model pointnet_occo --log_dir modelnet_pointnet_vanilla
for PointNet with the OcCo task.
For linear SVM training and testing, the scripts are also taken from the bash template, e.g.:
python train_svm.py --gpu 0 --model pointnet_util --dataset modelnet40 --restore_path log/completion/modelnet_pointnet_vanilla/checkpoints/best_model.pth

So I'm wondering whether some additional hyper-parameter tuning is needed that is not provided in the templates.

Thanks!

two types of pre-trained weights

Hi @hansen7
I saw you have released two types of weights, *_cls.pth and *_seg.pth. My guess is that they share the same encoder, and the seg one just has a randomly initialised segmentation head appended to the pre-trained encoder.
I'm not sure whether my guess is correct, so I'd appreciate your confirmation.
Best regards.

More on 3D Pre-Training and Neural Representation Learning

Pre-Training

  • "Don't Stop Pretraining: Adapt Language Models to Domains and Tasks", ACL 2020
  • "Train No Evil: Selective Masking for Task-Guided Pre-Training", EMNLP 2020
  • "Co-Tuning for Transfer Learning", NeurIPS 2020
  • "Learning to Adapt to Evolving Domains", NeurIPS 2020

Neural Representation Learning

Some of Vincent Sitzmann's work:

  • Implicit Neural Representations with Periodic Activation Functions, NeurIPS 2020, website
  • Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations, NeurIPS 2019
  • and his thesis on "Self-supervised Scene Representation Learning"

Update on Literature

Point Cloud Completion

  • "Topnet: Structural point cloud decoder", CVPR 2019
  • "3D Shape Completion with Multi-view Consistent Inference", AAAI 2020
  • "Morphing and Sampling Network for Dense Point Cloud Completion", AAAI 2020
  • "Cascaded Refinement Network for Point Cloud Completion", CVPR 2020
  • "PF-Net: Point Fractal Network for 3D Point Cloud Completion", CVPR 2020
  • "Point Cloud Completion by Skip-Attention Network With Hierarchical Folding", CVPR 2020
  • "GRNet: Gridding Residual Network for Dense Point Cloud Completion", ECCV 2020
  • "SoftPoolNet: Shape Descriptor for Point Cloud Completion and Classification", ECCV 2020
  • "Weakly-supervised 3D Shape Completion in the Wild", ECCV 2020
  • "Variational Relational Point Completion Network", CVPR 2021

Questions on few-shot baseline

Hi!
Thanks for the great work!
I have a question regarding the baseline results of few-shot classification. How do you obtain the "PointNet rand" results in Table 2?
Did you train PointNet from scratch using cross-entropy loss on the sampled K-way N-shot training data?
Look forward to your reply! Thanks!
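For reference, a K-way N-shot training set is usually built by sampling K classes and then N examples from each; a minimal stdlib-only sketch (the helper name is hypothetical, not from the repository):

```python
import random

def sample_episode(labels, k_way, n_shot, seed=0):
    """Sample indices for one K-way N-shot training episode.

    labels: sequence of integer class ids, one per example.
    Returns (chosen_classes, example_indices).
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(set(labels)), k_way)
    indices = []
    for c in classes:
        pool = [i for i, l in enumerate(labels) if l == c]
        indices += rng.sample(pool, n_shot)
    return classes, indices

# Toy label list: three classes with three examples each.
labels = [0, 0, 0, 1, 1, 1, 2, 2, 2]
classes, idx = sample_episode(labels, k_way=2, n_shot=2)
```

A from-scratch baseline would then train with cross-entropy on just the examples in `idx`, keeping the episode seed fixed for comparability across methods.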

Can't reproduce Linear SVM result with DGCNN backbone.

Hi Hansen,

Thanks for your work!

I'm trying to reproduce your result in Table 10, which reports 89.2% accuracy for OcCo with the DGCNN backbone. I loaded your pre-trained model dgcnn_occo_cls.pth and trained the SVM with the following command:

python train_svm.py --gpu=0 --model=dgcnn_util --dataset=modelnet40 --restore_path=./data/pretrained/dgcnn_occo_cls.pth

But it only reaches 88.4% accuracy. May I ask whether I did anything wrong?

dataset for OcCo training

Hi,

Would you please provide the occluded point cloud data used for training the completion task (train_completion.py)? I was looking for it based on the documentation here, but in vain.

Thanks!
Chao

output format for PartNormalDataset

Hi,

As the PartNormalDataset dataloader for the part-seg task is not provided in the repository, I assume it is similar, if not identical, to the one here. But according to the training script (train_partseg.py), both the input and the output of the dataloader seem to differ slightly from the original one.
Could you please clarify the differences or provide the modified PartNormalDataset dataloader?

Thanks
