vote2cap-detr's Issues

dataset processing issue

Hi, I've noticed that `batch_load_scannet_data.py` is the same as the one in Scan2Cap, so the same issue appears:

```
Traceback (most recent call last):
  File "batch_load_scannet_data.py", line 84, in <module>
    batch_export()
  File "batch_load_scannet_data.py", line 79, in batch_export
    export_one_scan(scan_name, output_filename_prefix)
  File "batch_load_scannet_data.py", line 29, in export_one_scan
    mesh_vertices, aligned_vertices, semantic_labels, instance_labels, instance_bboxes, aligned_instance_bboxes = export(mesh_file, agg_file, seg_file, meta_file, LABEL_MAP_FILE, None)
  File "/root/autodl-tmp/Vote2Cap-DETR/data/scannet/load_scannet_data.py", line 56, in export
    mesh_vertices = scannet_utils.read_mesh_vertices_rgb_normal(mesh_file)
  File "/root/autodl-tmp/Vote2Cap-DETR/data/scannet/scannet_utils.py", line 99, in read_mesh_vertices_rgb_normal
    assert(os.path.isfile(filename))
AssertionError
```

So how should I fix it? By the way, here is the original link: daveredrum/Scan2Cap#11
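For anyone hitting this: the assertion fires because a raw ScanNet file for some scan is missing on disk. Before re-running the export, it can help to list which required files are absent per scan. A minimal sketch (the suffixes follow the ScanNet v2 naming convention; adjust them to your actual download):

```python
import os

# Files each scan folder is expected to contain (ScanNet v2 convention;
# these suffixes are illustrative -- verify against your download).
REQUIRED_SUFFIXES = [
    "_vh_clean_2.ply",                  # mesh file
    ".aggregation.json",                # aggregation file
    "_vh_clean_2.0.010000.segs.json",   # segmentation file
    ".txt",                             # meta file
]

def missing_files(scans_dir, scan_name):
    """Return the required files that are absent for one scan."""
    folder = os.path.join(scans_dir, scan_name)
    return [
        scan_name + suffix
        for suffix in REQUIRED_SUFFIXES
        if not os.path.isfile(os.path.join(folder, scan_name + suffix))
    ]
```

Running this over all scan names before `batch_export()` pinpoints incomplete downloads instead of failing mid-way with a bare `AssertionError`.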

Why is the `nyu40id2class` of Vote2Cap different from that of these detection methods?

## VoteNet

```
(Pdb) DC18.nyu40id2class
{3: 0, 4: 1, 5: 2, 6: 3, 7: 4, 8: 5, 9: 6, 10: 7, 11: 8, 12: 9, 14: 10, 16: 11, 24: 12, 28: 13, 33: 14, 34: 15, 36: 16, 39: 17}
(Pdb) len(DC18.nyu40id2class)
18
```

## Vote2Cap

```
(Pdb) self.dataset_config.nyu40id2class
{5: 2, 23: 17, 8: 5, 40: 17, 9: 6, 7: 4, 39: 17, 18: 17, 11: 8, 29: 17, 3: 0, 14: 10, 15: 17, 27: 17, 6: 3, 34: 15, 35: 17, 4: 1, 10: 7, 19: 17, 16: 11, 30: 17, 33: 14, 37: 17, 21: 17, 32: 17, 25: 17, 17: 17, 24: 12, 28: 13, 36: 16, 12: 9, 38: 17, 20: 17, 26: 17, 31: 17, 13: 17}
(Pdb) len(self.dataset_config.nyu40id2class)
37
```
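For reference, the two mappings can be compared directly: they agree on the 18 detection classes, and every extra NYU40 id in Vote2Cap's map collapses into class 17 (the catch-all class) — presumably so that every annotated instance receives a label for captioning. This is my reading, not an authors' confirmation:

```python
# The two mappings from the debugger session above, reproduced verbatim.
VOTENET = {3: 0, 4: 1, 5: 2, 6: 3, 7: 4, 8: 5, 9: 6, 10: 7, 11: 8, 12: 9,
           14: 10, 16: 11, 24: 12, 28: 13, 33: 14, 34: 15, 36: 16, 39: 17}
VOTE2CAP = {5: 2, 23: 17, 8: 5, 40: 17, 9: 6, 7: 4, 39: 17, 18: 17, 11: 8,
            29: 17, 3: 0, 14: 10, 15: 17, 27: 17, 6: 3, 34: 15, 35: 17,
            4: 1, 10: 7, 19: 17, 16: 11, 30: 17, 33: 14, 37: 17, 21: 17,
            32: 17, 25: 17, 17: 17, 24: 12, 28: 13, 36: 16, 12: 9, 38: 17,
            20: 17, 26: 17, 31: 17, 13: 17}

# On the 18 ids VoteNet detects, the two maps agree exactly.
agree = all(VOTE2CAP[k] == v for k, v in VOTENET.items())

# Every additional NYU40 id in Vote2Cap's map falls into class 17
# (the catch-all bucket), so no annotated object is dropped.
extras_to_17 = all(v == 17 for k, v in VOTE2CAP.items() if k not in VOTENET)

print(agree, extras_to_17)  # prints: True True
```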

Inference on Custom DB

Dear authors,

I'd like to test your model on my custom 3D reconstructed point clouds + RGB map.
To use the checkpoints you shared directly, I need to extract the normal features from the 3D mesh (since your models are all trained with them).
However, I have no idea how to compute the normal features for my custom dataset. I understand they are calculated in `read_mesh_vertices_rgb_normal()` from the faces of the 3D mesh, but I only have 3D point clouds and RGB, without mesh faces.
I initially tried to reconstruct the faces with the Meshlab tool (surface reconstruction: ball pivoting), with the result shown below.

(screenshot of the reconstructed mesh attached)

Using this 3D mesh, I followed all the preprocessing steps (xyz, rgb, normal) to extract features matching the checkpoint (`scanrefer_scst_vote2cap_detr_pp_XYZ_RGB_NORMAL.pth`).
But the detection and captioning results were really poor, and I suspect the normal features are the issue.

Can you share your insight on this?
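For what it's worth, once faces exist, per-vertex normals can be computed with plain NumPy. The sketch below uses the common area-weighted average of adjacent face normals, which may differ in detail from what `read_mesh_vertices_rgb_normal()` does:

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Per-vertex normals as the area-weighted average of adjacent face
    normals (a common convention; the repo's exact formula may differ).
    vertices: (N, 3) float array, faces: (M, 3) int index array."""
    normals = np.zeros_like(vertices, dtype=np.float64)
    tris = vertices[faces]                               # (M, 3, 3)
    # Cross product of two edges = face normal scaled by 2 * face area,
    # so summing it weights each face's contribution by its area.
    fn = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    for i in range(3):                                   # scatter onto corners
        np.add.at(normals, faces[:, i], fn)
    lens = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.clip(lens, 1e-12, None)
```

If only a point cloud is available, surface reconstruction (e.g. ball pivoting, as tried above) or a per-point k-NN plane fit are the usual routes; either way, the sign consistency of the normals matters for the features to match training.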

Question about caption evaluation

Hi,

Where can I find the pretrained weights of 4.1 or 4.2? I have tested the weights for 4.3 provided on Hugging Face, but they all returned errors as shown in the screenshot.

(error screenshot attached)

The performance gap between pretrained models and paper

Hello, @ch3cook-fdu!

Thanks for sharing your work on indoor 3D dense captioning. Recently I have tried to train Vote2Cap-DETR(++) with different configs, and I noticed a slight performance gap between the metrics of (my model)/(the pretrained model from this repo) and the table results in the paper.

Take scst_Vote2Cap_DETRv2_RGB_NORMAL with SCST settings for example:

My Results:
```
----------------------Evaluation-----------------------
INFO: [email protected] matched proposals: [1543 / 2068],
[BLEU-1] Mean: 0.6721, Max: 1.0000, Min: 0.0000
[BLEU-2] Mean: 0.5761, Max: 1.0000, Min: 0.0000
[BLEU-3] Mean: 0.4759, Max: 1.0000, Min: 0.0000
[BLEU-4] Mean: 0.3892, Max: 1.0000, Min: 0.0000
[CIDEr] Mean: 0.7539, Max: 6.2306, Min: 0.0000
[ROUGE-L] Mean: 0.5473, Max: 0.9474, Min: 0.1015
[METEOR] Mean: 0.2638, Max: 0.5982, Min: 0.0448
```

Pretrained Model Results:

```
----------------------Evaluation-----------------------
INFO: [email protected] matched proposals: [1548 / 2068],
[BLEU-1] Mean: 0.6729, Max: 1.0000, Min: 0.0000
[BLEU-2] Mean: 0.5787, Max: 1.0000, Min: 0.0000
[BLEU-3] Mean: 0.4783, Max: 1.0000, Min: 0.0000
[BLEU-4] Mean: 0.3916, Max: 1.0000, Min: 0.0000
[CIDEr] Mean: 0.7636, Max: 6.3784, Min: 0.0000
[ROUGE-L] Mean: 0.5496, Max: 1.0000, Min: 0.1015
[METEOR] Mean: 0.2641, Max: 1.0000, Min: 0.0448
```

and Paper Results
(screenshot of the paper's results table attached)

A performance gap of about 1%–2.5% exists across all the different configs and settings, and I am wondering how to account for it.

Thanks, Jiaqi

CUDA kernel failed : no kernel image is available for execution on the device

```
CUDA kernel failed : no kernel image is available for execution on the device
void furthest_point_sampling_kernel_wrapper(int, int, int, const float*, float*, int*) at L:231 in /home/imi1214/WY/wang/Vote2Cap-DETR-master/third_party/pointnet2/_ext_src/src/sampling_gpu.cu
```

Hello, I would like to ask whether this can run normally on the third GPU of the same machine; when I run on the first GPU, the error above is reported. Do you know what is happening?
Thank you very much!
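This error usually means the compiled `pointnet2` CUDA extension does not include a kernel for the compute capability of the GPU being used, which is common on machines with mixed GPU models. A commonly suggested fix (not specific to this repo, and the architecture list below is only an example) is to rebuild the extension with the right architectures listed:

```shell
# Check the compute capability of the GPU you intend to use, e.g.:
#   python -c "import torch; print(torch.cuda.get_device_capability(0))"
# Then rebuild the extension with that architecture included.
# The list below is illustrative -- adjust it to your cards.
export TORCH_CUDA_ARCH_LIST="7.0;7.5;8.0;8.6"
cd third_party/pointnet2
python setup.py install
```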

How to visualize the result?

Hi, thanks for sharing this awesome work.

I noticed that you mentioned in another issue that

> You can use the tools in this repo to help

but `demo.py` outputs just a JSON file.

So could you give me some ideas on how to use the provided 3d-pc-box-viz repo to visualize the JSON file?
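In case it helps: assuming the JSON stores axis-aligned boxes as center + size (an assumption — check the actual keys `demo.py` writes), the 8-corner representation that box viewers typically expect can be derived like this:

```python
import numpy as np

def box_corners(center, size):
    """Convert a (cx, cy, cz) center and (sx, sy, sz) size to the box's
    8 corners. The center+size layout of demo.py's JSON output is an
    assumption here; inspect the file and adapt the keys accordingly."""
    c = np.asarray(center, dtype=float)
    half = np.asarray(size, dtype=float) / 2.0
    # All sign combinations of the half-extents give the 8 corners.
    signs = np.array([[x, y, z]
                      for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
    return c + signs * half          # shape (8, 3)
```

The resulting corner arrays can then be fed to whatever line-set or OBB primitive the visualization repo accepts.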

scannet_means.npz and scannet_reference_means.npz

Hi @ch3cook-fdu thanks for the great work! I was able to reproduce your results on the ScanRefer dataset and now I want to try it on a new dataset. I see that you use 2 mean arrays - data/scannet/meta_data/scannet_means.npz and data/scannet/meta_data/scannet_reference_means.npz in the model, both with shape (18, 3). Could you let me know how you computed these, and how to do it for a new dataset?
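For context, (18, 3) mean arrays in VoteNet-style codebases are usually per-class average box sizes over the training split. If that is what these files hold (an assumption worth confirming with the authors), recomputing them for a new dataset would look roughly like:

```python
import numpy as np

def per_class_mean_sizes(bboxes, num_classes=18):
    """Sketch: mean (dx, dy, dz) per class over the *training* scans.
    bboxes: (N, 7) rows of [cx, cy, cz, dx, dy, dz, class_id]; this row
    layout is an assumption, not taken from the repo."""
    means = np.zeros((num_classes, 3))
    for c in range(num_classes):
        sizes = bboxes[bboxes[:, 6] == c][:, 3:6]
        if len(sizes) > 0:                 # leave zeros for empty classes
            means[c] = sizes.mean(axis=0)
    return means
```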

Thanks!
Chandan

Thanks for your great work! I have some questions

Dear authors, I have some questions about the lightweight caption head you proposed!
How does it differ from existing captioning models in architecture and computational efficiency, such that it qualifies as a "lightweight design"?
Hoping for your reply.

Question for ScanRefer benchmark, not Scan2cap

Dear authors,
I am wondering why the paper says that Vote2Cap is tested on the ScanRefer benchmark rather than the Scan2Cap benchmark.
As far as I understand, ScanRefer takes point clouds plus a text query as input and finds the referred unique 3D box.
Scan2Cap, on the other hand, takes only point clouds as input and estimates 3D boxes with descriptions.
I think Vote2Cap addresses a task like Scan2Cap's, but in your paper it is described as being evaluated on ScanRefer.

Did you also evaluate your model on the ScanRefer benchmark? If so, can you share how that works, since the ScanRefer task requires two inputs, the point-cloud scene and a query? And if it was actually tested on Scan2Cap, is there a way to test your model on ScanRefer?

Thanks for your help in advance!

Questions about performance

Thanks for sharing your great work!

I have some questions about your paper work.
There are two input options: w/o 2D and w/ 2D.
I initially assumed that features w/ 2D would outperform features w/o 2D, but that is not what your paper shows.
In Table 1 of Vote2Cap-DETR++, some metrics such as B-4, M, and R are better w/o 2D than w/ 2D.
How is this possible, and why should we use multiview features that neither improve performance nor are easy to extract?

(screenshot of Table 1 attached)

In addition, 3DETR is used as the encoder/decoder of your model.
Since 3DETR does not perform as well on 3D detection benchmarks like ScanNet as other non-transformer-based architectures, can I substitute the encoder/decoder with other models, and would that perform well? For instance, the recently released V-DETR detector is based on 3DETR, so it might be another option for improving your model's performance.

Question about evaluate metric

Hi authors,

I am new to this task and want to ask about the evaluation metric in 3D dense captioning, which seems a little contradictory after checking several papers.

In your paper, the captioning metric is averaged over the number of ground-truth instances, so it cannot penalize redundant bbox predictions. However, Scan2Cap and D3Net, which you place in the same Table 1, average the captioning metric by the percentage of correctly predicted bboxes; previous related work therefore did evaluate redundant bbox predictions.

Is your metric unfair, or am I missing something here? I would really appreciate your help in clarifying this!
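For concreteness, here is a toy illustration of how two common averaging conventions diverge (the numbers are invented, and which paper uses which convention is exactly the question above):

```python
# Suppose 4 ground-truth instances, of which 3 have an IoU-matched
# predicted box, with these per-caption scores (made-up values):
matched_scores = [0.8, 0.6, 0.4]
num_gt = 4

# Convention A: unmatched GT contribute a score of 0, so the
# denominator is the number of GT instances.
m_at_iou = sum(matched_scores) / num_gt                 # = 1.8 / 4 = 0.45

# Convention B: average only over matched pairs, which ignores
# missed GT (and says nothing about redundant predictions either).
mean_over_matched = sum(matched_scores) / len(matched_scores)  # = 0.6
```

Note that neither convention directly penalizes duplicate boxes on the same object; that is usually handled upstream by NMS or matching.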

loss is negative when running SCST

Hello, when I reproduce SCST, the loss is negative. Is this normal? The final loss during MLE training was around 45. Screenshots of the loss curves are attached. Thank you very much!

(loss screenshots attached)
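For readers hitting the same question: a negative SCST loss is not by itself alarming, because the self-critical objective is signed. Schematically (this is the standard SCST formulation, not the repo's exact code):

```python
def scst_loss(log_prob, reward_sample, reward_baseline):
    """Self-critical loss for one sampled caption (schematic):
    minimize -(reward_sample - reward_baseline) * log p(caption)."""
    return -(reward_sample - reward_baseline) * log_prob

# log p(caption) is always <= 0, and the advantage (reward difference)
# can take either sign, so the loss is naturally signed:
print(scst_loss(-12.5, 0.9, 0.6))   # sampled beats baseline -> positive loss
print(scst_loss(-12.5, 0.3, 0.6))   # baseline wins -> negative loss
```

As training pushes the sampled captions above the greedy baseline less and less often, the batch-averaged loss can hover around or below zero, so its absolute value is not comparable to the MLE cross-entropy loss (~45 here).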

Question about caption evaluation results

I used the pretrained weights `scanrefer_scst_vote2cap_detr_pp_XYZ_RGB_NORMAL.pth` and got the following result:

```
INFO: [email protected] matched proposals: [1537 / 2068],
[BLEU-1] Mean: 0.6676, Max: 1.0000, Min: 0.0000
[BLEU-2] Mean: 0.5745, Max: 1.0000, Min: 0.0000
[BLEU-3] Mean: 0.4757, Max: 1.0000, Min: 0.0000
[BLEU-4] Mean: 0.3895, Max: 1.0000, Min: 0.0000
[CIDEr] Mean: 0.7525, Max: 6.3784, Min: 0.0000
[ROUGE-L] Mean: 0.5467, Max: 1.0000, Min: 0.1015
[METEOR] Mean: 0.2631, Max: 1.0000, Min: 0.0448
```

This result is not consistent with the paper. Is this normal?

How to get file `ScanRefer_vocabulary.json`?

Hi, thank you for open-sourcing your work. When I downloaded the ScanRefer dataset `scanrefer.zip`, I could not find the file `ScanRefer_vocabulary.json`:

```
Archive:  scanrefer.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
 30718184  2020-01-30 22:10   ScanRefer_filtered.json
 24370163  2020-01-30 22:09   ScanRefer_filtered_train.json
     7305  2020-01-20 19:03   ScanRefer_filtered_train.txt
  6348023  2020-01-30 22:09   ScanRefer_filtered_val.json
     1832  2020-01-20 19:03   ScanRefer_filtered_val.txt
---------                     -------
 61445507                     5 files
```

How can I get it?
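If the file is generated by the repo's own preprocessing rather than shipped in `scanrefer.zip`, building a vocabulary from the training descriptions would look roughly like this (the `description` key follows ScanRefer's JSON schema, but the real `ScanRefer_vocabulary.json` layout and tokenization may differ):

```python
import json
from collections import Counter

def build_vocabulary(scanrefer_train_path, min_count=1):
    """Sketch: collect the word list from ScanRefer_filtered_train.json.
    Whitespace tokenization here is a simplification; the repo may use
    a different tokenizer and add special tokens (e.g. sos/eos/unk)."""
    with open(scanrefer_train_path) as f:
        data = json.load(f)
    counter = Counter()
    for item in data:
        counter.update(item["description"].lower().split())
    return sorted(w for w, c in counter.items() if c >= min_count)
```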

Suddenly terminates during debugging

Hello, when I debug, execution always breaks one step before the loss and jumps directly to `do_train` under `main`. (`args` has been modified in `main.py` according to `train_scannet.sh`.)

(debugger screenshot attached)

Normally, it should run further down. Could you please tell me what is causing this? Thank you very much!
