Comments (6)
@lailvlong Hey, thanks for your interest in our project!
The short answer to your question is Yes, but it doesn't violate the evaluation protocol.
If you look at
Lines 110 to 112 in 238b1de
You will see that we access the reference tensors (containing both the TRAIN and TEST splits) based on "data_corpus", which is generated in the constructor according to the input "config_path". Looking back at the main training script
https://github.com/voidstrike/FPSG/blob/main/src/trainNetwork.py#L85-L90
you will see that the "config_path" values are indeed different.
Moreover, note that the base classes and novel classes are mutually exclusive: we use all data from the base classes to train the model and test it on the novel classes, so it is not a standard 80/20 split.
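The disjoint base/novel protocol described above can be sketched as follows (the class names and the routing helper are illustrative, not taken from the repository; the real splits come from the config files passed via "config_path"):

```python
# Illustrative sketch of the disjoint base/novel evaluation protocol.
# Class names and counts are made up for the example.
base_classes = {"chair", "table", "lamp", "sofa"}  # training only
novel_classes = {"airplane", "guitar"}             # evaluation only

# The protocol requires the two sets to be mutually exclusive:
assert base_classes.isdisjoint(novel_classes)

def split_samples(samples):
    """Route (class_name, sample_id) pairs to train or test by class."""
    train = [s for s in samples if s[0] in base_classes]
    test = [s for s in samples if s[0] in novel_classes]
    return train, test

samples = [("chair", 0), ("airplane", 1), ("lamp", 2)]
train, test = split_samples(samples)
```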
from fpsg.
Thanks for your reply.
Yes, the "config_path" values for the train and test dataloaders are different, but their "reference_path" is the same.
Lines 85 to 87 in 238b1de
And the "reference_path" contains the files of both the base and the novel categories. Thus, for the train dataloader, data of the novel categories is also loaded into "self.img_corpus" and "self.pc_corpus".
Lines 130 to 153 in 238b1de
While iterating over the train dataloader, data in "self.img_corpus" and "self.pc_corpus" is fetched as "ans[xad]" and "ans[pcad]", including data of the novel categories.
Lines 110 to 126 in 238b1de
When computing the intra_support loss during training, there are two options, namely "option 2" and "option 1".
Lines 63 to 73 in 238b1de
The code selects "option 2", which uses "ans[xad]" and "ans[pcad]" and may involve data of the novel categories. This is the point that puzzles me.
In contrast, "option 1", which uses "ans[xs]" and "ans[pcs]", should have been adopted. Moreover, I have tried both "option 1" and "option 2". From my observations on the ModelNet dataset, the performance of "option 1" (avg_cd=7.+) is much worse than that of "option 2" (avg_cd=2.+). It is very weird that the two options perform so differently; perhaps "option 2" has used the evaluation data for training.
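The difference between the two sampling strategies can be sketched as follows (the corpus sizes and the sampling code are assumptions for illustration; only the "option 1"/"option 2" distinction comes from the discussion above):

```python
import random

random.seed(0)

n_corpus = 10   # full corpus: base + novel classes (illustrative sizes)
n_base = 6      # indices 0..5 are base-class items; 6..9 are novel-class
n_support = 4

# "Option 1": draw pairs only from the current episode's support set,
# so only base-class training data ever enters the loss.
support_idx = list(range(n_support))

# "Option 2" (as described above): a random permutation over the whole
# corpus, which can return indices >= n_base, i.e. novel-class items
# that are reserved for evaluation.
corpus_idx = random.sample(range(n_corpus), n_support)

leaks = any(i >= n_base for i in corpus_idx)
```

Because the permutation ranges over the entire corpus, `leaks` can be `True`, which is exactly the information leak being discussed.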
@lailvlong Oh my god, I see, that's definitely an information leak.
The reason we adopted "option 2" is that we found "option 1" gave us almost the same point cloud during the test phase. We thought it was model collapse, because the model observes one and only one class per episode, so we tried a variant that samples random (IMG, PC) pairs from the dataset (so each episode contains more than one class). We just used "randperm" and forgot that "self.img_corpus" and "self.pc_corpus" contain evaluation points.
Thanks for your question; it shows that the problem was "solved" by the information leak rather than by bringing more classes in. Really sorry for the mistake.
Last but not least, could you please try to modify the code so that it keeps random sampling from "self.img_corpus" and "self.pc_corpus" but avoids the information leak? This is entirely based on your interest, so feel free to ignore it. Sorry again for the bug.
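One way to keep the random cross-class sampling while avoiding the leak is to build the permutation over base-class indices only. This is a hypothetical sketch: the per-item class labels and the `sample_pairs` helper are assumed, not part of the repository:

```python
import random

def sample_pairs(corpus_labels, base_classes, n_pairs, rng=random):
    """Sample item indices for auxiliary (img, pc) pairs, restricted to
    base classes so no evaluation data enters the training loss.

    corpus_labels: class label of each item in img_corpus / pc_corpus
                   (hypothetical bookkeeping, assumed for this sketch).
    base_classes: set of class names allowed during training.
    """
    base_indices = [i for i, c in enumerate(corpus_labels) if c in base_classes]
    return rng.sample(base_indices, n_pairs)

labels = ["chair", "airplane", "chair", "lamp", "guitar", "lamp"]
base = {"chair", "lamp"}
idx = sample_pairs(labels, base, 3)
# Every sampled index points at a base-class item.
```

Restricting the draw to `base_indices` keeps each episode's auxiliary pairs diverse across classes without ever touching novel-class (evaluation) data.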
@voidstrike Yes, I also find that, during the test phase, the point clouds generated for different query images within an episode are very similar. In my opinion, this is not caused by model collapse but by the support prototypes (a class-specific prior) dominating the generation: reconstructing the shape of the query image is much harder than restoring the shape of the support prototype (the mean shape of the support point clouds), and restoring the prototype's shape also yields a low loss. In this way, the model prefers to ignore the query image and output the same result.
To enable the model to consider more categories in one gradient-descent step, maybe we can construct a mini-batch from several episodes. In your project, we can set "n_way > 1" and make the other corresponding modifications.
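The multi-episode mini-batch idea can be sketched as follows (the `build_minibatch` helper and the episode structure are hypothetical, not taken from the project; only the "several classes per gradient step" idea comes from the suggestion above):

```python
import random

def build_minibatch(class_to_items, n_way, k_shot, rng=random):
    """Build one mini-batch of n_way single-class episodes, so a single
    gradient step sees n_way different classes (illustrative sketch)."""
    classes = rng.sample(sorted(class_to_items), n_way)
    batch = []
    for c in classes:
        support = rng.sample(class_to_items[c], k_shot)
        batch.append({"class": c, "support": support})
    return batch

data = {"chair": list(range(10)), "lamp": list(range(10)),
        "table": list(range(10))}
batch = build_minibatch(data, n_way=2, k_shot=3)
```

Averaging the loss over the episodes in such a batch gives each update a multi-class signal without sampling outside the base classes.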
@lailvlong A good perspective! Thanks for your investigation and suggestions; considering more categories in one gradient step definitely sounds promising and makes sense to me. Unfortunately, I have already left the university and am no longer working in this area.
Please feel free to modify the code if you want; I am looking forward to your paper. :)
@lailvlong @voidstrike Hello! I have the same question. Could you please share your solution? I am also confused about how to test on the base classes: I have tried, but got a bad result, roughly 10 times the training-phase CD. Looking forward to your reply. Thank you!