predictive_group_invariance's Issues
Background leakage into foreground
Dear Dr. Ahmed,
I am working with your dataset, COCO_Places, and I believe I have found an undesired behavior in the dataset-creation code (coco/data/data_makers/coco_places.py). When you resize the COCO segmentation masks, you use either "resized_mask = resize(mask, (64, 64), anti_aliasing=True)" or "resized_mask = resize(mask, (64, 64))". Both calls use bi-linear interpolation by default (https://scikit-image.org/docs/stable/api/skimage.transform.html#skimage.transform.resize), so resized_mask is not binary.
Accordingly, a small amount of the Places background leaks into the COCO foreground after the command "new_im = resized_place*(1-resized_mask) + resized_image*resized_mask". Indeed, if you crop and compare just the foreground region in the iid, ood and sg test sets, you will see that the foregrounds are not identical across the three partitions. I corrected this behavior by replacing "resized_mask = resize(mask, (64, 64))" and "resize(mask, (64, 64), anti_aliasing=True)" with "resized_mask = resize(mask, (64, 64), anti_aliasing=False, order=0)". The new call uses nearest-neighbor interpolation, so the resized mask stays binary and the leakage is prevented.
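To illustrate the point on a toy mask (the 8x8 array and the 0.8/0.0/1.0 pixel values below are made up for the demo; only the resize calls come from the issue), here is a minimal sketch showing that the default resize produces fractional mask values that blend the background into the foreground, while order=0 with anti_aliasing=False keeps the mask binary:

```python
import numpy as np
from skimage.transform import resize

# Toy binary segmentation mask: a 4x4 foreground square inside an 8x8 image.
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0

# Default settings smooth and interpolate, so downsampled edge values
# fall strictly between 0 and 1.
soft_mask = resize(mask, (4, 4), anti_aliasing=True)

# Nearest-neighbor interpolation keeps the mask strictly binary.
hard_mask = resize(mask, (4, 4), anti_aliasing=False, order=0)

# Composite the same foreground onto two different backgrounds, as in
# new_im = resized_place*(1-resized_mask) + resized_image*resized_mask.
fg = np.full((4, 4), 0.8)   # constant "object" pixels (hypothetical values)
bg_a = np.zeros((4, 4))     # background A
bg_b = np.ones((4, 4))      # background B

comp_soft_a = bg_a * (1 - soft_mask) + fg * soft_mask
comp_soft_b = bg_b * (1 - soft_mask) + fg * soft_mask
comp_hard_a = bg_a * (1 - hard_mask) + fg * hard_mask
comp_hard_b = bg_b * (1 - hard_mask) + fg * hard_mask

# With the hard mask, the foreground region is identical across backgrounds;
# with the soft mask, the backgrounds leak in and the foregrounds differ.
in_fg = hard_mask == 1.0
print(np.allclose(comp_hard_a[in_fg], comp_hard_b[in_fg]))  # True
print(np.allclose(comp_soft_a[in_fg], comp_soft_b[in_fg]))  # False
```

This mirrors the comparison described above: cropping the foreground region yields identical pixels across partitions only when the mask is binary.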
I hope this comment helps. Thank you for your code, and congratulations on your great work.
Best regards,
Pedro Bassi
PhD Student, University of Bologna, Italy
COCO datasets
Greetings!
First, thank you for sharing the code of your work! :)
I would like to ask whether the provided samples for the COCO datasets (both colors and places) comprise the full set used in the paper's experiments or just a subset. If they are not complete, would it be possible for you to share the full versions?
Thanks!
Training of Wide-ResNet on the COCO datasets
Hi again Faruk!
When you were training the WideResNet-28-4 on the COCO colors dataset, did you reach a high (>90%) accuracy on the training set? And did you use dropout?
I can't get past 68% accuracy during training, even though performance on the iid test set is around 80% (which is a little strange...). Do you have any suggestions?
Best regards!
question about implementation of pgi loss
Hi authors,
Thanks for this great work!
I read through your implementation, but I am not sure I understand all the details correctly. In the PGI loss code here: https://github.com/Faruk-Ahmed/syst-gen/blob/29d87ae70e608d0159364ee031b83281266c2a65/mnist/main.py#L199-L219, do you compute this loss over mini-batches or over the entire training set? If I understand correctly, you optimize it over data batches; if so, I wonder whether Eq. 4/5 in the paper still holds. I can imagine that Eq. 4/5 should hold over the entire dataset if we find an invariant representation across both groups P and Q, but it may or may not hold for each batch (due to random shuffling). If you can share your thoughts, that would be great, thank you!
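For concreteness, here is a minimal numpy sketch of one common reading of such a per-batch penalty: the KL divergence between the two groups' average softmax predictions, estimated from the examples of groups P and Q that happen to land in the current batch. This is an illustrative reconstruction, not the authors' exact implementation; the function names and the use of batch-level group averages are assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pgi_penalty(logits_p, logits_q, eps=1e-8):
    """KL(mean softmax of group P || mean softmax of group Q),
    computed from the examples of each group present in one batch.

    Because the group means are batch estimates, the penalty fluctuates
    with the random composition of each batch -- the point raised above.
    """
    p = softmax(logits_p).mean(axis=0)  # average predictive dist., group P
    q = softmax(logits_q).mean(axis=0)  # average predictive dist., group Q
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Hypothetical batch: 5 examples per group, 3 classes.
rng = np.random.default_rng(0)
logits_p = rng.normal(size=(5, 3))
logits_q = rng.normal(size=(5, 3))
print(pgi_penalty(logits_p, logits_p))  # identical groups -> penalty 0
print(pgi_penalty(logits_p, logits_q))  # differing groups -> penalty > 0
```

Under this reading, the batch-level penalty is a noisy estimate of the dataset-level quantity in Eq. 4/5, which is consistent with the concern that the equality need not hold exactly per batch.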