devnkong / flag
Official implementation of our FLAG paper (CVPR 2022)
License: MIT License
When I run ogbn-product/gat/gat_ns.py, I can't find where the 'flag_product' function is defined. Could you explain what this function does and where I can find it?
For DeeperGCN+FLAG on ogbg-molpcba and ogbg-ppa, the default batch size is too small and training is very slow. How did you finish these experiments? Did you change the batch size, and if so, the other corresponding hyperparameter settings?
Hi!
When I try to run main.py in ogbn-products, I get this error:
RuntimeError: [enforce fail at CPUAllocator.cpp:71] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 64597662208 bytes. Error code 12 (Cannot allocate memory)
The error occurs at model.to(cpu) in the test function, which requires about 60 GB of memory. Is 64 GB of RAM necessary? How can I run this if I don't have that much RAM? Thanks!
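One common way around a full-graph allocation like this is to evaluate in chunks instead of feeding everything through the model at once. The sketch below is a generic, hypothetical illustration (the linear "model", chunk size, and tensor shapes are made up, and it only applies to per-node computations such as an MLP head; message-passing layers would need neighbor sampling instead):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def chunked_inference(model, x, chunk_size=1024):
    """Run a node-wise model over features in fixed-size chunks
    so that peak memory stays bounded by the chunk size."""
    model.eval()
    outs = []
    for start in range(0, x.size(0), chunk_size):
        outs.append(model(x[start:start + chunk_size]))
    return torch.cat(outs, dim=0)

# toy check: a linear "model" over random features
model = nn.Linear(8, 3)
x = torch.randn(5000, 8)
out = chunked_inference(model, x, chunk_size=512)
```

Since the model is deterministic in eval mode, the chunked result matches a single full-batch forward pass, but each chunk only materializes a small slice of the activations.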
Hi,
Sorry to bother you, but I was wondering whether there is any experiment using GCN on the Cora dataset in your code? I haven't been able to find one yet.
no code for DeepGCN+FLAG on ogbg-molpcba and ogbg-code
Dear authors,
Thank you for this awesome work. I found the provided examples very clear. However, I found it non-trivial to apply FLAG in multi-GPU data-parallel training using DataParallel from PyG. Do you have any ideas on how to implement FLAG in such a multi-GPU training pipeline?
Thank you.
Hi, thank you for your great work and leaderboard contribution!
As we have updated ogbg-code to ogbg-code2 in ogb>=1.2.5, would you mind using the new dataset? Apologies for the extra work, and thanks a lot!
Best,
Weihua --- OGB Team
Hi,
I was wondering if rotations and transformations are the only two types of augmentations applied in your pipeline.
FLAG/deep_gcns_torch/utils/data_util.py
Line 76 in 7e48d91
Have you tried other augmentation transformations?
Thanks for your article and codes!
When I read attack.py, I noticed there is no F-norm normalization as described in the paper. Instead, torch.sign is used here. Could you explain the reasoning behind this choice? Thank you very much.
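For context, the two update rules in question can be contrasted on a toy gradient. This is only an illustrative sketch (the helper names and the toy values are made up, not part of the repo): the sign-based step moves a fixed step_size along each coordinate's sign, while a norm-normalized step moves step_size along the gradient direction after dividing by its L2 norm.

```python
import torch

def sign_step(perturb, grad, step_size):
    # L-infinity-style ascent step, as in attacks.py:
    # move step_size along the sign of each gradient coordinate
    return perturb + step_size * torch.sign(grad)

def l2_normalized_step(perturb, grad, step_size):
    # norm-normalized ascent step: move step_size along grad / ||grad||_2
    norm = grad.norm(p=2)
    return perturb + step_size * grad / (norm + 1e-12)

g = torch.tensor([3.0, -4.0])          # toy gradient with ||g||_2 = 5
p = torch.zeros(2)
p_sign = sign_step(p, g, 0.1)          # each coordinate moves exactly 0.1
p_l2 = l2_normalized_step(p, g, 0.1)   # total L2 step length is exactly 0.1
```

The sign variant bounds the per-coordinate step, whereas the normalized variant bounds the overall step length; which one is used changes the geometry of the adversarial ball being explored.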
def flag(model_forward, perturb_shape, y, args, optimizer, device, criterion):
    model, forward = model_forward
    model.train()
    optimizer.zero_grad()

    perturb = torch.FloatTensor(*perturb_shape).uniform_(-args.step_size, args.step_size).to(device)
    perturb.requires_grad_()

    out = forward(perturb)
    loss = criterion(out, y)
    loss /= args.m

    for _ in range(args.m - 1):
        loss.backward()
        perturb_data = perturb.detach() + args.step_size * torch.sign(perturb.grad.detach())
        perturb.data = perturb_data.data
        perturb.grad[:] = 0

        out = forward(perturb)
        loss = criterion(out, y)
        loss /= args.m

    loss.backward()
    optimizer.step()

    return loss, out
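To see the routine above run end to end, here is a minimal self-contained smoke test. The tiny MLP, the SimpleNamespace args, and the step_size/m values are all made up for illustration; the flag function itself is a verbatim copy of the one above so the block runs on its own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from types import SimpleNamespace

def flag(model_forward, perturb_shape, y, args, optimizer, device, criterion):
    # copy of the flag routine above, reproduced so this block is self-contained
    model, forward = model_forward
    model.train()
    optimizer.zero_grad()

    perturb = torch.FloatTensor(*perturb_shape).uniform_(
        -args.step_size, args.step_size).to(device)
    perturb.requires_grad_()

    out = forward(perturb)
    loss = criterion(out, y) / args.m

    for _ in range(args.m - 1):
        loss.backward()
        perturb_data = perturb.detach() + args.step_size * torch.sign(perturb.grad.detach())
        perturb.data = perturb_data.data
        perturb.grad[:] = 0
        out = forward(perturb)
        loss = criterion(out, y) / args.m

    loss.backward()
    optimizer.step()
    return loss, out

# hypothetical toy setup: a 2-layer MLP classifying random features
device = torch.device('cpu')
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
args = SimpleNamespace(step_size=1e-3, m=3)

x = torch.randn(32, 4)
y = torch.randint(0, 3, (32,))
forward = lambda perturb: model(x + perturb)

loss, out = flag((model, forward), x.shape, y, args, optimizer, device, F.cross_entropy)
```

Note that each inner loss.backward() accumulates gradients into the model parameters as well as into perturb, so by the time optimizer.step() runs, the parameter gradients are the average over all m perturbed forward passes.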
Hi:
After reading your paper and code, I have a question.
In ogbn_protiens/attacks.py, the function flag, which I think is the core of the algorithm, first computes a loss under the data and the perturbation; then, over the next args.m - 1 iterations, it computes args.m - 1 further losses and accumulates the gradients of perturb. Finally, the last loss is backpropagated. So the code accumulates the gradients of perturb several times while updating the model parameters only once. This doesn't seem to match your paper!
In Algorithm 1 of your paper, lines 6 to 8, the adversarial loop runs once per step, computing the gradients for both the perturbation and the parameters simultaneously.
To my understanding, for the graph-property tasks the perturbation is added to the input node embedding (the output of the node encoder), while in the node-property tasks the perturbation is added directly to the node features. Have you tested the difference?
# graphprop
def forward(self, batched_data, perturb=None):
    x, edge_index, edge_attr, batch = batched_data.x, batched_data.edge_index, batched_data.edge_attr, batched_data.batch

    ### computing input node embedding
    tmp = self.atom_encoder(x) + perturb if perturb is not None else self.atom_encoder(x)

# nodeprop
def train_flag(model, x, y_true, train_idx, optimizer, args, device):
    forward = lambda perturb: model(x[train_idx] + perturb)
    model_forward = (model, forward)
    y = y_true.squeeze(1)[train_idx]

    loss, _ = flag(model_forward, x[train_idx].shape, y, args, optimizer, device, F.nll_loss)
    return loss.item()