flag's Issues

What is flag_product?

When I run ogbn-product/gat/gat_ns.py, I cannot find where the 'flag_product' function is defined. Could you explain what this function does and where I can find it?

Default batch size is too small

For DeeperGCN+FLAG on ogbg-molpcba and ogbg-ppa, the default batch size is too small and training is very slow. How did you finish the experiments? Did you change the batch size, and if so, did you adjust the other hyperparameters accordingly?

RuntimeError: [enforce fail at CPUAllocator.cpp:71]

Hi!

When I try to run main.py in ogbn-products, I get this error:
RuntimeError: [enforce fail at CPUAllocator.cpp:71] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 64597662208 bytes. Error code 12 (Cannot allocate memory)

This error occurs in the test function, at model.to('cpu'), which requires about 60 GB of memory. Is 64 GB of RAM necessary? How can I run this without that much RAM? Thanks!
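
One possible workaround (a sketch under assumptions, not the authors' fix): run the test pass in mini-batches with PyG's NeighborSampler so that only one subgraph is in memory at a time, instead of a full-batch forward pass on CPU. This assumes the model exposes a sampler-style forward(x, adjs), which the repo's full-batch model may not; batch_size and num_layers below are illustrative.

import torch
from torch_geometric.loader import NeighborSampler  # torch_geometric.data in older PyG

@torch.no_grad()
def batched_inference(model, data, node_idx, device, num_layers=3):
    # Evaluate in mini-batches so activations for the whole graph are never
    # materialized at once. sizes=[-1] * num_layers keeps full neighborhoods.
    model.eval()
    loader = NeighborSampler(data.edge_index, node_idx=node_idx,
                             sizes=[-1] * num_layers,
                             batch_size=1024, shuffle=False)
    outs = []
    for batch_size, n_id, adjs in loader:
        adjs = [adj.to(device) for adj in adjs]
        out = model(data.x[n_id].to(device), adjs)  # sampler-style forward (assumption)
        outs.append(out.cpu())
    return torch.cat(outs, dim=0)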

Experiment on Cora

Hi,
Sorry to bother you. I am wondering whether there is any experiment using GCN on the Cora dataset in your code? I have not found one yet.

Multi-gpu version

Dear authors,

Thank you for this awesome work; the provided examples are very clear. However, I found it non-trivial to apply FLAG to multi-GPU data-parallel training using DataParallel from PyG. Do you have any ideas on how to implement FLAG in such a multi-GPU training pipeline?

Thank you.

Evaluate on ogbg-code2

Hi, thank you for your great work and leaderboard contribution!

As we have updated our ogbg-code to ogbg-code2 in ogb>=1.2.5, would you mind using the new dataset? Apologies for the extra work, and thanks a lot!

Best,
Weihua --- OGB Team

Augmentation types

Hi,
I was wondering whether rotation and translation are the only two types of augmentation applied in your pipeline.

def random_points_augmentation(points, rotate=False, translate=False, **kwargs):

Have you tried other augmentation transformations?
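
For reference, a generic implementation matching that signature might look like the sketch below; this is a hypothetical re-implementation for illustration, not the repo's function, and 'translate_range' is an assumed keyword argument.

import math
import random
import torch

def random_points_augmentation(points, rotate=False, translate=False, **kwargs):
    # Hypothetical sketch of the quoted signature (not the repository's code).
    # points: (N, 3) point-cloud tensor.
    if rotate:
        theta = random.uniform(0.0, 2.0 * math.pi)  # random rotation about the z-axis
        c, s = math.cos(theta), math.sin(theta)
        rot = points.new_tensor([[c, -s, 0.0],
                                 [s,  c, 0.0],
                                 [0.0, 0.0, 1.0]])
        points = points @ rot.T
    if translate:
        # Small random global shift; the range is an assumed parameter.
        shift = (torch.rand(3) - 0.5) * kwargs.get("translate_range", 0.2)
        points = points + shift.to(points)
    return points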

Why not normalize the gradient of the perturbation?

Thanks for your article and code!

When reading attacks.py, I noticed there is no F-norm normalization, as described in the paper.

Instead, torch.sign is used here.

Could you explain the reasoning behind this choice? Thank you very much.

import torch

def flag(model_forward, perturb_shape, y, args, optimizer, device, criterion):
    model, forward = model_forward
    model.train()
    optimizer.zero_grad()

    # Initialize the perturbation uniformly in [-step_size, step_size].
    perturb = torch.FloatTensor(*perturb_shape).uniform_(-args.step_size, args.step_size).to(device)
    perturb.requires_grad_()
    out = forward(perturb)
    loss = criterion(out, y)
    loss /= args.m  # average the loss over the m ascent steps

    for _ in range(args.m - 1):
        # Backprop through model and perturbation; parameter gradients accumulate.
        loss.backward()
        # Ascent step on the perturbation using the sign of its gradient.
        perturb_data = perturb.detach() + args.step_size * torch.sign(perturb.grad.detach())
        perturb.data = perturb_data.data
        perturb.grad[:] = 0  # reset the perturbation gradient for the next step

        out = forward(perturb)
        loss = criterion(out, y)
        loss /= args.m

    loss.backward()
    optimizer.step()  # single parameter update with the accumulated gradients

    return loss, out
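
For contrast, a gradient-normalization variant of the inner ascent step (the F-norm projection the question refers to) might look like the following sketch. This is an illustration only, not the repository's implementation; perturb and args.step_size are reused from the snippet above.

# Hypothetical ascent step using L2 (Frobenius) normalization instead of torch.sign.
g = perturb.grad.detach()
g = g / (g.norm() + 1e-12)  # normalize the gradient; epsilon avoids division by zero
perturb.data = perturb.detach() + args.step_size * g
perturb.grad[:] = 0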

Algorithm problem

Hi,
After reading your paper and code, I have a question.
In ogbn_proteins/attacks.py, the flag function appears to be the core of the algorithm. In this function, you first compute a loss on the data plus the perturbation; over the next args.m - 1 iterations, you recompute the loss, take an ascent step on the perturbation, and let the parameter gradients accumulate; only at the end is the final loss backpropagated and the optimizer stepped. So the code accumulates gradients over several perturbation steps while updating the model parameters only once. This does not seem to match your paper!
In Algorithm 1 of your paper, lines 6 to 8, the adversarial loop computes both the perturbation gradient and the parameter gradient in each iteration.
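
To make the contrast concrete, the per-step reading of Algorithm 1 described above might be sketched as follows. This is one interpretation for illustration, not code from the paper or the repository; forward, perturb, criterion, y, args, and optimizer are reused from the flag() snippet above.

# Hypothetical per-step variant: one parameter update per inner iteration.
for _ in range(args.m):
    optimizer.zero_grad()
    loss = criterion(forward(perturb), y)
    loss.backward()  # gradients for both the perturbation and the parameters
    # Ascend on the perturbation ...
    perturb.data = perturb.detach() + args.step_size * torch.sign(perturb.grad.detach())
    perturb.grad[:] = 0
    # ... and update the model parameters in the same inner step.
    optimizer.step()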

Add perturb before or after node encoder?

To my understanding, for the graphprop tasks the perturbation is added to the input node embedding (the output of the node encoder), while in the nodeprop tasks the perturbation is added directly to the node features. Have you tested the difference?

# graphprop: the perturbation is added to the embeddings produced by the node encoder
def forward(self, batched_data, perturb=None):
    x, edge_index, edge_attr, batch = batched_data.x, batched_data.edge_index, batched_data.edge_attr, batched_data.batch

    ### compute the input node embedding, injecting the perturbation after encoding
    tmp = self.atom_encoder(x) + perturb if perturb is not None else self.atom_encoder(x)

# nodeprop: the perturbation is added directly to the raw node features
def train_flag(model, x, y_true, train_idx, optimizer, args, device):

    forward = lambda perturb: model(x[train_idx] + perturb)  # perturb raw features
    model_forward = (model, forward)
    y = y_true.squeeze(1)[train_idx]
    loss, _ = flag(model_forward, x[train_idx].shape, y, args, optimizer, device, F.nll_loss)
    return loss.item()
