graph-mlp's Issues
len(rand_indx) < len(idx_train)
I was trying to run your code with my dataset; I have 18000 training samples, while my batch size is 500.
The following lines can't work with these parameters because len(rand_indx) < len(idx_train):
File: train.py
rand_indx = torch.tensor(np.random.choice(np.arange(adj_label.shape[0]), batch_size)).type(torch.long).cuda()
rand_indx[0:len(idx_train)] = idx_train
Here's the error that I get:
File "train.py", line 92, in get_batch
    rand_indx[0:len(idx_train)] = idx_train
RuntimeError: The expanded size of the tensor (500) must match the existing size (18000) at non-singleton dimension 0. Target sizes: [500]. Tensor sizes: [18000]
Could you please tell me how I can fix it?
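One possible fix, as a sketch rather than the authors' code: only sample batch_size - len(idx_train) extra indices when the batch has room for the whole training set, and fall back to a random subset of idx_train otherwise. The helper below is stdlib-only for illustration; the function name and signature are hypothetical.

```python
import random

def get_batch_indices(num_nodes, idx_train, batch_size, rng=random):
    """Sample a batch of node indices that always contains training
    indices, handling batch_size < len(idx_train), which crashes the
    original slice assignment in train.py."""
    idx_train = list(idx_train)
    if batch_size <= len(idx_train):
        # Not enough room for every training node: take a random subset.
        return rng.sample(idx_train, batch_size)
    # Otherwise keep all training nodes and fill the rest randomly.
    n_rand = batch_size - len(idx_train)
    rand_part = [rng.randrange(num_nodes) for _ in range(n_rand)]
    return idx_train + rand_part
```

The case reported above (18000 training samples, batch size 500) then yields a batch of 500 training indices instead of raising a size-mismatch error.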
NaN loss problem [SOLVED]: performance plummets when setting the adj_label diagonal to zero
In rand_indx = torch.tensor(np.random.choice(np.arange(adj_label.shape[0]), batch_size)).type(torch.long), what is adj_label?
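For what it's worth, my reading of the Graph-MLP paper is that adj_label is the r-th power of the symmetrically normalized adjacency matrix, used as positive-pair weights in the Ncontrast loss. A stdlib-only sketch of that construction, with hypothetical helper names:

```python
import math

def normalized_adj(adj):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 on a dense
    adjacency given as nested lists (illustration only)."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    deg = [sum(row) for row in a_hat]
    d_inv_sqrt = [1.0 / math.sqrt(d) if d > 0 else 0.0 for d in deg]
    return [[d_inv_sqrt[i] * a_hat[i][j] * d_inv_sqrt[j] for j in range(n)]
            for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def make_adj_label(adj, r=2):
    """r-hop weights: the r-th power of the normalized adjacency,
    so entry (i, j) is nonzero when j is within r hops of i."""
    a_norm = normalized_adj(adj)
    out = a_norm
    for _ in range(r - 1):
        out = matmul(out, a_norm)
    return out
```

On a 3-node path graph, for example, make_adj_label(adj, r=2) gives the two endpoints a positive weight even though they are not directly connected.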
['x', 'y', 'tx', 'ty', 'allx', 'ally', 'graph']: a question about data loading.
Corrupted adjacency matrix
Thank you for your work.
Could you also upload the code that checks robustness against corrupted connections at inference time, which you used to generate Figure 6 comparing Graph-MLP and GCN on the Cora and Citeseer datasets?
Much appreciated.
What is the difference from LINE?
The LINE model uses the graph structure to generate node embeddings, like a pre-training step, and those embeddings are then used for multiple downstream tasks. I think this method instead uses only the original features as the randomly initialized embedding and merges the two-step training into a single end-to-end training phase.
Robust experiment
Hello! I would like to ask what your specific parameter settings for the robustness experiment are. I could not reproduce your results with the parameter settings provided in the article. Looking forward to your reply.
How does the model stay robust while still using the adjacency information implicitly?
Hi, your work is really inspiring and I have one question.
In the paper, you say the model is more robust when facing large-scale graph data and corrupted adjacency information, because it uses the adjacency information implicitly, unlike GCN, which uses the adjacency directly during the aggregation phase.
However, you still use the adjacency information (possibly even its 4th power) when computing the Ncontrast loss. How does the model maintain robust performance under heavily corrupted adjacency information, given that the Ncontrast loss still needs the adjacency matrix during training?
Is it because you only need the adjacency information during training rather than in both the training and test phases, or is there some other justification?
I am really confused about that and look forward to your reply.
Thanks a lot
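For reference, the adjacency only enters through the training-time Ncontrast loss; at inference the MLP runs on node features alone. Below is a stdlib-only sketch of that loss as I read the paper's equation: for each node, the negative log of the adjacency-weighted softmax mass assigned to its neighbours. The function name and the small-epsilon guard are my own additions.

```python
import math

def ncontrast_loss(sim, adj_label, tau=1.0):
    """Sketch of the Ncontrast loss: sim is an n x n matrix of pairwise
    similarities between node embeddings, adj_label holds (powered)
    adjacency weights marking positive pairs, tau is the temperature."""
    n = len(sim)
    total = 0.0
    for i in range(n):
        # Softmax mass of node i over all other nodes.
        denom = sum(math.exp(sim[i][j] / tau) for j in range(n) if j != i)
        # Mass on i's (r-hop) neighbours, weighted by adj_label.
        weighted = sum(adj_label[i][j] * math.exp(sim[i][j] / tau)
                       for j in range(n) if j != i)
        total += -math.log(weighted / denom + 1e-12)
    return total / n
```

Since adj_label is fixed before training, corrupting it only perturbs which pairs are pulled together during training; the forward pass at test time never touches it, which is one plausible reading of the robustness claim.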