
planetoid's People

Contributors

kimiyoung


planetoid's Issues

Need script for preprocessing

Hi,

Could you please provide the preprocessing script? I would need it to extend the code to other datasets.

thanks,
Deep

ImportError: cannot import name downsample

rzai@rzai00:/prj/planetoid$ python test_trans.py
Using gpu device 0: GeForce GTX 1080 (CNMeM is disabled, cuDNN 5005)
Traceback (most recent call last):
File "test_trans.py", line 3, in
from trans_model import trans_model as model
File "/home/rzai/prj/planetoid/trans_model.py", line 2, in
import lasagne
File "/usr/local/lib/python2.7/dist-packages/lasagne/init.py", line 19, in
from . import layers
File "/usr/local/lib/python2.7/dist-packages/lasagne/layers/init.py", line 7, in
from .pool import *
File "/usr/local/lib/python2.7/dist-packages/lasagne/layers/pool.py", line 6, in
from theano.tensor.signal import downsample
ImportError: cannot import name downsample
rzai@rzai00:
/prj/planetoid$
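
A hedged note, not an official fix: this ImportError usually points to a Theano/Lasagne version mismatch, since newer Theano releases removed theano.tensor.signal.downsample in favour of theano.tensor.signal.pool, while older Lasagne builds still import the removed module. A minimal sketch to confirm which API the installed Theano exposes (upgrading Lasagne, or pinning an older Theano, is the usual remedy):

# Minimal check of which pooling API the installed Theano provides.
# If only `pool` is importable, the installed Lasagne is too old for this Theano.
try:
    from theano.tensor.signal import downsample  # old Theano API expected by old Lasagne
    print("old Theano API: 'downsample' is available")
except ImportError:
    from theano.tensor.signal import pool  # replacement module in newer Theano
    print("newer Theano API: only 'pool' is available; upgrade Lasagne or downgrade Theano")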

Data problem

Hello, what do the files .x, .y, .tx, .ty, .allx, .ally, .graph, and .index store?
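
As a hedged illustration (the loader pattern mirrors the one in test_trans.py; the comments give my understanding of the format and should be checked against the repo README), the pickled objects can be inspected like this:

# Hedged sketch: load whichever planetoid data files are present and print
# their types/shapes. My understanding of the format (verify against the README):
#   x, y       - features / one-hot labels of the labeled training instances
#   tx, ty     - features / labels of the test instances
#   allx, ally - features / labels of all (labeled + unlabeled) training instances
#   graph      - dict {node index: [neighbor indices]}
#   test.index - plain-text list of the test instances' graph indices
import os
import sys
import pickle

DATASET = 'cora'
PREFIX = 'trans'  # or 'ind', depending on which split you use
NAMES = ['x', 'y', 'tx', 'ty', 'allx', 'ally', 'graph']

objects = {}
for name in NAMES:
    path = 'data/{}.{}.{}'.format(PREFIX, DATASET, name)
    if not os.path.exists(path):
        continue  # not every split ships every file
    with open(path, 'rb') as f:
        if sys.version_info.major >= 3:
            objects[name] = pickle.load(f, encoding='latin1')  # files were pickled under Python 2
        else:
            objects[name] = pickle.load(f)

for name, obj in objects.items():
    shape = obj.shape if hasattr(obj, 'shape') else len(obj)
    print(name, type(obj).__name__, shape)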

Issue in generating .test.index file

Could you please explain in detail how to generate the test.index file when using the code with an image dataset?
Please describe the procedure for creating it.

Thanks in advance...
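
For what it is worth, a hedged sketch based on my reading of how common loaders parse this file (an assumption, not the authors' procedure): test.index appears to be a plain-text file with one integer per line, listing the graph indices of the instances held out for testing.

# Hedged sketch: write a test.index file for a custom dataset, assuming the
# format is simply one test-node index per line. The indices and the file
# path below are hypothetical placeholders.
test_indices = [1708, 1709, 1710]  # graph indices of your held-out test nodes

with open('data/ind.mydataset.test.index', 'w') as f:
    for idx in test_indices:
        f.write('{}\n'.format(idx))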

Dataset preprocessing

How are the .x and .y files generated? Is there a preprocessing step? Training on the original Cora dataset only gives me about 65% accuracy (400 epochs); could that be caused by preprocessing?
If possible, please share the code that generates the .x and .y files; many people would like to have it.

Question about the bag-of-words dictionary

I sincerely appreciate your efforts in curating diverse datasets for the graph field.

I am trying to analyze the semantics of nodes and their connectivity in the datasets.

So, is there a dictionary for the bag-of-words features that maps each index to a word?
Or could I get the source code used to preprocess the bag-of-words features?

Python 3

Dear Sir,

Could you please upload a version of the code compatible with Python 3?

Regards,
Mabi

names of the features

Hi there! I train a model on the CORA dataset and explain it using model-interpretation methods. But to fully understand how the model operates, I am missing the names of the features encoded in the tensor data.x (cora.x). Where can I find a dictionary with the names of the encoded features?

Better performance than in paper

Running test_trans.py and test_ind.py on the CITESEER dataset both yields performance a few percentage points better than reported in the paper. Any idea why that would be? Is the implementation here slightly different, or is it just a function of the train/test split?

EDIT: The same goes for CORA, but I haven't been able to reproduce PUBMED -- I got to ~0.64 and then killed it, so perhaps I didn't let it run long enough. At that point accuracy was increasing very slowly, though.

about citeseer

When I use the CiteSeer data, I get the error "index 3312 is out of bounds for dim with size 3312", or "expected dim 0 size 3327, got 3312". I don't know whether CiteSeer has 3312 or 3327 nodes. Please help me resolve this. Thanks.
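
A hedged note, not an authoritative answer: the CiteSeer graph is commonly reported to have 3327 nodes, but only 3312 of them come with feature rows, because a few test nodes are isolated. A common workaround in third-party loaders is to pad the test feature and label matrices with zero rows at the missing indices. A minimal sketch, assuming tx is a scipy sparse matrix, ty a numpy array, and that the rows of tx correspond to the sorted test indices:

# Hedged sketch: pad tx/ty with zero rows for isolated test nodes whose
# indices appear in the graph but have no feature/label rows.
import numpy as np
import scipy.sparse as sp

def pad_isolated_test_nodes(tx, ty, test_idx):
    """Return (tx, ty) extended to cover the full test index range."""
    test_idx = np.sort(np.asarray(test_idx))
    full_len = test_idx.max() - test_idx.min() + 1

    tx_full = sp.lil_matrix((full_len, tx.shape[1]))
    tx_full[test_idx - test_idx.min(), :] = tx   # existing rows keep their values

    ty_full = np.zeros((full_len, ty.shape[1]))
    ty_full[test_idx - test_idx.min(), :] = ty   # missing rows stay all-zero

    return tx_full.tocsr(), ty_full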

run test_trans.py error

Hello, when I run test_trans.py, the following error occurs:

File "F:/googledownload/planetoid-master/test_trans.py", line 49, in
OBJECTS.append(cPickle.load(open("data/trans.{}.{}".format(DATASET, NAMES[i]),"rb+")))
UnicodeDecodeError: 'ascii' codec can't decode byte 0x85 in position 16: ordinal not in range(128)

Do you know how to solve it?
My Python version is 3.6.
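
A hedged note rather than an official answer: this UnicodeDecodeError typically appears when pickle files created under Python 2 (as the bundled data files seem to be) are loaded under Python 3 with the default ASCII encoding. Passing encoding='latin1' to pickle.load is the usual workaround, e.g.:

# Hedged sketch: load the Python-2 pickled data files under Python 3 by
# forcing the 'latin1' encoding, which handles the embedded numpy arrays.
# Note that cPickle does not exist in Python 3; the built-in pickle replaces it.
import pickle

DATASET = 'citeseer'                     # or 'cora' / 'pubmed'
NAMES = ['x', 'y', 'tx', 'ty', 'graph']  # adjust to the files you actually have

OBJECTS = []
for name in NAMES:
    with open('data/trans.{}.{}'.format(DATASET, name), 'rb') as f:
        OBJECTS.append(pickle.load(f, encoding='latin1'))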

about acc

The training script uses while True, which is an infinite loop.
How many iterations does it need to reach the accuracy reported in the paper?
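
Not the authors' recipe, but one hedged workaround is to bound the loop yourself and stop once a validation score stops improving; a minimal sketch with placeholder train/evaluate functions:

# Hedged sketch: cap the number of iterations and stop early when validation
# accuracy stops improving, instead of looping with `while True`.
import random

def train_step():
    pass  # placeholder for one training update of the actual model

def evaluate():
    return random.random()  # placeholder for the actual validation accuracy

MAX_ITERS = 10000
PATIENCE = 20        # evaluations allowed without improvement before stopping

best_acc, bad_rounds = 0.0, 0
for it in range(MAX_ITERS):
    train_step()
    if it % 100 == 0:
        acc = evaluate()
        if acc > best_acc:
            best_acc, bad_rounds = acc, 0
        else:
            bad_rounds += 1
        if bad_rounds >= PATIENCE:
            break

print('best validation accuracy:', best_acc)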
