
BiGI

This is the source code for the paper "Bipartite Graph Embedding via Mutual Information Maximization", accepted at WSDM 2021, by Jiangxia Cao*, Xixun Lin*, Shu Guo, Luchen Liu, Tingwen Liu, and Bin Wang (* denotes equal contribution).

@inproceedings{bigi2021,
  title={Bipartite Graph Embedding via Mutual Information Maximization},
  author={Cao*, Jiangxia and Lin*, Xixun and Guo, Shu and Liu, Luchen and Liu, Tingwen and Wang, Bin},
  booktitle={ACM International Conference on Web Search and Data Mining (WSDM)},
  year={2021}
}

Requirements

Python == 3.6.2
PyTorch == 1.1.0
CUDA == 9.0
Scikit-Learn == 0.22
SciPy == 1.3.1
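
A minimal environment-setup sketch matching the versions above (the conda/pip commands are an assumption, not part of this repo; prebuilt PyTorch 1.1.0 wheels for CUDA 9.0 may need to be fetched from the official PyTorch archive rather than PyPI):

conda create -n bigi python=3.6.2
conda activate bigi
pip install torch==1.1.0 scikit-learn==0.22 scipy==1.3.1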

Preparation

Some datasets are included in the ./dataset directory. The other datasets can be downloaded from their official websites.

Usage

To run this project, please make sure the packages listed in Requirements are installed. Our experiments were conducted on a PC with an Intel Xeon E5 2.1GHz CPU and a Tesla V100 GPU.

For running DBLP:

CUDA_VISIBLE_DEVICES=1 nohup python -u train_rec.py --id dblp --struct_rate 0.00001 --GNN 2 > BiGIdblp.log 2>&1&

For running ML-100K:

CUDA_VISIBLE_DEVICES=1 nohup python -u train_rec.py --data_dir dataset/movie/ml-100k/1/ --batch_size 128 --id ml100k --struct_rate 0.0001 --GNN 2 > BiGI100k.log 2>&1&

For running ML-10M:

CUDA_VISIBLE_DEVICES=1 nohup python -u train_rec.py --batch_size 100000 --data_dir dataset/movie/ml-10m/ml-10M100K/1/ --id ml10m --struct_rate 0.00001 > BiGI10m.log 2>&1&

For running Wiki(5:5):

CUDA_VISIBLE_DEVICES=1 nohup python -u train_lp.py --id wiki5 --struct_rate 0.0001 --GNN 2 > BiGIwiki5.log 2>&1&

For running Wiki(4:6):

CUDA_VISIBLE_DEVICES=1 nohup python -u train_lp.py --data_dir dataset/wiki/4/ --id wiki4 --struct_rate 0.0001 --GNN 2 > BiGIwiki4.log 2>&1&

Issues

Request for test.py

Hello, thank you for the great paper and work.
I cannot find the test code that loads the trained model and evaluates its performance.
After training the model, is there a script to test the saved model?
Thanks again. 👍

Multi-GPU training

Thanks for your great repo.
I tried to train the model on a big graph (about 7M edges in the training data).
Training on a single GPU was too slow, so I decided to try multi-GPU, but I got this error:

  File "/home/BiGI/BiGI_src/train_rec.py", line 163, in <module>
    loss = trainer.reconstruct(UV, VU, adj, corruption_UV, corruption_VU, fake_adj, user_feature, item_feature, batch)  # [ [user_list], [item_list], [neg_item_list] ]
  File "/home/BiGI/BiGI_src/model/trainer.py", line 152, in reconstruct
    self.update_bipartite(user_feature, item_feature, CUV, CVU, fake_adj, fake = 1)
  File "/home/BiGI/BiGI_src/model/trainer.py", line 131, in update_bipartite
    self.user_hidden_out, self.item_hidden_out = self.model(user_feature, item_feature, UV_adj, VU_adj, adj)
  File "/home/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 158, in forward
    inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
  File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 175, in scatter
    return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
  File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 44, in scatter_kwargs
    inputs = scatter(inputs, target_gpus, dim) if inputs else []
  File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 36, in scatter
    res = scatter_map(inputs)
  File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 23, in scatter_map
    return list(zip(*map(scatter_map, obj)))
  File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 19, in scatter_map
    return Scatter.apply(target_gpus, None, dim, obj)
  File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/_functions.py", line 96, in forward
    outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
  File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/comm.py", line 189, in scatter
    return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
RuntimeError: Tensors of type SparseTensorImpl do not have strides

Process finished with exit code 1 

How do you train the model on large graphs? Do you have any idea or code for multi-GPU training?
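
For context: the RuntimeError above is a known limitation of torch.nn.DataParallel, which scatters each input across GPUs by chunking it along dim 0; chunking requires strided (dense) tensors, and sparse COO tensors have no strides. A minimal sketch of the limitation, with one possible workaround (densifying the adjacency before the forward pass; whether this fits in memory for a 7M-edge graph is an open question):

import torch

# A sparse COO adjacency matrix, like the ones BiGI feeds to its model.
adj = torch.sparse_coo_tensor(
    indices=torch.tensor([[0, 1], [1, 0]]),
    values=torch.tensor([1.0, 1.0]),
    size=(2, 2),
)

print(adj.is_sparse)  # True
# adj.stride()        # RuntimeError: sparse tensors do not have strides,
#                     # which is what DataParallel's scatter trips over.

# Possible workaround for small graphs: densify before the forward pass.
dense_adj = adj.to_dense()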

Train on custom dataset

I want to train on a custom dataset for a recommendation task.
The dataset is user/group data in which users participate in some groups.
The problem is how I should split the data into training and test sets.
When I randomly select some nodes and edges, I get this error:

  File "/home/BiGI/BiGI_src/utils/GraphMaker.py", line 135, in preprocessing
    UV_adj = sp.coo_matrix((np.ones(UV_edges.shape[0]), (UV_edges[:, 0], UV_edges[:, 1])),
  File "/home/.local/lib/python3.8/site-packages/scipy/sparse/coo.py", line 196, in __init__
    self._check()
  File "/home/.local/lib/python3.8/site-packages/scipy/sparse/coo.py", line 283, in _check
    raise ValueError('row index exceeds matrix dimensions')
ValueError: row index exceeds matrix dimensions

Request for the paper

Hello, I am a graduate student at Hefei University of Technology and would very much like to study your work.
If convenient, could you please share your paper? My contact: [email protected]. Thanks!

Does a single DMGI run only produce embeddings for one node type? Was it run twice?

A single run of DMGI yields embeddings for only one type of node, but a bipartite graph has two node types, and the experiments evaluate the DMGI baseline on both user and item nodes. Was DMGI run twice to obtain both sets of embeddings? If so, how were the metapaths selected for each run? I would like to know more specific details of the DMGI baseline experiment.

Thank you so much in advance.

Request for code

Really interested in this paper; may I ask when the code will be released?
