L-EnsNMF and BoostedNE


The factorization procedure L-EnsNMF creates a sequential ensemble factorization of a target matrix. In each boosting round a residual target matrix is created by sampling an anchor row and column. Anchor sampling identifies a block of matrix entries similar to the sampled row and column, while the remaining entries of the residual matrix are downsampled. By factorizing these residual matrices, each relatively upsampled block receives a high-quality representation. BoostedNE adapts this idea to node embedding: an approximate target matrix obtained with truncated random walk sampling is factorized by the L-EnsNMF method. This way, blocks of highly connected nodes receive representations described by the factor vectors of a given boosting round. Specifically, my implementation assumes that the target matrices are sparse. So far this is the only publicly available Python implementation of these procedures.
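As a rough illustration of the boosting loop only (leaving out the anchor sampling and the localized weighting the actual procedure applies to the residual), a minimal sketch of sequential residual factorization with scikit-learn could look as follows; the function name and arguments are illustrative, not part of this codebase:

import numpy as np
from sklearn.decomposition import NMF

def sequential_residual_nmf(X, dimensions=8, iterations=10):
    # X is a non-negative target matrix; each round fits a small NMF model
    # to the part of X that the earlier rounds have not yet explained.
    residual = X.copy()
    left_factors = []
    for _ in range(iterations):
        model = NMF(n_components=dimensions, init="random", max_iter=200)
        W = model.fit_transform(residual)   # (rows, dimensions)
        H = model.components_               # (dimensions, columns)
        left_factors.append(W)
        # keep the residual non-negative so the next NMF round is well defined
        residual = np.maximum(residual - W @ H, 0)
    # concatenating the per-round factors yields the final multi-level embedding
    return np.concatenate(left_factors, axis=1)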

The model is now also available in the Karate Club package.
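A minimal usage sketch with Karate Club is given below; it assumes the karateclub package is installed and treats the exact constructor arguments as illustrative, since parameter names can differ between versions:

import networkx as nx
from karateclub import BoostNE

# Any NetworkX graph with nodes indexed from 0 works as input.
graph = nx.newman_watts_strogatz_graph(100, 10, 0.2)

model = BoostNE(dimensions=8, iterations=16)  # 16 boosting rounds of rank-8 factors
model.fit(graph)
embedding = model.get_embedding()             # one row of features per node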

This repository provides an implementation for L-EnsNMF and BoostedNE as described in the papers:

L-EnsNMF: Boosted Local Topic Discovery via Ensemble of Nonnegative Matrix Factorization. Sangho Suh, Jaegul Choo, Joonseok Lee, Chandan K. Reddy. ICDM, 2016. http://dmkd.cs.vt.edu/papers/ICDM16.pdf

Multi-Level Network Embedding with Boosted Low-Rank Matrix Approximation. Jundong Li, Liang Wu and Huan Liu. ASONAM, 2019. https://arxiv.org/abs/1808.08627

The original Matlab implementation is available [here].

Requirements

The codebase is implemented in Python 3.5.2. The package versions used for development are listed below.

networkx          2.4
tqdm              4.28.1
numpy             1.15.4
pandas            0.23.4
texttable         1.5.0
scipy             1.1.0
argparse          1.1.0
sklearn           0.19.1
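Assuming a standard pip environment, the pinned versions above can be installed in one step (the sklearn entry is published on PyPI as scikit-learn, and argparse ships with Python 3):

$ pip install networkx==2.4 tqdm==4.28.1 numpy==1.15.4 pandas==0.23.4 texttable==1.5.0 scipy==1.1.0 scikit-learn==0.19.1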

Datasets

Graphs

The code takes an input graph in a csv file. Every row indicates an edge between two nodes separated by a comma. The first row is a header. Nodes should be indexed starting with 0. A sample graph for the Wikipedia Giraffes is included in the input/ directory.
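The first few lines of such an edge list might look like this (the header names are illustrative; only the comma-separated node ID pairs matter):

id_1,id_2
0,1
0,2
1,3
2,3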

Sparse Matrices

The code takes an input matrix in a csv file. Every row indicates a (user,item,score) separated by a comma. The first row is a header. Users and items should be indexed starting with 0, each score is positive. A sample sparse stochastic block matrix is included in the input/ folder.
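An input matrix file follows the same pattern, for example (header names are again illustrative):

user,item,score
0,0,3.0
0,2,1.0
1,1,2.0
2,0,5.0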

Options

Learning of the embedding is handled by the src/main.py script which provides the following command line arguments.

Input and output options

  --input-path    STR      Edges path.                        Default is `input/giraffe_edges.csv`.
  --output-path   STR      Embedding path.                    Default is `output/giraffe_embedding.csv`.
  --dataset-type  STR      Dataset type: graph or sparse.     Default is `graph`.

Boosted Model options

  --dimensions   INT         Number of embedding dimensions.  Default is 8.
  --iterations   INT         Number of boosting iterations.   Default is 10.
  --alpha        FLOAT       Regularization coefficient.      Default is 0.001.

DeepWalk options

  --number-of-walks     INT      Number of random walks.                  Default is 10.
  --walk-length         INT      Random walk length.                      Default is 80.
  --window-size         INT      Window size for feature extraction.      Default is 3.
  --pruning-threshold   INT      Minimal co-occurrence count to be kept.  Default is 10.

Examples

The following commands learn a graph embedding and write the embedding to disk. The node representations are ordered by node ID.

Creating an embedding of the default dataset with the default hyperparameter settings. Saving the embedding at the default path.

$ python src/main.py

Creating an embedding of the default dataset with 16 dimensions and 20 boosting rounds. This results in a 16x20=320 dimensional embedding.

$ python src/main.py --dimensions 16 --iterations 20

Creating an L-EnsNMF embedding of the default dataset with stronger regularization.

$ python src/main.py --alpha 0.1

Creating an embedding of another dataset, the Wikipedia Dogs graph. Saving the output at a custom path.

$ python src/main.py --input-path input/dog_edges.csv --output-path output/dog_lensnmf.csv

Creating an embedding of the default dataset with 20 random walks per source and 120 nodes in each vertex sequence.

$ python src/main.py --walk-length 120 --number-of-walks 20

Creating an embedding of a non-graph dataset and storing it under a non-standard name.

$ python src/main.py --dataset-type sparse --input-path input/small_block.csv --output-path output/block_embedding.csv
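A quick way to inspect a written embedding is to load it back with pandas; this sketch assumes the default output path and that the file is a plain CSV with a header row and one row per node:

import pandas as pd

# Load the saved embedding produced by the default run above.
embedding = pd.read_csv("output/giraffe_embedding.csv")
print(embedding.shape)  # number of nodes x number of embedding columns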

License

GNU General Public License v3.0