
rllm's Introduction

Note: we will release our first version (v0.1) on August 11, 2024.

rLLM (relationLLM) is an easy-to-use PyTorch library for Relational Table Learning (RTL) with LLMs. It performs two key functions:

  1. Decomposes state-of-the-art GNNs, LLMs, and TNNs into standardized modules.
  2. Facilitates novel model building in a "combine, align, and co-train" way using these modules (see the conceptual sketch below).
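
As a rough illustration, a "combine and co-train" model might chain a table encoder (TNN) with a graph encoder (GNN) and optimize them jointly with one loss. This is a conceptual sketch in plain PyTorch, not rLLM's actual API; all class and parameter names here are hypothetical.

import torch
import torch.nn as nn

# Hypothetical standardized modules: a table encoder (TNN) and a graph encoder (GNN).
class TableEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())

    def forward(self, x):
        return self.mlp(x)

class GraphEncoder(nn.Module):
    def __init__(self, hid_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(hid_dim, out_dim)

    def forward(self, h, adj):
        # Simple one-hop message passing: aggregate neighbor features.
        return adj @ self.lin(h)

# "Combine": chain the modules; "co-train": optimize both with a single loss.
class CombinedModel(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.table = TableEncoder(in_dim, hid_dim)
        self.graph = GraphEncoder(hid_dim, out_dim)

    def forward(self, x, adj):
        return self.graph(self.table(x), adj)

# Toy usage with random table rows and an identity adjacency matrix.
model = CombinedModel(in_dim=16, hid_dim=32, out_dim=4)
x = torch.rand(5, 16)
adj = torch.eye(5)
print(model(x, adj).shape)  # (5, 4)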

How to Try:

Let's run an RTL-type method, BRIDGE, as an example:

# cd ./examples
# set parameters if necessary

python bridge/bridge_tacm12k.py
python bridge/bridge_tlf2k.py
python bridge/bridge_tml1m.py

Highlight Features:

  • LLM-friendly: Modular interface designed for LLM-oriented applications, integrating smoothly with LangChain and Hugging Face Transformers.
  • One-Fit-All Potential: Processes various graphs (like Social/Citation/E-commerce Networks) by treating them as multiple tables linked by foreign keys.
  • Novel Datasets: Introduces three new relational table datasets useful for RTL model design, each including a standard classification task with examples.
  • Community Support: Maintained by students and teachers from Shanghai Jiao Tong University and Tsinghua University. Supports the SJTU undergraduate course "Content Understanding (NIS4301)" and the graduate course "Social Network Analysis (NIS8023)".

Citation

@article{rllm2024,
      title={rLLM: Relational Table Learning with LLMs}, 
      author={Weichen Li and Xiaotong Huang and Jianwu Zheng and Zheng Wang and Chaokun Wang and Li Pan and Jianhua Li},
      year={2024},
      eprint={2407.20157},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2407.20157}, 
}

rllm's People

Contributors

01warpdrive, arrebol-logos, bireflection, biuboomc, dlllx, jiangling-js, junzheshen, kseconds, lingxiaodiao, mr-zhengjw, niyaochuangzuo, rllm-project, stczzz, w1nterflow, yanjiashuo


rllm's Issues

Path issues

Hi!
We found some problems when running the example code (GCN). From the code, we assume we should run python train.py from the directory rllm/examples/gcn/cora. However, this leads to many import errors. After a thorough investigation, we believe there are three major errors in the code:

  • The graph convolutional layer is imported from pygcn instead of from GCNconv.py in this repo (rllm/examples/gcn/cora/model.py).
  • Typo: dataset -> datasets (e.g., rllm/datasets/cora.py and other files in this directory).
  • When building paths by string concatenation, the appended segment should start with '/' (e.g., change current_path + '../../data' to current_path + '/../data'); see the sketch below.
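
For reference, a more robust way to build such paths (a minimal sketch, not the repository's actual code; current_path here simply stands for the directory of the running script) is os.path.join:

import os

# Directory containing the running script.
current_path = os.path.dirname(os.path.abspath(__file__))

# os.path.join inserts separators correctly, avoiding the missing-'/' bug.
data_dir = os.path.join(current_path, "..", "..", "data")
print(os.path.normpath(data_dir))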

We have corrected these errors and successfully run GCN on the Cora dataset. How should we submit our code?
Thank you UwU.

TACM12K dataset preprocessing

How can I obtain the embeddings of papers and authors in the TACM12K dataset? The code does not seem to be included in rLLM.
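
For reference, one common way to produce such text embeddings (a sketch only, assuming the paper/author text fields are available; this is not rLLM's actual preprocessing code) is to encode each row with a pretrained language model:

import torch
from transformers import AutoTokenizer, AutoModel

# Hypothetical text fields, e.g. paper titles/abstracts or author names.
texts = ["An example paper title", "Another paper abstract ..."]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

with torch.no_grad():
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**inputs)
    # Mean-pool the token embeddings to get one vector per row.
    emb = out.last_hidden_state.mean(dim=1)

print(emb.shape)  # (num_rows, hidden_size)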

Requested tokens exceed context window

Describe your question
I am running the label_free_gnn example and occasionally encounter an error when calling a local llama.cpp model. The error message is as follows:

ValueError: Requested tokens (516) exceed context window of 512

It seems that the model is trying to process more tokens than the context window size allows. How do I prevent this?

Describe the solution you'd like to know
The model should either process the tokens without exceeding the context window or handle the situation gracefully; a possible workaround is sketched below. If you need my environment details, please let me know!
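
One workaround (a sketch, assuming llama-cpp-python is called directly; the model path is a placeholder) is to enlarge the context window when loading the model and to cap generation so that prompt plus new tokens stay within it:

from llama_cpp import Llama

# n_ctx controls the context window; 512 is a common default.
llm = Llama(model_path="path/to/model.gguf", n_ctx=2048)

prompt = "..."  # the prompt built by the example
out = llm(prompt, max_tokens=256)  # cap generation length
print(out["choices"][0]["text"])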

ninja: build stopped: subcommand failed

The problem arose when I ran CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python:

Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [66 lines of output]
      *** scikit-build-core 0.8.1 using CMake 3.28.3 (wheel)
      *** Configuring CMake...
      loading initial cache file /tmp/tmpba3nrde9/build/CMakeInit.txt
      -- The C compiler identification is GNU 9.4.0
      -- The CXX compiler identification is GNU 9.4.0
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Git: /usr/bin/git (found version "2.25.1")
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
      -- Check if compiler accepts -pthread
      -- Check if compiler accepts -pthread - yes
      -- Found Threads: TRUE
      -- Found CUDAToolkit: /usr/include (found version "10.1.243")
      -- cuBLAS found
      -- The CUDA compiler identification is NVIDIA 10.1.243
      -- Detecting CUDA compiler ABI info
      -- Detecting CUDA compiler ABI info - done
      -- Check for working CUDA compiler: /usr/bin/nvcc - skipped
      -- Detecting CUDA compile features
      -- Detecting CUDA compile features - done
      -- Using CUDA architectures: 52;61;70
      -- CUDA host compiler is GNU 8.4.0
      
      -- CMAKE_SYSTEM_PROCESSOR: x86_64
      -- x86 detected
      CMake Warning (dev) at CMakeLists.txt:21 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.
      
      CMake Warning (dev) at CMakeLists.txt:30 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.
      
      -- Configuring done (2.9s)
      -- Generating done (0.0s)
      -- Build files have been written to: /tmp/tmpba3nrde9/build
      *** Building project with Ninja...
      Change Dir: '/tmp/tmpba3nrde9/build'
      
      Run Build Command(s): /tmp/pip-build-env-2izfxphj/normal/lib/python3.8/site-packages/ninja/data/bin/ninja -v
      [1/23] /usr/bin/nvcc  -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_USE_CUBLAS -DK_QUANTS_PER_ITERATION=2 -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/. -O3 -DNDEBUG -std=c++11 "--generate-code=arch=compute_52,code=[compute_52,sm_52]" "--generate-code=arch=compute_61,code=[compute_61,sm_61]" "--generate-code=arch=compute_70,code=[compute_70,sm_70]" -Xcompiler=-fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -use_fast_math -Wno-pedantic -Xcompiler "-Wno-array-bounds -Wno-format-truncation -Wextra-semi" -march=native -Xcompiler -pthread -x cu -c /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/ggml-cuda.cu -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o && /usr/bin/nvcc  -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_USE_CUBLAS -DK_QUANTS_PER_ITERATION=2 -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/. -O3 -DNDEBUG -std=c++11 "--generate-code=arch=compute_52,code=[compute_52,sm_52]" "--generate-code=arch=compute_61,code=[compute_61,sm_61]" "--generate-code=arch=compute_70,code=[compute_70,sm_70]" -Xcompiler=-fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -use_fast_math -Wno-pedantic -Xcompiler "-Wno-array-bounds -Wno-format-truncation -Wextra-semi" -march=native -Xcompiler -pthread -x cu -M /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/ggml-cuda.cu -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o.d
      FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o
      /usr/bin/nvcc  -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_USE_CUBLAS -DK_QUANTS_PER_ITERATION=2 -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/. -O3 -DNDEBUG -std=c++11 "--generate-code=arch=compute_52,code=[compute_52,sm_52]" "--generate-code=arch=compute_61,code=[compute_61,sm_61]" "--generate-code=arch=compute_70,code=[compute_70,sm_70]" -Xcompiler=-fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -use_fast_math -Wno-pedantic -Xcompiler "-Wno-array-bounds -Wno-format-truncation -Wextra-semi" -march=native -Xcompiler -pthread -x cu -c /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/ggml-cuda.cu -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o && /usr/bin/nvcc  -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_USE_CUBLAS -DK_QUANTS_PER_ITERATION=2 -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/. -O3 -DNDEBUG -std=c++11 "--generate-code=arch=compute_52,code=[compute_52,sm_52]" "--generate-code=arch=compute_61,code=[compute_61,sm_61]" "--generate-code=arch=compute_70,code=[compute_70,sm_70]" -Xcompiler=-fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -use_fast_math -Wno-pedantic -Xcompiler "-Wno-array-bounds -Wno-format-truncation -Wextra-semi" -march=native -Xcompiler -pthread -x cu -M /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/ggml-cuda.cu -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o.d
      nvcc fatal   : Unknown option 'Wall'
      [2/23] cd /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp && /tmp/pip-build-env-2izfxphj/normal/lib/python3.8/site-packages/cmake/data/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=9.4.0 -DCMAKE_C_COMPILER_ID=GNU -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/usr/bin/cc -P /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/common/../scripts/gen-build-info-cpp.cmake
      -- Found Git: /usr/bin/git (found version "2.25.1")
      [3/23] /usr/bin/cc -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_USE_CUBLAS -DK_QUANTS_PER_ITERATION=2 -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -march=native -pthread -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/ggml-alloc.c
      [4/23] /usr/bin/cc -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_USE_CUBLAS -DK_QUANTS_PER_ITERATION=2 -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -march=native -pthread -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/ggml-backend.c
      [5/23] /usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wno-cast-qual -pthread -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -c /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/examples/llava/llava.cpp
      [6/23] /usr/bin/cc -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_USE_CUBLAS -DK_QUANTS_PER_ITERATION=2 -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -march=native -pthread -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/ggml-quants.c
      [7/23] /usr/bin/cc -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_USE_CUBLAS -DK_QUANTS_PER_ITERATION=2 -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -march=native -pthread -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/ggml.c
      [8/23] /usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wno-cast-qual -pthread -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/examples/llava/clip.cpp
      [9/23] /usr/bin/c++ -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_USE_CUBLAS -DK_QUANTS_PER_ITERATION=2 -DLLAMA_BUILD -DLLAMA_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -pthread -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-install-l94wx81e/llama-cpp-python_d62e455c80f8451a960c00cd1faa35ca/vendor/llama.cpp/llama.cpp
      ninja: build stopped: subcommand failed.
      
      
      *** CMake build failed
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

After searching, I downgraded the version of the dependency and upgraded gcc. However, nothing happened ...

I suspect the CUDA & cuDNN versions are the main reason. Could you share the environment settings used for this code?
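
For reference, a quick way to report the CUDA and cuDNN versions visible to PyTorch (assuming PyTorch is installed; this only documents the environment, it does not fix the build) is:

import torch

print("torch:", torch.__version__)
print("CUDA (torch build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("GPU available:", torch.cuda.is_available())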

`post_filter` in `label_free_gnn` example is Time-Consuming Due to Redundant Computations

Describe your question
I've noticed that the entropy calculation in the post_filter function is significantly time-consuming. This appears to be due to redundant computations in the sorting process, which introduce unnecessary time cost and should be optimized.

Describe the solution you'd like to know
Optimize the redundant computations to avoid unnecessary computational overhead (see the sketch below).
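
As an illustration, the entropy can be computed once for all nodes in a vectorized way, and the lowest-entropy nodes selected with topk instead of repeated full sorts. This is a sketch only; probs is a hypothetical (N, C) tensor of predicted class probabilities, not the actual post_filter code.

import torch

probs = torch.rand(10_000, 7)
probs = probs / probs.sum(dim=1, keepdim=True)  # normalize rows to probabilities

# Vectorized entropy per node, computed once.
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

# Keep the k most confident (lowest-entropy) nodes without sorting everything.
k = 100
_, keep_idx = torch.topk(entropy, k, largest=False)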

Transposing Sparse Tensors in PyTorch


(Pdb) l
 69         adj_i, adj_v = adj.indices(), adj.values()
 70         hop = torch.zeros((dataset.v_num['movie'], dataset.v_num['movie'])).type(torch.LongTensor)
 71         for i in rating_range:
 72             idx = torch.where(adj_v == i, True, False)
 73             A = torch.sparse_coo_tensor(adj_i[:, idx], adj_v[idx], adj.shape)
 74 B->         A = torch.spmm(A.T, A).to_dense()
 75             hop |= torch.where(A > threshold, 1, 0)
 76         hop = hop.type(torch.FloatTensor)
 77  
 78         return dataset, \
 79                hop, \
(Pdb) p A.T
*** RuntimeError: Tensors of type SparseTensorImpl do not have strides
(Pdb) p A.transpose(0, 1)
tensor(indices=tensor([[3089, 3385, 2964,  ..., 3646, 3676, 3792],
                       [   1,    1,    2,  ..., 6039, 6039, 6039]]),
       values=tensor([1., 1., 1.,  ..., 1., 1., 1.]),
       size=(3883, 6040), nnz=56174, layout=torch.sparse_coo)

Hi!
When running the example GCN code on the MovieLens dataset, we encountered RuntimeError: Tensors of type SparseTensorImpl do not have strides. This is because the code at rllm/dataloader/movielens_classification.py:74 uses A.T to transpose a sparse matrix. Our PyTorch version is 1.12.1+cu113, where A.T should be replaced by A.transpose(0, 1) (see the pdb output above and the minimal reproduction below). Nevertheless, we cannot rule out that this is a PyTorch version issue, so we recommend that the TAs either provide details about their environment or modify the code.
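
A minimal reproduction of the workaround (behavior depends on the PyTorch version; on 1.12 the .T property raised this error for sparse tensors):

import torch

# Small sparse COO matrix (2 x 3).
idx = torch.tensor([[0, 1, 1], [2, 0, 2]])
val = torch.tensor([3.0, 4.0, 5.0])
A = torch.sparse_coo_tensor(idx, val, (2, 3))

# A.T may raise "SparseTensorImpl do not have strides" on some versions;
# transpose(0, 1) works for sparse tensors across versions.
At = A.transpose(0, 1)                    # sparse, shape (3, 2)
print(torch.sparse.mm(At, A.to_dense()))  # dense (3, 3) result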
Thank you UwU.
