
HittER

Hierarchical Transformers for Knowledge Graph Embeddings

HittER generates embeddings for knowledge graphs and performs link prediction using a hierarchical Transformer model. The paper was published at EMNLP 2021 (an arXiv version is also available).

Installation

The repo requires Python >= 3.7; a fresh Anaconda environment is recommended.

conda create -n hitter python=3.7 -y # optional
conda activate hitter # optional
git clone [email protected]:microsoft/HittER.git
cd HittER
pip install -e .

Data

First, download the standard benchmark datasets using the commands below. Thanks to LibKGE for providing the preprocessing scripts and hosting the data.

cd data
sh download_standard.sh

Training

Configurations for the experiments are in the /config folder.

python -m kge start config/trmeh-fb15k237-best.yaml

By default, training uses DataParallel across all visible GPUs; this can be overridden by appending --job.device cpu to the command above.

Evaluation

You can evaluate the trained models on the dev/test sets using the following commands.

python -m kge eval <saved_dir>
python -m kge test <saved_dir>

Pretrained models are also released for reproducibility.
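Evaluation reports standard link-prediction metrics such as MRR and Hits@k. As a rough illustration of how these metrics are derived from the per-triple rank of the correct entity (this is a minimal sketch, not the repo's actual evaluation code):

```python
# Illustrative only: link-prediction metrics from a list of ranks,
# where ranks[i] is the 1-based rank of the correct entity for triple i.

def mrr(ranks):
    """Mean reciprocal rank."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at(ranks, k):
    """Fraction of triples whose correct entity ranks in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 2, 10, 4, 1]
print(f"MRR={mrr(ranks):.3f}, Hits@1={hits_at(ranks, 1):.2f}, Hits@10={hits_at(ranks, 10):.2f}")
# -> MRR=0.570, Hits@1=0.40, Hits@10=1.00
```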

HittER-BERT QA experiments

QA experiment-related data can be downloaded from the release.

git submodule update --init
cd transformers
pip install -e .

Run experiments

> python hitter-bert.py --help

usage: hitter-bert.py [-h] [--dataset {fbqa,webqsp}] [--filtered] [--hitter]
                      [--seed SEED]
                      [exp_name]

positional arguments:
  exp_name              Name of the experiment

optional arguments:
  -h, --help            show this help message and exit
  --dataset {fbqa,webqsp}
                        fbqa or webqsp
  --filtered            Filtered or not
  --hitter              Use pretrained HittER or not
  --seed SEED           Seed number

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

Citation

@inproceedings{chen-etal-2021-hitter,
    title = "HittER: Hierarchical Transformers for Knowledge Graph Embeddings",
    author = "Chen, Sanxing and Liu, Xiaodong and Gao, Jianfeng and Jiao, Jian and Zhang, Ruofei and Ji, Yangfeng",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2021",
    publisher = "Association for Computational Linguistics"
}


Issues

About fair comparison

Hi, I have some questions about the result comparison between HittER and baselines.

  1. The reported MRRs of CoKE on FB15k-237 and WN18RR are 0.475 and 0.361 in the original paper, while in HittER's paper CoKE's results are 0.484 and 0.364, respectively. Did you rerun the CoKE code or adjust the predefined embedding dimension?
  2. The embedding dimension in CoKE's original implementation is 256, and other baselines like TuckER and SimplE set the dimension to 200. However, HittER's embedding dimension is 320. Is the comparison fair? I cannot tell whether HittER's performance gain comes from the more advanced model or from the larger embedding size. Did you evaluate HittER with the same embedding size or parameter count as the other baselines?

Hope you can solve this issue, thank you!

Pretrained model

Hi, thank you for your awesome work !
When will the pretrained model be available? The link currently doesn't work.

Question about the parameter search

Hey, really appreciate your nice work!
I noticed there isn't a Python file for the parameter search in your work. Do you have any plans to upload it later on?
Looking forward to your reply! Thanks a lot!

Why does hitter-bert not function properly?

out = self.bert(
    kg_attentions=kg_attentions,
    output_attentions=True,
    output_hidden_states=True,
    return_dict=True,
    labels=masked_labels,
    **sent_input,
)
Is there a missing kg_attentions parameter?

some questions

Hello, I have a few questions and look forward to your answers:
Q1: The neighborhood you refer to consists of edges that connect to the source vertex. Why is it set up this way?
Q2: Why does FB15k-237 not use the MLM loss?

DataParallel

Hello,

I'm trying to implement data parallelism in libkge by integrating the _CustomDataParallel from your code, but it doesn't work as expected.

Could you please advise if there are any specific steps or considerations for enabling data parallelism with _CustomDataParallel in libkge?

Thank you.
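For background, torch's DataParallel (and presumably _CustomDataParallel) follows a scatter/replicate/gather pattern: the input batch is split across devices, each model replica processes its shard, and the outputs are concatenated. A framework-free sketch of that pattern (all names here are illustrative, not libkge's API):

```python
# Illustrative scatter/replicate/gather pattern behind DataParallel-style wrappers.
# "Devices" are simulated; model_fn stands in for a model replica's forward pass.

def scatter(batch, n_devices):
    """Split a batch into roughly equal shards, one per device."""
    shard = (len(batch) + n_devices - 1) // n_devices
    return [batch[i:i + shard] for i in range(0, len(batch), shard)]

def data_parallel(model_fn, batch, n_devices=2):
    """Run model_fn on each shard (in real use: one replica per GPU), then gather."""
    outputs = [model_fn(shard) for shard in scatter(batch, n_devices)]
    return [y for out in outputs for y in out]  # gather: concatenate shard outputs

square = lambda xs: [x * x for x in xs]
print(data_parallel(square, [1, 2, 3, 4, 5]))  # -> [1, 4, 9, 16, 25]
```

When wrapping a model this way, the usual pitfalls are modules that hold per-batch state or return non-tensor outputs, which the gather step cannot concatenate.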

Unable to run in Colab, Lightning, or GCP VMs - various dependency issues (tried many workarounds)

I'm running into versioning/dependency issues when trying to get this running.
When I follow the instructions on the readme, I get:

ERROR: Could not find a version that satisfies the requirement torch==1.4.0 (from hitter) (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1)
ERROR: No matching distribution found for torch==1.4.0

When I pick a version manually to get past the error, I get blocked by a conflict between PyTorch Lightning 1.3.1 and PyYAML.
Both Lightning Studio and Colab ship PyYAML 6.0.1, but the pinned version of PyTorch Lightning is not compatible with it.
I have tried manually removing the dependencies and installing from scratch in both environments, but then I run into even deeper dependency conflicts with cython that look like this:

⚡ main ~/HittER pip install path pytorch-pretrained-bert==0.6.0 pytorch_lightning==1.3.1 numba==0.48.0 networkx==2.4
Collecting path
  Using cached path-16.10.0-py3-none-any.whl.metadata (6.4 kB)
Collecting pytorch-pretrained-bert==0.6.0
  Using cached pytorch_pretrained_bert-0.6.0-py3-none-any.whl.metadata (74 kB)
Collecting pytorch_lightning==1.3.1
  Using cached pytorch_lightning-1.3.1-py3-none-any.whl.metadata (24 kB)
Collecting numba==0.48.0
  Using cached numba-0.48.0.tar.gz (2.0 MB)
  Preparing metadata (setup.py) ... done
Collecting networkx==2.4
  Using cached networkx-2.4-py3-none-any.whl.metadata (4.9 kB)
Requirement already satisfied: torch>=0.4.1 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from pytorch-pretrained-bert==0.6.0) (2.2.1)
Requirement already satisfied: numpy in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from pytorch-pretrained-bert==0.6.0) (1.26.2)
Requirement already satisfied: boto3 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from pytorch-pretrained-bert==0.6.0) (1.34.61)
Requirement already satisfied: requests in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from pytorch-pretrained-bert==0.6.0) (2.31.0)
Requirement already satisfied: tqdm in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from pytorch-pretrained-bert==0.6.0) (4.66.2)
Collecting future>=0.17.1 (from pytorch_lightning==1.3.1)
  Using cached future-1.0.0-py3-none-any.whl.metadata (4.0 kB)
Collecting PyYAML<=5.4.1,>=5.1 (from pytorch_lightning==1.3.1)
  Using cached PyYAML-5.4.1.tar.gz (175 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [48 lines of output]
      running egg_info
      writing lib3/PyYAML.egg-info/PKG-INFO
      writing dependency_links to lib3/PyYAML.egg-info/dependency_links.txt
      writing top-level names to lib3/PyYAML.egg-info/top_level.txt
      Traceback (most recent call last):
        File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 325, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 295, in _get_build_requires
          self.run_setup()
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 311, in run_setup
          exec(code, locals())
        File "<string>", line 271, in <module>
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/__init__.py", line 104, in setup
          return distutils.core.setup(**attrs)
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup
          return run_commands(dist)
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
          dist.run_commands()
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
          self.run_command(cmd)
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/dist.py", line 967, in run_command
          super().run_command(command)
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
          cmd_obj.run()
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/command/egg_info.py", line 321, in run
          self.find_sources()
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/command/egg_info.py", line 329, in find_sources
          mm.run()
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/command/egg_info.py", line 550, in run
          self.add_defaults()
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/command/egg_info.py", line 588, in add_defaults
          sdist.add_defaults(self)
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/command/sdist.py", line 102, in add_defaults
          super().add_defaults()
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/_distutils/command/sdist.py", line 251, in add_defaults
          self._add_defaults_ext()
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/_distutils/command/sdist.py", line 336, in _add_defaults_ext
          self.filelist.extend(build_ext.get_source_files())
        File "<string>", line 201, in get_source_files
        File "/tmp/pip-build-env-qhpk1351/overlay/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 107, in __getattr__
          raise AttributeError(attr)
      AttributeError: cython_sources
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

What's the best path forward here?

Question about position embedder in model

Hey, really appreciate your nice work!
I noticed in trme.py that, for the global CLS position embedding, the code uses self.type_embeds = nn.Embedding(100, self.dim) (line 33). However, the later use (line 155), pos = self.type_embeds(torch.arange(0, 3, device=device)), only uses three positions. Why does type_embeds use nn.Embedding(100, self.dim) instead of nn.Embedding(3, self.dim)? Does this make a difference?
Looking forward to your reply! Thanks a lot!
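On the over-sized table: if no other code ever indexes rows 3-99, an nn.Embedding(100, dim) behaves exactly like nn.Embedding(3, dim) for these lookups; the unused rows receive no gradient and are just extra parameters. A toy pure-Python stand-in for an embedding lookup that illustrates this (not the repo's code):

```python
import random

# Toy embedding table: 100 rows, but only indices 0-2 are ever looked up,
# mimicking nn.Embedding(100, dim) queried with torch.arange(0, 3).
random.seed(0)
dim = 4
table = [[random.random() for _ in range(dim)] for _ in range(100)]

def lookup(indices):
    """Row lookup, the core of an embedding layer's forward pass."""
    return [table[i] for i in indices]

pos = lookup(range(3))        # only rows 0, 1, 2 are touched
print(len(pos), len(pos[0]))  # -> 3 4
# Rows 3-99 never appear in any output, so in a trained model they would
# never receive gradients; they only cost memory.
```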

Unable to reproduce result

Thanks for releasing the code for your work. I am trying to reproduce the numbers for the no context version of the model. Using the config file trmeh-fb15k237-noctx.yaml, I am getting the following metrics

Dev set
MR = 167.69, MRR = 0.327, Hits@1 = 0.2308, Hits@10 = 0.520

Test set
MR = 170.79, MRR = 0.369, Hits@1 = 0.2749, Hits@10 = 0.5590

However, in Table 3, the reported numbers are MRR = 0.373 and Hits@10 = 0.561 (dev set).

Kindly explain how to reproduce the results. Thanks!
