
PromptKG's Introduction

PromptKG Family: a Gallery of Prompt Learning & KG-related research works, toolkits, and paper-list.

License: MIT

Directory Description
research • A collection of prompt learning-related research model implementations
lambdaKG • A library for PLM-based KG embeddings and applications
deltaKG • A library for dynamically editing PLM-based KG embeddings
tutorial-notebooks • Tutorial notebooks for beginners

Table of Contents

Tutorials

  • Zero- and Few-Shot NLP with Pretrained Language Models. AACL 2022 Tutorial [ppt]
  • Data-Efficient Knowledge Graph Construction. CCKS2022 Tutorial [ppt]
  • Efficient and Robust Knowledge Graph Construction. AACL-IJCNLP Tutorial [ppt]
  • Knowledge Informed Prompt Learning. MLNLP 2022 Tutorial (Chinese) [ppt]

Surveys

  • Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models (on arxiv 2021) [paper]
  • Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing (ACM Computing Surveys 2021) [paper]
  • reStructured Pre-training (on arxiv 2022) [paper]
  • A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models (on arxiv 2022) [paper]
  • A Survey of Knowledge-Enhanced Pre-trained Language Models (on arxiv 2022) [paper]
  • A Review on Language Models as Knowledge Bases (on arxiv 2022) [paper]
  • Generative Knowledge Graph Construction: A Review (EMNLP 2022) [paper]
  • Reasoning with Language Model Prompting: A Survey (on arxiv 2022) [paper]
  • Reasoning over Different Types of Knowledge Graphs: Static, Temporal and Multi-Modal (on arxiv 2022) [paper]
  • The Life Cycle of Knowledge in Big Language Models: A Survey (on arxiv 2022) [paper]
  • Unifying Large Language Models and Knowledge Graphs: A Roadmap (on arxiv 2023) [paper]

Papers

Knowledge as Prompt

Language Understanding

  • Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, in NeurIPS 2020. [pdf]
  • REALM: Retrieval-Augmented Language Model Pre-Training, in ICML 2020. [pdf]
  • Making Pre-trained Language Models Better Few-shot Learners, in ACL 2021. [pdf]
  • PTR: Prompt Tuning with Rules for Text Classification, in AI Open 2022. [pdf]
  • Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction, in EMNLP 2021. [pdf]
  • RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction, in ACL 2022 (Findings). [pdf]
  • Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification, in ACL 2022. [pdf]
  • PPT: Pre-trained Prompt Tuning for Few-shot Learning, in ACL 2022. [pdf]
  • Contrastive Demonstration Tuning for Pre-trained Language Models, in EMNLP 2022 (Findings). [pdf]
  • AdaPrompt: Adaptive Model Training for Prompt-based NLP, in arxiv 2022. [pdf]
  • KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction, in WWW 2022. [pdf]
  • Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction, in SIGIR 2023. [pdf]
  • Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning, in NeurIPS 2022. [pdf]
  • Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning, in SIGIR 2022. [pdf]
  • LightNER: A Lightweight Tuning Paradigm for Low-resource NER via Pluggable Prompting, in COLING 2022. [pdf]
  • Unified Structure Generation for Universal Information Extraction, in ACL 2022. [pdf]
  • LasUIE: Unifying Information Extraction with Latent Adaptive Structure-aware Generative Language Model, in NeurIPS 2022. [pdf]
  • Atlas: Few-shot Learning with Retrieval Augmented Language Models, in Arxiv 2022. [pdf]
  • Don't Prompt, Search! Mining-based Zero-Shot Learning with Language Models, in ACL 2022. [pdf]
  • Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding, in EMNLP 2022. [pdf]
  • Unified Knowledge Prompt Pre-training for Customer Service Dialogues, in CIKM 2022. [pdf]
  • SELF-INSTRUCT: Aligning Language Model with Self Generated Instructions, in arxiv 2022. [pdf]
  • One Embedder, Any Task: Instruction-Finetuned Text Embeddings, in arxiv 2022. [pdf]
  • Learning To Retrieve Prompts for In-Context Learning, in NAACL 2022. [pdf]
  • Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data, in ACL 2022. [pdf]
  • One Model for All Domains: Collaborative Domain-Prefix Tuning for Cross-Domain NER, in Arxiv 2023. [pdf]
  • REPLUG: Retrieval-Augmented Black-Box Language Models, in Arxiv 2023. [pdf]
  • Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering, in Arxiv 2023. [pdf]

Multimodal

  • Good Visual Guidance Makes A Better Extractor: Hierarchical Visual Prefix for Multimodal Entity and Relation Extraction, in NAACL 2022 (Findings). [pdf]
  • Visual Prompt Tuning, in ECCV 2022. [pdf]
  • CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models, in EMNLP 2022. [pdf]
  • Learning to Prompt for Vision-Language Models, in IJCV 2022. [pdf]
  • Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models, in NeurIPS 2022. [pdf]

Advanced Tasks

  • Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5), in ACM RecSys 2022. [pdf]
  • Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning, in KDD 2022. [pdf]
  • PromptEM: Prompt-tuning for Low-resource Generalized Entity Matching, in VLDB 2023. [pdf]
  • VIMA: General Robot Manipulation with Multimodal Prompts, in Arxiv 2022. [pdf]
  • Unbiasing Retrosynthesis Language Models with Disconnection Prompts, in Arxiv 2022. [pdf]
  • ProgPrompt: Generating Situated Robot Task Plans using Large Language Models, in Arxiv 2022. [pdf]
  • Collaborating with language models for embodied reasoning, in NeurIPS 2022 Workshop LaReL. [pdf]

Prompt (PLMs) for Knowledge

Knowledge Probing

  • How Much Knowledge Can You Pack Into the Parameters of a Language Model? in EMNLP 2020. [pdf]
  • Language Models as Knowledge Bases? in EMNLP 2019. [pdf]
  • Materialized Knowledge Bases from Commonsense Transformers, in CSRR 2022. [pdf]
  • Time-Aware Language Models as Temporal Knowledge Bases, in TACL 2022. [pdf]
  • Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA? in ACL 2021. [pdf]
  • Language models as knowledge bases: On entity representations, storage capacity, and paraphrased queries, in EACL 2021. [pdf]
  • Scientific language models for biomedical knowledge base completion: an empirical study, in AKBC 2021. [pdf]
  • Multilingual LAMA: Investigating knowledge in multilingual pretrained language models, in EACL 2021. [pdf]
  • How Can We Know What Language Models Know? in TACL 2020. [pdf]
  • How Context Affects Language Models' Factual Predictions, in AKBC 2020. [pdf]
  • COPEN: Probing Conceptual Knowledge in Pre-trained Language Models, in EMNLP 2022. [pdf]
  • Probing Simile Knowledge from Pre-trained Language Models, in ACL 2022. [pdf]

Knowledge Graph Embedding (We provide a library and benchmark lambdaKG)

  • KG-BERT: BERT for knowledge graph completion, in Arxiv 2020. [pdf]
  • Multi-Task Learning for Knowledge Graph Completion with Pre-trained Language Models, in COLING 2020. [pdf]
  • Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion, in WWW 2021. [pdf]
  • KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation, TACL 2021 [pdf]
  • StATIK: Structure and Text for Inductive Knowledge Graph, in NAACL 2022. [pdf]
  • Joint Language Semantic and Structure Embedding for Knowledge Graph Completion, in COLING 2022. [pdf]
  • Knowledge Is Flat: A Seq2Seq Generative Framework for Various Knowledge Graph Completion, in COLING 2022. [pdf]
  • Do Pre-trained Models Benefit Knowledge Graph Completion? A Reliable Evaluation and a Reasonable Approach, in ACL 2022. [pdf]
  • Language Models as Knowledge Embeddings, in IJCAI 2022. [pdf]
  • From Discrimination to Generation: Knowledge Graph Completion with Generative Transformer, in WWW 2022. [pdf]
  • Reasoning Through Memorization: Nearest Neighbor Knowledge Graph Embeddings, in Arxiv 2022. [pdf]
  • SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models, in ACL 2022. [pdf]
  • Sequence to Sequence Knowledge Graph Completion and Question Answering, in ACL 2022. [pdf]
  • LP-BERT: Multi-task Pre-training Knowledge Graph BERT for Link Prediction, in Arxiv 2022. [pdf]
  • Mask and Reason: Pre-Training Knowledge Graph Transformers for Complex Logical Queries, in KDD 2022. [pdf]

Analysis

  • Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases, in ACL 2021. [pdf]
  • Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View, in ACL 2022. [pdf]
  • How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis, in ACL 2022. [pdf]
  • Emergent Abilities of Large Language Models, in Arxiv 2022. [pdf]
  • Knowledge Neurons in Pretrained Transformers, in ACL 2022. [pdf]
  • Finding Skill Neurons in Pre-trained Transformer-based Language Models, in EMNLP 2022. [pdf]
  • Do Prompts Solve NLP Tasks Using Natural Languages? in Arxiv 2022. [pdf]
  • Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? in EMNLP 2022. [pdf]
  • Do Prompt-Based Models Really Understand the Meaning of their Prompts? in NAACL 2022. [pdf]
  • When Not to Trust Language Models: Investigating Effectiveness and Limitations of Parametric and Non-Parametric Memories, in arxiv 2022. [pdf]
  • Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers, in arxiv 2022. [pdf]
  • Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity, in ACL 2022. [pdf]
  • Editing Large Language Models: Problems, Methods, and Opportunities, in arxiv 2023. [pdf]

Contact Information

For help or issues using the toolkits, please submit a GitHub issue.


PromptKG's Issues

Question about reproducing RetrievalRE

Hello, thank you very much for providing the code and the detailed documentation, but I ran into some problems while reproducing RetrievalRE.

When reproducing Standard RE (Table 3 in the paper) on SemEval, since no corresponding script is provided, I simply reused the k-shot script as follows:

CUDA_VISIBLE_DEVICES=0 python main.py --max_epochs=30  --num_workers=8 \
    --model_name_or_path roberta-large \
    --accumulate_grad_batches 1 \
    --batch_size 16 \
    --data_dir dataset/semeval/ \
    --check_val_every_n_epoch 1 \
    --data_class WIKI80 \
    --max_seq_length 256 \
    --model_class RobertaForPrompt \
    --t_lambda 0.001 \
    --litmodel_class BertLitModel \
    --task_name wiki80 \
    --lr 3e-5 \
    --use_template_words 0 \
    --init_type_words 0 \
    --output_dir output/semeval/all

I got 0.8895692105838618, and 0.8916030534351145 after knn_combine, which does not match the 90.4 reported in Table 3. The settings are probably wrong; could you provide the correct settings? Thanks a lot.

How are prompts incorporated in the code?

Thanks in advance for the authors' reply~
The project provides four models for the two tasks of completion and question answering, and both the inputs and outputs are triples. However, it is not clear how prompts are incorporated; I hope you can clarify~

key error

I get a runtime error:
File "data/processor.py", line 417, in _create_examples
relation_names.append(rel2text[t])
KeyError: '/soccer/football_team/current_roster./soccer/football_roster_position/position'
when I run the command "sh ./scripts/wn18rr.sh" in the GenKGC project.
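
A defensive-lookup sketch that would avoid this KeyError (purely illustrative, assuming rel2text is a plain dict mapping relation IDs to text; the fallback naming is hypothetical, not the repo's actual behaviour):

# Illustration only -- not the actual data/processor.py code.
rel2text = {"/people/person/nationality": "nationality"}   # assumed contents
t = "/soccer/football_team/current_roster./soccer/football_roster_position/position"

relation_names = []
rel_text = rel2text.get(t)
if rel_text is None:
    # fall back to a readable name derived from the last segment of the relation ID
    rel_text = t.split("/")[-1].replace("_", " ")
relation_names.append(rel_text)
print(relation_names)   # ['position']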

Loss does not decrease

Following the tutorial, after the program starts running the loss stays at 2.98. What could be the problem?
The task is semeval; nothing except batch_size was changed.

Optimal hyperparameters

Hi,

Could I have the optimal hyperparameters used for the reported results in the original GenKGC paper? We ran scripts/wn18rr.sh, but this set of hyperparameters gives the following result:
[screenshot of results]
There is still a gap between this result and the reported one. Could you please check whether the given hyperparameters are the ones used for the reported results?

BTW, your work is quite interesting and we would like to cite your results in our current work. But since the MRR metric was not reported in the original GenKGC paper, we have to run this code to obtain it. If possible, please help us check this; it would be really appreciated.

Running bash ./scripts/metaqa/run.sh gives hits@1, hits@3, and hits@10 all equal to 0

Hello, my current environment is torch 1.10.1 with CUDA 11.1. The only file I changed is run.sh, modified to:
CUDA_VISIBLE_DEVICES=0 python main.py --max_epochs=10 --num_workers=0 \
    --model_name_or_path t5-base \
    --num_sanity_val_steps 0 \
    --model_class T5KBQAModel \
    --lit_model_class KGT5LitModel \
    --label_smoothing 0.1 \
    --data_class MetaQADataModule \
    --precision 16 \
    --batch_size 16 \
    --check_val_every_n_epoch 2 \
    --dataset metaQA \
    --k_hop 1 \
    --eval_batch_size 64 \
    --max_seq_length 64 \
    --max_entity_length 128 \
    --lr 5e-5

That is, I only changed CUDA_VISIBLE_DEVICES, num_workers, batch_size, and eval_batch_size, but the output results are all 0. Which step went wrong?

[bug] UnboundLocalError: local variable 'hr' referenced before assignment

File "/home/xxx/projects/PromptKG/toolkit/lit_models/transformer.py", line 236, in test_step
head_ids.append(hr[0])
UnboundLocalError: local variable 'hr' referenced before assignment

def test_step(self, batch, batch_idx):
        hr_vector = self.model(**batch)['hr_vector']
        scores = torch.mm(hr_vector, self.entity_embedding.t())
        bsz = len(batch['batch_data'])
        label = []
        head_ids = []
        for i in range(bsz):
            d = batch['batch_data'][i]
            head_ids.append(hr[0])
            inverse = d.inverse
            hr = tuple(d.hr)
            t = d.t
            label.append(t)
            idx = []
            if inverse:
                for hh in self.trainer.datamodule.filter_tr_to_h.get(hr, []):
                    if hh == t:
                        continue
                    idx.append(hh)
            else:
                for hh in self.trainer.datamodule.filter_tr_to_h.get(hr, []):
                    if hh == t:
                        continue
                    idx.append(hh)

            scores[i][idx] = -100
            # scores[i].index_fill_(0, idx, -1)
        rerank_by_graph(scores, head_ids)
        _, outputs = torch.sort(scores, dim=1, descending=True)
        _, outputs = torch.sort(outputs, dim=1)
        ranks = outputs[torch.arange(bsz), label].detach().cpu() + 1

        return dict(ranks=ranks)
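
A minimal sketch of how the start of the loop could be reordered so that hr is assigned before it is read (a hypothetical fix, not a confirmed patch from the maintainers):

        for i in range(bsz):
            d = batch['batch_data'][i]
            inverse = d.inverse
            hr = tuple(d.hr)          # assign hr first ...
            t = d.t
            head_ids.append(hr[0])    # ... so it can safely be read here
            label.append(t)
            # (rest of the loop body unchanged)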

create MidRes.json isn't working

When I download FB15K-237 from Google Drive and run the script that creates MidRes.json, I get the following error. Please check.
I looked into the problem, but there is no difference between the entity2text.txt file uploaded to Google Drive and the entity2textlong.txt file.

run python ./LLM/create_midres.py

'''
Traceback (most recent call last):
File "./LLM/create_midres.py", line 39, in
build_midres()
File "./LLM/create_midres.py", line 28, in build_midres
for i in entity2textlong[lines[0]].replace('\n',' ').split('. '):
KeyError: '9447'
'''
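
A guarded-lookup sketch that sidesteps the KeyError (illustrative only, assuming entity2textlong and entity2text are dicts keyed by the entity ID in lines[0]; the fallback behaviour is hypothetical, not the repo's code):

entity2textlong = {}                                         # assumed: id -> long description
entity2text = {'9447': 'Short description of entity 9447.'}  # assumed: id -> short description
lines = ['9447']

entity_id = lines[0]
long_text = entity2textlong.get(entity_id) or entity2text.get(entity_id, '')
for sentence in long_text.replace('\n', ' ').split('. '):
    print(sentence)   # stand-in for the original per-sentence processing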

'dataset/FB15k-237/entity2textlong.txt' not found

It seems that there is no file named entity2textlong.txt in the FB15k-237 dataset, but this file is loaded in /LLM/create_midres.py.
Could you tell me where this file comes from? Or did I miss something?

After downloading the dataset and setting up the virtual environment, running the script raises the following error:

Traceback (most recent call last):
File "main.py", line 161, in
main()
File "main.py", line 143, in main
trainer.fit(lit_model, datamodule=data)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit
self._run(model)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 756, in _run
self.dispatch()
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 797, in dispatch
self.accelerator.start_training(self)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
self.training_type_plugin.start_training(trainer)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
self._results = trainer.run_stage()
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 807, in run_stage
return self.run_train()
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 842, in run_train
self.run_sanity_check(self.lightning_module)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1107, in run_sanity_check
self.run_evaluation()
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 962, in run_evaluation
output = self.evaluation_loop.evaluation_step(batch, batch_idx, dataloader_idx)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 174, in evaluation_step
output = self.trainer.accelerator.validation_step(args)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 221, in validation_step
batch = self.to_device(args[0])
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu.py", line 69, in to_device
batch = super().to_device(batch)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 394, in to_device
return self.batch_to_device(batch, self.root_device)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 177, in batch_to_device
return model._apply_batch_transfer_handler(batch, device)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 216, in _apply_batch_transfer_handler
batch = self.transfer_batch_to_device(batch, device)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/core/hooks.py", line 704, in transfer_batch_to_device
return move_data_to_device(batch, device)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 161, in move_data_to_device
return apply_to_collection(batch, dtype=dtype, function=batch_to)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 84, in apply_to_collection
return function(data, *args, **kwargs)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 158, in batch_to
return data.to(device, **kwargs)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/transformers/file_utils.py", line 1639, in wrapper
return func(*args, **kwargs)
File "/root/.local/conda/envs/genkgc/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 738, in to
v.to(device=device)
AttributeError: 'list' object has no attribute 'to'

But tokenization_utils_base.py belongs to the installed transformers package, so why does the error occur there? Have the authors encountered this problem? Thanks for your answer!
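
The traceback shows that the batch reaching BatchEncoding.to() contains plain Python lists, and that transformers version tries to call .to() on every value. One common workaround (a hedged sketch, not a confirmed fix for this repo) is to override the batch-transfer hook so that only tensors are moved to the device; the class name below is illustrative, and the hook signature depends on the pytorch-lightning version (in the version shown in the traceback it is (batch, device)):

import torch
import pytorch_lightning as pl
from collections.abc import Mapping

class GenKGCLitModel(pl.LightningModule):   # hypothetical module name
    def transfer_batch_to_device(self, batch, device):
        # Move tensor values to the target device; leave lists/strings untouched.
        if isinstance(batch, Mapping):
            return {k: (v.to(device) if torch.is_tensor(v) else v) for k, v in batch.items()}
        return super().transfer_batch_to_device(batch, device)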

About the implementation of the generation methods in lambdaKG

Hi!
I have a question about KGT5 and GenKGC for KGC tasks: are there any implementation differences between the two?
They seem very similar to me; maybe the only difference is using t5-small vs. bart-base as the pre-trained model.
The data processing is the same, lit_models/transformers.py is nearly the same, and the models are only slightly different; I could hardly find any difference.
Also, the data pre-processing in each paper seems different. For example, KGT5 uses "predict_tail: ", "predict_head", and "predict_answer" as prompts, while GenKGC uses type information and inserts the descriptions at the beginning of the input.
Other than that, I think there are some differences between the lambdaKG implementation and both methods described in the paper.
But the lambdaKG paper only says that generation-based models are given the tag [inverse].
I would like to ask whether the lambdaKG implementation does not distinguish between minor differences, or whether it is an implementation mistake.
I would like to know if you have also compared the differences in accuracy in such cases.

best,

Questions about MetaQA

Hello, this is excellent work for both the KBQA and KGC fields. I have a few questions about the MetaQA task and would be very grateful if you could answer them~
1. In this KGQA task, where is the prompt reflected? (Is it the 'triples' field in the .json dataset?)
2. For 1-hop, how are the 'triples' in each example of train.json and test.json constructed? (The original MetaQA data does not provide the subgraphs corresponding to each question.)
3. When reproducing KGT5, did you use the same KGC pretraining strategy as the original paper? (There does not seem to be code for that stage in the framework.)
Thanks again; I hope you can answer these. Best wishes~!

Missing file

The file get_label_word.py under research/RetrievalRE/ is missing; please upload it.

dataset access

Hello!
Where can I access the dataset you constructed for deltaKG, in order to reproduce the experimental results?

A question about treating entities/relations as special tokens

Hello, thanks for sharing your work.

Your paper states that entities/relations are treated as special tokens, but after reading the released code I found that the tokenizer only adds 100 special tokens of the form [ENTITY_{j}{i}]. Is this part not finished yet?

Could you share more details about this part?
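
For reference, a generic sketch (not necessarily this repo's exact approach) of registering entity/relation placeholders as additional special tokens with a HuggingFace tokenizer and resizing the model embeddings; the token pattern only mirrors the [ENTITY_...] style mentioned above and the counts are illustrative:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Illustrative placeholders; a full KG vocabulary would need one token per entity/relation.
new_tokens = [f"[ENTITY_{i}]" for i in range(100)] + [f"[RELATION_{i}]" for i in range(16)]
tokenizer.add_special_tokens({"additional_special_tokens": new_tokens})

# Grow the embedding matrix so the new token ids get trainable vectors.
model.resize_token_embeddings(len(tokenizer))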

bash ./scripts/kgrec/pretrain_item.sh: the ml20m dataset is missing

Hello, when running the first script of the REC task, the following error occurs:
with open(f"./dataset/{self.args.dataset}/item2text.txt") as file:
FileNotFoundError: [Errno 2] No such file or directory: './dataset/ml20m/item2text.txt'

Is the ml20m dataset provided anywhere?
Thanks!

Some questions about the Open-book Knowledge-store

Dear researchers,

I have been reading the research paper "Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning" and I have a few questions regarding the Open-book Knowledge-store mentioned in the paper. I would like to know how this knowledge store is built and if this method can be applied to the field of computer vision.

Could you kindly provide me with more information or resources regarding these topics? Any help or insights would be greatly appreciated.

Thank you for your time and consideration. I look forward to hearing back from you soon.

Sincerely,

sone

Hyper-parameter for GenKGC on FB15k-237

Hi,

Thanks for building the cool KG-relevant package. I am following your work "From Discrimination to Generation: Knowledge Graph Completion with Generative Transformer". Could you please provide a set of hyper-parameters to reproduce the results on FB15k-237? That would be so helpful to me :)

Thanks
