
sifrank's Introduction

SIFRank

The code for our paper SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model

Version Notes

  • 2020/02/21: Initial version. Provides the most basic functions.
  • 2020/02/28: Second version. Adds the new algorithms DS (document segmentation) and EA (embeddings alignment) to speed up SIFRank and SIFRank+.
  • 2020/03/02: Third version. A small change to SIFRank+ in ./model/method.py: a simple normalization of position_score (a sketch of such a normalization follows below).
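
For illustration, here is a minimal sketch of what such a normalization could look like, assuming the position score is the reciprocal of a candidate's first-occurrence index; the authoritative version lives in ./model/method.py and may differ in details.

# Hedged sketch only: reciprocal-of-first-position weights, normalized to sum to 1.
def normalize_position_scores(first_positions):
    """first_positions: dict mapping candidate phrase -> 0-based index of first occurrence."""
    raw = {cand: 1.0 / (pos + 1.0) for cand, pos in first_positions.items()}
    total = sum(raw.values())
    return {cand: weight / total for cand, weight in raw.items()}

# Candidates appearing earlier in the document get larger normalized weights.
print(normalize_position_scores({"sliding mode control": 0, "switching line": 12}))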

Environment

Python 3.6
nltk 3.4.3
StanfordCoreNLP 3.9.1.1
torch 1.1.0
allennlp 0.8.4

Download

  • ELMo: elmo_2x4096_512_2048cnn_2xhighway_options.json and elmo_2x4096_512_2048cnn_2xhighway_weights.hdf5 from here, and save them to the auxiliary_data/ directory
  • StanfordCoreNLP: stanford-corenlp-full-2018-02-27 from here, and save it anywhere

Usage

import nltk
from embeddings import sent_emb_sif, word_emb_elmo
from model.method import SIFRank, SIFRank_plus
from stanfordcorenlp import StanfordCoreNLP
import time

# download from https://allennlp.org/elmo
options_file = "../auxiliary_data/elmo_2x4096_512_2048cnn_2xhighway_options.json"
weight_file = "../auxiliary_data/elmo_2x4096_512_2048cnn_2xhighway_weights.hdf5"

porter = nltk.PorterStemmer()
ELMO = word_emb_elmo.WordEmbeddings(options_file, weight_file, cuda_device=0)
SIF = sent_emb_sif.SentEmbeddings(ELMO, lamda=1.0)
en_model = StanfordCoreNLP(r'E:\Python_Files\stanford-corenlp-full-2018-02-27', quiet=True)  # download from https://stanfordnlp.github.io/CoreNLP/
elmo_layers_weight = [0.0, 1.0, 0.0]

text = "Discrete output feedback sliding mode control of second order systems - a moving switching line approach The sliding mode control systems (SMCS) for which the switching variable is designed independent of the initial conditions are known to be sensitive to parameter variations and extraneous disturbances during the reaching phase. For second order systems this drawback is eliminated by using the moving switching line technique where the switching line is initially designed to pass the initial conditions and is subsequently moved towards a predetermined switching line. In this paper, we make use of the above idea of moving switching line together with the reaching law approach to design a discrete output feedback sliding mode control. The main contributions of this work are such that we do not require to use system states as it makes use of only the output samples for designing the controller. and by using the moving switching line a low sensitivity system is obtained through shortening the reaching phase. Simulation results show that the fast output sampling feedback guarantees sliding motion similar to that obtained using state feedback"
keyphrases = SIFRank(text, SIF, en_model, N=15, elmo_layers_weight=elmo_layers_weight)
keyphrases_ = SIFRank_plus(text, SIF, en_model, N=15, elmo_layers_weight=elmo_layers_weight)
print(keyphrases)
print(keyphrases_)
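
A small variation on the snippet above for machines without a GPU, plus shutting down the CoreNLP wrapper when done. This is a hedged sketch: cuda_device=-1 is allennlp's convention for running ELMo on the CPU, and close() is provided by the stanfordcorenlp package; it assumes the same files and en_model as above.

# CPU-only sketch; reuses options_file, weight_file, en_model, and text from above.
ELMO_cpu = word_emb_elmo.WordEmbeddings(options_file, weight_file, cuda_device=-1)  # -1 selects CPU
SIF_cpu = sent_emb_sif.SentEmbeddings(ELMO_cpu, lamda=1.0)
keyphrases_cpu = SIFRank(text, SIF_cpu, en_model, N=15, elmo_layers_weight=elmo_layers_weight)
print(keyphrases_cpu)

en_model.close()  # stop the CoreNLP Java subprocess when finished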

Evaluate the model

Use eval/sifrank_eval.py to evaluate SIFRank on the Inspec, SemEval2017, and DUC2001 datasets. We also have evaluation code for the other baseline models; we will organize and upload it later, so stay tuned. The table below reports F1 scores when the number of extracted keyphrases N is set to 5 (a sketch of how this metric is computed follows the table).

Models Inspec SemEval2017 DUC2001
TFIDF 11.28 12.70 9.21
YAKE 15.73 11.84 10.61
TextRank 24.39 16.43 13.94
SingleRank 24.69 18.23 21.56
TopicRank 22.76 17.10 20.37
PositionRank 25.19 18.23 24.95
Multipartite 23.05 17.39 21.86
RVA 21.91 19.59 20.32
EmbedRank d2v 27.20 20.21 21.74
SIFRank 29.11 22.59 24.27
SIFRank+ 28.49 21.53 30.88
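
For reference, these scores are based on exact matches between stemmed extracted phrases and stemmed gold keyphrases. Below is a minimal sketch of a micro-averaged F1@N computed by pooling counts over all documents, which is what eval/sifrank_eval.py appears to do; the actual script may differ in details.

# Hedged sketch of micro-averaged F1@N over stemmed exact matches.
def f1_at_n(extracted_per_doc, gold_per_doc, n=5):
    """extracted_per_doc / gold_per_doc: lists of stemmed-phrase lists, one per document."""
    num_correct = num_extracted = num_gold = 0
    for extracted, gold in zip(extracted_per_doc, gold_per_doc):
        top_n = extracted[:n]
        num_correct += len(set(top_n) & set(gold))
        num_extracted += len(top_n)
        num_gold += len(gold)
    p = num_correct / num_extracted if num_extracted else 0.0
    r = num_correct / num_gold if num_gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0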

Cite

If you use this code, please cite the following paper:

@article{DBLP:journals/access/SunQZWZ20,
  author    = {Yi Sun and
               Hangping Qiu and
               Yu Zheng and
               Zhongwei Wang and
               Chaoran Zhang},
  title     = {SIFRank: {A} New Baseline for Unsupervised Keyphrase Extraction Based
               on Pre-Trained Language Model},
  journal   = {{IEEE} Access},
  volume    = {8},
  pages     = {10896--10906},
  year      = {2020},
  url       = {https://doi.org/10.1109/ACCESS.2020.2965087},
  doi       = {10.1109/ACCESS.2020.2965087},
  timestamp = {Fri, 07 Feb 2020 12:04:22 +0100},
  biburl    = {https://dblp.org/rec/journals/access/SunQZWZ20.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}


sifrank's Issues

Experimental results do not match the paper

Hello,
the results I get when reproducing the code differ somewhat from those in the paper. On SemEval2017, with ELMo layer L0, lamda=0.6, and N=5, I get F1@5 = 20.21, which falls short of the 22.59 reported in the paper. What could be the reason?

About the result of EmbedRank s2v

Hi.
Thanks for your great work.
I checked the paper that proposed EmbedRank; it uses two different ways to generate the embeddings: Sent2vec and Doc2vec. I find that EmbedRank with Sent2vec gives better results than with Doc2vec on some datasets.
Did you compare against EmbedRank Sent2vec on the SemEval2017 dataset?
Thanks
Kawamura

Code gets stuck in the StanfordCoreNLP call

Hello, thank you for sharing your code. I am trying to run it and get stuck in the Usage section at the following line of code:
en_model = StanfordCoreNLP('stanford-corenlp-full-2018-02-27', quiet=True)

The code simply 'hangs' on this line and does not move forward; the call does not finish even after waiting for a few hours. Can you please provide any help with this? Thank you
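
As a hedged first check (not from the original thread): the stanfordcorenlp wrapper launches a Java server from the given directory and waits for it to come up, so it may help to verify that Java is available and that the path really points at the unzipped CoreNLP folder before constructing StanfordCoreNLP.

# Hedged troubleshooting sketch for a hanging StanfordCoreNLP(...) call.
import os
import shutil

corenlp_dir = r'E:\Python_Files\stanford-corenlp-full-2018-02-27'  # adjust to your own path
print('java on PATH:', shutil.which('java'))              # should not be None
print('CoreNLP dir exists:', os.path.isdir(corenlp_dir))  # should be True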

evaluation question

Thank you for the great work!

However, your paper says it uses the macro F1 score, but as far as I can tell the actual implementation computes the micro-averaged F1, not the macro.
Shouldn't macro F1 average the per-document F1 scores?

p, r, f = get_PRF(num_c_5, num_e_5, num_s)
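
For clarity, the pooled-count computation sketched in the evaluation section above corresponds to micro-averaging; a macro-averaged alternative would compute F1 per document and then average, roughly as in this hedged sketch (hypothetical helper, not the repository's code):

# Hedged sketch of macro-averaged F1 over per-document counts.
def macro_f1(per_doc_counts):
    """per_doc_counts: list of (num_correct, num_extracted, num_gold) tuples, one per document."""
    f1s = []
    for c, e, g in per_doc_counts:
        p = c / e if e else 0.0
        r = c / g if g else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0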

Model enhancement

Hello, thank you very much for your work. Recently I ran some comparison experiments in which I replaced the ELMo pre-trained model in SIFRank with other pre-trained models (e.g. ELECTRA, BERT), but found a large performance gap and am not sure why. Also, if other pre-trained models really do perform worse here than the traditional ELMo, in what directions could the SIFRank method be further improved?

Evaluation results not reproducible

Hi Sun Yi,

thank you for sharing your valuable work. I have run the evaluation myself and the results I obtain slightly differ from the ones in your paper.
Below are the evaluation results for SIFRank on the SemEval2017 dataset.

N=5
P=0.4864097363083164
R=0.14057920037519053
F1=0.21811897398581043

N=10
P=0.43448275862068964
R=0.25114315863524445
F1=0.31830002228991755

N=15
P=0.39143979412163077
R=0.3388439441904092
F1=0.3632478632478633
totally cost 501.45456099510193

Could you explain what might be the reason?

Best regards,
Charles

Installation error

Hello!

I tried to install SIFRank, but I get an error:

"No module named 'allennlp.commands.elmo'"

Do you know what I can do about it?

I read that the current version of allennlp does not contain the elmo.py file, and when I try to install a previous version, I get an error too.

I would be grateful for any help, because I am very interested in SIFRank.
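
For context, the import in question exists in the allennlp release pinned in the Environment section above, so installing allennlp==0.8.4 should make it resolvable; a minimal check:

# Works with allennlp 0.8.4 (as listed under Environment); newer releases removed this module.
from allennlp.commands.elmo import ElmoEmbedder

options_file = "../auxiliary_data/elmo_2x4096_512_2048cnn_2xhighway_options.json"  # from the Download section
weight_file = "../auxiliary_data/elmo_2x4096_512_2048cnn_2xhighway_weights.hdf5"
elmo = ElmoEmbedder(options_file, weight_file, cuda_device=-1)  # -1 = CPU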

Question about the reported results of Embedrank

Hi.

Thank you for the great work. I have one question that I hope you could help with.

In the original EmbedRank paper, the EmbedRank d2v variant achieved an F1-score of 31.51 on the Inspec dataset (with N = 5), but your paper reports that it only achieved 27.20. Why is there such a difference? Did you run the original EmbedRank code and obtain only 27.20?

Thanks.
