
Deep-PICO-Detection

A model for identifying PICO elements in a given biomedical/clinical text.

This is the source code for the paper: Di Jin, Peter Szolovits, Advancing PICO Element Detection in Biomedical Text via Deep Neural Networks, Bioinformatics, btaa256. If you use the code, please cite the paper:

@article{10.1093/bioinformatics/btaa256,
    author = {Jin, Di and Szolovits, Peter},
    title = "{Advancing PICO element detection in biomedical text via deep neural networks}",
    journal = {Bioinformatics},
    year = {2020},
    month = {04},
    issn = {1367-4803},
    doi = {10.1093/bioinformatics/btaa256},
    url = {https://doi.org/10.1093/bioinformatics/btaa256},
    note = {btaa256},
    eprint = {https://academic.oup.com/bioinformatics/advance-article-pdf/doi/10.1093/bioinformatics/btaa256/33363807/btaa256.pdf},
}

Prerequisites:

Run the following command to install the prerequisite packages:

pip install -r requirements.txt

Data:

Please download the data, including the PICO and NICTA-PIBOSO datasets, from Google Drive and unzip it into the main directory of this repository so that the folder layout looks like this:

./BERT
./lstm_model
./data

How to use

For the LSTM-based models

  • The code for the LSTM-based models is in the "lstm_model" folder, so run the following command to enter it:
cd lstm_model
  • First, we need to process the data to build the vocabulary and trim the embedding file. The embeddings used in our experiments are from here. Please download them and convert them to "txt" format. Of course, you can also try other kinds of embeddings, such as fastText. Then run the following command:
python build_data.py --data_keyname DATA_KEYNAME --filename_wordvec PATH_TO_EMBEDDING_FILE

DATA_KEYNAME can be "pico" for the PICO dataset and "nicta" for the NICTA-PIBOSO dataset; PATH_TO_EMBEDDING_FILE specifies where you store the embedding file.
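The "txt" conversion mentioned above can be sketched in pure Python, assuming the downloaded embeddings are in the standard word2vec binary layout (a header line with vocabulary size and dimension, then space-terminated words followed by packed float32 vectors). The function name and file paths here are illustrative, not part of this repository:

```python
import struct

def word2vec_bin_to_txt(bin_path, txt_path):
    """Convert word2vec-style binary embeddings to plain text:
    one header line "vocab_size dim", then "word v1 v2 ..." per line."""
    with open(bin_path, "rb") as f, open(txt_path, "w", encoding="utf-8") as out:
        vocab_size, dim = (int(x) for x in f.readline().split())
        out.write(f"{vocab_size} {dim}\n")
        for _ in range(vocab_size):
            # Read the word byte-by-byte up to the separating space,
            # skipping any newline left over from the previous vector.
            word = bytearray()
            while True:
                ch = f.read(1)
                if ch == b" ":
                    break
                if ch != b"\n":
                    word += ch
            vec = struct.unpack(f"{dim}f", f.read(4 * dim))
            out.write(word.decode("utf-8") + " "
                      + " ".join(f"{v:.6f}" for v in vec) + "\n")
```

Alternatively, if gensim is installed, `KeyedVectors.load_word2vec_format(..., binary=True)` followed by `save_word2vec_format(..., binary=False)` does the same conversion.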

  • Then we can start training the model for the PICO dataset by running the following command:
python run_train_pico.py --data_keyname pico

And the following command is for the NICTA-PIBOSO dataset:

python run_train_nicta.py --data_keyname nicta
  • To run the 10-fold cross-validation, use the following commands:
python run_train_cross_validate_pico.py --data_keyname pico
python run_train_cross_validate_nicta.py --data_keyname nicta

There are several important arguments in the file "src/config.py" that configure the model architecture; they are explained here:

  • --adv_reg_coeff: The coefficient for the adversarial loss regularization. Setting it to zero means we do not conduct the adversarial training.
  • --va_reg_coeff: The coefficient for the virtual adversarial loss regularization. Setting it to zero means we do not conduct the virtual adversarial training.
  • --num_augmentation: The number of samples we use for the virtual adversarial training.
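As background on the adversarial options above: adversarial training typically perturbs the word embeddings along the loss gradient, with the perturbation rescaled to a small fixed L2 norm before the loss is recomputed. A minimal NumPy sketch of that rescaling step (the function name and epsilon value are illustrative, not this repo's API):

```python
import numpy as np

def adversarial_perturbation(grad, epsilon=0.02):
    """Return the perturbation epsilon * g / ||g||_2, i.e. a step of
    fixed L2 length epsilon in the direction of the gradient."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        # Zero gradient: no informative direction, so no perturbation.
        return np.zeros_like(grad)
    return epsilon * grad / norm
```

The adversarial loss is then the task loss evaluated on the perturbed embeddings, weighted by `--adv_reg_coeff`; the virtual adversarial variant does the same but measures divergence from the unperturbed predictions, so it also works on unlabeled augmentation samples.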

For the BERT models

  • The code for the BERT models is in the "BERT" folder; please enter that folder first.

  • The best BERT model we found is BioBERT. The pretrained parameter files available in the original repository are TensorFlow-only; if you want the PyTorch version, you can download it from here. Once you have the pretrained BERT model files, run the following commands for training:

python run_classifier_pico.py PATH_TO_BERT_MODEL
python run_classifier_nicta.py PATH_TO_BERT_MODEL

In these commands, PATH_TO_BERT_MODEL specifies the directory containing your downloaded BERT model files.

  • The following commands are for the 10-fold cross-validation:
python run_classifier_pico_cross_validate.py PATH_TO_BERT_MODEL
python run_classifier_nicta_cross_validate.py PATH_TO_BERT_MODEL

