This project addresses the challenge of inadequately labeled legal data in Natural Language Processing (NLP) by leveraging neural-network-driven data augmentation techniques. We first train the large language model BART on a large corpus of unlabeled legal text and fine-tune it using a masking approach. We then compare our model against the proposed baselines using selective masking, evaluating it empirically with F1 score, perplexity, and accuracy. We expect that more accurate and robust models will better equip NER systems to handle the variations they may encounter at test time, such as misspellings, abbreviations, and word-order changes. Advances in NER in turn benefit Information Extraction, Question Answering, Machine Translation, and several other areas of NLP.
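The masking approach mentioned above can be sketched as follows. This is only a minimal illustration of random token masking, not the project's actual implementation; the function name `mask_tokens`, the `<mask>` token string, and the 15% masking rate are assumptions for the sketch:

```python
import random

def mask_tokens(tokens, mask_token="<mask>", mask_prob=0.15, seed=0):
    """Randomly replace tokens with a mask token (simplified BART-style masking).

    Returns the masked sequence and a parallel list of targets: the original
    token where a position was masked, None where it was left unchanged.
    A selective-masking variant would choose positions by importance
    (e.g. legal entities) instead of uniformly at random.
    """
    rng = random.Random(seed)
    masked, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)   # model must reconstruct this token
            targets.append(tok)
        else:
            masked.append(tok)          # left intact, no prediction target
            targets.append(None)
    return masked, targets
```

During fine-tuning, the masked sequence would serve as the encoder input and the original sequence as the reconstruction target.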
- See [LegalBART/README.md](LegalBART/README.md) for detailed instructions.
- See [Gold_Only/README.md](Gold_Only/README.md) for detailed instructions.
- See [DAGA/README.md](DAGA/README.md) for detailed instructions.
- See [ReplacementBasedAugmentation/README.md](ReplacementBasedAugmentation/README.md) for detailed instructions.
- See [MULDA/README.md](MULDA/README.md) for detailed instructions.