Co-Driven Recognition of Semantic Consistency via the Fusion of Transformer and HowNet Sememes Knowledge
This is the code for our paper, currently under review at ESWC 2023. Below are some notes on how to use it.
For non-pretraining models, run Pre-processing.py to generate the data before running the models. For pretraining models, download the BERT weights before running hownet_bert.py.
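The steps above might look like the following in practice (a sketch only; the exact paths and the location of the downloaded BERT weights are assumptions, not part of this repo's documentation):

```shell
# Non-pretraining models: generate the processed data first
python Pre-processing.py

# Pretraining models: place the downloaded BERT weights where the
# script expects them (path is an assumption), then run
python hownet_bert.py
```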
We take the BERT model and the BQ dataset as an example; it is easy to extend to other text semantic matching datasets or to swap in other pretraining models.
Our experimental results on the BQ, AFQMC and PAWSX-zh datasets are as follows: