A sarcasm detection model using Bidirectional Encoder Representations from Transformers (BERT) and Graph Convolutional Networks (GCN) has shown state-of-the-art results compared with conventional models and vanilla transformer-based approaches.
Traceback (most recent call last):
  File "C:\Users\RIS-FC3PF63\Downloads\Sarcasm-Detection-with-BERT-and-GCN-main\Sarcasm-Detection-with-BERT-and-GCN-main\infer.py", line 110, in <module>
    t_probs = inf.evaluate(raw_text)
  File "C:\Users\RIS-FC3PF63\Downloads\Sarcasm-Detection-with-BERT-and-GCN-main\Sarcasm-Detection-with-BERT-and-GCN-main\infer.py", line 69, in evaluate
    t_outputs = self.model(t_inputs)
  File "D:\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\RIS-FC3PF63\Downloads\Sarcasm-Detection-with-BERT-and-GCN-main\Sarcasm-Detection-with-BERT-and-GCN-main\models\bertgcn.py", line 27, in forward
    text_bert_indices, bert_segments_ids, dependency_graph, affective_graph = inputs  # text_indices,
ValueError: too many values to unpack (expected 4)
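The exception means `inputs` carries more elements than the four names `forward` unpacks; the commented-out `text_indices` in `bertgcn.py` suggests the caller passes a fifth tensor. A minimal sketch of the mismatch and two possible fixes (the tuple contents here are placeholders, not the actual tensors):

```python
# Hypothetical 5-element input, standing in for the tensors infer.py builds.
inputs = ("text_bert_indices", "bert_segments_ids",
          "dependency_graph", "affective_graph", "text_indices")

# This reproduces the error in the traceback:
try:
    a, b, c, d = inputs
except ValueError as e:
    print(e)  # too many values to unpack (expected 4)

# Fix 1: drop the extra element if forward doesn't need it.
a, b, c, d = inputs[:4]

# Fix 2: unpack all five, matching the commented-out name in bertgcn.py.
text_indices, a, b, c, d = None, *inputs[:4]  # placeholder; adjust order to match infer.py
```

Either way, the count and order of names on the left of the `=` in `forward` must match exactly what `infer.py` packs into `t_inputs`, so check line 69 of `infer.py` to see how many tensors are being passed.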