
Residual-Based Graph Convolutional Network for Emotion Recognition in Conversation for Smart Internet of Things

Young-Ju Choi, Young-Woon Lee, and Byung-Gyu Kim

Intelligent Vision Processing Lab. (IVPL), Sookmyung Women's University, Seoul, Republic of Korea


This repository is the official PyTorch implementation of the paper published in Big Data (Mary Ann Liebert, Inc.).

paper


Summary of paper

Abstract

Recently, emotion recognition in conversation (ERC) has become more crucial in the development of diverse Internet of Things devices, especially closely connected with users. The majority of deep learning-based methods for ERC combine the multilayer, bidirectional, recurrent feature extractor and the attention module to extract sequential features. In addition to this, the latest model utilizes speaker information and the relationship between utterances through the graph network. However, before the input is fed into the bidirectional recurrent module, detailed intrautterance features should be obtained without variation of characteristics. In this article, we propose a residual-based graph convolution network (RGCN) and a new loss function. Our RGCN contains the residual network (ResNet)-based, intrautterance feature extractor and the GCN-based, interutterance feature extractor to fully exploit the intra–inter informative features. In the intrautterance feature extractor based on ResNet, the elaborate context feature for each independent utterance can be produced. Then, the condensed feature can be obtained through an additional GCN-based, interutterance feature extractor with the neighboring associated features for a conversation. The proposed loss function reflects the edge weight to improve effectiveness. Experimental results demonstrate that the proposed method achieves superior performance compared with state-of-the-art methods.

Network Architecture
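The full architecture diagram is given in the paper. As a rough, illustrative sketch only (not the authors' exact implementation), the abstract's description of a ResNet-based intra-utterance feature extractor followed by a GCN-based inter-utterance feature extractor could look like the following; the module names, layer counts, and dimensions below are assumptions:

    import torch
    import torch.nn as nn
    from torch_geometric.nn import GCNConv

    class ResidualUtteranceBlock(nn.Module):
        # ResNet-style block that refines each utterance feature independently
        # (hypothetical layer sizes; the paper's blocks may differ).
        def __init__(self, dim):
            super().__init__()
            self.fc1 = nn.Linear(dim, dim)
            self.fc2 = nn.Linear(dim, dim)
            self.relu = nn.ReLU()

        def forward(self, x):  # x: (num_utterances, dim)
            return self.relu(x + self.fc2(self.relu(self.fc1(x))))

    class RGCNSketch(nn.Module):
        # Intra-utterance residual extractor followed by an inter-utterance GCN
        # over the conversation graph (nodes = utterances, edges = relations).
        def __init__(self, in_dim=100, hidden_dim=100, num_classes=6):
            super().__init__()
            self.intra = ResidualUtteranceBlock(in_dim)
            self.inter = GCNConv(in_dim, hidden_dim)
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def forward(self, x, edge_index, edge_weight=None):
            h = self.intra(x)                                       # intra-utterance features
            h = torch.relu(self.inter(h, edge_index, edge_weight))  # inter-utterance features
            return self.classifier(h)                               # per-utterance emotion logits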

Experimental Results


Getting Started

Dependencies and Installation

  • Anaconda3

  • Python == 3.6

    conda create --name rgcn python=3.6
  • PyTorch (NVIDIA GPU + CUDA)

    Trained with PyTorch 1.0.0 and CUDA 10.0

    conda install pytorch==1.0.0 torchvision==0.2.1 cuda100 -c pytorch
    conda install cudatoolkit=10.0 -c pytorch
  • torch-geometric, torch-sparse, torch-scatter, torch-cluster

    pip install torch-geometric==1.1.0
    pip install torch-sparse==0.2.4
    pip install torch-scatter==1.1.2
    pip install torch-cluster==1.2.4
  • pickle, pandas, scikit-learn, spacy, torchtext

    pip install pickle-mixin
    pip install pandas
    pip install scikit-learn
    pip install spacy
    pip install torchtext
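If the installation succeeded, an optional sanity check such as the following should run without errors (the expected version strings simply mirror the pins above):

    import torch
    import torch_geometric

    print(torch.__version__)            # expected: 1.0.0
    print(torch.cuda.is_available())    # expected: True on a CUDA 10.0 machine
    print(torch_geometric.__version__)  # expected: 1.1.0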

Dataset Preparation

We used the Interactive Emotional Dyadic Motion Capture (IEMOCAP), MELD, and EmoContext (EC) datasets.

[IEMOCAP] Busso, Carlos, et al. "IEMOCAP: Interactive emotional dyadic motion capture database." Language Resources and Evaluation 42.4 (2008): 335-359.
[MELD] Poria, Soujanya, et al. "MELD: A multimodal multi-party dataset for emotion recognition in conversations." arXiv preprint arXiv:1810.02508 (2018).
[EC] Chatterjee, Ankush, et al. "SemEval-2019 Task 3: EmoContext contextual emotion detection in text." Proceedings of the 13th International Workshop on Semantic Evaluation. 2019.
  • Download

    You can download the three datasets from the link below.

    google-drive

    Put the datasets in ./datasets/
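As a minimal sketch for checking that a downloaded pickle loads correctly (the file name below is a placeholder; use the actual file names from the Google Drive archive):

    import pickle

    # Placeholder file name -- replace with an actual file from ./datasets/.
    with open('./datasets/IEMOCAP_features.pkl', 'rb') as f:
        data = pickle.load(f)

    # The exact structure depends on how the features were serialized;
    # printing the top-level type is a safe first step before training.
    print(type(data))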

Model Zoo

Pre-trained models are available at the link below.

google-drive

We also used the 300-dimensional pretrained 840B GloVe embeddings (glove.840B.300d) for word embedding; you can download glove.840B.300d here.
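For reference, a minimal way to load glove.840B.300d with torchtext (the cache path is an assumption; point it at wherever the vectors were unzipped, otherwise torchtext will try to download them):

    from torchtext.vocab import GloVe

    # Hypothetical cache directory containing glove.840B.300d.txt
    glove = GloVe(name='840B', dim=300, cache='./glove/')
    print(glove.vectors.shape)                   # (vocab_size, 300)
    vec = glove.vectors[glove.stoi['happy']]     # 300-dim embedding for "happy"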


Training

Run the following from ./codes/:

  • IEMOCAP

    python train_RGCN_IEMOCAP.py
  • MELD

    python train_RGCN_MELD.py
  • EC

    python train_RGCN_EC.py

Prediction

Run the following from ./codes/:

  • IEMOCAP

    python predict_RGCN_IEMOCAP.py
  • MELD

    python predict_RGCN_MELD.py
  • EC

    python predict_RGCN_EC.py

Citation

@article{choi2021residual,
    title={Residual-based graph convolutional network for emotion recognition in conversation for smart Internet of Things},
    author={Choi, Young-Ju and Lee, Young-Woon and Kim, Byung-Gyu},
    journal={Big Data},
    volume={9},
    number={4},
    pages={279--288},
    year={2021},
    publisher={Mary Ann Liebert, Inc., publishers}
}

Acknowledgement

The code is heavily based on DialogueGCN. Thanks for their awesome work.

DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation. D. Ghosal, N. Majumder, S. Poria, N. Chhaya, & A. Gelbukh. EMNLP-IJCNLP (2019), Hong Kong, China.
