Name: Fan Qian
Type: User
Company: Harbin Institute of Technology (HIT)
Bio: CS PhD student at Harbin Institute of Technology (HIT). My main research interests include speech/audio processing, affective computing, and multimodal machine learning.
Location: Harbin, China
Fan Qian's Projects
Speech emotion recognition using a convolutional recurrent network, based on IEMOCAP
Implementation of Attention-based Deep Multiple Instance Learning in PyTorch
TensorFlow implementation of "Attentive Modality Hopping for Speech Emotion Recognition," ICASSP-20
Baseline scripts for the 8th Audio/Visual Emotion Challenge (AVEC 2018)
Baseline scripts for the Audio/Visual Emotion Challenge 2019
BERT NLP papers, applications, and GitHub resources, including the newest XLNet; covers BERT- and XLNet-related papers and GitHub projects
A Chinese-language guide to prompting ChatGPT, with usage guides for various scenarios: learn how to make it do what you want.
Reading list for research topics in multimodal machine learning
TensorFlow code and pre-trained models for BERT
Bidirectional LSTM network for speech emotion recognition.
This repo contains code to detect sarcasm in discussion-forum text using deep learning
Chinese text classification with TextCNN, TextRNN, FastText, TextRCNN, BiLSTM_Attention, DPCNN, and Transformer; PyTorch-based and ready to use out of the box.
CMU MultimodalSDK is a machine learning platform for developing advanced multimodal models and for easily accessing and processing multimodal datasets.
Context-Dependent Sentiment Analysis in User-Generated Videos
This repo contains implementations of different architectures for emotion recognition in conversations
CTC for emotion recognition
A TensorFlow implementation of "Deep Convolutional Generative Adversarial Networks"
Deep Learning 500 Questions: a question-and-answer treatment of frequently asked topics in probability, linear algebra, machine learning, deep learning, computer vision, and other areas, written to help the author and interested readers. The book has 18 chapters and nearly 300,000 characters. Owing to the author's limited expertise, readers are kindly asked to point out any errors. To be continued... For collaboration, contact [email protected]. All rights reserved; infringement will be pursued. Tan 2018.06
Source code for "Distilling Knowledge From Graph Convolutional Networks", CVPR'20
A machine learning application for emotion recognition from speech
Multimodal sentiment analysis using hierarchical fusion with context modeling
Capturing High-level Semantic Correlations via Graph for Multimodal Sentiment Analysis
LaTeX template for Harbin Institute of Technology graduation theses
An Interaction-aware Attention Network for Speech Emotion Recognition in Spoken Dialogs
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
Code for Memory Fusion Network, AAAI 2018
Official implementation of the paper "MSAF: Multimodal Split Attention Fusion"
Attention-based multimodal fusion for sentiment analysis