This repository is built for group work and may include all the information and materials used in our course, including but not limited to:
- paper list (NLP-focused)
- paper summary
- presentation slides
- final work
- Understanding Convolutional Neural Networks for Text Classification
- A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval
- Convolutional Neural Networks for Sentence Classification
- 2017 Very Deep Convolutional Networks for Text Classification
- 2016 Combination of Convolutional and Recurrent Neural Network for Sentiment Analysis of Short Texts
- 2017 CNN for situations understanding based on sentiment analysis of twitter data
- Blog: Best Practices for Text Classification with Deep Learning by Jason Brownlee, October 23, 2017
- A CNN-BiLSTM Model for Document-Level Sentiment Analysis
- Deep CNN-LSTM with combined kernels from multiple branches for IMDb review sentiment analysis
- Sentiment Analysis of Movie Reviews Based on CNN-BLSTM
- Multi-Channel Lexicon Integrated CNN-BiLSTM Models for Sentiment Analysis
- Analyzing and Interpreting Convolutional Neural Networks in NLP
- Interpreting Neural Networks to Improve Politeness Comprehension
- ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models
- Visualizing and Understanding Neural Models in NLP
- Attention Visualization of Gated Convolutional Neural Networks with Self Attention in Sentiment Analysis
- Learning Sentimental Weights of Mixed-gram Terms for Classification and Visualization
- Explaining Predictions of Non-Linear Classifiers in NLP
- Interpreting Self-Attention Weights
- Understanding Neural Networks through Representation Erasure
- Explaining Recurrent Neural Network Predictions in Sentiment Analysis
- "What is relevant in a text document?": An interpretable machine learning approach
- Importance of Self-Attention for Sentiment Analysis
- Adversarial Attack on Sentiment Classification
- Sentiment analysis is not solved! Assessing and probing sentiment classification
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
- Sentiment Classification Using Document Embeddings Trained with Cosine Similarity
- Fine-grained Sentiment Classification using BERT
- Unsupervised Data Augmentation
- A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification
- BB_twtr at SemEval-2017 Task 4: Twitter Sentiment Analysis with CNNs and LSTMs
- Mazajak: An Online Arabic Sentiment Analyser
- Detecting Political Bias in News Articles Using Headline Attention
- What Does BERT Look At? An Analysis of BERT’s Attention
- Analyzing the Structure of Attention in a Transformer Language Model
- Exploiting Attention to Reveal Shortcomings in Memory Models
- Extracting Syntactic Trees from Transformer Encoder Self-Attentions
- Interpreting Neural Networks with Nearest Neighbors
- Interpretable Neural Architectures for Attributing an Ad’s Performance to its Writing Style
- Learning Explanations from Language Data
- Open Sesame: Getting Inside BERT’s Linguistic Knowledge
- Interpreting deep models for text analysis via optimization and regularization methods
- Visualisation and ‘diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure
- Representation of Linguistic Form and Function in Recurrent Neural Networks
- Analyzing Linguistic Knowledge in Sequential Model of Sentence
- Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs
- Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules?
- Interpreting Recurrent and Attention-Based Neural Models: A Case Study on Natural Language Inference
- Are BLEU and Meaning Representation in Opposition?
- Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context
- Modeling Paths for Explainable Knowledge Base Completion
- GNNExplainer: Generating Explanations for Graph Neural Networks
- Interpretable Graph Convolutional Neural Networks for Inference on Noisy Knowledge Graphs
- Analyzing Learned Representations of a Deep ASR Performance Prediction Model
- Evaluating Textual Representations through Image Generation
- LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation
- Interpretable Textual Neuron Representations for NLP
- Interpretable Word Embedding Contextualization
- Portable, layer-wise task performance monitoring for NLP models
- Debugging Sequence-to-Sequence Models with SEQ2SEQ-VIS
- Multi-Granular Text Encoding for Self-Explaining Categorization
- Evaluating Recurrent Neural Network Explanations
- Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
- Understanding Neural Networks Through Deep Visualization
- Visualizing and Measuring the Geometry of BERT
- Interpreting CNNs via Decision Trees
- Visual Interpretability for Deep Learning: a Survey
- Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks
- Inverting Visual Representations with Convolutional Networks
- Towards Better Analysis of Deep Convolutional Neural Networks
- Interpretable Convolutional Neural Networks
- Visualizing and Understanding Convolutional Networks