Top-cited open-source ACL 2019 papers:
Transformer-XL: Attentive Language Models beyond a Fixed-Length Context (445 citations)
Multi-Task Deep Neural Networks for Natural Language Understanding (237 citations)
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference (105 citations)
ERNIE: Enhanced Language Representation with Informative Entities (92 citations)
Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned (89 citations)
What Does BERT Learn about the Structure of Language? (76 citations)
Is Attention Interpretable? (61 citations)
How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong Baselines, Comparative Analyses, and Some Misconceptions (61 citations)
Probing Neural Network Comprehension of Natural Language Arguments (58 citations)
COMET: Commonsense Transformers for Automatic Knowledge Graph Construction (43 citations)
MultiQA: An Empirical Investigation of Generalization and Transfer in Reading Comprehension (42 citations)
Attention Guided Graph Convolutional Networks for Relation Extraction (36 citations)
Cognitive Graph for Multi-Hop Reading Comprehension at Scale (36 citations)
Compositional Questions Do Not Necessitate Multi-hop Reasoning (34 citations)
BAM! Born-Again Multi-Task Networks for Natural Language Understanding (34 citations)
Learning Deep Transformer Models for Machine Translation (32 citations)
Head-Driven Phrase Structure Grammar Parsing on Penn Treebank (32 citations)
The rest can be found in the full ACL 2019 paper list, with citation counts and links to code.