EMNLP 2020 Reading list
Findings
must
- An Empirical Methodology for Detecting and Prioritizing Needs during Crisis Events
- #Turki$hTweets: A Benchmark Dataset for Turkish Text Correction
- A Shared-Private Representation Model with Coarse-to-Fine Extraction for Target Sentiment Analysis
- Exploiting Unsupervised Data for Emotion Recognition in Conversations
- A Fully Hyperbolic Neural Model for Hierarchical Multi-Class Classification
- PBoS: Probabilistic Bag-of-Subwords for Generalizing Word Embedding
- TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification
- Octa: Omissions and Conflicts in Target-Aspect Sentiment Analysis
- Cost-effective Selection of Pretraining Data: A Case Study of Pretraining BERT on Social Media
- DomBERT: Domain-oriented Language Model for Aspect-based Sentiment Analysis
- Rethinking Topic Modelling: From Document-Space to Term-Space
- On Romanization for Model Transfer Between Scripts in Neural Machine Translation
- Denoising Multi-Source Weak Supervision for Neural Text Classification
- STANDER: An Expert-Annotated Dataset for News Stance Detection and Evidence Retrieval
should
- TinyBERT: Distilling BERT for Natural Language Understanding
- BERT for Monolingual and Cross-Lingual Reverse Dictionary
- What’s so special about BERT’s layers? A closer look at the NLP pipeline in monolingual and multilingual models
- Improving Aspect-based Sentiment Analysis with Gated Graph Convolutional Networks and Syntax-based Regulation
- PolicyQA: A Reading Comprehension Dataset for Privacy Policies
- E-BERT: Efficient-Yet-Effective Entity Embeddings for BERT
- A Multi-task Learning Framework for Opinion Triplet Extraction
- Event Extraction as Multi-turn Question Answering
- Cross-lingual Alignment Methods for Multilingual BERT: A Comparative Study
- Multi^2OIE: Multilingual Open Information Extraction Based on Multi-Head Attention with BERT
- Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection
- Parsing with Multilingual BERT, a Small Corpus, and a Small Treebank
- Optimizing Word Segmentation for Downstream Task
- Investigating Transferability in Pretrained Language Models
maybe

Main Conf
must
should
maybe