
A Crowdsourcing Approach to Evaluate the Quality of Query-based Extractive Text Summaries

We analyze the feasibility and appropriateness of micro-task crowdsourcing for the evaluation of different summary quality characteristics and report ongoing work on the crowdsourced evaluation of query-based extractive text summaries.

Enriching BERT with Knowledge Graph Embedding for Document Classification

In this paper, we focus on the classification of books using short descriptive texts (cover blurbs) and additional metadata. Building upon BERT, a deep neural language model, we demonstrate how to combine text representations with metadata and …
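As a concrete illustration of combining a text representation with metadata, the sketch below concatenates a BERT [CLS] embedding with a metadata feature vector and feeds the result to a small classifier. The checkpoint name, the metadata dimensionality, the label count, and fusion by simple concatenation are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch: fuse a BERT text representation with metadata features
# for book classification from blurbs. All sizes and names are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TextMetadataClassifier(nn.Module):
    def __init__(self, num_labels: int, metadata_dim: int,
                 model_name: str = "bert-base-german-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Classifier over the concatenation of text embedding and metadata.
        self.classifier = nn.Sequential(
            nn.Linear(hidden + metadata_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_labels),
        )

    def forward(self, input_ids, attention_mask, metadata):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]           # [CLS] token representation
        fused = torch.cat([cls, metadata], dim=-1)  # text + metadata fusion
        return self.classifier(fused)               # one logit per genre label

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
batch = tokenizer(["Ein Roman über eine lange Reise."],
                  return_tensors="pt", padding=True, truncation=True)
metadata = torch.tensor([[1.0, 0.0, 3.0]])  # dummy metadata features
model = TextMetadataClassifier(num_labels=8, metadata_dim=3)
logits = model(batch["input_ids"], batch["attention_mask"], metadata)
```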

Improving Relation Extraction by Pre-Trained Language Representations

We show that transfer learning through generative language model pre-training improves supervised neural relation extraction, achieving new state-of-the-art performance on TACRED and SemEval 2010 Task 8.
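To make the transfer-learning setup concrete, here is a hedged sketch that fine-tunes a pre-trained language model as a relation classifier over sentences whose entity mentions are wrapped in marker tokens. The checkpoint, marker scheme, and toy label subset are illustrative assumptions; the paper itself builds on generative language-model pre-training, whereas this sketch uses a generic encoder for brevity.

```python
# Hedged sketch: relation classification by fine-tuning a pre-trained model
# on sentences with entity-marker tokens. Names and labels are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["no_relation", "per:employee_of", "org:founded_by"]  # toy subset
model_name = "bert-base-cased"  # any pre-trained checkpoint works for the sketch

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[E1]", "[/E1]", "[E2]", "[/E2]"]})
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=len(labels))
model.resize_token_embeddings(len(tokenizer))  # account for the new markers

sentence = "[E1] Jeff Bezos [/E1] founded [E2] Amazon [/E2] in 1994."
inputs = tokenizer(sentence, return_tensors="pt")
logits = model(**inputs).logits
print(labels[int(torch.argmax(logits, dim=-1))])  # untrained: output is random
```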

Neural Vector Conceptualization for Word Vector Space Interpretation

Distributed word vector spaces are considered hard to interpret, which hinders the understanding of natural language processing (NLP) models. In this work, we introduce a new method to interpret arbitrary samples from a word vector space. To this end, …
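The abstract does not spell out the method, so as a naive point of reference the sketch below "interprets" an arbitrary vector from a toy word vector space by listing its nearest neighbours under cosine similarity. The vocabulary and vectors are dummy data; this is not the paper's conceptualization method.

```python
# Naive baseline: interpret any point of a word vector space by its nearest
# neighbour words. Vectors here are random stand-ins, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["dog", "cat", "car", "truck", "apple", "banana"]
vectors = rng.normal(size=(len(vocab), 50))           # stand-in word vectors
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

def interpret(sample: np.ndarray, k: int = 3):
    """Return the k vocabulary words closest to an arbitrary sample vector."""
    sample = sample / np.linalg.norm(sample)
    sims = vectors @ sample                           # cosine similarities
    return [(vocab[i], float(sims[i])) for i in np.argsort(-sims)[:k]]

# Any point of the space can be queried, not just actual word vectors:
midpoint = (vectors[vocab.index("dog")] + vectors[vocab.index("cat")]) / 2
print(interpret(midpoint))
```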

Train, Sort, Explain: Learning to Diagnose Translation Models

Evaluating translation models is a trade-off between effort and detail. At one end of the spectrum are automatic count-based methods such as BLEU; at the other, linguistic evaluations by humans, which are arguably more informative but …
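To make the count-based end of the spectrum concrete, the snippet below computes corpus-level BLEU with the sacrebleu library; the choice of tooling and the toy sentences are assumptions unrelated to the paper. BLEU counts n-gram overlaps between system output and references, which is cheap but says little about why a model fails.

```python
# Minimal BLEU computation with sacrebleu (illustrative tooling choice).
import sacrebleu

hypotheses = ["the cat sat on the mat", "there is a dog in the garden"]
# One reference stream, aligned with the hypotheses.
references = [["the cat is sitting on the mat", "there is a dog in the garden"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")  # a single number, no linguistic diagnosis
```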

mEx - an Information Extraction Platform for German Medical Text

Learning Explanations From Language Data

PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks. We demonstrate that it also generates meaningful interpretations in the language domain.
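PatternAttribution itself relies on patterns learned per layer; as a hedged stand-in, the sketch below uses the simpler gradient-times-input attribution on a toy text classifier, only to illustrate what "explaining a classification" looks like in practice: a relevance score per input token. All names and data are illustrative, and this is not the paper's method.

```python
# Gradient-x-input attribution on a toy classifier (a simpler stand-in for
# PatternAttribution): relevance per token for the predicted class.
import torch
import torch.nn as nn

vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "boring": 4}
embedding = nn.Embedding(len(vocab), 8)
classifier = nn.Linear(8, 2)  # 2 classes: negative / positive

words = "the movie was great".split()
tokens = torch.tensor([[vocab[w] for w in words]])
embedded = embedding(tokens)
embedded.retain_grad()                         # gradients w.r.t. the embedded input

logits = classifier(embedded.mean(dim=1))      # mean-pooled bag of embeddings
logits[0, 1].backward()                        # backprop from the "positive" logit

relevance = (embedded.grad * embedded).sum(-1)  # gradient x input per token
for word, score in zip(words, relevance[0].tolist()):
    print(f"{word:8s} {score:+.4f}")
```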