
Cross-lingual Neural Vector Conceptualization

Recently, Neural Vector Conceptualization (NVC) was proposed as a means to interpret samples from a word vector space. For NVC, a neural model activates higher-order concepts it recognizes in a word vector instance. To this end, the model first needs …

Layerwise Relevance Visualization in Convolutional Text Graph Classifiers

Representations in the hidden layers of Deep Neural Networks (DNN) are often hard to interpret since it is difficult to project them into an interpretable domain. Graph Convolutional Networks (GCN) allow this projection, but existing explainability …
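
Since each GCN layer keeps one hidden vector per token node, relevance can be read off per token at every layer. Below is a minimal PyTorch sketch of that idea, using plain gradient-times-input scores rather than the paper's propagation rule; the graph, dimensions and model are made up for illustration.

# Minimal sketch (not the paper's exact relevance rule): a tiny two-layer GCN
# over a token graph, with gradient-x-input scores per node at every layer.
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, n_classes, bias=False)

    def forward(self, a_hat, x):
        h1 = torch.relu(a_hat @ self.w1(x))   # per-node hidden layer 1
        h2 = a_hat @ self.w2(h1)              # per-node logits
        return h1, h2

# toy token graph: 4 tokens, normalized adjacency a_hat (assumed precomputed)
a_hat = torch.eye(4) * 0.5 + 0.125
x = torch.randn(4, 8, requires_grad=True)     # one embedding per token
model = TinyGCN(8, 16, 2)

h1, h2 = model(a_hat, x)
h1.retain_grad()                              # keep gradients of the hidden layer
graph_logits = h2.mean(dim=0)                 # mean-pool nodes -> graph prediction
graph_logits[graph_logits.argmax()].backward()

# Because every layer keeps one row per token, relevance can be visualized per
# token at each layer (here: gradient x input, summed over feature dimensions).
rel_input = (x.grad * x).sum(dim=1)           # per-token relevance at the input
rel_h1 = (h1.grad * h1).sum(dim=1)            # per-token relevance at layer 1
print(rel_input.tolist(), rel_h1.tolist())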

Fine-Tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction

We show that generative language model pre-training combined with selective attention improves recall for long-tail relations in distantly supervised neural relation extraction.
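
For background on how selective attention aggregates a bag of sentences under distant supervision, here is a hedged PyTorch sketch; the encoder outputs, dimensions and relation query below are placeholders, not the paper's architecture.

# Illustrative sketch of selective attention over a bag of sentences that
# mention the same entity pair (distant supervision); dimensions are made up.
import torch
import torch.nn as nn

class SelectiveAttention(nn.Module):
    def __init__(self, sent_dim, n_relations):
        super().__init__()
        self.rel_embed = nn.Embedding(n_relations, sent_dim)  # query per relation
        self.classifier = nn.Linear(sent_dim, n_relations)

    def forward(self, bag, rel_id):
        # bag: (n_sentences, sent_dim) encoder outputs, e.g. from a language model
        query = self.rel_embed(rel_id)                 # (sent_dim,)
        scores = bag @ query                           # one score per sentence
        alpha = torch.softmax(scores, dim=0)           # attention over the bag
        bag_repr = (alpha.unsqueeze(1) * bag).sum(0)   # weighted bag representation
        return self.classifier(bag_repr), alpha

bag = torch.randn(5, 64)                               # 5 sentences, 64-dim each
model = SelectiveAttention(sent_dim=64, n_relations=10)
logits, alpha = model(bag, torch.tensor(3))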

A Crowdsourcing Approach to Evaluate the Quality of Query-based Extractive Text Summaries

We analyze the feasibility and appropriateness of micro-task crowdsourcing for the evaluation of different summary quality characteristics and report ongoing work on the crowdsourced evaluation of query-based extractive text summaries.

Enriching BERT with Knowledge Graph Embedding for Document Classification

In this paper, we focus on the classification of books using short descriptive texts (cover blurbs) and additional metadata. Building upon BERT, a deep neural language model, we demonstrate how to combine text representations with metadata and …
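
A hedged sketch of one straightforward way to combine a BERT text representation with metadata features, assuming the HuggingFace transformers library; the checkpoint name, metadata dimension and label count are placeholders, not the paper's configuration.

# Sketch: concatenate the [CLS] text vector with a metadata feature vector
# and classify the combined representation. Names and sizes are placeholders.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class BlurbClassifier(nn.Module):
    def __init__(self, bert_name, meta_dim, n_labels):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        self.head = nn.Linear(hidden + meta_dim, n_labels)  # text + metadata

    def forward(self, input_ids, attention_mask, meta_feats):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]           # [CLS] token as text vector
        combined = torch.cat([cls, meta_feats], dim=1)
        return self.head(combined)

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = BlurbClassifier("bert-base-german-cased", meta_dim=16, n_labels=8)
enc = tokenizer(["Ein kurzer Klappentext ..."], return_tensors="pt")
meta = torch.randn(1, 16)                           # e.g. author / KG embedding
logits = model(enc["input_ids"], enc["attention_mask"], meta)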

Improving Relation Extraction by Pre-Trained Language Representations

We show that transfer learning through generative language model pre-training improves supervised neural relation extraction, achieving new state-of-the-art performance on TACRED and SemEval 2010 Task 8.

Neural Vector Conceptualization for Word Vector Space Interpretation

Distributed word vector spaces are considered hard to interpret, which hinders the understanding of natural language processing (NLP) models. In this work, we introduce a new method to interpret arbitrary samples from a word vector space. To this end, …
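
To make the core idea concrete, here is a minimal sketch assuming a small feed-forward model that maps a word vector to activations over a fixed concept inventory; the concepts, dimensions and model are placeholders rather than the paper's setup.

# Minimal sketch of the NVC idea: a feed-forward model maps a word vector to
# activation scores over a fixed concept inventory. Concepts and dimensions
# below are illustrative placeholders.
import torch
import torch.nn as nn

concepts = ["animal", "vehicle", "emotion", "profession"]   # illustrative only

model = nn.Sequential(
    nn.Linear(300, 128),        # 300-dim word vectors (e.g. word2vec)
    nn.ReLU(),
    nn.Linear(128, len(concepts)),
)

word_vector = torch.randn(1, 300)            # stand-in for an actual embedding
activations = torch.sigmoid(model(word_vector))

# Interpretation: read off which concepts the model activates for the vector.
for concept, score in zip(concepts, activations.squeeze(0).tolist()):
    print(f"{concept}: {score:.2f}")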

Train, Sort, Explain: Learning to Diagnose Translation Models

Evaluating translation models is a trade-off between effort and detail. At one end of the spectrum are automatic count-based methods such as BLEU; at the other, linguistic evaluations by humans, which are arguably more informative but …

mEx - an Information Extraction Platform for German Medical Text

Learning Explanations From Language Data

PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks. We demonstrate that it also generates meaningful interpretations in the language domain.
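
For intuition on the underlying mechanism, here is a toy NumPy illustration of linear-case pattern estimation in the spirit of PatternAttribution on synthetic data; the data, filter and estimator details are illustrative assumptions, not the paper's language-domain experiments.

# Toy illustration: for a linear unit y = w^T x, estimate the signal pattern
# a = cov(x, y) / var(y) and attribute with w * a (elementwise) rather than w
# alone. Synthetic signal/distractor data, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

w = np.array([1.0, -1.0])                 # filter of the linear unit y = w^T x
signal_dir = np.array([1.0, 0.0])         # true informative direction
n = 10_000
s = rng.normal(size=n)                    # signal component
d = rng.normal(size=n)                    # distractor component
x = np.outer(s, signal_dir) + np.outer(d, np.array([1.0, 1.0]))
y = x @ w                                 # equals s: the distractor cancels out

# Linear signal estimator: a = cov(x, y) / var(y)
a = np.array([np.cov(x[:, i], y)[0, 1] for i in range(2)]) / y.var()

print("weights w:        ", w)
print("estimated pattern:", a)            # points toward the signal direction
print("attribution w*a:  ", w * a)        # downweights the distractor dimension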