DFKI NLP
Interpretability
Conversational XAI and Explanation Dialogues
My main research interest is human-centric explainability, i.e., making language models more interpretable by building applications …
Nils Feldhus
PDF · Cite
Towards Modeling and Evaluating Instructional Explanations in Teacher-Student Dialogues
For dialogues in which teachers explain difficult concepts to students, didactics research often debates which teaching strategies lead …
Nils Feldhus, Aliki Anagnostopoulou, Qianli Wang, Milad Alshomary, Henning Wachsmuth, Daniel Sonntag, Sebastian Möller
Cite · DOI
Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit
Inseq is a recent toolkit providing an intuitive and optimized interface to conduct feature attribution analyses of generative …
Gabriele Sarti, Nils Feldhus, Jirui Qi, Malvina Nissim, Arianna Bisazza
PDF · Cite
QoEXplainer: Mediating Explainable Quality of Experience Models with Large Language Models
In this paper, we present QoEXplainer, a QoE dashboard for supporting humans in understanding the internals of an explainable, …
Nikolas Wehner, Nils Feldhus, Michael Seufert, Sebastian Möller, Tobias Hoßfeld
Cite · DOI
The Role of Explainability in Collaborative Human-AI Disinformation Detection
Manual verification has become very challenging due to the increasing volume of information shared online and the role of generative …
Vera Schmitt, Luis Felipe Villa-Arenas, Nils Feldhus, Joachim Meyer, Robert P. Spang, Sebastian Möller
PDF · Cite · DOI
Inseq: An Interpretability Toolkit for Sequence Generation Models
Past work in natural language processing interpretability focused mainly on popular classification tasks while largely overlooking …
Gabriele Sarti, Nils Feldhus, Ludwig Sickert, Oskar van der Wal, Malvina Nissim, Arianna Bisazza
PDF · Cite · Code
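For readers who want to try the toolkit, here is a minimal sketch following Inseq's documented quickstart; the model name, attribution method, and prompt are illustrative choices, not prescribed by the paper:

    import inseq

    # Load a generation model together with an attribution method
    model = inseq.load_model("gpt2", "integrated_gradients")

    # Attribute the model's generation and display per-token importance scores
    out = model.attribute("Hello world, my name is")
    out.show()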
Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods
Saliency maps can explain a neural model’s predictions by identifying important input features. They are difficult to interpret …
Nils Feldhus, Leonhard Hennig, Maximilian Dustin Nasert, Christopher Ebert, Robert Schwarzenberg, Sebastian Möller
PDF · Cite · Code
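As background for the feature importance representations compared in this work, a minimal sketch of one standard way to obtain a saliency map, gradient × input over a Hugging Face classifier; the checkpoint and example sentence are illustrative assumptions, not taken from the paper:

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)

    inputs = tok("The movie was surprisingly good.", return_tensors="pt")
    # Embed the tokens manually so gradients can flow back to the input
    embeds = model.get_input_embeddings()(inputs["input_ids"]).detach().requires_grad_(True)
    logits = model(inputs_embeds=embeds, attention_mask=inputs["attention_mask"]).logits
    logits[0, logits.argmax()].backward()  # gradient of the predicted class score

    # Gradient × input, summed over the embedding dimension: one score per token
    saliency = (embeds.grad * embeds).sum(-1).abs().squeeze()
    for token, score in zip(tok.convert_ids_to_tokens(inputs["input_ids"][0]), saliency):
        print(f"{token:>12s}  {score.item():.3f}")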
XAINES: Explaining AI with Narratives
Artificial Intelligence (AI) systems are increasingly pervasive: Internet of Things, in-car intelligent devices, robots, and virtual …
Mareike Hartmann, Han Du, Nils Feldhus, Ivana Kruijff-Korbayová, Daniel Sonntag
PDF · Cite · DOI
Towards Personality-aware Chatbots
Chatbots are increasingly used to automate operational processes in customer service. However, most chatbots lack adaptation towards …
Daniel Fernau, Stefan Hillmann, Nils Feldhus, Tim Polzehl, Sebastian Möller
PDF · Cite
Mediators: Conversational Agents Explaining NLP Model Behavior
The human-centric explainable artificial intelligence (HCXAI) community has raised the need for framing the explanation process as a …
Nils Feldhus, Ajay Madhavan Ravichandran, Sebastian Möller
PDF · Cite · Slides