TRAILS - Trustworthy and Inclusive Machines
Leonhard Hennig
August 1, 2024
Go to Project Site
Topics: Bias Evaluation, Large Language Models

Leonhard Hennig, Senior Researcher

Publications

Multilingual Datasets for Custom Input Extraction and Explanation Requests Parsing in Conversational XAI Systems
Conversational explainable artificial intelligence (ConvXAI) systems based on large language models (LLMs) have garnered considerable …
Qianli Wang, Tatiana Anikina, Nils Feldhus, Simon Ostermann, Fedor Splitt, Jiaao Li, Yoana Tsoneva, Sebastian Möller, Vera Schmitt
PDF · Cite · Project

PolBiX: Detecting LLMs' Political Bias in Fact-Checking through X-phemisms
Large Language Models are increasingly used in applications requiring objective assessment, which could be compromised by political …
Charlott Jakob, David Harbecke, Patrick Parschan, Pia Wenzel Neves, Vera Schmitt
PDF · Cite · Code · Project

Reverse Probing: Evaluating Knowledge Transfer via Finetuned Task Embeddings for Coreference Resolution
In this work, we reimagine classical probing to evaluate knowledge transfer from simple source to more complex target tasks. Instead of …
Tatiana Anikina, Arne Binder, David Harbecke, Stalin Varanasi, Leonhard Hennig, Simon Ostermann, Sebastian Möller, Josef van Genabith
PDF · Cite · Project

Cross-Refine: Improving Natural Language Explanation Generation by Learning in Tandem
Natural language explanations (NLEs) are vital for elucidating the reasoning behind large language model (LLM) decisions. Many …
Qianli Wang, Tatiana Anikina, Nils Feldhus, Simon Ostermann, Sebastian Möller, Vera Schmitt
PDF · Cite · Project

CoXQL: A Dataset for Parsing Explanation Requests in Conversational XAI Systems
Conversational explainable artificial intelligence (ConvXAI) systems based on large language models (LLMs) have garnered significant …
Qianli Wang, Tatiana Anikina, Nils Feldhus, Simon Ostermann, Sebastian Möller
PDF · Cite · Code · Dataset · Project

Enhancing Editorial Tasks: A Case Study on Rewriting Customer Help Page Contents Using Large Language Models
In this paper, we investigate the use of large language models (LLMs) to enhance the editorial process of rewriting customer help …
Aleksandra Gabryszak, Daniel Röder, Arne Binder, Luca Sion, Leonhard Hennig
PDF · Cite · Dataset · Project

German Voter Personas can Radicalize LLM Chatbots via the Echo Chamber Effect
We investigate the impact of LLMs on political discourse with a particular focus on the influence of generated personas on model …
Maximilian Bleick, Nils Feldhus, Aljoscha Burchardt, Sebastian Möller
PDF · Cite · Code · Dataset · Project · DOI

DFKI-MLST at DialAM-2024 Shared Task: System Description
This paper presents the dfki-mlst submission for the DialAM shared task (Ruiz-Dolz et al., 2024) on identification of argumentative and …
Arne Binder, Tatiana Anikina, Leonhard Hennig, Simon Ostermann
PDF · Cite · Code · Project

Evaluating the Robustness of Adverse Drug Event Classification Models Using Templates
An adverse drug effect (ADE) is any harmful event resulting from medical drug treatment. Despite their importance, ADEs are often …
Dorothea MacPhail, David Harbecke, Lisa Raithel, Sebastian Möller
PDF · Cite · Code · Project

Symmetric Dot-Product Attention for Efficient Training of BERT Language Models
Initially introduced as a machine translation model, the Transformer architecture has now become the foundation for modern deep …
Martin Courtois, Malte Ostendorff, Leonhard Hennig, Georg Rehm
PDF · Cite · Code · Project