TRAILS - Trustworthy and Inclusive Machines
Leonhard Hennig, Senior Researcher
Aug 1, 2024
Go to Project Site
Tags: Bias Evaluation, Large Language Models

Publications

- CoXQL: A Dataset for Parsing Explanation Requests in Conversational XAI Systems
  Qianli Wang, Tatiana Anikina, Nils Feldhus, Simon Ostermann, Sebastian Möller
  Conversational explainable artificial intelligence (ConvXAI) systems based on large language models (LLMs) have garnered significant …
  PDF · Cite · Code · Dataset · Project

- Enhancing Editorial Tasks: A Case Study on Rewriting Customer Help Page Contents Using Large Language Models
  Aleksandra Gabryszak, Daniel Röder, Arne Binder, Luca Sion, Leonhard Hennig
  In this paper, we investigate the use of large language models (LLMs) to enhance the editorial process of rewriting customer help …
  PDF · Cite · Dataset · Project

- German Voter Personas can Radicalize LLM Chatbots via the Echo Chamber Effect
  Maximilian Bleick, Nils Feldhus, Aljoscha Burchardt, Sebastian Möller
  We investigate the impact of LLMs on political discourse with a particular focus on the influence of generated personas on model …
  PDF · Cite · Code · Dataset · Project · DOI

- DFKI-MLST at DialAM-2024 Shared Task: System Description
  Arne Binder, Tatiana Anikina, Leonhard Hennig, Simon Ostermann
  This paper presents the dfki-mlst submission for the DialAM shared task (Ruiz-Dolz et al., 2024) on identification of argumentative and …
  PDF · Cite · Code · Project

- Evaluating the Robustness of Adverse Drug Event Classification Models Using Templates
  Dorothea MacPhail, David Harbecke, Lisa Raithel, Sebastian Möller
  An adverse drug effect (ADE) is any harmful event resulting from medical drug treatment. Despite their importance, ADEs are often …
  PDF · Cite · Code · Project

- Symmetric Dot-Product Attention for Efficient Training of BERT Language Models
  Martin Courtois, Malte Ostendorff, Leonhard Hennig, Georg Rehm
  Initially introduced as a machine translation model, the Transformer architecture has now become the foundation for modern deep …
  PDF · Cite · Code · Project