TRAILS - Trustworthy and Inclusive Machines
Leonhard Hennig · Aug 1, 2024
Tags: Bias Evaluation, Large Language Models

Publications

Enhancing Editorial Tasks: A Case Study on Rewriting Customer Help Page Contents Using Large Language Models
Aleksandra Gabryszak, Daniel Röder, Arne Binder, Luca Sion, Leonhard Hennig
In this paper, we investigate the use of large language models (LLMs) to enhance the editorial process of rewriting customer help …

DFKI-MLST at DialAM-2024 Shared Task: System Description
Arne Binder, Tatiana Anikina, Leonhard Hennig, Simon Ostermann
This paper presents the dfki-mlst submission for the DialAM shared task (Ruiz-Dolz et al., 2024) on identification of argumentative and …

Evaluating the Robustness of Adverse Drug Event Classification Models Using Templates
Dorothea MacPhail, David Harbecke, Lisa Raithel, Sebastian Möller
An adverse drug effect (ADE) is any harmful event resulting from medical drug treatment. Despite their importance, ADEs are often …

Symmetric Dot-Product Attention for Efficient Training of BERT Language Models
Martin Courtois, Malte Ostendorff, Leonhard Hennig, Georg Rehm
Initially introduced as a machine translation model, the Transformer architecture has now become the foundation for modern deep …

German Voter Personas can Radicalize LLM Chatbots via the Echo Chamber Effect
Maximilian Bleick, Nils Feldhus, Aljoscha Burchardt, Sebastian Möller