As a postdoctoral researcher, I seek to provide empirical evidence to inform the design, development, and deployment of explainable artificial intelligence (XAI) systems that effectively meet user needs and enhance user trust. Specifically, I focus on the usability of counterfactual explanations: simplified ‘what-if’ scenarios that explore how changes to input variables trigger different model outcomes.
In this way, I aspire to contribute to a more comprehensive understanding of the human factors involved in explainable AI, ultimately fostering the development of more user-centered and actionable explanations in AI systems.
Alongside my research, I serve as the scientific coordinator of the Data-NInJA research training group, supporting young AI researchers in building trustworthy AI for seamless problem solving.
Lüdemann R, Schulz A, Kuhl U (2025) In: Computer-Human Interaction Research and Applications. 8th International Conference, CHIRA 2024, Porto, Portugal, November 21–22, 2024, Proceedings, Part II. Plácido da Silva H, Cipresso P (Eds); Communications in Computer and Information Science, 2371. Cham: Springer Nature Switzerland: 359-381.
Rüttgers S, Kuhl U, Paaßen B (2024) In: Proceedings of the 17th International Conference on Educational Data Mining. Paaßen B, Demmans Epp C (Eds); International Educational Data Mining Society: 458-468.
Kuhl U, Artelt A, Hammer B (2023) In: Explainable Artificial Intelligence. First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part III. Longo L (Ed); Communications in Computer and Information Science. Cham: Springer Nature Switzerland: 280-300.
Hammer B, Hüllermeier E, Lohweg V, Schneider A, Schenck W, Kuhl U, Braun M, Pfeifer A, Holst C-A, Schmidt M, Schomaker G, et al. (2022) Bielefeld: Univ. Bielefeld, Forschungsinstitut für Kognition und Robotik.