Please use this identifier to cite or link to this item: https://repository.isls.org//handle/1/10040
Title: Elaborateness of Explanations to Understand AI Recommendations
Authors: Harbarth, Lydia
Bodemer, Daniel
Schnaubert, Lenka
Keywords: Learning Sciences
Issue Date: 2023
Publisher: International Society of the Learning Sciences
Citation: Harbarth, L., Bodemer, D., & Schnaubert, L. (2023). Elaborateness of explanations to understand AI recommendations. In Blikstein, P., Van Aalst, J., Kizito, R., & Brennan, K. (Eds.), Proceedings of the 17th International Conference of the Learning Sciences - ICLS 2023 (pp. 1827-1828). International Society of the Learning Sciences.
Abstract: Successful human-AI collaboration requires an understanding of the AI system, which can be achieved through human-interpretable explanations. In an experimental study (N = 109), we found that more elaborate explanations fostered causability and trust, whereas adaptability of elaborateness provided no additional benefit. Although cognitive load was not affected by the explanations, its causal role requires further investigation. Future research should integrate the learning sciences with explainable AI research to address both system and human aspects of understandable AI.
Description: Poster
URI: https://doi.org/10.22318/icls2023.373839
https://repository.isls.org//handle/1/10040
Appears in Collections: ISLS Annual Meeting 2023

Files in This Item:
File: ICLS2023_1827-1828.pdf
Size: 84.55 kB
Format: Adobe PDF
