Title: Scoring Qualitative Informal Learning Dialogue: The SQuILD Method for Measuring Museum Learning Talk

Publisher: Philadelphia, PA: International Society of the Learning Sciences.
Citation: Roberts, J., & Lyons, L. (2017). Scoring Qualitative Informal Learning Dialogue: The SQuILD method for measuring museum learning talk. In Smith, B. K., Borge, M., Mercier, E., & Lim, K. Y. (Eds.), Making a Difference: Prioritizing Equity and Access in CSCL, 12th International Conference on Computer Supported Collaborative Learning (CSCL) 2017, Volume 1. Philadelphia, PA: International Society of the Learning Sciences.
Abstract: Museums are increasingly developing computer-supported collaborative learning experiences and are in need of methods for evaluating the educational value of such exhibits. Exhibit designers, like web interaction designers, have long employed A/B testing of exhibit elements in order to understand the affordances of competing designs in situ. When the exhibit elements being tested are intended to support open-ended group exploration and dialogue, existing knowledge-based metrics, such as measuring the amount of content recalled, do not quite apply. Visitor groups can explore the educational content in idiosyncratic ways, meaning that not all groups have the same exposure to content, and the learning arises from the visitors' conversations. In order to evaluate learning outcomes for CSCL exhibits, we present a method for quantifying idiosyncratic social learning, Scoring Qualitative Informal Learning Dialogue (SQuILD), and demonstrate how it was applied to A/B testing of a collaborative data visualization exhibit in a metropolitan museum.
Appears in Collections: CSCL 2017