Title: The Binary Replicate Test: Determining the Sensitivity of CSCL Models to Coding Error
Authors: Eagan, Brendan
Swiecki, Zachari
Farrell, Cayley
Shaffer, D.W.
Issue Date: Jun-2019
Publisher: International Society of the Learning Sciences (ISLS)
Citation: Eagan, B., Swiecki, Z., Farrell, C., & Shaffer, D. (2019). The Binary Replicate Test: Determining the Sensitivity of CSCL Models to Coding Error. In Lund, K., Niccolai, G. P., Lavoué, E., Hmelo-Silver, C., Gweon, G., & Baker, M. (Eds.), A Wide Lens: Combining Embodied, Enactive, Extended, and Embedded Learning in Collaborative Settings, 13th International Conference on Computer Supported Collaborative Learning (CSCL) 2019, Volume 1 (pp. 328-335). Lyon, France: International Society of the Learning Sciences.
Abstract: The process of labeling, categorizing, or otherwise annotating data, known as coding in the computer-supported collaborative learning (CSCL) literature, is a fundamental process in CSCL research. It is the process by which researchers identify salient properties of segments of CSCL data: what they are, what they contain, or what they mean. Coding, like all processes in research, is subject to error. To reduce the potential impact of coding error, CSCL researchers typically measure inter-rater reliability (IRR). However, there is no extant method to determine what level of IRR would invalidate a CSCL result or model. One way of assessing the potential impact of such inaccuracies is by conducting sensitivity analyses, which measure the level of error that would need to be present in the data to invalidate a given inference. This paper introduces a new method for conducting sensitivity analyses in CSCL: the Binary Replicate Test.
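To illustrate the general idea of a sensitivity analysis as described in the abstract, the sketch below simulates random errors in binary codes and estimates how often a given level of coding error reverses an observed difference in code prevalence between two groups. This is a minimal, hypothetical illustration of error-injection sensitivity analysis; the function name, the prevalence-difference inference, and all parameters are assumptions for this example, and it is not the paper's Binary Replicate Test.

```python
import random

def reversal_rate(codes_a, codes_b, error_rate, trials=2000, seed=0):
    """Estimate the probability that random coding errors, applied
    independently to each binary code at `error_rate`, reverse (or erase)
    the observed difference in code prevalence between groups A and B.

    Hypothetical sketch of a sensitivity analysis, not the authors' method.
    """
    rng = random.Random(seed)
    observed = sum(codes_a) / len(codes_a) - sum(codes_b) / len(codes_b)
    reversals = 0
    for _ in range(trials):
        # Flip each binary code with probability `error_rate`.
        a = [1 - c if rng.random() < error_rate else c for c in codes_a]
        b = [1 - c if rng.random() < error_rate else c for c in codes_b]
        diff = sum(a) / len(a) - sum(b) / len(b)
        # Count the replicate as invalidating if the sign of the
        # difference changes or the difference vanishes.
        if diff * observed <= 0:
            reversals += 1
    return reversals / trials
```

In use, one would increase `error_rate` until the reversal rate becomes unacceptably high; that error level is a rough measure of how much coding error the inference can tolerate. For well-separated groups, small error rates rarely reverse the observed difference, while an error rate of 0.5 (codes effectively random) reverses it about half the time.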
Appears in Collections:CSCL 2019

Files in This Item:
328-335.pdf (369.38 kB, Adobe PDF)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.