Please use this identifier to cite or link to this item: https://repository.isls.org//handle/1/275
Title: Can We Rely on IRR? Testing the Assumptions of Inter-Rater Reliability
Authors: Eagan, Brendan R.
Rogers, Bradley
Serlin, Ronald
Ruis, Andrew R.
Irgens, Golnaz Arastoopour
Shaffer, David Williamson
Issue Date: Jul-2017
Publisher: Philadelphia, PA: International Society of the Learning Sciences.
Citation: Eagan, B. R., Rogers, B., Serlin, R., Ruis, A. R., Irgens, G. A., & Shaffer, D. W. (2017). Can We Rely on IRR? Testing the Assumptions of Inter-Rater Reliability. In Smith, B. K., Borge, M., Mercier, E., & Lim, K. Y. (Eds.), Making a Difference: Prioritizing Equity and Access in CSCL, 12th International Conference on Computer Supported Collaborative Learning (CSCL) 2017, Volume 2. Philadelphia, PA: International Society of the Learning Sciences.
Abstract: Researchers use Inter-Rater Reliability (IRR) to measure whether two processes—people and/or machines—identify the same properties in data. There are many IRR measures, but regardless of the measure used, there is a common method for estimating IRR. To assess the validity of this common method, we conducted Monte Carlo simulation studies examining the most widely used measure of IRR: Cohen’s kappa. Our results show that the method commonly used by researchers to assess IRR produces unacceptable Type I error rates.
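
As a rough illustration of the estimation problem described in the abstract, the following is a minimal Monte Carlo sketch in Python. It is not the authors' simulation code: the two-rater data model, the base rate (0.2), the coded sample size (80), the kappa acceptance threshold (0.65), and the target population kappa (0.5) are all assumptions chosen only to show how a kappa computed on a small coded sample can clear an acceptance threshold even when the raters' underlying agreement falls below it.

```python
import random

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning binary codes (0/1)."""
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    p1 = sum(rater1) / n
    p2 = sum(rater2) / n
    # Chance agreement estimated from the two raters' marginal base rates.
    p_expected = p1 * p2 + (1 - p1) * (1 - p2)
    if p_expected == 1.0:
        return 1.0  # degenerate case: neither rater shows any variation
    return (p_observed - p_expected) / (1 - p_expected)

def simulate_false_acceptance_rate(true_kappa=0.5, base_rate=0.2,
                                   sample_size=80, threshold=0.65,
                                   trials=10_000, seed=1):
    """Estimate how often a kappa computed on a small coded sample exceeds an
    acceptance threshold even though the raters' underlying agreement is below it.
    Data model (an assumption for illustration): rater 2 copies rater 1 with
    probability `true_kappa`, otherwise codes independently at `base_rate`,
    which makes the population kappa approximately `true_kappa`."""
    rng = random.Random(seed)
    false_accepts = 0
    for _ in range(trials):
        r1 = [int(rng.random() < base_rate) for _ in range(sample_size)]
        r2 = [x if rng.random() < true_kappa else int(rng.random() < base_rate)
              for x in r1]
        if cohens_kappa(r1, r2) > threshold:
            false_accepts += 1
    return false_accepts / trials

if __name__ == "__main__":
    print(f"Estimated false-acceptance rate: {simulate_false_acceptance_rate():.3f}")
```

The copy-with-probability model is only a convenient way to fix the population kappa for the sketch; the authors' actual simulation design and results are given in the full paper.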
URI: https://dx.doi.org/10.22318/cscl2017.70
https://repository.isls.org/handle/1/275
Appears in Collections: CSCL 2017

Files in This Item:
File    Size       Format
70.pdf  729.36 kB  Adobe PDF

