Title: Can We Rely on IRR? Testing the Assumptions of Inter-Rater Reliability
Authors: Eagan, Brendan R.; Ruis, Andrew R.; Irgens, Golnaz Arastoopour; Shaffer, David Williamson
Publisher: Philadelphia, PA: International Society of the Learning Sciences
Citation: Eagan, B. R., Rogers, B., Serlin, R., Ruis, A. R., Irgens, G. A., & Shaffer, D. W. (2017). Can we rely on IRR? Testing the assumptions of inter-rater reliability. In Smith, B. K., Borge, M., Mercier, E., & Lim, K. Y. (Eds.), Making a Difference: Prioritizing Equity and Access in CSCL, 12th International Conference on Computer Supported Collaborative Learning (CSCL) 2017, Volume 2. Philadelphia, PA: International Society of the Learning Sciences.
Abstract: Researchers use Inter-Rater Reliability (IRR) to measure whether two processes—people and/or machines—identify the same properties in data. There are many IRR measures, but regardless of the measure used, there is a common method for estimating IRR. To assess the validity of this common method, we conducted Monte Carlo simulation studies examining the most widely used measure of IRR: Cohen's kappa. Our results show that the method commonly used by researchers to assess IRR produces unacceptable Type I error rates.
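The abstract names Cohen's kappa and a Monte Carlo approach. As a minimal illustration (not the paper's actual simulation design, which is not specified here), the sketch below computes Cohen's kappa for two raters' binary codes using the standard formula κ = (p_o − p_e)/(1 − p_e), then simulates the kappa distribution for two fully independent raters; the parameter values (`n_items`, `base_rate`, `n_trials`) are illustrative assumptions.

```python
import random

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' binary (0/1) codes of the same items."""
    n = len(r1)
    # observed agreement: fraction of items both raters coded identically
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # expected chance agreement from each rater's marginal base rate
    p1, p2 = sum(r1) / n, sum(r2) / n
    p_e = p1 * p2 + (1 - p1) * (1 - p2)
    if p_e == 1.0:  # degenerate case: both raters use a single code
        return 1.0 if p_o == 1.0 else 0.0
    return (p_o - p_e) / (1 - p_e)

def simulate_kappa(n_items=100, base_rate=0.2, n_trials=1000, seed=0):
    """Monte Carlo sketch: kappa values for two *independent* raters who
    each code an item positive with probability `base_rate`.  All values
    here are assumed for illustration, not taken from the paper."""
    rng = random.Random(seed)
    kappas = []
    for _ in range(n_trials):
        r1 = [int(rng.random() < base_rate) for _ in range(n_items)]
        r2 = [int(rng.random() < base_rate) for _ in range(n_items)]
        kappas.append(cohens_kappa(r1, r2))
    return kappas
```

Because the simulated raters agree only by chance, the resulting kappa values cluster around zero; a Type I error analysis like the paper's would ask how often a sampling-based IRR procedure nonetheless certifies such data as reliable.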
Appears in Collections: CSCL 2017