Inter-rater Reliability
Inter-rater reliability is the degree of agreement between raters (e.g., observers independently coding data from the same participants) in a research study; it indicates how consistently observations are interpreted across raters.
Category: Methodology
Citation: Lange, R.T. (2011). Inter-rater reliability. Encyclopedia of Clinical Neuropsychology. https://doi.org/10.1007/978-0-387-79948-3_1203
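A standard way to quantify inter-rater agreement on categorical judgments is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. The sketch below (with hypothetical rating data) computes kappa for two raters using only the Python standard library:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    coding the same items into categories."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if p_expected == 1.0:
        return 1.0  # degenerate case: both raters always use the same single category
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two observers coding 10 behaviors as "on"/"off" task
a = ["on", "on", "off", "on", "off", "on", "on", "off", "on", "off"]
b = ["on", "off", "off", "on", "off", "on", "on", "off", "on", "on"]
print(round(cohens_kappa(a, b), 3))  # → 0.583
```

Here the raters agree on 8 of 10 items (observed agreement 0.80), but because chance agreement is 0.52, kappa is a more modest 0.583; values near 1 indicate strong agreement beyond chance, while 0 means agreement no better than chance.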