Systematic and Random Disagreement and the Reliability of Nominal Data


Subject

Communication
Social and Behavioral Sciences

Abstract

Reliability is an important bottleneck for content analysis and similar methods for generating analyzable data. This is because the analysis of complex qualitative phenomena such as texts, social interactions, and media images easily escapes physical measurement and calls for human coders to describe what they read or observe. Owing to the individuality of coders, the data they generate for subsequent analysis are prone to errors not typically found in mechanical measuring devices. However, most measures designed to indicate whether data are sufficiently reliable to warrant analysis do not differentiate among the kinds of disagreement that prevent data from being reliable. This paper distinguishes two kinds of disagreement, systematic and random, and suggests measures of them in conjunction with the agreement coefficient α (alpha) (Krippendorff, 2004a, pp. 211-256). These measures, previously proposed for interval data (Krippendorff, 1970), are here developed for nominal data. Their importance lies in their ability not only to aid the development of reliable coding instructions but also to warn researchers about two kinds of errors they face when using imperfect data.
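The abstract builds on the agreement coefficient α for nominal data (Krippendorff, 2004a). As background, the sketch below computes that baseline coefficient from a coincidence matrix, α = 1 − D_o/D_e; it does not reproduce the paper's proposed decomposition into systematic and random disagreement, and the function name and data layout are illustrative choices, not from the paper.

```python
from collections import Counter
from itertools import permutations

def nominal_alpha(units):
    """Krippendorff's alpha for nominal data.

    units: list of lists; each inner list holds the values that the
    available coders assigned to one unit (missing values omitted).
    """
    # Build the coincidence matrix: each ordered pair of values from
    # different coders within a unit contributes 1/(m - 1), where m is
    # the number of values recorded for that unit.
    o = Counter()
    for values in units:
        m = len(values)
        if m < 2:
            continue  # units coded by fewer than two coders carry no information
        for c, k in permutations(values, 2):
            o[(c, k)] += 1 / (m - 1)

    # Marginal totals n_c and grand total n of the coincidence matrix.
    n_c = Counter()
    for (c, _k), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())

    # Observed and expected disagreement; for nominal data every
    # mismatched pair counts equally.
    D_o = sum(w for (c, k), w in o.items() if c != k)
    D_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1 - D_o / D_e
```

Perfect agreement yields α = 1, chance-level coding yields α near 0, and systematic disagreement can push α below 0, which is precisely the region where distinguishing kinds of disagreement, as the paper proposes, becomes diagnostic.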

Publication date

2008-02-10

Journal title

Communication Methods and Measures
