Inter-rater reliability (IRR) in qualitative research is a measure of the “consistency or repeatability” with which codes are applied to qualitative data by multiple coders (Trochim, 2006). In qualitative coding, IRR is measured primarily to assess how consistently a code system is applied. It also helps identify where acceptable consistency has been achieved, what that looks like, and where consistency falls short, thereby providing guidance on the steps that may be taken to improve it.
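One common way to quantify this consistency is Cohen’s kappa, which corrects the raw agreement between two coders for the agreement expected by chance alone. The sketch below is a minimal Python illustration of that statistic for a hypothetical pair of coders deciding whether a single code applies to a set of excerpts; it is offered only as an example of the calculation, not as a description of how Dedoose computes its own IRR results.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders rating the same excerpts.

    coder_a, coder_b: equal-length lists of labels (e.g., "yes"/"no"
    for whether a code applies) assigned by each coder.
    """
    assert len(coder_a) == len(coder_b), "Coders must rate the same excerpts"
    n = len(coder_a)

    # Observed agreement: proportion of excerpts where the two coders agree.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Chance agreement: derived from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    p_expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)

    # If chance agreement is total (both coders used a single identical label),
    # kappa is undefined; treat that degenerate case as perfect agreement.
    if p_expected == 1:
        return 1.0

    # Kappa: observed agreement corrected for agreement expected by chance.
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two coders marking whether a code applies to ten excerpts.
coder_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
coder_2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(f"Cohen's kappa: {cohens_kappa(coder_1, coder_2):.2f}")  # ~0.58
```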
To learn more about inter-rater reliability, please check out our official Dedoose Inter-Rater Reliability article.
We’d love to know what you think. Comments, suggestions, and questions can all be sent to support@dedoose.com, and our friendly support staff will do everything they can to help. After all, we build this application for you, and we’d love to hear about anything you want to bring to our attention or any features you think would benefit the application.
References
Trochim, W. M. (2006). Reliability. Retrieved December 21, 2016, from http://www.socialresearchmethods.net/kb/reliable.php