WEIGHTED AGREEMENT WITH CATEGORICAL DATA
Cohen (1968) presented two measures of weighted agreement for the two-judges case. One measure, denoted here as weighted kappa I, incorporates weights that reflect disagreement; the other, denoted as weighted kappa II, incorporates weights that reflect agreement. The asymptotic variances of the estimates of weighted kappa I and II were derived by Cohen (1968) for the two-judges case under the assumption of known marginal totals. In this dissertation, weighted kappa I and II are generalized to the r-judges case (where r is a positive integer greater than two), and the asymptotic variances associated with weighted kappa I and II are derived under the assumption of a multinomial sampling scheme.

Little attention has been given in the literature to the assignment of the disagreement weights incorporated in weighted kappa I or the agreement weights incorporated in weighted kappa II. A formula or method for obtaining these weights is needed, especially when there are more than two judges. One such formula for obtaining the disagreement weights incorporated in weighted kappa I is presented.

For the two-judges case, the effect on weighted kappa I and its associated variances of different sample sizes, disagreement weights, and sums of diagonal probabilities is studied under three models of tables of joint probabilities. Finally, a method is presented for randomly simulating missing ratings, so that every selected object has the ratings of all r judges.
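For reference, a minimal sketch of the two measures in their familiar two-judges forms is given below; the notation is assumed here and is not necessarily that of the dissertation. Let p_ij denote the observed proportion of objects assigned to category i by the first judge and category j by the second, with marginal proportions p_i. and p.j; weighted kappa I uses disagreement weights v_ij (with v_ii = 0), and weighted kappa II uses agreement weights w_ij (scaled so that w_ii = 1):

    \kappa_{I}  = 1 - \frac{\sum_{i}\sum_{j} v_{ij}\, p_{ij}}
                           {\sum_{i}\sum_{j} v_{ij}\, p_{i\cdot}\, p_{\cdot j}},
    \qquad
    \kappa_{II} = \frac{\sum_{i}\sum_{j} w_{ij}\, p_{ij} - \sum_{i}\sum_{j} w_{ij}\, p_{i\cdot}\, p_{\cdot j}}
                       {1 - \sum_{i}\sum_{j} w_{ij}\, p_{i\cdot}\, p_{\cdot j}}.

When the two sets of weights are complementary (v_ij = 1 - w_ij), the two expressions coincide; with general weights they define distinct measures.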