Inter-observer agreement of digital dermatitis M-scores for photographs of hind feet from standing dairy cattle
A. Vanhoudt
Introduction
Digital dermatitis (DD) is the leading infectious cause of lameness in dairy cattle worldwide, affecting both their welfare and their productivity. At the herd level, DD is often assessed while cows are standing in a milking parlor, and lesions are most commonly evaluated using the M-score (Döpfer et al. 1997, Berry et al. 2012). The objective of this study was to examine the inter-observer agreement of M-scores assigned to digital color photographs of the hind feet of standing dairy cattle.
Materials and methods
A total of 88 photographs and written descriptors of the M-score were sent to 11 scorers from 5 countries, working in 10 different institutions. The scorers were given no formal training immediately before scoring the photographs; however, all had regularly used the M-score to score DD. The answers for 36 photographs were excluded from the analysis because the photograph either had more than 1 M-stage as its mode or was not assigned an M-score by all scorers. The M-scores of the 11 scorers for the remaining 52 photographs were available for analysis. With the mode assumed correct, inter-observer agreement was calculated using several agreement analyses, i.e. percentage raw agreement (number of exact agreements / total number of observations × 100, PAo), Fleiss' kappa (κ, Fleiss 1971), and Gwet's agreement coefficient (AC1, Gwet 2008). Gwet's AC1 for overall agreement was recalculated after condensing several M-stages into different groups (Table 1). Statistical analysis was done using R (R Core Team 2014). For all measures of agreement, the guidance provided by Landis and Koch (1977) for the interpretation of κ was used: <0.00 poor, 0.00–0.20 slight, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 substantial, and 0.81–1.00 almost perfect. […]
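The proceedings do not reproduce the underlying R code, so the sketch below is a minimal base-R illustration of the three agreement measures named above, not the study's actual analysis. It assumes a hypothetical 52 × 11 character matrix `scores` (photographs in rows, the 11 scorers in columns) holding M-stage labels; the object names, the stage labels, and the reading of an "exact agreement" as a score equal to the photograph's modal M-stage are all assumptions.

```r
## Sketch of the three agreement measures in base R. Assumed input:
## `scores`, a 52 x 11 character matrix of M-stage labels with
## photographs in rows and scorers in columns (illustrative only).
agreement <- function(scores, levels = sort(unique(as.vector(scores)))) {
  ## counts[i, j]: number of scorers assigning stage j to photograph i
  counts <- t(apply(scores, 1, function(r) table(factor(r, levels = levels))))
  n <- ncol(scores)    # scorers per photograph (11)
  N <- nrow(scores)    # photographs (52)
  k <- length(levels)  # number of M-stage categories

  ## Percentage raw agreement (PAo), reading an "exact agreement" as a
  ## score equal to the modal M-stage (tied modes were excluded upstream)
  modes <- levels[apply(counts, 1, which.max)]
  PAo <- 100 * sum(scores == modes) / (N * n)

  ## Observed pairwise agreement, shared by Fleiss' kappa and Gwet's AC1
  p_obs <- mean((rowSums(counts^2) - n) / (n * (n - 1)))

  ## Fleiss' kappa (Fleiss 1971): chance term from marginal proportions
  p_j <- colSums(counts) / (N * n)
  kappa <- (p_obs - sum(p_j^2)) / (1 - sum(p_j^2))

  ## Gwet's AC1 (Gwet 2008): chance term robust to skewed stage prevalence
  p_e <- sum(p_j * (1 - p_j)) / (k - 1)
  AC1 <- (p_obs - p_e) / (1 - p_e)

  c(PAo = PAo, kappa = kappa, AC1 = AC1)
}

agreement(scores, levels = c("M0", "M1", "M2", "M3", "M4", "M4.1"))
```

Because Table 1 is not reproduced here, the condensing step can only be illustrated with an assumed grouping; merging M4 and M4.1 below is hypothetical, not the study's actual regrouping.

```r
## Hypothetical condensing of M-stages before recomputing Gwet's AC1
regroup <- c(M0 = "M0", M1 = "M1", M2 = "M2", M3 = "M3",
             M4 = "M4/M4.1", M4.1 = "M4/M4.1")
scores_grouped <- matrix(regroup[scores], nrow = nrow(scores))
agreement(scores_grouped)["AC1"]
```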