Four distal radial fracture classification systems tested amongst a large panel of Dutch trauma surgeons
Four distal radial fracture classification systems tested amongst a large panel of Dutch trauma surgeons. / Ploegmakers, Joris J W; Mader, Konrad; Pennig, Dietmar; Verheyen, Cees C P M.
In: INJURY, Vol. 38, No. 11, 11.2007, p. 1268-1272. Research output: Journal article › Research › peer-review
RIS
TY - JOUR
T1 - Four distal radial fracture classification systems tested amongst a large panel of Dutch trauma surgeons
AU - Ploegmakers, Joris J W
AU - Mader, Konrad
AU - Pennig, Dietmar
AU - Verheyen, Cees C P M
PY - 2007/11
Y1 - 2007/11
N2 - Five different radiographs of distal radial fractures were classified according to the AO/ASIF, Frykman, Fernandez and Older systems by 45 observers (trauma surgeons and residents). The same panel classified the same radiographs in a different order 4 months later. Mean interobserver correlation for all cases was fair to moderate according to the Spearman rank test. However, these classifications showed poor correlation with the gold standard as classified by the senior author. All intraobserver agreements demonstrated a moderate kappa agreement (K(w)=0.52) for the AO/ASIF classification and fair for the Frykman (K(w)=0.26), Fernandez (K(w)=0.24) and Older (K(w)=0.27) classifications. When the group was divided according to years of clinical experience (<6 years; ≥6 years), there was poor correlation between experience and consistency amongst all four classifications. In view of these findings, we do not recommend use of these classifications for clinical application because of their questionable reproducibility and reliability.
KW - Follow-Up Studies
KW - General Surgery
KW - Humans
KW - Observer Variation
KW - Professional Practice
KW - Radiography
KW - Radius Fractures
KW - Journal Article
U2 - 10.1016/j.injury.2007.03.032
DO - 10.1016/j.injury.2007.03.032
M3 - Journal article
C2 - 17643439
VL - 38
SP - 1268
EP - 1272
JO - INJURY
JF - INJURY
SN - 0020-1383
IS - 11
ER -