A multiple testing framework for diagnostic accuracy studies with co-primary endpoints

Standard

A multiple testing framework for diagnostic accuracy studies with co-primary endpoints. / Westphal, Max; Zapf, Antonia; Brannath, Werner.

In: STAT MED, Vol. 41, No. 5, 28.02.2022, p. 891-909.

Research output: SCORING: Contribution to journal › SCORING: Journal article › Research › peer-review

Bibtex

@article{e95b2b9b00744f63852e63bf71186425,
title = "A multiple testing framework for diagnostic accuracy studies with co-primary endpoints",
abstract = "Major advances have been made regarding the utilization of machine learning techniques for disease diagnosis and prognosis based on complex and high-dimensional data. Despite all justified enthusiasm, overoptimistic assessments of predictive performance are still common in this area. However, predictive models and medical devices based on such models should undergo a throughout evaluation before being implemented into clinical practice. In this work, we propose a multiple testing framework for (comparative) phase III diagnostic accuracy studies with sensitivity and specificity as co-primary endpoints. Our approach challenges the frequent recommendation to strictly separate model selection and evaluation, that is, to only assess a single diagnostic model in the evaluation study. We show that our parametric simultaneous test procedure asymptotically allows strong control of the family-wise error rate. A multiplicity correction is also available for point and interval estimates. Moreover, we demonstrate in an extensive simulation study that our multiple testing strategy on average leads to a better final diagnostic model and increased statistical power. To plan such studies, we propose a Bayesian approach to determine the optimal number of models to evaluate simultaneously. For this purpose, our algorithm optimizes the expected final model performance given previous (hold-out) data from the model development phase. We conclude that an assessment of multiple promising diagnostic models in the same evaluation study has several advantages when suitable adjustments for multiple comparisons are employed.",
author = "Max Westphal and Antonia Zapf and Werner Brannath",
note = "{\textcopyright} 2022 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.",
year = "2022",
month = feb,
day = "28",
doi = "10.1002/sim.9308",
language = "English",
volume = "41",
pages = "891--909",
journal = "STAT MED",
issn = "0277-6715",
publisher = "John Wiley and Sons Ltd",
number = "5",

}
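
Note on the method (illustrative only): the abstract describes a parametric simultaneous test procedure that evaluates several candidate models at once, with sensitivity and specificity as co-primary endpoints and family-wise error rate control. The sketch below shows one plausible shape such a maxT-type test could take; it is not the authors' implementation. The evaluation data, the benchmark values se0/sp0, the exchangeable between-model correlation of 0.5, and the Monte Carlo critical value are all assumptions for illustration.

# Illustrative sketch (assumed data and settings), not the paper's exact procedure:
# maxT-type simultaneous test for several models with sensitivity and specificity
# as co-primary endpoints, each compared against a fixed benchmark.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Hypothetical evaluation-study data: per model, correct calls among diseased
# (sensitivity) and non-diseased (specificity) subjects.
n_dis, n_nondis = 200, 300              # sample sizes per class
tp = np.array([172, 168, 175])          # true positives per model
tn = np.array([261, 270, 255])          # true negatives per model
se0, sp0 = 0.80, 0.80                   # co-primary benchmark values
alpha = 0.05

se_hat = tp / n_dis
sp_hat = tn / n_nondis

# Wald-type z-statistics for H0: Se <= se0 and H0: Sp <= sp0, per model.
z_se = (se_hat - se0) / np.sqrt(se_hat * (1 - se_hat) / n_dis)
z_sp = (sp_hat - sp0) / np.sqrt(sp_hat * (1 - sp_hat) / n_nondis)

# Multiplicity adjustment across models: approximate the joint null distribution
# of the test statistics by a multivariate normal; here an exchangeable
# correlation of 0.5 is assumed (all models are scored on the same subjects).
m = len(tp)
rho = 0.5
corr = np.full((m, m), rho) + (1 - rho) * np.eye(m)

# Monte Carlo estimate of the equicoordinate (maxT) critical value.
draws = rng.multivariate_normal(np.zeros(m), corr, size=200_000)
c_alpha = np.quantile(draws.max(axis=1), 1 - alpha)

# Co-primary logic: a model is declared successful only if BOTH endpoints
# exceed the adjusted critical value.
success = (z_se > c_alpha) & (z_sp > c_alpha)
for k in range(m):
    print(f"model {k + 1}: Se={se_hat[k]:.3f}, Sp={sp_hat[k]:.3f}, "
          f"z_Se={z_se[k]:.2f}, z_Sp={z_sp[k]:.2f}, success={bool(success[k])}")
print(f"adjusted critical value: {c_alpha:.2f} (unadjusted: {norm.ppf(1 - alpha):.2f})")

In the paper, the dependence between models is handled within the parametric procedure rather than fixed by assumption, and, as the abstract notes, the same multiplicity correction extends to point and interval estimates.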

RIS

TY - JOUR

T1 - A multiple testing framework for diagnostic accuracy studies with co-primary endpoints

AU - Westphal, Max

AU - Zapf, Antonia

AU - Brannath, Werner

N1 - © 2022 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

PY - 2022/2/28

Y1 - 2022/2/28

N2 - Major advances have been made regarding the utilization of machine learning techniques for disease diagnosis and prognosis based on complex and high-dimensional data. Despite all justified enthusiasm, overoptimistic assessments of predictive performance are still common in this area. However, predictive models and medical devices based on such models should undergo a thorough evaluation before being implemented into clinical practice. In this work, we propose a multiple testing framework for (comparative) phase III diagnostic accuracy studies with sensitivity and specificity as co-primary endpoints. Our approach challenges the frequent recommendation to strictly separate model selection and evaluation, that is, to only assess a single diagnostic model in the evaluation study. We show that our parametric simultaneous test procedure asymptotically allows strong control of the family-wise error rate. A multiplicity correction is also available for point and interval estimates. Moreover, we demonstrate in an extensive simulation study that our multiple testing strategy on average leads to a better final diagnostic model and increased statistical power. To plan such studies, we propose a Bayesian approach to determine the optimal number of models to evaluate simultaneously. For this purpose, our algorithm optimizes the expected final model performance given previous (hold-out) data from the model development phase. We conclude that an assessment of multiple promising diagnostic models in the same evaluation study has several advantages when suitable adjustments for multiple comparisons are employed.

AB - Major advances have been made regarding the utilization of machine learning techniques for disease diagnosis and prognosis based on complex and high-dimensional data. Despite all justified enthusiasm, overoptimistic assessments of predictive performance are still common in this area. However, predictive models and medical devices based on such models should undergo a thorough evaluation before being implemented into clinical practice. In this work, we propose a multiple testing framework for (comparative) phase III diagnostic accuracy studies with sensitivity and specificity as co-primary endpoints. Our approach challenges the frequent recommendation to strictly separate model selection and evaluation, that is, to only assess a single diagnostic model in the evaluation study. We show that our parametric simultaneous test procedure asymptotically allows strong control of the family-wise error rate. A multiplicity correction is also available for point and interval estimates. Moreover, we demonstrate in an extensive simulation study that our multiple testing strategy on average leads to a better final diagnostic model and increased statistical power. To plan such studies, we propose a Bayesian approach to determine the optimal number of models to evaluate simultaneously. For this purpose, our algorithm optimizes the expected final model performance given previous (hold-out) data from the model development phase. We conclude that an assessment of multiple promising diagnostic models in the same evaluation study has several advantages when suitable adjustments for multiple comparisons are employed.

U2 - 10.1002/sim.9308

DO - 10.1002/sim.9308

M3 - SCORING: Journal article

C2 - 35075684

VL - 41

SP - 891

EP - 909

JO - STAT MED

JF - STAT MED

SN - 0277-6715

IS - 5

ER -
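
Note on the planning step (illustrative only): the abstract also describes a Bayesian approach that chooses how many models to evaluate simultaneously by optimizing expected final model performance given hold-out data from development. The sketch below is one heavily simplified reading of that idea, not the paper's algorithm: it assumes Beta(1, 1) posteriors from hypothetical hold-out counts, a Bonferroni-adjusted co-primary success rule in place of the parametric adjustment, selection of the best successful model by observed accuracy, and a score of zero when no model succeeds. All numbers are made up for illustration.

# Hypothetical planning sketch: pick the number of models to carry into the
# evaluation study by maximising the expected performance of the finally
# selected model, based on (assumed) hold-out data. Illustrative only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical hold-out results (development phase) for 5 candidate models.
n_hold = 100                                    # hold-out subjects per class
se_hits = np.array([86, 84, 83, 81, 80])        # correct among diseased
sp_hits = np.array([88, 90, 85, 84, 83])        # correct among non-diseased
se0 = sp0 = 0.80                                # co-primary benchmarks
n_eval_dis, n_eval_nondis = 200, 300            # planned evaluation sample sizes
alpha, n_sim = 0.05, 4000

def expected_final_performance(m: int) -> float:
    """Monte Carlo estimate of E[min(Se, Sp) of the selected model] when the
    m hold-out-best models are evaluated with a Bonferroni-adjusted rule."""
    z = norm.ppf(1 - alpha / m)                 # simplified multiplicity adjustment
    perf = np.zeros(n_sim)
    for s in range(n_sim):
        # Beta(1, 1) priors updated with hold-out counts -> draw "true" Se/Sp.
        se_true = rng.beta(1 + se_hits[:m], 1 + n_hold - se_hits[:m])
        sp_true = rng.beta(1 + sp_hits[:m], 1 + n_hold - sp_hits[:m])
        # Simulate the evaluation study for the m models.
        se_hat = rng.binomial(n_eval_dis, se_true) / n_eval_dis
        sp_hat = rng.binomial(n_eval_nondis, sp_true) / n_eval_nondis
        z_se = (se_hat - se0) / np.sqrt(se0 * (1 - se0) / n_eval_dis)
        z_sp = (sp_hat - sp0) / np.sqrt(sp0 * (1 - sp0) / n_eval_nondis)
        ok = (z_se > z) & (z_sp > z)            # co-primary success per model
        if ok.any():
            # Select the successful model with the best observed accuracy and
            # score it by its true co-primary minimum; failed studies score 0.
            best = np.argmax(np.where(ok, se_hat + sp_hat, -np.inf))
            perf[s] = min(se_true[best], sp_true[best])
    return perf.mean()

for m in range(1, 6):
    print(f"evaluate top {m} models: expected final performance "
          f"{expected_final_performance(m):.3f}")

Under these assumptions the trade-off reported in the abstract becomes visible: evaluating more models increases the chance that a good one passes, while the growing multiplicity adjustment raises the bar for each of them.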