Assessing the quality of decision support technologies using the International Patient Decision Aid Standards instrument (IPDASi).

Standard

Assessing the quality of decision support technologies using the International Patient Decision Aid Standards instrument (IPDASi). / Elwyn, Glyn; O'Connor, Annette M; Bennett, Carol; Newcombe, Robert G; Politi, Mary; Durand, Marie-Anne; Drake, Elizabeth; Joseph-Williams, Natalie; Khangura, Sara; Saarimaki, Anton; Sivell, Stephanie; Stiel, Mareike; Bernstein, Steven J; Col, Nananda; Coulter, Angela; Eden, Karen; Härter, Martin; Rovner, Margaret Holmes; Moumjid, Nora; Stacey, Dawn; Thomson, Richard; Whelan, Tim; van der Weijden, Trudy; Edwards, Adrian.

In: PLOS ONE, Vol. 4, No. 3, 3, 2009, p. 4705.

Publications: SCORING: Contribution to journal/newspaper › SCORING: Journal article › Research › Peer-reviewed

Harvard

Elwyn, G, O'Connor, AM, Bennett, C, Newcombe, RG, Politi, M, Durand, M-A, Drake, E, Joseph-Williams, N, Khangura, S, Saarimaki, A, Sivell, S, Stiel, M, Bernstein, SJ, Col, N, Coulter, A, Eden, K, Härter, M, Rovner, MH, Moumjid, N, Stacey, D, Thomson, R, Whelan, T, van der Weijden, T & Edwards, A 2009, 'Assessing the quality of decision support technologies using the International Patient Decision Aid Standards instrument (IPDASi).', PLOS ONE, vol. 4, no. 3, 3, p. 4705. https://doi.org/10.1371/journal.pone.0004705

APA

Elwyn, G., O'Connor, A. M., Bennett, C., Newcombe, R. G., Politi, M., Durand, M-A., Drake, E., Joseph-Williams, N., Khangura, S., Saarimaki, A., Sivell, S., Stiel, M., Bernstein, S. J., Col, N., Coulter, A., Eden, K., Härter, M., Rovner, M. H., Moumjid, N., ... Edwards, A. (2009). Assessing the quality of decision support technologies using the International Patient Decision Aid Standards instrument (IPDASi). PLOS ONE, 4(3), 4705. [3]. https://doi.org/10.1371/journal.pone.0004705

Vancouver

Elwyn G, O'Connor AM, Bennett C, Newcombe RG, Politi M, Durand M-A, Drake E, Joseph-Williams N, Khangura S, Saarimaki A, Sivell S, Stiel M, Bernstein SJ, Col N, Coulter A, Eden K, Härter M, Rovner MH, Moumjid N, Stacey D, Thomson R, Whelan T, van der Weijden T, Edwards A. Assessing the quality of decision support technologies using the International Patient Decision Aid Standards instrument (IPDASi). PLOS ONE. 2009;4(3):4705. https://doi.org/10.1371/journal.pone.0004705

Bibtex

@article{fa06360351c147f89b7126e037a69f82,
title = "Assessing the quality of decision support technologies using the International Patient Decision Aid Standards instrument (IPDASi).",
abstract = "OBJECTIVES: To describe the development, validation and inter-rater reliability of an instrument to measure the quality of patient decision support technologies (decision aids). DESIGN: Scale development study, involving construct, item and scale development, validation and reliability testing. SETTING: There has been increasing use of decision support technologies--adjuncts to the discussions clinicians have with patients about difficult decisions. A global interest in developing these interventions exists among both for-profit and not-for-profit organisations. It is therefore essential to have internationally accepted standards to assess the quality of their development, process, content, potential bias and method of field testing and evaluation. METHODS: Scale development study, involving construct, item and scale development, validation and reliability testing. PARTICIPANTS: Twenty-five researcher-members of the International Patient Decision Aid Standards Collaboration worked together to develop the instrument (IPDASi). In the fourth Stage (reliability study), eight raters assessed thirty randomly selected decision support technologies. RESULTS: IPDASi measures quality in 10 dimensions, using 47 items, and provides an overall quality score (scaled from 0 to 100) for each intervention. Overall IPDASi scores ranged from 33 to 82 across the decision support technologies sampled (n = 30), enabling discrimination. The inter-rater intraclass correlation for the overall quality score was 0.80. Correlations of dimension scores with the overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the 8 raters ranged from 0.72 to 0.93. Cronbach's alphas based on the dimension means ranged from 0.50 to 0.81, indicating that the dimensions, although well correlated, measure different aspects of decision support technology quality. A short version (19 items) was also developed that had very similar mean scores to IPDASi and high correlation between short score and overall score 0.87 (CI 0.79 to 0.92). CONCLUSIONS: This work demonstrates that IPDASi has the ability to assess the quality of decision support technologies. The existing IPDASi provides an assessment of the quality of a DST's components and will be used as a tool to provide formative advice to DSTs developers and summative assessments for those who want to compare their tools against an existing benchmark.",
author = "Glyn Elwyn and O'Connor, {Annette M} and Carol Bennett and Newcombe, {Robert G} and Mary Politi and Marie-Anne Durand and Elizabeth Drake and Natalie Joseph-Williams and Sara Khangura and Anton Saarimaki and Stephanie Sivell and Mareike Stiel and Bernstein, {Steven J} and Nananda Col and Angela Coulter and Karen Eden and Martin H{\"a}rter and Rovner, {Margaret Holmes} and Nora Moumjid and Dawn Stacey and Richard Thomson and Tim Whelan and {van der Weijden}, Trudy and Adrian Edwards",
year = "2009",
doi = "10.1371/journal.pone.0004705",
language = "English",
volume = "4",
pages = "4705",
journal = "PLOS ONE",
issn = "1932-6203",
publisher = "Public Library of Science",
number = "3",
}

RIS

TY - JOUR

T1 - Assessing the quality of decision support technologies using the International Patient Decision Aid Standards instrument (IPDASi).

AU - Elwyn, Glyn

AU - O'Connor, Annette M

AU - Bennett, Carol

AU - Newcombe, Robert G

AU - Politi, Mary

AU - Durand, Marie-Anne

AU - Drake, Elizabeth

AU - Joseph-Williams, Natalie

AU - Khangura, Sara

AU - Saarimaki, Anton

AU - Sivell, Stephanie

AU - Stiel, Mareike

AU - Bernstein, Steven J

AU - Col, Nananda

AU - Coulter, Angela

AU - Eden, Karen

AU - Härter, Martin

AU - Rovner, Margaret Holmes

AU - Moumjid, Nora

AU - Stacey, Dawn

AU - Thomson, Richard

AU - Whelan, Tim

AU - van der Weijden, Trudy

AU - Edwards, Adrian

PY - 2009

Y1 - 2009

N2 - OBJECTIVES: To describe the development, validation and inter-rater reliability of an instrument to measure the quality of patient decision support technologies (decision aids). DESIGN: Scale development study, involving construct, item and scale development, validation and reliability testing. SETTING: There has been increasing use of decision support technologies--adjuncts to the discussions clinicians have with patients about difficult decisions. A global interest in developing these interventions exists among both for-profit and not-for-profit organisations. It is therefore essential to have internationally accepted standards to assess the quality of their development, process, content, potential bias and method of field testing and evaluation. METHODS: Scale development study, involving construct, item and scale development, validation and reliability testing. PARTICIPANTS: Twenty-five researcher-members of the International Patient Decision Aid Standards Collaboration worked together to develop the instrument (IPDASi). In the fourth Stage (reliability study), eight raters assessed thirty randomly selected decision support technologies. RESULTS: IPDASi measures quality in 10 dimensions, using 47 items, and provides an overall quality score (scaled from 0 to 100) for each intervention. Overall IPDASi scores ranged from 33 to 82 across the decision support technologies sampled (n = 30), enabling discrimination. The inter-rater intraclass correlation for the overall quality score was 0.80. Correlations of dimension scores with the overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the 8 raters ranged from 0.72 to 0.93. Cronbach's alphas based on the dimension means ranged from 0.50 to 0.81, indicating that the dimensions, although well correlated, measure different aspects of decision support technology quality. A short version (19 items) was also developed that had very similar mean scores to IPDASi and high correlation between short score and overall score 0.87 (CI 0.79 to 0.92). CONCLUSIONS: This work demonstrates that IPDASi has the ability to assess the quality of decision support technologies. The existing IPDASi provides an assessment of the quality of a DST's components and will be used as a tool to provide formative advice to DSTs developers and summative assessments for those who want to compare their tools against an existing benchmark.

AB - OBJECTIVES: To describe the development, validation and inter-rater reliability of an instrument to measure the quality of patient decision support technologies (decision aids). DESIGN: Scale development study, involving construct, item and scale development, validation and reliability testing. SETTING: There has been increasing use of decision support technologies--adjuncts to the discussions clinicians have with patients about difficult decisions. A global interest in developing these interventions exists among both for-profit and not-for-profit organisations. It is therefore essential to have internationally accepted standards to assess the quality of their development, process, content, potential bias and method of field testing and evaluation. METHODS: Scale development study, involving construct, item and scale development, validation and reliability testing. PARTICIPANTS: Twenty-five researcher-members of the International Patient Decision Aid Standards Collaboration worked together to develop the instrument (IPDASi). In the fourth Stage (reliability study), eight raters assessed thirty randomly selected decision support technologies. RESULTS: IPDASi measures quality in 10 dimensions, using 47 items, and provides an overall quality score (scaled from 0 to 100) for each intervention. Overall IPDASi scores ranged from 33 to 82 across the decision support technologies sampled (n = 30), enabling discrimination. The inter-rater intraclass correlation for the overall quality score was 0.80. Correlations of dimension scores with the overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the 8 raters ranged from 0.72 to 0.93. Cronbach's alphas based on the dimension means ranged from 0.50 to 0.81, indicating that the dimensions, although well correlated, measure different aspects of decision support technology quality. A short version (19 items) was also developed that had very similar mean scores to IPDASi and high correlation between short score and overall score 0.87 (CI 0.79 to 0.92). CONCLUSIONS: This work demonstrates that IPDASi has the ability to assess the quality of decision support technologies. The existing IPDASi provides an assessment of the quality of a DST's components and will be used as a tool to provide formative advice to DSTs developers and summative assessments for those who want to compare their tools against an existing benchmark.

U2 - 10.1371/journal.pone.0004705

DO - 10.1371/journal.pone.0004705

M3 - SCORING: Journal article

VL - 4

SP - 4705

JO - PLOS ONE

JF - PLOS ONE

SN - 1932-6203

IS - 3

M1 - 3

ER -
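
The abstract above reports that IPDASi scores 47 items across 10 quality dimensions and yields an overall 0-100 quality score per intervention, but the record does not spell out the scoring rule. The short Python sketch below illustrates one plausible aggregation under stated assumptions: a 1-4 item rating scale, dimension-wise means rescaled to 0-100, and an unweighted average across dimensions. It is for illustration only and is not the published IPDASi algorithm; the dimension names and ratings are hypothetical.

from statistics import mean

# Assumed 4-point item scale (1 = criterion not met ... 4 = fully met);
# the record does not state the actual response scale.
ITEM_MIN, ITEM_MAX = 1, 4


def dimension_score(item_ratings):
    """Mean of one dimension's item ratings, rescaled to 0-100."""
    m = mean(item_ratings)
    return 100.0 * (m - ITEM_MIN) / (ITEM_MAX - ITEM_MIN)


def overall_score(dimensions):
    """Unweighted mean of the per-dimension scores (assumed aggregation)."""
    return mean(dimension_score(items) for items in dimensions.values())


# Hypothetical ratings for one decision support technology (not real data);
# a full assessment would cover all 10 dimensions and 47 items.
example = {
    "information": [3, 4, 3, 2],
    "probabilities": [2, 3, 2],
    "values_clarification": [4, 3],
}
print(f"Overall IPDASi-style score: {overall_score(example):.1f} / 100")

A score computed this way falls between 0 and 100 by construction, which is consistent with the 33-82 range reported across the 30 sampled decision support technologies, but the actual weighting and rescaling used by IPDASi should be taken from the published instrument.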