Validation of the Mobile Application Rating Scale (MARS)

Standard

Validation of the Mobile Application Rating Scale (MARS). / Terhorst, Yannik; Philippi, Paula; Sander, Lasse B; Schultchen, Dana; Paganini, Sarah; Bardus, Marco; Santo, Karla; Knitza, Johannes; Machado, Gustavo C; Schoeppe, Stephanie; Bauereiß, Natalie; Portenhauser, Alexandra; Domhardt, Matthias; Walter, Benjamin; Krusche, Martin; Baumeister, Harald; Messner, Eva-Maria.

In: PLOS ONE, Vol. 15, No. 11, e0241480, 2020.

Research output: SCORING: Contribution to journal › SCORING: Journal article › Research › peer-review

Harvard

Terhorst, Y, Philippi, P, Sander, LB, Schultchen, D, Paganini, S, Bardus, M, Santo, K, Knitza, J, Machado, GC, Schoeppe, S, Bauereiß, N, Portenhauser, A, Domhardt, M, Walter, B, Krusche, M, Baumeister, H & Messner, E-M 2020, 'Validation of the Mobile Application Rating Scale (MARS)', PLOS ONE, vol. 15, no. 11, e0241480. https://doi.org/10.1371/journal.pone.0241480

APA

Terhorst, Y., Philippi, P., Sander, L. B., Schultchen, D., Paganini, S., Bardus, M., Santo, K., Knitza, J., Machado, G. C., Schoeppe, S., Bauereiß, N., Portenhauser, A., Domhardt, M., Walter, B., Krusche, M., Baumeister, H., & Messner, E.-M. (2020). Validation of the Mobile Application Rating Scale (MARS). PLOS ONE, 15(11), Article e0241480. https://doi.org/10.1371/journal.pone.0241480

Vancouver

Terhorst Y, Philippi P, Sander LB, Schultchen D, Paganini S, Bardus M et al. Validation of the Mobile Application Rating Scale (MARS). PLOS ONE. 2020;15(11):e0241480. https://doi.org/10.1371/journal.pone.0241480

Bibtex

@article{ed45cc00694348a8b36b71c294561142,
  title     = "Validation of the Mobile Application Rating Scale (MARS)",
  abstract  = "BACKGROUND: Mobile health apps (MHA) have the potential to improve health care. The commercial MHA market is rapidly growing, but the content and quality of available MHA are unknown. Instruments for the assessment of the quality and content of MHA are highly needed. The Mobile Application Rating Scale (MARS) is one of the most widely used tools to evaluate the quality of MHA. Only few validation studies investigated its metric quality. No study has evaluated the construct validity and concurrent validity. OBJECTIVE: This study evaluates the construct validity, concurrent validity, reliability, and objectivity, of the MARS. METHODS: Data was pooled from 15 international app quality reviews to evaluate the metric properties of the MARS. The MARS measures app quality across four dimensions: engagement, functionality, aesthetics and information quality. Construct validity was evaluated by assessing related competing confirmatory models by confirmatory factor analysis (CFA). Non-centrality (RMSEA), incremental (CFI, TLI) and residual (SRMR) fit indices were used to evaluate the goodness of fit. As a measure of concurrent validity, the correlations to another quality assessment tool (ENLIGHT) were investigated. Reliability was determined using Omega. Objectivity was assessed by intra-class correlation. RESULTS: In total, MARS ratings from 1,299 MHA covering 15 different health domains were included. Confirmatory factor analysis confirmed a bifactor model with a general factor and a factor for each dimension (RMSEA = 0.074, TLI = 0.922, CFI = 0.940, SRMR = 0.059). Reliability was good to excellent (Omega 0.79 to 0.93). Objectivity was high (ICC = 0.82). MARS correlated with ENLIGHT (ps < .05). CONCLUSION: The metric evaluation of the MARS demonstrated its suitability for the quality assessment. As such, the MARS could be used to make the quality of MHA transparent to health care stakeholders and patients. Future studies could extend the present findings by investigating the re-test reliability and predictive validity of the MARS.",
  keywords  = "Factor Analysis, Statistical, Humans, Mobile Applications/standards, Models, Theoretical, Reproducibility of Results, Telemedicine",
  author    = "Yannik Terhorst and Paula Philippi and Sander, {Lasse B} and Dana Schultchen and Sarah Paganini and Marco Bardus and Karla Santo and Johannes Knitza and Machado, {Gustavo C} and Stephanie Schoeppe and Natalie Bauerei{\ss} and Alexandra Portenhauser and Matthias Domhardt and Benjamin Walter and Martin Krusche and Harald Baumeister and Eva-Maria Messner",
  year      = "2020",
  doi       = "10.1371/journal.pone.0241480",
  language  = "English",
  volume    = "15",
  pages     = "e0241480",
  journal   = "PLOS ONE",
  issn      = "1932-6203",
  publisher = "Public Library of Science",
  number    = "11",
}
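The BibTeX entry above can be used directly: save it to a `.bib` file and cite it by its key. A minimal sketch (the filename `references.bib` is an assumption; the citation key is taken verbatim from the entry above):

```latex
\documentclass{article}
\begin{document}
% Cite the MARS validation study by its BibTeX key
The MARS has been validated in a pooled international
sample~\cite{ed45cc00694348a8b36b71c294561142}.

% "plain" is a standard BibTeX style; any installed style works here
\bibliographystyle{plain}
\bibliography{references} % references.bib contains the entry above
\end{document}
```

The long auto-generated key is valid BibTeX; many users rename it to something memorable (e.g. `terhorst2020mars`) in their local `.bib` file, updating the `\cite` command to match.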

RIS

TY - JOUR

T1 - Validation of the Mobile Application Rating Scale (MARS)

AU - Terhorst, Yannik

AU - Philippi, Paula

AU - Sander, Lasse B

AU - Schultchen, Dana

AU - Paganini, Sarah

AU - Bardus, Marco

AU - Santo, Karla

AU - Knitza, Johannes

AU - Machado, Gustavo C

AU - Schoeppe, Stephanie

AU - Bauereiß, Natalie

AU - Portenhauser, Alexandra

AU - Domhardt, Matthias

AU - Walter, Benjamin

AU - Krusche, Martin

AU - Baumeister, Harald

AU - Messner, Eva-Maria

PY - 2020

Y1 - 2020

N2 - BACKGROUND: Mobile health apps (MHA) have the potential to improve health care. The commercial MHA market is rapidly growing, but the content and quality of available MHA are unknown. Instruments for the assessment of the quality and content of MHA are highly needed. The Mobile Application Rating Scale (MARS) is one of the most widely used tools to evaluate the quality of MHA. Only few validation studies investigated its metric quality. No study has evaluated the construct validity and concurrent validity. OBJECTIVE: This study evaluates the construct validity, concurrent validity, reliability, and objectivity, of the MARS. METHODS: Data was pooled from 15 international app quality reviews to evaluate the metric properties of the MARS. The MARS measures app quality across four dimensions: engagement, functionality, aesthetics and information quality. Construct validity was evaluated by assessing related competing confirmatory models by confirmatory factor analysis (CFA). Non-centrality (RMSEA), incremental (CFI, TLI) and residual (SRMR) fit indices were used to evaluate the goodness of fit. As a measure of concurrent validity, the correlations to another quality assessment tool (ENLIGHT) were investigated. Reliability was determined using Omega. Objectivity was assessed by intra-class correlation. RESULTS: In total, MARS ratings from 1,299 MHA covering 15 different health domains were included. Confirmatory factor analysis confirmed a bifactor model with a general factor and a factor for each dimension (RMSEA = 0.074, TLI = 0.922, CFI = 0.940, SRMR = 0.059). Reliability was good to excellent (Omega 0.79 to 0.93). Objectivity was high (ICC = 0.82). MARS correlated with ENLIGHT (ps < .05). CONCLUSION: The metric evaluation of the MARS demonstrated its suitability for the quality assessment. As such, the MARS could be used to make the quality of MHA transparent to health care stakeholders and patients. Future studies could extend the present findings by investigating the re-test reliability and predictive validity of the MARS.

AB - BACKGROUND: Mobile health apps (MHA) have the potential to improve health care. The commercial MHA market is rapidly growing, but the content and quality of available MHA are unknown. Instruments for the assessment of the quality and content of MHA are highly needed. The Mobile Application Rating Scale (MARS) is one of the most widely used tools to evaluate the quality of MHA. Only few validation studies investigated its metric quality. No study has evaluated the construct validity and concurrent validity. OBJECTIVE: This study evaluates the construct validity, concurrent validity, reliability, and objectivity, of the MARS. METHODS: Data was pooled from 15 international app quality reviews to evaluate the metric properties of the MARS. The MARS measures app quality across four dimensions: engagement, functionality, aesthetics and information quality. Construct validity was evaluated by assessing related competing confirmatory models by confirmatory factor analysis (CFA). Non-centrality (RMSEA), incremental (CFI, TLI) and residual (SRMR) fit indices were used to evaluate the goodness of fit. As a measure of concurrent validity, the correlations to another quality assessment tool (ENLIGHT) were investigated. Reliability was determined using Omega. Objectivity was assessed by intra-class correlation. RESULTS: In total, MARS ratings from 1,299 MHA covering 15 different health domains were included. Confirmatory factor analysis confirmed a bifactor model with a general factor and a factor for each dimension (RMSEA = 0.074, TLI = 0.922, CFI = 0.940, SRMR = 0.059). Reliability was good to excellent (Omega 0.79 to 0.93). Objectivity was high (ICC = 0.82). MARS correlated with ENLIGHT (ps < .05). CONCLUSION: The metric evaluation of the MARS demonstrated its suitability for the quality assessment. As such, the MARS could be used to make the quality of MHA transparent to health care stakeholders and patients. Future studies could extend the present findings by investigating the re-test reliability and predictive validity of the MARS.

KW - Factor Analysis, Statistical

KW - Humans

KW - Mobile Applications/standards

KW - Models, Theoretical

KW - Reproducibility of Results

KW - Telemedicine

U2 - 10.1371/journal.pone.0241480

DO - 10.1371/journal.pone.0241480

M3 - SCORING: Journal article

C2 - 33137123

VL - 15

JO - PLOS ONE

JF - PLOS ONE

SN - 1932-6203

IS - 11

M1 - e0241480

ER -