Resolving challenges in deep learning-based analyses of histopathological images using explanation methods

Standard

Resolving challenges in deep learning-based analyses of histopathological images using explanation methods. / Hägele, Miriam; Seegerer, Philipp; Lapuschkin, Sebastian; Bockmayr, Michael; Samek, Wojciech; Klauschen, Frederick; Müller, Klaus-Robert; Binder, Alexander.

In: SCI REP-UK, Vol. 10, No. 1, 14.04.2020, p. 6423.

Research output: SCORING: Contribution to journal › SCORING: Journal article › Research › peer-review

Harvard

Hägele, M, Seegerer, P, Lapuschkin, S, Bockmayr, M, Samek, W, Klauschen, F, Müller, K-R & Binder, A 2020, 'Resolving challenges in deep learning-based analyses of histopathological images using explanation methods', SCI REP-UK, vol. 10, no. 1, p. 6423. https://doi.org/10.1038/s41598-020-62724-2

APA

Hägele, M., Seegerer, P., Lapuschkin, S., Bockmayr, M., Samek, W., Klauschen, F., Müller, K-R., & Binder, A. (2020). Resolving challenges in deep learning-based analyses of histopathological images using explanation methods. SCI REP-UK, 10(1), 6423. https://doi.org/10.1038/s41598-020-62724-2

Vancouver

Hägele M, Seegerer P, Lapuschkin S, Bockmayr M, Samek W, Klauschen F, Müller K-R, Binder A. Resolving challenges in deep learning-based analyses of histopathological images using explanation methods. SCI REP-UK. 2020 Apr 14;10(1):6423. https://doi.org/10.1038/s41598-020-62724-2

Bibtex

@article{c36c9e72d46e4d05b56e53e7ef0090f6,
title = "Resolving challenges in deep learning-based analyses of histopathological images using explanation methods",
abstract = "Deep learning has recently gained popularity in digital pathology due to its high prediction quality. However, the medical domain requires explanation and insight for a better understanding beyond standard quantitative performance evaluation. Recently, many explanation methods have emerged. This work shows how heatmaps generated by these explanation methods allow to resolve common challenges encountered in deep learning-based digital histopathology analyses. We elaborate on biases which are typically inherent in histopathological image data. In the binary classification task of tumour tissue discrimination in publicly available haematoxylin-eosin-stained images of various tumour entities, we investigate three types of biases: (1) biases which affect the entire dataset, (2) biases which are by chance correlated with class labels and (3) sampling biases. While standard analyses focus on patch-level evaluation, we advocate pixel-wise heatmaps, which offer a more precise and versatile diagnostic instrument. This insight is shown to not only be helpful to detect but also to remove the effects of common hidden biases, which improves generalisation within and across datasets. For example, we could see a trend of improved area under the receiver operating characteristic (ROC) curve by 5% when reducing a labelling bias. Explanation techniques are thus demonstrated to be a helpful and highly relevant tool for the development and the deployment phases within the life cycle of real-world applications in digital pathology.",
author = "Miriam H{\"a}gele and Philipp Seegerer and Sebastian Lapuschkin and Michael Bockmayr and Wojciech Samek and Frederick Klauschen and Klaus-Robert M{\"u}ller and Alexander Binder",
year = "2020",
month = apr,
day = "14",
doi = "10.1038/s41598-020-62724-2",
language = "English",
volume = "10",
pages = "6423",
journal = "SCI REP-UK",
issn = "2045-2322",
publisher = "Nature Publishing Group",
number = "1",

}

RIS

TY - JOUR

T1 - Resolving challenges in deep learning-based analyses of histopathological images using explanation methods

AU - Hägele, Miriam

AU - Seegerer, Philipp

AU - Lapuschkin, Sebastian

AU - Bockmayr, Michael

AU - Samek, Wojciech

AU - Klauschen, Frederick

AU - Müller, Klaus-Robert

AU - Binder, Alexander

PY - 2020/4/14

Y1 - 2020/4/14

N2 - Deep learning has recently gained popularity in digital pathology due to its high prediction quality. However, the medical domain requires explanation and insight for a better understanding beyond standard quantitative performance evaluation. Recently, many explanation methods have emerged. This work shows how heatmaps generated by these explanation methods allow to resolve common challenges encountered in deep learning-based digital histopathology analyses. We elaborate on biases which are typically inherent in histopathological image data. In the binary classification task of tumour tissue discrimination in publicly available haematoxylin-eosin-stained images of various tumour entities, we investigate three types of biases: (1) biases which affect the entire dataset, (2) biases which are by chance correlated with class labels and (3) sampling biases. While standard analyses focus on patch-level evaluation, we advocate pixel-wise heatmaps, which offer a more precise and versatile diagnostic instrument. This insight is shown to not only be helpful to detect but also to remove the effects of common hidden biases, which improves generalisation within and across datasets. For example, we could see a trend of improved area under the receiver operating characteristic (ROC) curve by 5% when reducing a labelling bias. Explanation techniques are thus demonstrated to be a helpful and highly relevant tool for the development and the deployment phases within the life cycle of real-world applications in digital pathology.

AB - Deep learning has recently gained popularity in digital pathology due to its high prediction quality. However, the medical domain requires explanation and insight for a better understanding beyond standard quantitative performance evaluation. Recently, many explanation methods have emerged. This work shows how heatmaps generated by these explanation methods allow to resolve common challenges encountered in deep learning-based digital histopathology analyses. We elaborate on biases which are typically inherent in histopathological image data. In the binary classification task of tumour tissue discrimination in publicly available haematoxylin-eosin-stained images of various tumour entities, we investigate three types of biases: (1) biases which affect the entire dataset, (2) biases which are by chance correlated with class labels and (3) sampling biases. While standard analyses focus on patch-level evaluation, we advocate pixel-wise heatmaps, which offer a more precise and versatile diagnostic instrument. This insight is shown to not only be helpful to detect but also to remove the effects of common hidden biases, which improves generalisation within and across datasets. For example, we could see a trend of improved area under the receiver operating characteristic (ROC) curve by 5% when reducing a labelling bias. Explanation techniques are thus demonstrated to be a helpful and highly relevant tool for the development and the deployment phases within the life cycle of real-world applications in digital pathology.

U2 - 10.1038/s41598-020-62724-2

DO - 10.1038/s41598-020-62724-2

M3 - SCORING: Journal article

C2 - 32286358

VL - 10

SP - 6423

JO - SCI REP-UK

JF - SCI REP-UK

SN - 2045-2322

IS - 1

ER -