Review

Explainable deep learning in healthcare: A methodological survey from an attribution view

Di Jin et al. WIREs Mech Dis. 2022 May;14(3):e1548. doi: 10.1002/wsbm.1548. Epub 2022 Jan 17.

Abstract

The increasing availability of large collections of electronic health record (EHR) data and unprecedented technical advances in deep learning (DL) have sparked a surge of research interest in developing DL-based clinical decision support systems for diagnosis, prognosis, and treatment. Despite the recognized value of deep learning in healthcare, its black-box nature remains an impediment to wider adoption in real healthcare settings. There is therefore an emerging need for interpretable DL, which allows end users to evaluate a model's decision making and decide whether to accept or reject its predictions and recommendations before acting on them. In this review, we focus on the interpretability of DL models in healthcare. We begin with an in-depth, comprehensive introduction to interpretability methods, intended as a methodological reference for future researchers and clinical practitioners in this field. Beyond the methods' details, we also discuss the advantages and disadvantages of each method and the scenarios it suits, so that interested readers can compare the methods and choose among them. Moreover, we discuss how these methods, originally developed for general-domain problems, have been adapted and applied to healthcare problems, and how they can help physicians better understand these data-driven technologies. Overall, we hope this survey helps researchers and practitioners in both artificial intelligence and clinical fields understand which methods are available for enhancing the interpretability of their DL models and choose the optimal one accordingly. This article is categorized under: Cancer > Computational Models.
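To make the survey's "attribution view" concrete, below is a minimal sketch of one of the simplest attribution techniques such surveys cover: a gradient-times-input saliency explanation. The PyTorch model, the patient record, and the feature names are hypothetical stand-ins chosen for illustration; they are not from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical toy risk model over four EHR-derived features
# (this model and the feature names below are illustrative, not from the paper).
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
model.eval()

# One made-up patient record; requires_grad lets us attribute the output to the inputs.
x = torch.tensor([[72.0, 1.2, 140.0, 0.0]], requires_grad=True)
score = model(x)        # predicted risk in (0, 1)
score.sum().backward()  # populates x.grad with d(score)/d(x)

# Gradient * input: a simple attribution that highlights the features
# that drove this particular prediction.
attribution = (x.grad * x).detach().squeeze()
for name, a in zip(["heart_rate", "creatinine", "systolic_bp", "smoker"], attribution):
    print(f"{name}: {a.item():+.4f}")
```

Features with large positive or negative attribution scores are the ones that most influenced this single prediction, which is the kind of per-decision evidence the review argues end users need before accepting or rejecting a model's recommendation.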

Keywords: deep learning in medicine; interpretable deep learning.


