Journal:Explainability for artificial intelligence in healthcare: A multidisciplinary perspective
|Full article title||Explainability for artificial intelligence in healthcare: A multidisciplinary perspective|
|Journal||BMC Medical Informatics and Decision Making|
|Author(s)||Amann, Julia; Blasimme, Alessandro; Vayena, Effy; Frey, Dietmar; Madai, Vince I.; Precise4Q Consortium|
|Author affiliation(s)||ETH Zürich, Charité – Universitätsmedizin Berlin, Birmingham City University|
|Primary contact||Online contact form|
|Volume and issue||20|
|Distribution license||Creative Commons Attribution 4.0 International|
Background: Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet, explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.
Methods: Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using Beauchamp and Childress' Principles of Biomedical Ethics (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.
Results: Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. When looking at the legal perspective, we identified informed consent, certification, and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.
Conclusions: To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
Keywords: artificial intelligence, machine learning, explainability, interpretability, clinical decision support
All over the world, healthcare costs are skyrocketing. Increasing life expectancy, soaring rates of chronic diseases, and the continuous development of costly new therapies contribute to this trend. Thus, it comes as no surprise that scholars predict a grim future for the sustainability of healthcare systems throughout the world. Artificial intelligence (AI) promises to alleviate the impact of these developments by improving healthcare and making it more cost-effective. In clinical practice, AI often comes in the form of clinical decision support systems (CDSSs), assisting clinicians in diagnosis of disease and treatment decisions. Where conventional CDSSs match the characteristics of individual patients to an existing knowledge base, AI-based CDSSs apply artificial intelligence models trained on data from patients matching the use-case at hand. Yet, despite its undeniable potential, AI is not a universal solution. As history has shown, technological progress always goes hand in hand with novel questions and significant challenges. Some of these challenges are tied to the technical properties of AI, while others relate to the legal, medical, and patient perspectives, making it necessary to adopt a multidisciplinary perspective.
In this paper, we take such a multidisciplinary view on a major medical AI challenge: explainability. In its essence, explainability can be understood as a characteristic of an AI-driven system allowing a person to reconstruct why a certain AI came up with the predictions it offered. An important point to note here is that explainability has many facets and, unfortunately, the terminology of explainability is not well defined. Other terms such as interpretability and/or transparency are often used synonymously. We thus simply refer to explainability or explainable AI throughout the manuscript and add the necessary context for understanding.
Explainability is a heavily debated topic with far-reaching implications that extend beyond the technical properties of AI. Even though research indicates that AI algorithms can outperform humans in certain analytical tasks (e.g., pattern recognition in imaging), the lack of explainability for AI in the medical domain has been criticized. Legal and ethical uncertainties surrounding this issue may impede progress and prevent novel technologies from fulfilling their potential to improve patient and population health. Yet, without thorough consideration of the role of explainability in medical AI, these technologies may forgo core ethical and professional principles, disregard regulatory issues, and cause considerable harm.
To contribute to the discourse on explainable AI in medicine, this paper seeks to draw attention to the interdisciplinary nature of explainability and its implications for the future of healthcare. In particular, our work focuses on the relevance of explainability for a CDSS. The originality of our work lies in the fact that we look at explainability from multiple perspectives that are often regarded as independent and separable from each other. This paper has two central aims: (1) to provide a comprehensive assessment of the role of explainability in CDSSs for use in clinical practice and; (2) to make an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.
Taking AI-based CDSSs as a case in point, we discuss the relevance of explainability for medical AI from the technological, legal, medical, and patient perspective. To this end, we performed a conceptual analysis of the pertinent literature on explainable AI in these domains. In our analysis, we aimed to identify aspects relevant to determining the necessity and role of explainability for each domain, respectively. Drawing on these different perspectives, we then conclude by distilling the ethical implications of explainability for the future use of AI in the healthcare setting. We do the latter by examining explainability against the four ethical principles of autonomy, beneficence, non-maleficence, and justice.
From the technological perspective, we will explore two issues. First, what explainability methods are, and second, where they are applied in medical AI development.
With regard to methodology, explainability can either be an inherent characteristic of an algorithm or be approximated post hoc by other methods. The latter is highly important for approaches that have until recently been labeled "black-box models," such as artificial neural network (ANN) models; numerous methods now exist to explain their predictions. Importantly, however, inherent explainability will, in general, be more accurate than methods that only approximate explainability. This can be attributed to the complex characteristics of many modern machine learning methods: in ANNs, for example, the inner workings of sometimes millions of weights between artificial neurons need to be interpreted in a way that humans can understand. Methods with inherent explainability thus have a crucial advantage. However, these are usually traditional methods, such as linear or logistic regression, which for many use cases are inferior in performance to modern state-of-the-art methods such as ANNs. There is therefore a trade-off between performance and explainability, and this trade-off is a major challenge for the developers of CDSSs. It should be noted that some assume this trade-off does not exist in reality but is a mere artifact of suboptimal modelling approaches, as pointed out by Rudin et al. While the work of Rudin et al. is important for raising attention to the shortcomings of approximating explainability methods, it is likely that some approximating methods, contrary to their notion, do have value given the complex nature of explaining machine learning models. Additionally, while we can make the qualitative assessment that inherent explainability is likely better than approximated explainability, there exist only initial exploratory attempts to rank explainability methods quantitatively.
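The distinction between inherent and approximated explainability can be made concrete with a small sketch. Below, a linear scorer whose weights can be read off directly (inherent explainability) is contrasted with permutation importance, a model-agnostic post-hoc approximation; all data and weights are invented toy values, not from any real CDSS.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature 0 fully determines the label, feature 1 is noise.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

# Inherently explainable model: a linear scorer whose weights can be
# read off directly (weights are hand-picked for this sketch).
w = np.array([4.0, 0.0])

def predict(data):
    return ((data @ w) > 0).astype(int)

baseline_acc = (predict(X) == y).mean()

# Post-hoc approximation: permutation importance. Shuffle one column
# at a time and record how far accuracy drops.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(baseline_acc - (predict(Xp) == y).mean())

print(baseline_acc)   # the scorer perfectly recovers the labeling rule
print(importance)     # large drop for feature 0, no drop for feature 1
```

Here both routes agree on which feature matters; for complex models such as ANNs, the post-hoc approximation is the only option, which is exactly where its lower fidelity becomes a concern.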
Notwithstanding, for many applications, and in AI product development generally, there is a de facto preference for modern algorithms such as ANNs. Additionally, it cannot be ruled out that for some applications such modern methods do exhibit genuinely higher performance. This necessitates further critical assessment of explainability methods, both with regard to technical development (e.g., ranking methods and optimizing them for certain inputs) and with regard to the role of explainability from a multi-stakeholder view, as done in the current work.
From the development point of view, explainability will regularly help developers to sanity-check their AI models beyond mere performance. For example, it is highly beneficial to rule out that prediction performance is based on metadata rather than the data itself. A famous non-medical example was a classification task to discern between huskies and wolves, where the prediction was driven solely by the identification of a snowy background rather than real differences between huskies and wolves. This is also called a "Clever Hans" phenomenon. Clever Hans phenomena are also found in medicine. An example is the model developed by researchers from Mount Sinai Health System, which performed very well in distinguishing high-risk from non-high-risk patients based on x-ray imaging. However, when the tool was applied outside of Mount Sinai, performance plummeted. As it turned out, the AI model had not learned clinically relevant information from the images. In analogy to the snowy background in the example above, the prediction was based on hardware-related metadata tied to the specific x-ray machine used to image the high-risk ICU patients exclusively at Mount Sinai. The system was thus able to distinguish only which machine was used for imaging, not the risk of the patients. Explainability methods allow developers to identify these types of errors before AI tools go into clinical validation and the certification process, as the Clever Hans predictors (snowy background, hardware information) would be flagged as prediction-relevant by the explainability methods instead of meaningful features from a domain perspective. This saves time and development costs. It should be noted that explainability methods aimed at giving developers insight into their models have different prerequisites than systems aimed at technologically unsavvy end users such as clinical doctors and patients. For developers, these methods can be more complex in their approach and visualization.
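A minimal sketch of how such a developer-side check can surface a Clever Hans predictor: in the invented training set below, a metadata column ("which x-ray machine was used") leaks the label, and inspecting the weights of a small logistic regression fitted by gradient descent exposes the leak. All data and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Invented training set: a weak clinical signal plus a metadata column
# ("which x-ray machine was used") that leaks the label, because the
# high-risk patients were all imaged on one specific machine.
clinical = rng.normal(size=n)
label = (clinical + rng.normal(scale=2.0, size=n) > 0).astype(float)
machine_id = label.copy()                 # the metadata leak
X = np.column_stack([clinical, machine_id])

# Minimal logistic regression fitted by gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - label) / n

# A basic explainability check: inspect the learned weights. The weight
# on the metadata column dwarfs the weight on the clinical feature,
# exposing the Clever Hans predictor before clinical validation begins.
print(w)
```

Catching this during development, rather than after deployment at a second hospital, is precisely the cost- and time-saving role described above.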
From the legal perspective, the question arises whether, and if so to what extent, explainability in AI is legally required. Taking a cue from other fields such as public administration, transparency and traceability have to meet even higher standards when it comes to healthcare and the individual patient. As shown above, artificial intelligence approaches such as machine learning and deep learning have the potential to significantly advance the quality of healthcare. Identifying patterns in diagnostics, detecting anomalies, and, ultimately, providing decision support are already changing standards of care and clinical practice. To fully exploit these opportunities for improving patients' outcomes and saving lives by advancing the detection, prevention, and treatment of disease, the sensitive issues of data privacy and security, patient consent, and autonomy have to be fully considered. This means that, from a legal perspective, data, across the cycle of acquisition, storage, transfer, processing, and analysis, will have to comply with all laws, regulations, and further legal requirements. In addition, the law and its interpretation and implementation have to adapt constantly to the evolving state of the art in technology. Even when all of these rather obvious requirements are fulfilled, the question remains whether the application of AI-driven solutions and tools demands explainability. In other words, do doctors and patients need information not only about the results that are provided but also about the characteristics and features those results are based upon, and the respective underlying assumptions? And might the necessary inclusion of other stakeholders require an understanding and explainability of algorithms and models?
From a Western legal point of view, we identified three core fields for explainability: (1) informed consent, (2) certification and approval as medical devices (under U.S. Food and Drug Administration [FDA] regulations and the European Union Medical Device Regulation [MDR]), and (3) liability.
Personal health data may only be processed by law after the individual consents to its use. In the absence of general laws facilitating the use of personal data and information, this informed consent is the standard for today’s use of patient data in AI applications. This is particularly challenging since the consent has to be specified in advance, i.e., the purpose of the given project and its aims have to be outlined. The natural advantage of AI is that it does not necessitate pre-selection of features and can identify novel patterns or find new biomarkers. If restricted to specific purposes—as required for informed consent—this unique advantage might not be fully exploitable. For obtaining informed consent for diagnostic procedures or interventions, the law requires individual and comprehensive information about and understanding of these processes. In the case of AI-based decision support, the underlying processes and algorithms therefore have to be explained to the individual patient. Just like in the case of obtaining consent for undergoing a magnetic resonance imaging (MRI) procedure, the patient might not necessarily need to know every detail but certainly has to be informed about core principles, and especially the risks. Yet, contrary to an MRI procedure, physicians are unable to provide this type of information for an opaque CDSS. What physicians should at least be able to provide are explanations around two principles: (1) the agent view of AI, i.e., what it takes as input, what it does with the environment, and what it produces as output; and (2) explaining the training of the mapping which produces the output by letting it learn from examples, which encompasses unsupervised, supervised, and reinforcement learning. Yet, it is important to note that for AI-based CDSSs, the extent of the information is a priori highly difficult to define, has to be adjusted to the respective use case, and will certainly need clarification from the legislative bodies. 
For this, a framework for defining the "right" level of explainability, as Maxwell et al. put it, should be developed. Clearly, this also raises important questions about the role and tasks of physicians, underscoring the need for tailored training and professional development in the area of medical AI.
With regard to certification and approval as medical devices, the respective bodies have been slow to introduce requirements for explainable AI and to address its implications for the development and marketing of products. In a recent discussion paper, the FDA, through its total product lifecycle (TPLC) approach, facilitates the continuous development and improvement of AI-based medical products. Explainability is not mentioned, but an "appropriate level of transparency (clarity) of the output and the algorithm aimed at users" is required. This is mainly aimed at the functions of the software and its modifications over time. The MDR does not specifically regulate the need for explainability with regard to medical devices that use artificial intelligence and machine learning in particular. Here too, however, requirements for accountability and transparency are set, and the evolution of explainable AI (xAI) might lead the legislative and notified bodies to change the regulations and their interpretation accordingly.
In conclusion, both the FDA and the MDR currently require explainability only vaguely, i.e., as information supporting the traceability, transparency, and explainability of the development of ML/DL models that inform medical treatment. These requirements will almost certainly be defined more precisely in the future, obliging producers of AI-based medical devices and software to provide insight into the training and testing of their models, the underlying data, and the overall development process. We would also note that there is an ongoing debate on whether the European Union's General Data Protection Regulation (GDPR) requires the use of explainable AI in tools working with patient data. Here too, it cannot be ruled out that the currently ambiguous phrasing will be amended in favor of one that promotes explainability.
Finally, the question arises to what extent the patient has to be made aware that treatment decisions, such as those derived by a CDSS, might rely on AI, along with the legal and litigation question of whether the physician adhered to the machine's recommendation or overruled it. For the U.S., as Cohen has laid out, there is currently no clear-cut answer as to what extent the integration of ML/DL into clinical decision-making has to be disclosed with regard to liability. Hacker et al. argue that explainability will likely become a legal prerequisite from a contract and tort law perspective, where doctors may have to use a certain tool to avoid the threat of a medical malpractice lawsuit. The final answer lies with the courts, however, and will be given sooner rather than later, as an increasing number of AI-based systems come into use.
Taken together, the legal implications of introducing AI technologies into healthcare are significant, and the constant conflict between innovation and regulation needs careful orchestration. Though AI-based decision support is, like new cancer medications or antibiotics, potentially lifesaving, it requires guidelines and legal crash barriers to avoid existential infringement on patients' rights and autonomy. Explainability is an essential quality in this context, and we argue that performance alone is only sufficient in cases where it is not possible to provide explainability. Overall, there is a strong need for explainability with regard to the legal aspects of AI implementation, and opening the black box is essential in what will likely prove to be a watershed moment for the application of AI in medicine.
From the medical perspective, the first consideration is what distinguishes AI-based clinical decision support from established diagnostic tools such as advanced laboratory testing. This question is important because the two exhibit considerable overlap: both provide results that feed into clinical decisions, both produce documentable test results, and for both overall performance is a key issue. We also understand the inner workings of laboratory testing, as is often the case with other diagnostic tests such as imaging, so such testing would not be considered a black-box method. On the other hand, for these methods we cannot explain the result of any individual test. This makes it evident that, from a medical perspective, we need to distinguish two levels of explainability. The first level allows us to understand how the system arrives at conclusions in general: in analogy to laboratory testing, where we know which biological and biochemical reactions lead to the final results, we can provide importance rankings that explain which inputs matter for the AI-based CDSS. The second level allows us to identify which features were important for an individual prediction. Individual predictions can then be safe-checked for patterns that might indicate a false prediction, e.g., an unusual feature distribution in an out-of-spec case. This second-level explainability will regularly be available for AI-based CDSSs but not for other diagnostic tests. This also has implications for the presentation of explainability results to doctors (and patients). Depending on the clinical use case and the risk attributed to it, first-level explanations might be sufficient, whereas other use cases will regularly require second-level explanations to safeguard patients.
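The two levels of explainability can be made concrete with a small sketch. Assuming a purely illustrative linear risk score (all feature names, weights, population statistics, and patient values below are invented), the first level is a global importance ranking over inputs, while the second level attributes one individual prediction to each input's contribution for that particular patient:

```python
import numpy as np

# Hypothetical linear risk score over three named inputs; weights and
# values are illustrative only, not taken from any validated model.
features = ["age", "blood_pressure", "smoker"]
w = np.array([0.03, 0.02, 0.80])
mean = np.array([60.0, 130.0, 0.2])      # assumed population averages
spread = np.array([12.0, 15.0, 0.5])     # assumed typical variation

# Level 1 (global): which inputs matter for the model overall,
# here approximated as weight scaled by the input's typical variation.
global_importance = np.abs(w) * spread
print(dict(zip(features, global_importance)))

# Level 2 (local): which inputs drove THIS prediction, expressed as
# each feature's contribution relative to the average patient.
patient = np.array([58.0, 170.0, 0.0])
local_contrib = w * (patient - mean)
print(dict(zip(features, local_contrib)))
```

Note how the two levels can disagree: globally, smoking status ranks as the most important input, yet for this particular (invented) patient the elevated blood pressure dominates the individual prediction, which is exactly the kind of case-level insight laboratory tests cannot offer.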
To date, clinical validation is the first widely discussed requirement for a medical AI system; explainability is often only a secondary consideration. The reason seems obvious: decision support systems in medicine, whether AI-powered or not, have to undergo a rigorous validation process to meet regulatory standards and achieve medical certification. Once this process is completed successfully, there is proof that the system can perform in the highly heterogeneous real-world clinical setting. Here, however, it is important to understand how clinical validation is measured. A common performance indicator is prediction performance, often referred to as prediction accuracy. Different measures of prediction accuracy exist, tailored to certain use cases, but their common characteristic is that they reflect the prediction quality, and thus the general clinical usefulness, of a model. One of the main goals of model development is therefore to increase prediction performance and provide low error rates. And, indeed, AI-powered systems have been shown to produce overall lower error rates than traditional methods.
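As a small illustration of such measures, the following snippet computes three common ones from invented confusion-matrix counts (the numbers are purely illustrative):

```python
# Common prediction-performance measures for a binary CDSS output,
# computed from assumed confusion-matrix counts (not real data).
tp, fp, tn, fn = 90, 15, 880, 15

accuracy    = (tp + tn) / (tp + fp + tn + fn)   # overall agreement
sensitivity = tp / (tp + fn)                    # true-positive rate (recall)
specificity = tn / (tn + fp)                    # true-negative rate

print(accuracy, sensitivity, specificity)
```

In this invented screening-style example, overall accuracy looks excellent while sensitivity lags behind, which is why a single aggregate figure is a poor proxy for clinical usefulness and why measures are tailored to the use case.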
Despite all efforts, however, AI systems cannot provide perfect accuracy, owing to different sources of error. For one, because of naturally imperfect datasets in medicine (e.g., due to noise or recording errors), it is essentially impossible to develop a model without any errors. These are random errors: there will always be certain false positive and false negative predictions. For another, a particularly important source of error is AI bias. AI bias leads to systematic errors, i.e., a systematic deviation from the expected prediction behavior of the AI tool. Ideally, the data used for training fully represent the population in which the AI tool is later applied. A major goal of AI product development in healthcare is to approximate this ideal state through thorough clinical validation and development on heterogeneous data sources. While this ensures that AI bias can be reduced to a minimum, it remains almost impossible to generate AI tools without any trace of bias. If bias is present, there will be prediction errors in patients not represented by the training sample. Taken together, random and systematic errors sum to the total number of errors that physicians and patients will encounter in the clinical setting, even when a fully validated, high-performing AI system is used.
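The distinction between random and systematic error can be sketched numerically. In the hypothetical scenario below, a simple threshold "model" is calibrated for one measurement setup; applied to a population whose device reads systematically high (a shift absent from the training data), bias errors pile on top of the irreducible noise-driven errors. All distributions and numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Ground truth: disease when the true biomarker exceeds 1.0.
true_val = rng.normal(size=n)
disease = (true_val > 1.0).astype(int)

# Measurement noise causes irreducible random error.
noise = rng.normal(scale=0.3, size=n)

# Population A: measurement conditions match the training data.
measured_a = true_val + noise
# Population B: the device reads 0.7 too high -- a systematic shift
# that was never seen during training.
measured_b = true_val + noise + 0.7

def model(measured):
    # "Model": a cutoff calibrated on population A.
    return (measured > 1.0).astype(int)

err_a = (model(measured_a) != disease).mean()  # random error only
err_b = (model(measured_b) != disease).mean()  # random + systematic error
print(err_a, err_b)  # err_b is far larger: bias adds to noise
```

The gap between the two error rates is exactly the systematic component that explainability checks and heterogeneous validation data aim to catch.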
This is why, from a medical point of view, not only clinical validation but also explainability plays an instrumental role in the clinical setting. Explainability enables the resolution of disagreement between an AI system and human experts, no matter on which side the error in judgment lies. It should be noted that this will succeed mostly in cases of systematic error, i.e., AI bias, rather than in cases of random error. Random errors are much harder to identify: they will likely go unnoticed when tool and physician agree, or they will lead to situations of disagreement between the two. (This situation is discussed in the "Ethical implications" subsection.) Explainability results are usually presented visually or through natural language explanations; both show clinicians how different factors contributed to the final recommendation. In other words, explainability can assist clinicians in evaluating the recommendations provided by a system against their experience and clinical judgment. This allows them to make an informed decision on whether or not to rely on the system's recommendations and can, consequently, strengthen their trust in the system. Particularly in cases where the CDSS produces recommendations strongly out of line with a clinician's expectations, explainability allows verification of whether the parameters taken into account by the system make sense from a clinical point of view. By laying open the inner workings of the CDSS, explainability can thus assist clinicians in identifying false positives and false negatives more easily. As clinicians identify instances in which the system performs poorly, they can report these cases back to developers to foster quality assurance and product improvement. Given these considerations, explainability may be a key driver for the uptake of AI-driven CDSSs in clinical practice, as trust in these systems is not yet established.
Here, it is important to note that any use of AI-based CDSS may influence a physician in reaching a decision. It will, therefore, be of critical importance to establish transparent documentation on how recommendations were derived.
Looking at the issue of explainability from the patient perspective raises the question of whether the use of AI-powered decision aids is compatible with the inherent values of patient-centered care. Patient-centered care aims to be responsive to and respectful of individual patients’ values and needs. It considers patients as active partners in the care process, emphasizing their right to choice and control over medical decisions. A key component of patient-centered care is shared decision-making aimed at identifying the treatment best suited to the individual patients’ situation. It involves an open conversation between the patient and the clinician, where the clinician informs the patient about the potential risks and benefits of available courses of action and the patient discusses their values and priorities.
Several evidence-based tools have been developed to facilitate shared decision-making, among them so-called conversation aids. Unlike patient decision aids (which patients use in preparation for the clinical encounter), conversation aids are designed for use within the clinical encounter to guide the patient and clinician through the shared decision-making process. They incorporate established medical facts about the patient's condition and, by synthesizing available information, can help patients understand their individual risks and outcomes, explore the available options, and determine which course of action best fits their goals and priorities. So, what if individual risk were calculated not with established risk prediction models but with a validated, yet not explainable, data-driven approach? Would it make a difference from the patient's perspective? Addressing these questions, it was recently argued that so-called "black-box medicine" conflicts with core ideals of patient-centered medicine: since clinicians are no longer able to fully comprehend the inner workings and calculations of the decision aid, they cannot explain to the patient how certain outcomes or recommendations were derived.
Explainability can address this issue by providing clinicians and patients with a personalized conversation aid that is based on the patient’s individual characteristics and risk factors. By simulating the impact of different treatment or lifestyle interventions, an explainable AI decision aid could help to raise patients’ choice awareness and support clinicians in eliciting patient values and preferences. As described previously, explainability provides a visual representation or natural language explanation of how different factors contributed to the final risk assessment. Yet, to interpret system-derived explanations and probabilities, patients rely on the clinician’s ability to understand and convey these explanations in a way that is accurate and understandable. If used appropriately, explainable AI decision support systems may not only contribute to patients feeling more knowledgeable and better informed but could also promote more accurate risk perceptions. This may, in turn, boost patients’ motivation to engage in shared decision-making and to act upon risk-relevant information.
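The kind of intervention simulation described above can be sketched with a toy logistic risk model; the coefficients, intercept, and patient profile below are invented for illustration and not taken from any validated model:

```python
import math

# Hypothetical logistic risk model used as a "what-if" conversation aid.
coef = {"age": 0.04, "systolic_bp": 0.02, "smoker": 1.1}
intercept = -7.0

def risk(patient):
    # Predicted probability of the (unspecified) adverse outcome.
    z = intercept + sum(coef[k] * v for k, v in patient.items())
    return 1.0 / (1.0 + math.exp(-z))

patient = {"age": 62, "systolic_bp": 150, "smoker": 1}

# Simulate lifestyle or treatment interventions by editing one input.
quit_smoking = {**patient, "smoker": 0}
lower_bp = {**patient, "systolic_bp": 130}

print(round(risk(patient), 3))       # baseline risk
print(round(risk(quit_smoking), 3))  # risk if the patient quits smoking
print(round(risk(lower_bp), 3))      # risk if blood pressure is reduced
```

Editing one input at a time yields a simple counterfactual explanation a clinician could walk through with the patient; in this invented example, quitting smoking lowers the predicted risk more than the blood-pressure reduction does, the sort of comparison that can anchor a values-based conversation.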
With the increasing penetration of AI-powered systems in healthcare, there is a necessity to explore the ethical issues accompanying this imminent paradigm shift. A commonly applied and well-fitting ethical framework when assessing biomedical ethical challenges comes from Beauchamp and Childress' Principles of Biomedical Ethics, which introduces four key principles: autonomy, beneficence, nonmaleficence, and justice. While principlism is not the only available bioethical framework, it is a very useful basic practical framework with high acceptance both in research and medical settings. Thus, in the following, we assess explainability with regards to the aforementioned four principles.
Concerning autonomy, explainability has implications for patients and physicians alike. One of the major safeguards of patient autonomy is informed consent, that is, an autonomous, generally written, authorization with which the patient grants a doctor permission to perform a given medical act. Proper informed consent is premised upon exhaustive and understandable information regarding the nature and risks of a medical procedure, and upon the absence of undue interference with the patient's voluntary decision to undergo it. At the moment, no ethical consensus has emerged as to whether disclosing the use of an opaque medical AI algorithm should be a mandatory requirement of informed consent. A failure to disclose the use of an opaque AI system may undermine patients' autonomy, negatively impact the doctor-patient relationship, jeopardize patients' trust, and undermine compliance with clinical recommendations. If the patient were to find out in hindsight that a clinician's recommendation was derived from an opaque AI system, this may lead the patient not only to challenge the recommendation but also to make a justified request for an explanation, which the clinician would be unable to provide in the case of an opaque system. Opaque medical AI can, therefore, represent an obstacle to the provision of accurate information and thus potentially jeopardize informed consent. Appropriate ethical and explainability standards are therefore important to safeguard the autonomy-preserving function of informed consent.
Attention should be paid to the risk that the introduction of opaque AI into medical decision making may foster paternalism by limiting opportunities for patients to express their expectations and preferences regarding medical procedures. A necessary prerequisite for shared decision making is full autonomy of the patient, but full autonomy can only be achieved if the patient is presented with a range of meaningful options to choose from. In this respect, patients' opportunities to exert their autonomy regarding medical procedures are reduced as opaque AI becomes more central to medical decision making. In particular, the challenge that arises with an opaque CDSS is that it remains unclear whether and how patient values and preferences are accounted for by the model. This state of affairs could be addressed by means of "value-flexible" AI that provides different options for the patient. We further argue that explainability is a necessary step towards value-flexible AI. The patient needs to be able to understand which variables play an important role in the inner workings of the AI system to determine, with the aid of the doctor, whether the goals and weighting of the AI system align with their values or not. For example, AI systems primed for "survival" as the outcome might not be aligned with the values of patients for whom a "reduction of suffering" is more important. Lastly, when a choice is made, patients need to be able to trust an AI system to decide with confidence and autonomy to follow its guidance. This is not possible when the AI model is opaque. Therefore, explainability is, from both the physician's and the patient's point of view, an ethical prerequisite for systems supporting critical medical decision making.
Beneficence and nonmaleficence
While the principles of beneficence and nonmaleficence are related, they nonetheless shed light on different aspects, also with regard to explainability. Beneficence urges physicians to maximize patient benefits. When applying AI-based systems, physicians are thus expected to use these tools in a manner that promotes the optimal outcome for the respective patient. Yet to provide patients with the most appropriate options to promote their health and wellbeing, physicians need to be able to use the full capabilities of the system. This implies that physicians have knowledge of the system beyond its rote application to a certain clinical use case, allowing them to reflect on the system’s output. For physicians, explainability in the form of visualizations or natural language explanations enables confident clinical decisions instead of blind trust in an automated output. They can critically assess the system-derived outcomes and make their own judgments as to whether the results seem trustworthy. This allows them to adapt predictions and recommendations to individual circumstances where necessary. As such, clinicians can not only reduce the risk of eliciting false hope or creating false despair but can also flag potentially inappropriate interventions using their clinical judgment.
This is especially important when we imagine a situation in which a physician and an AI system disagree, a situation that is not easily resolved. Fundamentally, this is a question of epistemic authority, and it is unclear how physicians should decide whether they can trust the epistemic authority of a black box model enough to defer to its decision. Grote and Berens argue that in the case of opaque AI there is not enough epistemic support for such deference. They further argue that, confronted with a black box system, clinical decision support might not enhance the capabilities of physicians but rather limit them. Here, physicians might be forced into “defensive medicine,” dogmatically following the output of the machine to avoid being questioned or held accountable. Such a situation would pose a serious threat to physician autonomy. Additionally, physicians will rarely have the time to perform an in-depth analysis of why their clinical judgment disagrees with the AI system. Thus, looking merely at a performance output is not sufficient in the clinical context. The optimal outcome for all patients can only be expected with healthcare staff who can make informed decisions about when to apply an AI-powered CDSS and how to interpret its results. It is thus hard to imagine how beneficence in the context of medical AI can be fulfilled with any “black box” application.
The need for explainability is also evident when assessing the principle of nonmaleficence in the context of medical AI. Nonmaleficence states that physicians have a fundamental duty not to harm their patients, either intentionally or through excessive or inappropriate use of medical means. Why is performance not enough? It has been argued that a black box medical AI based only on validated, maximized performance is ethically justifiable even if the causal mechanisms behind a given AI-prescribed intervention remain opaque to the clinician. Reliance on anecdotal or purely experiential evidence about the efficacy of a given treatment is indeed still quite common in medicine. Yet this is no excuse to forego explanations as a major requirement of sound clinical judgment when such an explanation is indeed possible. Recent progress in elucidating at least the principal features of AI models, while not providing full mechanistic explanations of AI decisions, creates a prima facie ethical obligation to reduce opacity and increase the interpretability of medical AI. Failure to do so would mean intentionally undermining a physician’s capacity to control for possible misclassifications of individual clinical cases due to, for instance, excessive bias or variance in training datasets. We thus conclude that, with regard to beneficence and nonmaleficence as well, explainability is a necessary characteristic of clinically applied AI systems.
Justice

The principle of justice postulates that people should have equal access to the benefits of medical progress without ethically unjustified discrimination against any particular individual or social group. Some AI systems, however, violate this principle. Recently, for example, Obermeyer et al. reported on a medical AI system discriminating against people of color. Explainability can help developers and clinicians detect and correct such biases—a major potential source of injustice—ideally at the early stage of AI development and validation, e.g., by identifying important features that indicate a bias in the model. However, for explainability to fulfill this purpose, the relevant stakeholder groups must be sensitized to the risk of bias and its potential consequences for individuals’ health and wellbeing. At times, it might be tempting to prioritize accuracy and simply refrain from investing resources into developing explainable AI. Yet to ensure that AI-powered decision support systems realize their potential, developers and clinicians need to be attentive to the potential flaws and limitations of these new tools. Thus, from the justice perspective as well, explainability becomes an ethical prerequisite for the development and application of AI-based clinical decision support.
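How inspecting a model's learned features can surface such a bias may be illustrated with a minimal, hypothetical sketch (assuming Python with NumPy and scikit-learn; the cohort, feature names, and effect sizes are invented for illustration and not taken from the Obermeyer et al. study). A classifier is trained on a cost-based proxy label that systematically understates the need of one group, and the sign of its learned weights makes the problem visible before deployment:

```python
# Hypothetical sketch: a proxy label (past healthcare cost) understates the
# need of one group, and inspecting the model's learned weights exposes the
# bias. All names and numbers are illustrative, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)          # 1 = historically under-served group
need = rng.normal(size=n)              # true (unobserved) clinical need
# Past cost is lower for the under-served group at equal clinical need.
cost = need - 1.0 * group + rng.normal(scale=0.3, size=n)
# Biased target: "high cost" used as a proxy for "high need".
y = (cost > np.median(cost)).astype(int)

X = np.column_stack([need, group])
model = LogisticRegression().fit(X, y)
weights = dict(zip(["clinical_need", "group"], model.coef_[0]))

# A clearly negative weight on `group` shows the model down-ranks that group
# independently of clinical need -- a flag for developers and clinicians.
for name, w in weights.items():
    print(f"{name}: {w:+.2f}")
```

In an opaque deployment, only the final risk scores would be visible and the systematic down-ranking of one group could go unnoticed; exposing even this simple kind of feature attribution is what allows the bias to be caught during development and validation.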
Conclusions

In this paper, we explored the role of explainable AI in clinical decision support systems from the technological, legal, medical, and patient perspectives. In doing so, we have shown that explainability is a multifaceted concept that has far-reaching implications for the various stakeholder groups involved. Medical AI poses challenges to developers, medical professionals, and legislators, as it requires a reconsideration of roles and responsibilities. Based on our analysis, we consider explainability a necessary requirement to address these challenges in a sustainable manner that is compatible with professional norms and values.
Notably, a move towards opaque algorithms in CDSSs may inadvertently lead to a revival of paternalistic concepts of care that relegate patients to passive spectators in the medical decision-making process. It might also bring forward a new type of medicine in which physicians become slaves to the tool’s output in order to avoid legal and medical repercussions. And, last but not least, opaque systems might provoke a faulty allocation of resources, violating their just distribution. In this paper, we have argued that explainability can help to ensure that patients remain at the center of care and that, together with clinicians, they can make informed and autonomous decisions about their health. Moreover, explainability can promote the just distribution of available resources.
We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health. Further work is needed to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration to tackle these challenges.
Abbreviations

AI: Artificial intelligence
ANN: Artificial neural network
CDSS: Clinical decision support system
FDA: Food and Drug Administration
GDPR: General Data Protection Regulation
ICU: Intensive care unit
MDR: Medical Device Regulation
TPLC: Total product lifecycle approach
Acknowledgements

The authors would like to thank Dr. Nora A. Tahy for review of the manuscript.
Authors’ contributions

JA: conceptualization; analysis; writing—original draft; writing—review and editing. AB: analysis; writing—original draft; writing—review and editing. EV: analysis; writing—original draft; writing—review and editing. DF: analysis; writing—original draft; writing—review and editing. VIM: conceptualization; analysis; writing—original draft; writing—review and editing. All authors read and approved the final manuscript.
Funding

This research has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 777107 (PRECISE4Q). The funding body had no role in the study design, the collection, analysis, and interpretation of the data, nor the preparation of the manuscript.
Competing interests

The authors declare no competing interests.
References

- Higgins, D.; Madai, V.I. (2020). "From Bit to Bedside: A Practical Framework for Artificial Intelligence Product Development in Healthcare". Advanced Intelligent Systems 2 (10): 2000052. doi:10.1002/aisy.202000052.
- Rudin, C. (2019). "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead". Nature Machine Intelligence 1: 206–15. doi:10.1038/s42256-019-0048-x.
- Doran, D.; Schulz, S.; Besold, T.R. (2017). "What Does Explainable AI Really Mean? A New Conceptualization of Perspectives". arXiv. https://arxiv.org/abs/1710.00794v1.
- Shortliffe E.H.; Sepúlveda, M.J. (2018). "Clinical Decision Support in the Era of Artificial Intelligence". JAMA 320 (21): 2199–2200. doi:10.1001/jama.2018.17163.
- Obermeyer, Z.; Powers, B.; Vogeli, C. et al. (2019). "Dissecting racial bias in an algorithm used to manage the health of populations". Science 366 (6464): 447–453. doi:10.1126/science.aax2342.
- Samek, W.; Montavon, G.; Vedaldi, A. et al., ed. (2019). Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer Nature. doi:10.1007/978-3-030-28954-6. ISBN 9783030289546.
- Esteva, A.; Robicquet, A.; Ramsundar, B. et al. (2019). "A guide to deep learning in healthcare". Nature Medicine 25 (1): 24–29. doi:10.1038/s41591-018-0316-z. PMID 30617335.
- Islam, S.R.; Eberle, W.; Ghafoor, S.K. (2019). "Towards Quantification of Explainability in Explainable Artificial Intelligence Methods". arXiv. https://arxiv.org/abs/1911.10104v1.
- Samek, W.; Montavon, G.; Lapuschkin, S. et al. (2020). "Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond". arXiv. https://arxiv.org/abs/2003.07631v1.
- Lapuschkin, S.; Wäldchen, S.; Binder, A. et al. (2019). "Unmasking Clever Hans predictors and assessing what machines really learn". Nature Communications 10 (1): 1096. doi:10.1038/s41467-019-08987-4. PMC6411769. PMID 30858366. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=PMC6411769.
- Zech, J.R.; Badgeley, M.A.; Liu, M. et al. (2018). "Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study". PLoS Medicine 15 (11): e1002683. doi:10.1371/journal.pmed.1002683. PMC6219764. PMID 30399157. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=PMC6219764.
- Olsen, H.P.; Slosser, J.L.; Hildebrandt, T.T. et al. (2019). "What's in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration". iCourts Working Paper Series No. 162. SSRN. doi:10.2139/ssrn.3402974.
- Hörnle, J. (2019). "Juggling more than three balls at once: multilevel jurisdictional challenges in EU Data Protection Regulation". International Journal of Law and Information Technology 27 (2): 142–170. doi:10.1093/ijlit/eaz002.
- Cohen, I.G. (2020). "Informed Consent and Medical Artificial Intelligence: What to Tell the Patient?". Georgetown Law Journal 108: 1425–69. doi:10.2139/ssrn.3529576.
- Maxwell, W.; Beaudouin, V.; Bloch, I. et al. (2020). "Identifying the 'Right' Level of Explanation in a Given Situation". CEUR Workshop Proceedings 2659: 63. doi:10.2139/ssrn.3604924.
- U.S. Food and Drug Administration (2020). "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-based Software as a Medical Device (SaMD)" (PDF). pp. 20. https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf. Retrieved 05 July 2020.
- Hacker, P.; Krestel, R.; Grundmann, S. et al. (2020). "Explainable AI under contract and tort law: Legal incentives and technical challenges". Artificial Intelligence and Law 28: 415–39. doi:10.1007/s10506-020-09260-6.
- Ferretti, A.; Schneider, M.; Blasimme, A. (2018). "Machine Learning in Medicine: Opening the New Data Protection Black Box". European Data Protection Law Review 4 (3): 320–32. doi:10.21552/edpl/2018/3/10.
- Weng, S.F.; Reps, J.; Kai, J. et al. (2017). "Can machine-learning improve cardiovascular risk prediction using routine clinical data?". PLoS One 12 (4): e0174944. doi:10.1371/journal.pone.0174944. PMC5380334. PMID 28376093. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=PMC5380334.
- Kakadiaris, I.A.; Vrigkas, M.; Yen, A.A. et al. (2018). "Machine Learning Outperforms ACC / AHA CVD Risk Calculator in MESA". Journal of the American Heart Association 7 (22): e009476. doi:10.1161/JAHA.118.009476. PMC6404456. PMID 30571498. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=PMC6404456.
- Liu, T.; Fan, W.; Wu, C. (2019). "A hybrid machine learning approach to cerebral stroke prediction based on imbalanced medical dataset". Artificial Intelligence in Medicine 101: 101723. doi:10.1016/j.artmed.2019.101723. PMID 31813482.
- Cutillo, C.M.; Sharma, K.R.; Foschini, L. et al. (2020). "Machine intelligence in healthcare-perspectives on trustworthiness, explainability, usability, and transparency". NPJ Digital Medicine 3: 47. doi:10.1038/s41746-020-0254-2. PMC7099019. PMID 32258429. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=PMC7099019.
- Tonekaboni, S.; Joshi, S.; McCradden, M.D. et al. (2019). "What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use". arXiv. https://arxiv.org/abs/1905.05134v2.
- Institute of Medicine (US) Committee on Quality of Health Care in America (2001). Crossing the Quality Chasm: A New Health System for the 21st Century. National Academies Press. doi:10.17226/10027. ISBN 0309072808.
- Kunneman, M.; Montori, V.M.; Castaneda-Guarderas, A. et al. (2016). "What Is Shared Decision Making? (and What It Is Not)". Academic Emergency Medicine 23 (12): 1320–24. doi:10.1111/acem.13065. PMID 27770514.
- O'Neill, E.S.; Grande, S.W.; Sherman, A. et al. (2017). "Availability of patient decision aids for stroke prevention in atrial fibrillation: A systematic review". American Heart Journal 191: 1–11. doi:10.1016/j.ahj.2017.05.014. PMID 28888264.
- Dobler, C.C.; Sanchez, M.; Gionfriddo, M.R. et al. (2019). "Impact of decision aids used during clinical encounters on clinician outcomes and consultation length: A systematic review". BMJ Quality and Safety 28 (6): 499–510. doi:10.1136/bmjqs-2018-008022. PMC6561726. PMID 30301874. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=PMC6561726.
- Noseworthy, P.A.; Kaufman, E.S.; Chen, L.Y. et al. (2019). "Subclinical and Device-Detected Atrial Fibrillation: Pondering the Knowledge Gap: A Scientific Statement From the American Heart Association". Circulation 140 (25): e944–e963. doi:10.1161/CIR.0000000000000740. PMID 31694402.
- Spencer-Bonilla, G.; Thota, A.; Organick, P. et al. (2020). "Normalization of a conversation tool to promote shared decision making about anticoagulation in patients with atrial fibrillation within a practical randomized trial of its effectiveness: A cross-sectional study". Trials 21 (1): 395. doi:10.1186/s13063-020-04305-2. PMC7218532. PMID 32398149. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=PMC7218532.
- Bonner, C.; Bell, K.; Jansen, J. et al. (2018). "Should heart age calculators be used alongside absolute cardiovascular disease risk assessment?". BMC Cardiovascular Disorders 18 (1): 19. doi:10.1186/s12872-018-0760-1. PMC5801811. PMID 29409444. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=PMC5801811.
- Bjerring, J.C.; Busch, J. (2020). "Artificial Intelligence and Patient-Centered Decision-Making". Philosophy & Technology. doi:10.1007/s13347-019-00391-6.
- Politi, M.C.; Dizon, D.S.; Frosch, D.L. et al. (2013). "Importance of clarifying patients' desired role in shared decision making to match their level of engagement with their preferences". BMJ 347: f7066. doi:10.1136/bmj.f7066. PMID 24297974.
- Stacey, D.; Légaré, F.; Lewis, K. et al. (2017). "Decision aids for people facing health treatment or screening decisions". Cochrane Database of Systematic Reviews 4 (4): CD001431. doi:10.1002/14651858.CD001431.pub5. PMC6478132. PMID 28402085. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=PMC6478132.
- Beauchamp, T.L.; Childress, J.F. (2008). Principles of Biomedical Ethics (6th ed.). Oxford University Press. ISBN 9780195335705.
- Gillon, R. (2015). "Defending the four principles approach as a good basis for good medical practice and therefore for good medical ethics". Journal of Medical Ethics 41 (1): 111–6. doi:10.1136/medethics-2014-102282. PMID 25516950.
- Mittelstadt, B. (2019). "Principles alone cannot guarantee ethical AI". Nature Machine Intelligence 1: 501–07. doi:10.1038/s42256-019-0114-4.
- Faden, R.R.; Beauchamp, T.L.; King, N.M.P. (1986). A history and theory of informed consent. Oxford University Press. ISBN 9781423763529.
- Raz, J. (1988). The Morality of Freedom. Oxford University Press. ISBN 9780198248071.
- McDougall, R.J. (2019). "Computer knows best? The need for value-flexibility in medical AI". Journal of Medical Ethics 45 (3): 156–60. doi:10.1136/medethics-2018-105118. PMID 30467198.
- Grote, T.; Berens, P. (2020). "On the ethics of algorithmic decision-making in healthcare". Journal of Medical Ethics 46 (3): 205–11. doi:10.1136/medethics-2019-105586. PMC7042960. PMID 31748206. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=PMC7042960.
- Beil, M.; Proft, I.; van Heerden, D. et al. (2019). "Ethical considerations about artificial intelligence for prognostication in intensive care". Intensive Care Medicine Experimental 7 (1): 70. doi:10.1186/s40635-019-0286-6. PMC6904702. PMID 31823128. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=PMC6904702.
- London, A.J. (2019). "Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability". Hastings Center Report 49 (1): 15–21. doi:10.1002/hast.973. PMID 30790315.