Difference between revisions of "Template:Article of the week"

(Updated article of the week text)
<!--<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig9 Brown JMIRMedInfo2020 8-9.png|240px]]</div>//-->
'''"[[Journal:Secure record linkage of large health data sets: Evaluation of a hybrid cloud model|Secure record linkage of large health data sets: Evaluation of a hybrid cloud model]]"'''
'''"[[Journal:Explainability for artificial intelligence in healthcare: A multidisciplinary perspective|Explainability for artificial intelligence in healthcare: A multidisciplinary perspective]]"'''


Explainability is one of the most heavily debated topics when it comes to the application of [[artificial intelligence]] (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet, explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice. Taking AI-based [[clinical decision support system]]s as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using Beauchamp and Childress' ''Principles of Biomedical Ethics'' (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI. ('''[[Journal:Explainability for artificial intelligence in healthcare: A multidisciplinary perspective|Full article...]]''')<br />
<br />
<br />
''Recently featured'':
{{flowlist |
* [[Journal:Secure record linkage of large health data sets: Evaluation of a hybrid cloud model|Secure record linkage of large health data sets: Evaluation of a hybrid cloud model]]
* [[Journal:Risk assessment for scientific data|Risk assessment for scientific data]]
* [[Journal:Methods for quantification of cannabinoids: A narrative review|Methods for quantification of cannabinoids: A narrative review]]
}}
