Journal:Critical analysis of the impact of AI on the patient–physician relationship: A multi-stakeholder qualitative study

From LIMSWiki
Revision as of 00:15, 7 March 2024 by Shawndouglas (Saving and adding more.)
Full article title Critical analysis of the impact of AI on the patient–physician relationship: A multi-stakeholder qualitative study
Journal Digital Health
Author(s) Čartolovni, Anto; Malešević, Anamaria; Poslon, Luka
Author affiliation(s) Catholic University of Croatia
Primary contact Email: anto dot cartolovni at unicath dot hr
Year published 2023
Volume and issue 9
Article # 231220833
DOI 10.1177/20552076231220833
ISSN 2055-2076
Distribution license Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International
Website https://journals.sagepub.com/doi/10.1177/20552076231220833
Download https://journals.sagepub.com/doi/reader/10.1177/20552076231220833 (PDF)

Abstract

Objective: This qualitative study aims to present the aspirations, expectations, and critical analysis of the potential for artificial intelligence (AI) to transform the patient–physician relationship, according to multi-stakeholder insight.

Methods: This study was conducted from June to December 2021, using an anticipatory ethics approach and the sociology of expectations as its theoretical frameworks. It focused mainly on three groups of stakeholders, namely physicians (n = 12), patients (n = 15), and healthcare managers (n = 11), all of whom are directly affected by the adoption of AI in medicine (total n = 38).

Results: In this study, interviews were conducted with patients (40% of the sample; 15/38), physicians (31% of the sample; 12/38), and health managers (29% of the sample; 11/38). The findings highlight the following: (1) the impact of AI on fundamental aspects of the patient–physician relationship and the underlying importance of a synergistic relationship between the physician and AI; (2) the potential for AI to alleviate workload and reduce administrative burden by saving time and putting the patient at the center of the caring process; and (3) the potential risk to the holistic approach posed by neglecting humanness in healthcare.

Conclusions: This multi-stakeholder qualitative study, which focused on the micro-level of healthcare decision-making, sheds new light on the impact of AI on healthcare and the potential transformation of the patient–physician relationship. The results of the current study highlight the need to adopt a critical awareness approach to the implementation of AI in healthcare by applying critical thinking and reasoning. It is important not to rely solely upon the recommendations of AI while neglecting clinical reasoning and physicians’ knowledge of best clinical practices. Instead, it is vital that the core values of the existing patient–physician relationship—such as trust and honesty, conveyed through open and sincere communication—are preserved.

Keywords: artificial intelligence, patient-physician relationship, ethics, bioethics, qualitative research, multi-stakeholder approach

Introduction

Recent developments in large language models (LLMs) have drawn public attention to artificial intelligence (AI) development, raising many hopes among the wider public as well as healthcare professionals. After ChatGPT was launched in November 2022, producing human-like responses, it reached 100 million users within two months. [1] Many suggestions for its potential applications in healthcare have appeared on social media, ranging from using AI to write outpatient clinic letters to insurance companies, thereby saving time for the practicing physician, to offering advice to physicians on how to diagnose a patient. [2] An AI-enabled, chatbot-based symptom checker can be used as a self-triaging and patient-monitoring tool, and AI can be used to translate and explain medical notes or to present diagnoses in a patient-friendly way. [3] The introduction of ChatGPT therefore represented a potential benefit not only for healthcare professionals but also for patients themselves, particularly with the improved GPT-4. In addition to ChatGPT, various other LLMs are at different stages of development, for example, BioGPT (Microsoft Research, Redmond, WA, USA), LaMDA (Google, Mountain View, CA, USA), Sparrow (DeepMind, London, UK), PanGu-Alpha (Huawei, Shenzhen, China), OPT-IML (Meta, Menlo Park, CA, USA), and Megatron-Turing NLG (Nvidia, Santa Clara, CA, USA). [4]

However, despite the wealth of potential applications for LLMs, including cost- and time-saving benefits that can increase productivity, there has been widespread acknowledgement that the technology must be used wisely. [3] A critical awareness approach therefore relates mostly to underlying ethical issues such as transparency, accountability, and fairness. [5] Critical thinking is essential if physicians are to avoid relying blindly on the recommendations of AI algorithms without applying clinical reasoning or reviewing current best practices, which could compromise the ethical principles of beneficence and non-maleficence. [6] Moreover, when LLMs are used in the healthcare context, sensitive health information fed into the algorithmic black box may be handled without transparency regarding the ways in which commercial companies will use or store it; in other words, such information might become available to company employees or potential hackers. [4] In addition, from a public health perspective, using ChatGPT could potentially lead to an "AI-driven infodemic," producing a vast amount of scientific articles, fake news, and misinformation. [5] All of these challenges [7] therefore necessitate further regulation of LLMs in healthcare in order to minimize the potential harms and foster trust in AI among patients and healthcare providers. [1]

Interestingly, healthcare professionals have demonstrated openness and readiness to adopt generative AI, mostly because they are excessively burdened by administrative tasks [8] and are desperately seeking a practical solution. Several medical specialties have been identified as benefiting from the use of medical AI, including general practice [9], nephrology [10], nuclear medicine [11], and pathology [12], with the technology reportedly having a direct impact on physicians’ roles, responsibilities, and competencies. [12–14] Although this potential has been recognized, various studies have noted that the implementation of medical AI would bring certain challenges [15] and barriers [16], such as physicians’ trust in the AI, user-friendliness [17], and tensions between the human-centric model and the technology-centric model (that is, upskilling versus deskilling) [18], all of which will further affect the (non-)acceptance of AI-based tools. [17]

Aims

This study seeks to present the aspirations, fears, expectations, and critical analysis of the ability of AI to transform healthcare. This qualitative study therefore aims to provide multi-stakeholder insights, with a particular focus on the perspectives of patients, healthcare professionals, and managers regarding the current state of healthcare, the ways in which AI should be implemented, the expectations placed on AI, the synergistic effect between physicians and AI, and AI's impact on the patient–physician relationship. These results will help clarify questions that have been raised about openness towards embracing AI and about critical awareness of AI's potential limitations in clinical practice.

Methods

This study was conducted from June to December 2021 as a multi-stakeholder (n = 75) qualitative study. It employs two theoretical frameworks: an anticipatory ethics approach, an innovative form of ethical reasoning applied to the analysis of the potential mid- to long-term implications and outcomes of technological innovation [19], and the sociology of expectations, which focuses on the role of expectations in shaping scientific and technological change. [20,21] These frameworks underpin the design of the qualitative study, in which the interview questions were followed by two scenarios, set in 2030 and 2023, to stimulate discussion. Although both referred to the digital health context, the first scenario focused on the use of an AI-based virtual assistant, while the second focused on self-monitoring devices. This article focuses only on the first scenario (see Appendix I), as it was embedded in the clinical setting and depicts future care provision and the transformation of healthcare. The study follows the consolidated criteria for reporting qualitative research (COREQ) guidelines [22] (see Appendix II). Furthermore, it was approved by the Catholic University of Croatia Ethics Committee (approval no. 498-03-01-04/2-19-02).

Participants and recruitment

A purposeful random sampling method was employed. Participants were included if they belonged to one of the specified key stakeholder groups (physicians, patients, or hospital managers); people who were under 18 years of age or who did not fall into any of these groups were excluded. Participants were recruited using the snowballing technique until data saturation was reached, while ensuring data heterogeneity and aiming for maximum variation in variables such as stakeholder category, age, gender, and location. Participants (n = 75) were identified as stakeholders in the healthcare context: patients, physicians, IT engineers, jurists, hospital managers, and policymakers (Figure 1). An email was initially sent inviting participation in the research. Some invitees did not respond to the email, and no one who declined provided a reason. Of those who agreed to participate, some opted to postpone their interviews due to other commitments, and some interviews were ultimately not conducted.



Figure 1. Identified stakeholders in the healthcare context.

Considering this context, including the recent introduction of ChatGPT, and the aim outlined above, it was decided to focus mainly on the three groups directly affected by the adoption of AI in medicine (n = 38): physicians (n = 12), patients (n = 15), and healthcare managers (n = 11). All participation was voluntary, and prior to the interview, participants received all of the information they needed to provide informed consent.

Data collection and analysis

Semi-structured interviews were conducted by researchers experienced in qualitative research, both in person (at locations convenient for the participants or at the research group's office) and online via the Zoom platform. Only the participant and the researcher attended each interview. The initial interview guide was based on the authors’ previous desk research on recognized ethical, legal, and social issues in the development and deployment of AI in medicine. [23] It was inspired by similar studies [24–26] and was pilot-tested on a group of 23 stakeholders. The interview guide was later adapted as the study continued, to take account of emerging themes, until data saturation was reached. All interviews were recorded using a portable audio recorder and later transcribed; the average interview length was 47 minutes. Transcripts were not returned to participants for comment or correction. The transcribed interviews were entered into the NVivo qualitative data analysis software. The researchers familiarized themselves with the material by reading the transcripts and taking notes to gain deeper insights into the data. Next, a thematic analysis was conducted. [27] An open coding process was then applied to an initial subset of interviews (n = 11). Based on the initial codes, the researchers agreed on thematic categories [28], leading to the development of the final codebook, which was then used to code the remaining interviews. Finally, the researchers combined and discussed the themes for comparison and reached a consensus on how to define and use them. All interviews were analyzed in the original language (Croatian), and the quotes presented in this article have been translated into English.

Results

Participant demographics

This study focuses on the 38 conducted interviews, comprising patients (40% of the sample; 15/38), physicians (31% of the sample; 12/38), and health managers (29% of the sample; 11/38). In terms of gender, 53% of the participants were female (20/38), while 47% (18/38) were male. The participants’ ages ranged from the 18–24 age group to the 65-and-older category. Regarding geographical distribution, most respondents (74%; 28/38) hailed from urban centers, nine participants (23%; 9/38) were from the urban periphery, and one participant resided in a rural periphery (Table 1). A minority of patients (33%; 5/15) regularly used technology for health monitoring, such as applications and smart devices (e.g., smartwatches), in their daily routines.

Table 1. Participants’ socio-demographic characteristics.
Socio-demographic characteristic        Patients   Physicians   Hospital managers    Σ
Gender
  Male                                     5           7              6              18
  Female                                  10           5              5              20
  Σ                                       15          12             11              38
Age
  18–24                                    2           0              0               2
  25–34                                    4           6              0              10
  35–44                                    1           0              4               5
  45–54                                    4           4              5              13
  55–64                                    3           2              2               7
  65+                                      1           0              0               1
  Σ                                       15          12             11              38
Location
  Urban center                             8          12              8              28
  Urban periphery                          6           0              3               9
  Rural periphery                          1           0              0               1
  Σ                                       15          12             11              38
Regular use of technology for health monitoring (patients only)
  Yes                                      5
  No                                      10

Thematic analysis

References

Notes

This presentation is faithful to the original, with only a few minor changes to presentation, grammar, and punctuation. In some cases important information was missing from the references, and that information was added. The original lists references in alphabetical order; this version lists them in order of appearance, by design.