Difference between revisions of "Template:Article of the week"

(Updated for the week)
(Updated article of the week text)
 
(155 intermediate revisions by the same user not shown)
''Previous revision:''

<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig5 Jalali JofMedIntRes2019 21-2.png|240px]]</div>
'''"[[Journal:Health care and cybersecurity: Bibliometric analysis of the literature|Health care and cybersecurity: Bibliometric analysis of the literature]]"'''

Over the past decade, clinical care has become globally dependent on information technology. The [[cybersecurity]] of [[health informatics|health care information systems]] is now an essential component of safe, reliable, and effective health care delivery. The objective of this study was to provide an overview of the literature at the intersection of cybersecurity and health care delivery. A comprehensive search was conducted using PubMed and Web of Science for English-language peer-reviewed articles. We carried out chronological analysis, domain clustering analysis, and text analysis of the included articles to generate a high-level concept map composed of specific words and the connections between them. Our final sample included 472 English-language journal articles. Our review results revealed that a majority of the articles were focused on technology. Technology-focused articles made up more than half of all the clusters, whereas managerial articles accounted for only 32 percent of all clusters. ('''[[Journal:Health care and cybersecurity: Bibliometric analysis of the literature|Full article...]]''')<br />
<br />
''Recently featured'':
: ▪ [[Journal:Epidemiological data challenges: Planning for a more robust future through data standards|Epidemiological data challenges: Planning for a more robust future through data standards]]
: ▪ [[Journal:Wrangling environmental exposure data: Guidance for getting the best information from your laboratory measurements|Wrangling environmental exposure data: Guidance for getting the best information from your laboratory measurements]]
: ▪ [[Journal:One tool to find them all: A case of data integration and querying in a distributed LIMS platform|One tool to find them all: A case of data integration and querying in a distributed LIMS platform]]

''Latest revision (as of 15:26, 20 May 2024):''

<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''

The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
<br />
''Recently featured'':
{{flowlist |
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}
