Difference between revisions of "Template:Article of the week"

From LIMSWiki
(Updated article of the week text)
(176 intermediate revisions by the same user not shown)
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Wang BMCMedInfoDecMak2019 19-1.png|240px]]</div>
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Design and evaluation of a LIS-based autoverification system for coagulation assays in a core clinical laboratory|Design and evaluation of a LIS-based autoverification system for coagulation assays in a core clinical laboratory]]"'''
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''


An autoverification system for coagulation consists of a series of rules that allow normal data to be released without manual verification. With new advances in [[medical informatics]], the [[laboratory information system]] (LIS) has growing potential for the use of autoverification, allowing rapid and accurate verification of [[clinical laboratory]] tests. The purpose of this study was to develop a LIS-based autoverification system and evaluate its validity and efficiency.
The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
 
Autoverification decision rules—including quality control, analytical error flag, critical value, limited range check, delta check, and logical check rules, as well as the patient's historical information—were integrated into the LIS. Autoverification limit ranges were constructed based on the 5th and 95th percentiles. The four most commonly used coagulation assays—prothrombin time (PT), activated partial thromboplastin time (APTT), thrombin time (TT), and fibrinogen (FBG)—were covered by the autoverification protocols. ('''[[Journal:Design and evaluation of a LIS-based autoverification system for coagulation assays in a core clinical laboratory|Full article...]]''')<br />
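The rule types named above (critical value, limited range, and delta checks) can be illustrated with a minimal sketch; the function name, thresholds, and example PT values below are hypothetical and are not taken from the study:

```python
# Illustrative sketch of LIS-style autoverification rules for a single
# quantitative assay result. All names and thresholds are assumptions.

def autoverify(result, limit_range, critical_range, previous=None, max_delta_pct=20.0):
    """Return True if the result may be released without manual review."""
    low, high = limit_range
    crit_low, crit_high = critical_range

    # Critical value rule: never auto-release a critical result.
    if result <= crit_low or result >= crit_high:
        return False

    # Limited range check: the result must fall inside the
    # population-derived limits (e.g., 5th-95th percentile).
    if not (low <= result <= high):
        return False

    # Delta check: flag a large change from the patient's previous result.
    if previous is not None:
        delta_pct = abs(result - previous) / previous * 100.0
        if delta_pct > max_delta_pct:
            return False

    return True

# Hypothetical PT (prothrombin time, seconds) limits:
print(autoverify(12.5, limit_range=(10.0, 14.0), critical_range=(5.0, 30.0), previous=12.0))  # True
print(autoverify(12.5, limit_range=(10.0, 14.0), critical_range=(5.0, 30.0), previous=8.0))   # False (delta too large)
```

In a production LIS these rules would also consult quality control status and analytical error flags before release; the sketch covers only the numeric checks.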
<br />
''Recently featured'':
: ▪ [[Journal:CyberMaster: An expert system to guide the development of cybersecurity curricula|CyberMaster: An expert system to guide the development of cybersecurity curricula]]
: ▪ [[Journal:Costs of mandatory cannabis testing in California|Costs of mandatory cannabis testing in California]]
: ▪ [[Journal:An integrated data analytics platform|An integrated data analytics platform]]
{{flowlist |
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}

Revision as of 15:26, 20 May 2024
