Difference between revisions of "Template:Article of the week"

<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Cassim AfricanJLabMed2020 9-2.jpg|240px]]</div>
'''"[[Journal:Timely delivery of laboratory efficiency information, Part I: Developing an interactive turnaround time dashboard at a high-volume laboratory|Timely delivery of laboratory efficiency information, Part I: Developing an interactive turnaround time dashboard at a high-volume laboratory]]"'''

Mean [[wikipedia:Turnaround time|turnaround time]] (TAT) [[reporting]] for testing [[Laboratory|laboratories]] in a national network is typically static, is not immediately available for meaningful corrective action, and does not allow for test-by-test or site-by-site interrogation of individual laboratory performance. The aim of this study was to develop an easy-to-use, visual dashboard that reports interactive graphical TAT data, providing a weekly snapshot of TAT efficiency. The dashboard was developed by staff from the National Priority Programme and Central Data Warehouse of the National Health Laboratory Service in Johannesburg, South Africa, during 2018, and the steps required to develop it were summarized in a flowchart. To illustrate the dashboard, one week of data from a busy laboratory for a specific set of tests was analyzed using annual performance plan TAT cutoffs. Data were extracted and prepared to deliver an aggregate extract, with statistical measures provided, including test volumes, the global percentage of tests within TAT cutoffs, and percentile statistics. ('''[[Journal:Timely delivery of laboratory efficiency information, Part I: Developing an interactive turnaround time dashboard at a high-volume laboratory|Full article...]]''')<br />
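
The aggregate extract described above amounts to a per-test roll-up of one week of records. Below is a minimal illustrative sketch, not taken from the article, of how such a weekly summary (test volumes, the percentage of tests within the TAT cutoff, and percentile statistics) might be computed with pandas; the column names (''test_code'', ''registered_at'', ''reviewed_at'') and the cutoff values are assumptions for the example, not the authors' actual implementation.

<syntaxhighlight lang="python">
import pandas as pd

# Hypothetical annual performance plan TAT cutoffs, in hours, keyed by test code
TAT_CUTOFF_HOURS = {"CD4": 24, "CREATININE": 8, "HBA1C": 48}

def weekly_tat_summary(results: pd.DataFrame) -> pd.DataFrame:
    """Roll one week of test-level records up into per-test TAT statistics.

    Expects the hypothetical columns 'test_code', 'registered_at', and
    'reviewed_at' (registration and result-review timestamps per test).
    """
    df = results.copy()

    # Turnaround time in hours for each individual test
    df["tat_hours"] = (
        pd.to_datetime(df["reviewed_at"]) - pd.to_datetime(df["registered_at"])
    ).dt.total_seconds() / 3600

    # Flag tests that met their annual performance plan cutoff
    df["within_cutoff"] = df["tat_hours"] <= df["test_code"].map(TAT_CUTOFF_HOURS)

    # Aggregate: volumes, percentage within cutoff, and percentile statistics
    return df.groupby("test_code").agg(
        test_volume=("tat_hours", "size"),
        pct_within_cutoff=("within_cutoff", lambda s: round(100 * s.mean(), 1)),
        median_tat_hours=("tat_hours", "median"),
        p75_tat_hours=("tat_hours", lambda s: s.quantile(0.75)),
        p90_tat_hours=("tat_hours", lambda s: s.quantile(0.90)),
    )
</syntaxhighlight>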
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''

The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
<br />
''Recently featured'':
{{flowlist |
* [[Journal:Advanced engineering informatics: Philosophical and methodological foundations with examples from civil and construction engineering|Advanced engineering informatics: Philosophical and methodological foundations with examples from civil and construction engineering]]
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:Explainability for artificial intelligence in healthcare: A multidisciplinary perspective|Explainability for artificial intelligence in healthcare: A multidisciplinary perspective]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:Secure record linkage of large health data sets: Evaluation of a hybrid cloud model|Secure record linkage of large health data sets: Evaluation of a hybrid cloud model]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
 
}}
