Difference between revisions of "Template:Article of the week"

From LIMSWiki
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''


The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
''Recently featured'':
{{flowlist |
* [[Journal:ISO/IEC 17025: History and introduction of concepts|ISO/IEC 17025: History and introduction of concepts]]
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:Practical considerations for laboratories: Implementing a holistic quality management system|Practical considerations for laboratories: Implementing a holistic quality management system]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:Precision nutrition: Maintaining scientific integrity while realizing market potential|Precision nutrition: Maintaining scientific integrity while realizing market potential]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}

Revision as of 15:26, 20 May 2024
