Template:Article of the week

<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Development of a smart laboratory information management system: A case study of NM-AIST Arusha of Tanzania|Development of a smart laboratory information management system: A case study of NM-AIST Arusha of Tanzania]]"'''
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''


The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
<br />
''Recently featured'':
{{flowlist |
* [[Journal:Health informatics: Engaging modern healthcare units: A brief overview|Health informatics: Engaging modern healthcare units: A brief overview]]
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:A roadmap for LIMS at NIST Material Measurement Laboratory|A roadmap for LIMS at NIST Material Measurement Laboratory]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:A model for design and implementation of a laboratory information management system specific to molecular pathology laboratory operations|A model for design and implementation of a laboratory information management system specific to molecular pathology laboratory operations]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}
