Difference between revisions of "Template:Article of the week"

<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 He IntJofMedInfo2023 170.jpg|240px]]</div>
'''"[[Journal:Development and national scale implementation of an open-source electronic laboratory information system (OpenELIS) in Côte d’Ivoire: Sustainability lessons from the first 13 years|Development and national scale implementation of an open-source electronic laboratory information system (OpenELIS) in Côte d’Ivoire: Sustainability lessons from the first 13 years]]"'''

Côte d'Ivoire has a tiered [[public health laboratory]] system of nine [[Reference laboratory|reference laboratories]], 77 [[Laboratory|laboratories]] at regional and general [[hospital]]s, and 100 laboratories among 1,486 district health centers. Prior to 2009, nearly all of these laboratories used paper registers and reports to collect and report laboratory data to clinicians and national disease monitoring programs. Since 2009, the Ministry of Health (MOH) in Côte d'Ivoire has sought to implement a comprehensive set of activities aimed at strengthening the laboratory system. One of these activities is the sustainable development, expansion, and technical support of an open-source electronic [[laboratory information system]] (LIS) called [[OpenELIS]], with the long-term goal of Ivorian technical support and managerial sustainment of the system ... ('''[[Journal:Development and national scale implementation of an open-source electronic laboratory information system (OpenELIS) in Côte d’Ivoire: Sustainability lessons from the first 13 years|Full article...]]''')<br />

<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''

The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalue their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or an LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
''Recently featured'':
{{flowlist |
* [[Journal:AI4Green: An open-source ELN for green and sustainable chemistry|AI4Green: An open-source ELN for green and sustainable chemistry]]
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:The modeling of laboratory information systems in higher education based on enterprise architecture planning (EAP) for optimizing monitoring and equipment maintenance|The modeling of laboratory information systems in higher education based on enterprise architecture planning (EAP) for optimizing monitoring and equipment maintenance]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:Identifying risk management challenges in laboratories|Identifying risk management challenges in laboratories]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}
