Difference between revisions of "Template:Article of the week"

(Updated article of the week text)
 
(36 intermediate revisions by the same user not shown)
Older revision:

<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig3 Boobier JofChemInfoModel2023 63-10.png|240px]]</div>
'''"[[Journal:AI4Green: An open-source ELN for green and sustainable chemistry|AI4Green: An open-source ELN for green and sustainable chemistry]]"'''

This paper presents the [[Free and open-source software|free and open-source]], web-based [[electronic laboratory notebook]] (ELN) [[AI4Green]], which combines features such as data archiving, collaboration tools, and green and sustainability metrics for organic [[chemistry]]. AI4Green offers the core functionality of an ELN, namely, the ability to store reactions securely and [[Data sharing|share]] them among different members of a research team. As users plan their reactions and record them in the ELN, green and sustainable chemistry is encouraged by automatically calculating green metrics and color-coding hazards, solvents, and reaction conditions. The interface links a database constructed from data extracted from PubChem, enabling the automatic collation of [[information]] for reactions. The application’s design facilitates the development of auxiliary sustainability applications, such as our Solvent Guide module. As more reaction data are captured, subsequent work will focus on providing “intelligent” sustainability suggestions to the user. ('''[[Journal:AI4Green: An open-source ELN for green and sustainable chemistry|Full article...]]''')<br />
''Recently featured'':
{{flowlist |
* [[Journal:The modeling of laboratory information systems in higher education based on enterprise architecture planning (EAP) for optimizing monitoring and equipment maintenance|The modeling of laboratory information systems in higher education based on enterprise architecture planning (EAP) for optimizing monitoring and equipment maintenance]]
* [[Journal:Identifying risk management challenges in laboratories|Identifying risk management challenges in laboratories]]
* [[Journal:An overview of data warehouse and data lake in modern enterprise data management|An overview of data warehouse and data lake in modern enterprise data management]]
}}

Latest revision as of 15:26, 20 May 2024

<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''

The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
''Recently featured'':
{{flowlist |
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}