Difference between revisions of "Template:Article of the week"

(56 intermediate revisions by the same user not shown)
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 daSilva Sustain22 14-22.png|240px]]</div>
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Construction of control charts to help in the stability and reliability of results in an accredited water quality control laboratory|Construction of control charts to help in the stability and reliability of results in an accredited water quality control laboratory]]"'''
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''


Overall, [[laboratory]] water [[Quality (business)|quality]] analysis must have stability in their results, especially in laboratories accredited by [[ISO/IEC 17025]]. Accredited parameters should be strictly reliable. Using [[control chart]]s to ascertain divergences between results is thus very useful. The present work applied a methodology of [[Data analysis|analysis of results]] through control charts to accurately monitor the results for a wastewater treatment plant. The parameters analyzed were pH, biological oxygen demand for five days (BOD<sub>5</sub>), chemical oxygen demand (COD), total suspended solids (TSS), and total phosphorus (TP). The stability of the results was analyzed from the control charts and 30 analyses performed in the last 12 months. From the results, it was possible to observe whether the results were stable, according to the rehabilitation factor, which cannot exceed WN = 1.00, and the efficiency of removal of pollutants, which remained above 70% for all parameters ... ('''[[Journal:Construction of control charts to help in the stability and reliability of results in an accredited water quality control laboratory|Full article...]]''')<br />
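The teaser above names the technique (control charts and pollutant removal efficiency) without spelling out the calculations, so here is a minimal, purely illustrative sketch rather than the article's actual procedure. It assumes textbook Shewhart-style limits at the mean plus or minus three standard deviations of historical results, the standard (influent - effluent) / influent definition of removal efficiency, and entirely hypothetical COD values standing in for the 30 analyses.

<syntaxhighlight lang="python">
import statistics

def shewhart_limits(historical):
    """Center line and +/- 3-sigma limits of a textbook Shewhart individuals chart
    (an assumption for illustration; the article's exact chart rules are not given here)."""
    center = statistics.mean(historical)
    sd = statistics.stdev(historical)
    return center - 3 * sd, center, center + 3 * sd

def removal_efficiency(influent, effluent):
    """Percent removal of a pollutant across the treatment plant (standard definition)."""
    return (influent - effluent) / influent * 100

# Hypothetical COD control results (mg/L) standing in for the 30 analyses
cod_results = [118, 122, 119, 125, 121, 117, 123, 120, 126, 119,
               124, 122, 118, 121, 125, 120, 119, 123, 122, 121,
               117, 124, 120, 126, 118, 122, 121, 119, 125, 123]

lcl, cl, ucl = shewhart_limits(cod_results)
flagged = [x for x in cod_results if not lcl <= x <= ucl]
print(f"COD chart: LCL={lcl:.1f}, CL={cl:.1f}, UCL={ucl:.1f}; {len(flagged)} point(s) flagged")
print(f"COD removal efficiency: {removal_efficiency(480.0, 120.0):.0f}%")
</syntaxhighlight>

Points falling outside the limits would prompt investigation before results are reported, which is the kind of stability check the article's control charts are intended to provide.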
Latest revision as of 15:26, 20 May 2024 (Updated article of the week text):

<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''

The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
''Recently featured'':
{{flowlist |
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}
