Difference between revisions of "Template:Article of the week"

From LIMSWiki
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''


The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
''Recently featured'':
{{flowlist |
* [[Journal:Can a byte improve our bite? An analysis of digital twins in the food industry|Can a byte improve our bite? An analysis of digital twins in the food industry]]
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:Exploration of organic superionic glassy conductors by process and materials informatics with lossless graph database|Exploration of organic superionic glassy conductors by process and materials informatics with lossless graph database]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:Elegancy: Digitizing the wisdom from laboratories to the cloud with free no-code platform|Elegancy: Digitizing the wisdom from laboratories to the cloud with free no-code platform]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}

Latest revision as of 15:26, 20 May 2024
