Difference between revisions of "Template:Article of the week"

From LIMSWiki
(Updated article of the week text)

(50 intermediate revisions by the same user not shown)
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig3 Snyder PLOSDigHlth22 1-11.png|240px]]</div>
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:From months to minutes: Creating Hyperion, a novel data management system expediting data insights for oncology research and patient care|From months to minutes: Creating Hyperion, a novel data management system expediting data insights for oncology research and patient care]]"'''
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''


Ensuring timely access to accurate data is critical for the functioning of a [[cancer]] center. Despite overlapping data needs, data are often fragmented and sequestered across multiple systems (such as the [[electronic health record]] [EHR], state and federal registries, and research [[database]]s), creating high barriers to data access for clinicians, researchers, administrators, quality officers, and patients. The creation of [[System integration|integrated data systems]] also faces technical, leadership, cost, and human resource barriers, among others. The University of Rochester's James P. Wilmot Cancer Institute (WCI) hired a small team of individuals with both technical and clinical expertise to develop a custom [[Information management|data management]] software platform—Hyperion—addressing five challenges: lowering the skill level required to maintain the system, reducing costs, allowing users to access data autonomously, optimizing [[Information security|data security]] and utilization, and shifting technological team structure to encourage rapid innovation ... ('''[[Journal:From months to minutes: Creating Hyperion, a novel data management system expediting data insights for oncology research and patient care|Full article...]]''')<br />
The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
''Recently featured'':
{{flowlist |
* [[Journal:Health data privacy through homomorphic encryption and distributed ledger computing: An ethical-legal qualitative expert assessment study|Health data privacy through homomorphic encryption and distributed ledger computing: An ethical-legal qualitative expert assessment study]]
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:Avoidance of operational sampling errors in drinking water analysis|Avoidance of operational sampling errors in drinking water analysis]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:ISO/IEC 17025: History and introduction of concepts|ISO/IEC 17025: History and introduction of concepts]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}

Latest revision as of 15:26, 20 May 2024
