Template:Article of the week

<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig2 Kindler F1000Res2017 5.gif|240px]]</div>
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Moving ERP systems to the cloud: Data security issues|Moving ERP systems to the cloud: Data security issues]]"'''
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''


The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
<br />
''Recently featured'':
{{flowlist |
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}
