Difference between revisions of "Template:Article of the week"

From LIMSWiki
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''


The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
<br />
''Recently featured'':
{{flowlist |
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}

Latest revision as of 15:26, 20 May 2024
