Difference between revisions of "Template:Article of the week"

From LIMSWiki
(Updated article of the week text)

(105 intermediate revisions by the same user not shown)

Older revision:
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig2 Gonzalez-Granadillo Sensors21 21-16.png|240px]]</div>
'''"[[Journal:Automated cyber and privacy risk management toolkit|Automated cyber and privacy risk management toolkit]]"'''

Addressing [[Cybersecurity|cyber]] and [[Information privacy|privacy]] risks has never been more critical for organizations. While a number of [[risk assessment]] methodologies and software tools are available, they must most often be integrated into a holistic approach that combines several appropriate risk sources as input to risk mitigation tools. In addition, cyber risk assessment primarily investigates cyber risks as the consequence of vulnerabilities and threats affecting the assets of the investigated infrastructure. In fact, cyber risk assessment is decoupled from privacy impact assessment (PIA), which aims to detect privacy-specific threats and assess the degree of compliance with data protection legislation. Furthermore, a PIA is conducted proactively during the design phase of a system, combining processing activities and their interdependencies with the assets, vulnerabilities, real-time threats, and personally identifiable information (PII) that may occur during a system's dynamic lifecycle. ('''[[Journal:Automated cyber and privacy risk management toolkit|Full article...]]''')<br />
<br />
''Recently featured'':
{{flowlist |
* [[Journal:Design of generalized search interfaces for health informatics|Design of generalized search interfaces for health informatics]]
* [[Journal:Cybersecurity and privacy risk assessment of point-of-care systems in healthcare: A use case approach|Cybersecurity and privacy risk assessment of point-of-care systems in healthcare: A use case approach]]
* [[Journal:Fostering reproducibility, reusability, and technology transfer in health informatics|Fostering reproducibility, reusability, and technology transfer in health informatics]]
}}

Latest revision as of 15:26, 20 May 2024:

<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''

The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI), i.e., [[large language model]]s (LLMs), in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalue their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or an LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
<br />
''Recently featured'':
{{flowlist |
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}
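
The toolkit abstract above frames cyber risk as the consequence of threats and vulnerabilities acting on an infrastructure's assets. As a minimal, purely illustrative sketch of that framing (not the featured toolkit's actual model; the asset names, 1-5 scales, and values below are hypothetical), a qualitative per-asset score can be computed with the conventional formulation risk = likelihood x impact:

<syntaxhighlight lang="python">
# Illustrative sketch only -- NOT the featured toolkit's actual model.
# Asset names, 1-5 rating scales, and values are hypothetical.

assets = {
    # name: (impact, threat, vulnerability), each rated 1-5
    "patient_database": (5, 4, 3),
    "edge_gateway":     (3, 4, 4),
    "admin_laptop":     (4, 2, 2),
}

def risk_score(impact: int, threat: int, vulnerability: int) -> float:
    """Average threat and vulnerability into a likelihood, then scale by impact."""
    likelihood = (threat + vulnerability) / 2
    return likelihood * impact  # maximum possible score is 5 * 5 = 25

# Rank assets from highest to lowest risk
for name, ratings in sorted(assets.items(), key=lambda kv: -risk_score(*kv[1])):
    print(f"{name}: {risk_score(*ratings):.1f} / 25")
</syntaxhighlight>

Ranking assets by such a score is one common way a risk source can feed a mitigation tool, in the sense the abstract describes.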
