Difference between revisions of "Template:Article of the week"

From LIMSWiki
(Updated article of the week text.)
(Updated article of the week text)
 
(55 intermediate revisions by the same user not shown)
Line 1:
− <div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig2 Berciano FrontNutr2022 9.jpg|240px]]</div>
+ <div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
− '''"[[Journal:Precision nutrition: Maintaining scientific integrity while realizing market potential|Precision nutrition: Maintaining scientific integrity while realizing market potential]]"'''
+ '''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''


− Precision nutrition (PN) is an approach to developing comprehensive and dynamic [[Nutritional science|nutritional]] recommendations based on individual variables, including [[genetics]], [[microbiome]], [[Basic metabolic panel|metabolic profile]], health status, physical activity, dietary pattern, and food environment, as well as socioeconomic and psychosocial characteristics. PN can help answer the question “what should I eat to be healthy?”, recognizing that what is healthful for one individual may not be the same for another, and understanding that health and responses to diet change over time. The growth of the PN market has been driven by increasing consumer interest in individualized products and services coupled with advances in technology, analytics, and [[Omics|omic sciences]]. However, important concerns are evident regarding the adequacy of scientific substantiation supporting claims for current products and services. An additional limitation to accessing PN is the current cost of [[Medical test|diagnostic tests]] and wearable [[Medical device|devices]] ... ('''[[Journal:Precision nutrition: Maintaining scientific integrity while realizing market potential|Full article...]]''')<br />
+ The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
''Recently featured'':
{{flowlist |
− * [[Journal:Construction of control charts to help in the stability and reliability of results in an accredited water quality control laboratory|Construction of control charts to help in the stability and reliability of results in an accredited water quality control laboratory]]
+ * [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
− * [[Journal:Application of informatics in cancer research and clinical practice: Opportunities and challenges|Application of informatics in cancer research and clinical practice: Opportunities and challenges]]
+ * [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
− * [[Journal:Recommendations for achieving interoperable and shareable medical data in the USA|Recommendations for achieving interoperable and shareable medical data in the USA]]
+ * [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}

Latest revision as of 15:26, 20 May 2024
