Journal:Evaluating health information systems using ontologies


Full article title: Evaluating health information systems using ontologies
Journal: JMIR Medical Informatics
Author(s): Eivazzadeh, Shahryar; Anderberg, Peter; Larsson, Tobias C.; Fricker, Samuel A.; Berglund, Johan
Author affiliation(s): Blekinge Institute of Technology; University of Applied Sciences and Arts Northwestern Switzerland
Primary contact: Email: shahryar.eivazzadeh [at] bth.se; Phone: 46 765628829
Editors: Eysenbach, G.
Year published: 2016
Volume and issue: 4 (2)
Page(s): e20
DOI: 10.2196/medinform.5185
ISSN: 2291-9694
Distribution license: Creative Commons Attribution 2.0
Website: http://medinform.jmir.org/2016/2/e20/
Download: http://medinform.jmir.org/2016/2/e20/pdf (PDF)

Abstract

Background: There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems.

Objective: The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems — whether similar or heterogeneous — by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework.

Methods: On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of seven cloud-based eHealth applications that were developed and deployed across European Union countries.
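
To make the organize-unify-aggregate idea more concrete, the following is a minimal sketch in Python. It is not taken from the paper, and all class, function, and attribute names are illustrative assumptions. It arranges quality attributes from a few hypothetical applications into a small tree and then extracts evaluation aspects subject to a cap on how many aspects may be measured, loosely mirroring the practicalities mentioned above.

```python
# Illustrative sketch only: a toy tree of quality attributes, loosely in the
# spirit of UVON's "organize, unify, aggregate" steps. All names are invented.

class Node:
    def __init__(self, name, children=None, sources=None):
        self.name = name                    # quality attribute / aspect name
        self.children = children or []      # more specific attributes
        self.sources = set(sources or [])   # systems that declared this attribute

    def coverage(self):
        """Return the set of systems contributing to this node or its subtree."""
        covered = set(self.sources)
        for child in self.children:
            covered |= child.coverage()
        return covered

def extract_aspects(root, max_aspects):
    """Walk the tree breadth-first and return at most `max_aspects` nodes,
    refining into more specific aspects only while the cap allows it."""
    frontier, aspects = [root], []
    while frontier and len(aspects) + len(frontier) <= max_aspects:
        node = frontier.pop(0)
        if node.children and len(aspects) + len(frontier) + len(node.children) <= max_aspects:
            frontier.extend(node.children)  # refine into more specific aspects
        else:
            aspects.append(node)            # keep at current degree of specificity
    return aspects + frontier

# Quality attributes elicited from three hypothetical eHealth applications
usability = Node("usability", children=[
    Node("ease of learning", sources={"app A"}),
    Node("ease of use", sources={"app B", "app C"}),
])
efficiency = Node("efficiency", sources={"app A", "app C"})
root = Node("quality", children=[usability, efficiency])

for aspect in extract_aspects(root, max_aspects=3):
    print(aspect.name, sorted(aspect.coverage()))
```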

Results: The relevance of the evaluation aspects created by the UVON method for the FI-STAR project was validated by the corresponding stakeholders of each case. These evaluation aspects were extracted from a UVON-generated ontology structure that reflects both the internally declared required quality attributes in the seven eHealth applications of the FI-STAR project and the evaluation aspects recommended by the Model for ASsessment of Telemedicine applications (MAST) evaluation framework. The extracted evaluation aspects were used to create questionnaires (for the corresponding patients and health professionals) to evaluate each individual case and the whole of the FI-STAR project.

Conclusions: The UVON method can provide a relevant set of evaluation aspects for a heterogeneous set of health information systems by organizing, unifying, and aggregating the quality attributes through ontological structures. Those quality attributes can be either suggested by evaluation models or elicited from the stakeholders of those systems in the form of system requirements. The method continues to be systematic, context-sensitive, and relevant across a heterogeneous set of health information systems.

Keywords: health information systems; ontologies; evaluation; technology assessment; biomedical

Introduction

In one aspect at least, the evaluation of health information systems matches well with their implementation: they both fail very often.[1][2][3] Consequently, in the absence of an evaluation that could deliver insight about the impacts, an implementation cannot gain the necessary accreditation to join the club of successful implementations. Beyond the reports in the literature on the frequent accounts of this kind of failure[3], the reported gaps in the literature[4], and newly emerging papers that introduce new ways of doing health information system evaluation[5], including this paper, can be interpreted as a supporting indicator that the attrition war on the complexity and failure-proneness of health information systems is still ongoing.[6] Doing battle with the complexity and failure-proneness of evaluation are models, methods, and frameworks that try to address what to evaluate, how to evaluate, or how to report the result of an evaluation. On this front, this paper tries to contribute to the answer of what to evaluate.

Standing as a cornerstone for evaluation is our interpretation of what things constitute success in health information systems. A body of literature has developed concerning the definition and criteria of a successful health technology, in which the criteria for success go beyond the functionalities of the system.[7][8] Models similar to the Technology Acceptance Model (TAM), when applied to the health technology context, define this success as the end-users’ acceptance of a health technology system.[9] The success of a system, and hence the acceptance of a health information system, can be considered the use of that system when using it is voluntary, or it can be considered the overall user acceptance when using it is mandatory.[10][11]

To map the definition of success of health information systems onto real-world cases, certain evaluation frameworks have emerged.[12][6] These frameworks, with their models, methods, taxonomies, and guidelines, are intended to capture parts of our knowledge about health information systems. This knowledge enables us to evaluate those systems, and it allows for the enlisting and highlighting of the elements of evaluation processes that are more effective, more efficient, or less prone to failure. Evaluation frameworks, specifically in their summative approach, might address what to evaluate, when to evaluate, or how to evaluate.[6] These frameworks might also elaborate on evaluation design, the way to measure the evaluation aspects, or how to compile, interpret, and report the results.[13]

Evaluation frameworks offer a wide range of components for designing, implementing, and reporting an evaluation, among which are suggestions or guidelines for finding out the answer to "what to evaluate." The answer to what to evaluate can range from the impact on structural or procedural qualities to more direct outcomes such as the overall impact on patient care.[14] For example, in the STARE-HI statement, which provides guidelines for the components of a final evaluation report of health informatics, the "outcome measures or evaluation criteria" parallel the what to evaluate question.[13]

To identify evaluation aspects, evaluation frameworks can take two approaches: top-down or bottom-up. Frameworks that take a top-down approach try to specify the evaluation aspects through instantiating a model in the context of an evaluation case. Frameworks that focus on finding, selecting, and aggregating evaluation aspects through interacting with users, that is, so-called user-centered frameworks, take a bottom-up approach.

In the model-based category, TAM and TAM2 have wide application in different disciplines including health care.[7] Beginning from a unique dimension of behavioral intention to use (acceptance), as a determinant of success or failure, the models go on to expand it to perceived usefulness and perceived ease of use[15][7], where these two latter dimensions can become the basic constructs of the evaluation aspects. The Unified Theory of Acceptance and Use of Technology (UTAUT) framework introduces 4 other determinants: performance expectancy, effort expectancy, social influence, and facilitating conditions.[7] Of these, the first two can become basic elements for evaluation aspects, but the last two might need more adaptation to be considered as aspects of evaluation for a health information system.

Some model-based frameworks extend further by taking into consideration the relations between the elements in the model. The Fit between Individuals, Task and Technology model includes the "task" element beside the "technology" and "individual" elements. It then goes on to create a triangle of "fitting" relations between these three elements. In this triangle, each of the elements or the interaction between each pair of elements is a determinant of success or failure[11]; therefore, each of those six can construct an aspect for evaluation. The Human, Organization, and Technology Fit (HOT-fit) model builds upon the DeLone and McLean Information Systems Success Model[16] and extends further by including the "organization" element beside the "technology" and "human" elements.[5] This model also creates a triangle of "fitting" relations between those three elements.
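
Read concretely, the FITT triangle yields six candidate evaluation aspects: the three elements themselves and the three pairwise "fit" relations between them. The short Python fragment below simply enumerates that combination; it is an illustration of the counting, not tooling from any of the cited frameworks.

```python
from itertools import combinations

# The three FITT elements
elements = ["individual", "task", "technology"]

# Each element on its own is a candidate evaluation aspect...
aspects = list(elements)
# ...and so is the "fit" between every pair of elements.
aspects += [f"fit({a}, {b})" for a, b in combinations(elements, 2)]

print(aspects)
# ['individual', 'task', 'technology',
#  'fit(individual, task)', 'fit(individual, technology)', 'fit(task, technology)']
```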

Outcome-based evaluation models, such as the Health IT Evaluation Toolkit provided by the Agency for Healthcare Research and Quality, consider very specific measures for evaluation. For example, in the previously mentioned toolkit, measures are grouped in domains, such as "efficiency," and there are suggestions or examples for possible measures for each domain, such as "percent of practices or patient units that have gone paperless."[17]
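
Structurally, such outcome-based toolkits amount to a mapping from measure domains to candidate measures. A hypothetical fragment is sketched below; only the quoted "efficiency" measure comes from the toolkit, and the other entries are invented placeholders.

```python
# Hypothetical catalogue structure for an outcome-based toolkit: domains map to
# candidate measures. Only the "efficiency" entry quotes the text above; the
# second domain and its measure are illustrative placeholders.
measure_domains = {
    "efficiency": [
        "percent of practices or patient units that have gone paperless",
    ],
    "example domain (placeholder)": [
        "example measure (placeholder)",
    ],
}

for domain, measures in measure_domains.items():
    for measure in measures:
        print(f"{domain}: {measure}")
```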

In contrast to model-based approaches, bottom-up approaches are less detailed about the evaluation aspects landscape; instead, they form this landscape by what they elicit from stakeholders. Requirement engineering, as a practice in the system engineering and software engineering disciplines, is expected to capture and document, in a systematic way, user needs for a to-be-produced system.[18] The requirements specified by requirement documents, as a reflection of user needs, determine to a considerable extent what things need to be evaluated at the end of the system deployment and usage phase, in a summative evaluation approach. Some requirement engineering strategies apply generic patterns and models to extract requirements[18], thereby showing some similarity, in this regard, to model-based methods.


References

  1. Littlejohns, P.; Wyatt, J.C.; Garvican, L. (2003). "Evaluating computerised health information systems: Hard lessons still to be learnt". BMJ 326 (7394): 860–3. doi:10.1136/bmj.326.7394.860. PMC 153476. PMID 12702622. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC153476.
  2. Kreps, D.; Richardson, H. (2007). "IS Success and Failure — The Problem of Scale". The Political Quarterly 78 (3): 439–46. doi:10.1111/j.1467-923X.2007.00871.x.
  3. Greenhalgh, T.; Russell, J. (2010). "Why do evaluations of eHealth programs fail? An alternative set of guiding principles". PLoS Medicine 7 (11): e1000360. doi:10.1371/journal.pmed.1000360. PMC 2970573. PMID 21072245. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2970573.
  4. Chaudhry, B.; Wang, J.; Wu, S. et al. (2006). "Systematic review: Impact of health information technology on quality, efficiency, and costs of medical care". Annals of Internal Medicine 144 (10): 742–52. doi:10.7326/0003-4819-144-10-200605160-00125. PMID 16702590.
  5. Yusof, M.M.; Kuljis, J.; Papazafeiropoulou, A.; Stergioulas, L.K. (2008). "An evaluation framework for health information systems: Human, organization and technology-fit factors (HOT-fit)". International Journal of Medical Informatics 77 (6): 386–98. doi:10.1016/j.ijmedinf.2007.08.011. PMID 17964851.
  6. Yusof, M.M.; Papazafeiropoulou, A.; Paul, R.J.; Stergioulas, L.K. (2008). "Investigating evaluation frameworks for health information systems". International Journal of Medical Informatics 77 (6): 377–85. doi:10.1016/j.ijmedinf.2007.08.004. PMID 17904898.
  7. Holden, R.J.; Karsh, B.T. (2010). "The technology acceptance model: Its past and its future in health care". Journal of Biomedical Informatics 43 (1): 159–72. doi:10.1016/j.jbi.2009.07.002. PMC 2814963. PMID 19615467. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2814963.
  8. Berg, M. (2001). "Implementing information systems in health care organizations: Myths and challenges". International Journal of Medical Informatics 64 (2–3): 143–56. doi:10.1016/S1386-5056(01)00200-3. PMID 11734382.
  9. Hu, P.J.; Chau, P.Y.K.; Liu Sheng, O.R.; Tam, K.Y. (1999). "Examining the Technology Acceptance Model Using Physician Acceptance of Telemedicine Technology". Journal of Management Information Systems 16 (2): 91–112. doi:10.1080/07421222.1999.11518247.
  10. Goodhue, D.L.; Thompson, R.L. (1995). "Task-Technology Fit and Individual Performance". MIS Quarterly 19 (2): 213–236. doi:10.2307/249689.
  11. Ammenwerth, E.; Iller, C.; Mahler, C. (2006). "IT-adoption and the interaction of task, technology and individuals: A fit framework and a case study". BMC Medical Informatics and Decision Making 6: 3. doi:10.1186/1472-6947-6-3. PMC 1352353. PMID 16401336. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1352353.
  12. Ekeland, A.G.; Bowes, A.; Flottorp, S. (2012). "Methodologies for assessing telemedicine: A systematic review of reviews". International Journal of Medical Informatics 81 (1): 1–11. doi:10.1016/j.ijmedinf.2011.10.009. PMID 22104370.
  13. Talmon, J.; Ammenwerth, E.; Brender, J. et al. (2009). "STARE-HI—Statement on reporting of evaluation studies in Health Informatics". International Journal of Medical Informatics 78 (1): 1–9. doi:10.1016/j.ijmedinf.2008.09.002. PMID 18930696.
  14. Ammenwerth, E.; Brender, J.; Nykänen, P. et al. (2004). "Visions and strategies to improve evaluation of health information systems: Reflections and lessons based on the HIS-EVAL workshop in Innsbruck". International Journal of Medical Informatics 73 (6): 479–91. doi:10.1016/j.ijmedinf.2004.04.004. PMID 15171977.
  15. Davis, F.D. (1989). "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology". MIS Quarterly 13 (3): 319–340. doi:10.2307/249008.
  16. DeLone, W.H.; McLean, E.R. (2004). "Measuring e-Commerce Success: Applying the DeLone & McLean Information Systems Success Model". International Journal of Electronic Commerce 9 (1): 31–47. doi:10.1080/10864415.2004.11044317.
  17. Cusack, C.M.; Byrne, C.M.; Hook, J.M. et al. (June 2009). "Health Information Technology Evaluation Toolkit: 2009 Update" (PDF). Agency for Healthcare Research and Quality, HHS. https://healthit.ahrq.gov/sites/default/files/docs/page/health-information-technology-evaluation-toolkit-2009-update.pdf. Retrieved 01 April 2016.
  18. Cheng, B.H.C.; Atlee, J.M. (2007). "Research Directions in Requirements Engineering". FOSE '07: Future of Software Engineering: 285–383. doi:10.1109/FOSE.2007.17.

Abbreviations

EU: European Union

FI: Future Internet

FI-STAR: Future Internet Social and Technological Alignment Research

FI-PPP: Future Internet Public-Private Partnership Programme

FITT: Fit between Individuals, Task and Technology

HOT-fit: Human, Organization, and Technology Fit

INAHTA: International Network of Agencies for Health Technology Assessment

MAST: Model for Assessment of Telemedicine applications

OWL: Web Ontology Language

STARE-HI: Statement on the Reporting of Evaluation studies in Health Informatics

TAM: Technology Acceptance Model

TAM2: Technology Acceptance Model 2

UTAUT: Unified Theory of Acceptance and Use of Technology

UVON: Unified eValuation using Ontology

Notes

This presentation is faithful to the original, with only a few minor changes to presentation. In several cases the PubMed ID was missing and was added to make the reference more useful. The URL to the Health Information Technology Evaluation Toolkit was dead and not archived; an alternative version of it was found on the AHRQ site and the URL substituted.

Per the distribution agreement, the following copyright information is also being added:

©Shahryar Eivazzadeh, Peter Anderberg, Tobias C. Larsson, Samuel A. Fricker, Johan Berglund. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 16.06.2016.