Clinical decision support system

[Figure: The interconnections among how knowledge is generated and validated, how it is managed and disseminated, and how it finds its way into a clinical decision support system (CDSS).]

A clinical decision support system (CDSS) is a "computer [system] designed to impact clinician decision making about individual patients at the point in time these decisions are made."[1] As such, it can be viewed as a knowledge management tool used to deliver clinical advice for patient care based on multiple items of patient data.

Characteristics

Purpose

In the early days, CDSSs were conceived of as tools that would literally make decisions for the clinician: the clinician would input the information, wait for the CDSS to output the "right" choice, and simply act on that output. In April 1963, the forward-looking doctor Roger Truesdail imagined a 1985 in which such a process would be a reality:

The year is 1985 when a middle-aged man enters a physician's office, suffering from a critical ailment. The doctor feeds into a small electronic computer the patient's symptoms, medical history, and other pertinent data. The computer transmits the information to a giant central electronic computer in a remote city. Seconds later the computer will transmit back to the doctor the combined medical diagnosis of the world's best medical minds. The man is given the proper treatment and his life is saved. The very latest medical information, often inaccessible to doctors, will be stored in the giant computer. This computer, linked with small computers in doctors' offices and hospitals all over the world, will place vital medical information at doctor's fingertips.[2]

However, the modern methodology involves the clinician interacting with the CDSS at the point of care, utilizing both their own knowledge and the CDSS to produce the best diagnosis from the test data. Typically, a CDSS suggests avenues for the physician to explore, and the physician is expected to use their own knowledge and judgement to narrow down possibilities.

Types of CDSS

CDSSs can be roughly divided into two types: those with knowledge bases and those without. The knowledge-based approach typically covers the diagnosis of many different diseases, while the non-knowledge-based approach often focuses on a narrow list of symptoms, such as symptoms for a single disease.

Knowledge-based CDSS

Most CDSSs contain a knowledge base as well as an inference engine and a mechanism to communicate. The knowledge base contains the rules and associations of compiled data, which most often take the form of IF-THEN rules. If this were a system for determining drug interactions, for example, then a rule might be that IF drug X is taken AND drug Y is taken, THEN alert the user. Using another interface, an advanced user can update the knowledge base with new drug information. The inference engine combines the rules from the knowledge base with the patient's data, while the communication mechanism allows the system to show the results and allow user input into the system.[1]
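
To make the IF-THEN mechanics concrete, the following is a minimal sketch in Python of such a rule-based inference step, assuming an invented knowledge base of drug-pair rules; the rules shown are placeholders for illustration, not clinical guidance.

    # Minimal sketch of a knowledge-based CDSS for drug interactions.
    # The knowledge base holds IF-THEN rules, the inference engine applies
    # them to patient data, and the returned alerts stand in for the
    # communication mechanism. Drug pairs are illustrative placeholders.

    knowledge_base = [
        {"if_taking": {"warfarin", "aspirin"},
         "then_alert": "Possible increased bleeding risk."},
        {"if_taking": {"drug_x", "drug_y"},
         "then_alert": "Known interaction between drug X and drug Y."},
    ]

    def infer(patient_medications, rules):
        """Return the alert for every rule whose IF conditions are all met."""
        meds = set(patient_medications)
        return [rule["then_alert"] for rule in rules
                if rule["if_taking"] <= meds]  # subset test: all drugs present

    print(infer(["warfarin", "aspirin", "metformin"], knowledge_base))
    # ['Possible increased bleeding risk.']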

Non-knowledge-based CDSS

CDSSs that do not use a knowledge base rely on a form of artificial intelligence called machine learning, which allows computers to learn from past experiences and/or find patterns in clinical data. This eliminates the need for writing rules and for expert input.[3] However, since systems based on machine learning cannot explain the reasons for their conclusions (neural networks and other machine learning systems are often referred to as "black boxes" because no meaningful information about how they work can be discerned by human inspection[4]), most clinicians do not use them directly for diagnoses, for reasons of reliability and accountability.[1] Nevertheless, they can be useful as post-diagnostic systems that suggest data patterns for further investigation.

Two of the major types of non-knowledge-based systems are artificial neural networks and genetic algorithms. Artificial neural networks use nodes and weighted connections between them to analyze the patterns found in patient data and derive associations between symptoms and a diagnosis. Genetic algorithms are based on simplified evolutionary processes, using directed selection to achieve optimal CDSS results. The selection algorithms evaluate components of random sets of solutions to a problem; the solutions that come out on top are then recombined and mutated and run through the process again. This repeats until an adequate solution is found. Genetic algorithms are functionally similar to neural networks in that they are also "black boxes" that attempt to derive knowledge from patient data.[5][1]
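
As a toy illustration of that select-recombine-mutate loop (and not of any real diagnostic system), the following Python sketch evolves a bit string toward a fixed target; a real CDSS would instead score candidate solutions against patient data.

    import random

    # Toy genetic algorithm: evolve a bit string to match a fixed target.
    # The fitness/selection/crossover/mutation steps mirror the loop
    # described above; a real CDSS would score candidates against
    # patient data rather than a known answer.

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

    def fitness(candidate):
        return sum(c == t for c, t in zip(candidate, TARGET))

    def crossover(a, b):
        point = random.randrange(1, len(a))      # recombine two parents
        return a[:point] + b[point:]

    def mutate(candidate, rate=0.05):
        return [1 - bit if random.random() < rate else bit for bit in candidate]

    def evolve(pop_size=30, generations=200):
        population = [[random.randint(0, 1) for _ in TARGET]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)  # directed selection
            if fitness(population[0]) == len(TARGET):
                break
            parents = population[:pop_size // 2]
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    print(evolve())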

Regulations

United States

With the enactment of the American Recovery and Reinvestment Act of 2009 (ARRA), the U.S. government and medical professionals alike have been pushing for more widespread adoption of health information technology. As such, more hospitals and clinics are integrating electronic health records (EHRs) and computerized physician order entry (CPOE) systems within their infrastructure. In fact, the National Academy of Sciences' Institute of Medicine had been actively promoting the use of health information technology — including the CDSS — to advance the quality of patient care well before the ARRA was enacted.[6]

Currently there are "no national standards for the specific evidence-based guidelines or rules that should be built into CDS,"[6] though standards organizations like Health Level Seven and its Clinical Decision Support Work Group continue to make headway on this front.[7] Despite this absence of mandated standards, several CDSS vendors have expressed both a desire to work together to provide a useful product that improves health outcomes and a need to remain neutral with respect to liability, stating:

...[t]he ultimate end user is responsible for how it influences patient care. This neutral stance on the part of the content vendors is also due to the legal situation. Some content vendor representatives spoke strongly about how the legal system in this country influences what they can provide. There are many legal, regulatory, antitrust, and fiduciary constraints that content vendors must navigate while still providing a useful and usable product for all their customers. Sometimes, depending on what is being sold, these constraints result in sub-optimal products for clinician end-users.[8]

Effectiveness

The evidence for the effectiveness of CDSSs is mixed. A 2005 systematic review by medical researchers concluded that CDSSs improved practitioner performance in 64 percent, and patient outcomes in 13 percent, of 97 selected studies.[9] Another 2005 systematic review of 70 selected studies found that "[d]ecision support systems significantly improved clinical practice in 68 percent of trials." That research team identified four features associated with successful CDSSs:[10]

  • The CDSS is integrated into the clinical workflow rather than as a separate log-in or screen.
  • The CDSS offers electronic output rather than only paper-based output.
  • The CDSS provides decision support at the time and location of care rather than prior to or after the patient encounter.
  • The CDSS provides actionable recommendations for care, not just assessments.

However, other systematic reviews are less optimistic about the effects of a CDSS, with one from 2011 stating "[t]here is a large gap between the postulated and empirically demonstrated benefits of [CDSS and other] eHealth technologies ... [and] their cost-effectiveness has yet to be demonstrated."[11] A 2014 systematic review by public health researchers did not find a benefit in terms of risk of death when a CDSS was combined with the electronic health record, though it noted possible benefits to morbidity outcomes.[12]

Challenges to adoption and implementation

Clinical challenges

Many medical institutions and software companies have tried to produce viable CDSSs to support all aspects of clinical tasks. However, given significant demands on staff time and complex clinical workflows, the institution deploying the support system must take care to ensure the system becomes a fluid and integral part of the clinical workflow. Despite the wide range of efforts by institutions to produce and use these systems, widespread adoption and acceptance have still not been achieved. One large roadblock to acceptance has historically been workflow integration: developers tended to focus only on the functional decision-making core of the CDSS, leaving a deficiency in planning for how the clinician would actually use the product in situ.[13]

Often CDSSs were stand-alone applications, requiring the clinician to cease working in their current system, switch to the CDSS, input the necessary data (even if it had already been entered into another system), and examine the results produced. These additional steps break the flow from the clinician's perspective and cost precious time. As such, CDS technologies have gradually been integrated with other systems like EHRs and computerized physician order entry (CPOE) systems.[14]

Technical challenges

Clinical decision support systems face steep technical challenges in a number of areas. Biological systems are profoundly complicated, and a clinical decision may utilize an enormous range of potentially relevant data. For example, an electronic evidence-based medicine system may potentially consider a patient's symptoms, medical history, family history, and genetics, as well as historical and geographical trends of disease occurrence and published clinical data on medicinal effectiveness when recommending a patient's course of treatment.[13]

Another source of contention with many medical support systems is that they produce a massive number of alerts. When systems produce a high volume of warnings, especially warnings that do not require escalation, clinicians not only find the interruptions annoying but may begin to pay less attention to warnings altogether, causing potentially critical alerts to be missed (see the sketch below).[8]
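
One common mitigation is to triage alerts before display. The following is a minimal sketch, assuming invented severity tiers and a repeat threshold, of how an implementer might suppress low-severity or repeated nuisance alerts while always surfacing critical ones.

    from collections import Counter

    # Illustrative alert triage: critical alerts always interrupt the
    # clinician; low-severity or oft-repeated alerts are suppressed.
    # The severity tiers and repeat threshold are assumptions.

    SEVERITY = {"info": 0, "warning": 1, "critical": 2}

    class AlertFilter:
        def __init__(self, min_severity="warning", max_repeats=3):
            self.min_level = SEVERITY[min_severity]
            self.max_repeats = max_repeats
            self.seen = Counter()

        def should_display(self, alert_id, severity):
            """Decide whether to interrupt the clinician with this alert."""
            if SEVERITY[severity] == SEVERITY["critical"]:
                return True                   # never suppress critical alerts
            self.seen[alert_id] += 1
            if self.seen[alert_id] > self.max_repeats:
                return False                  # repeated nuisance alert
            return SEVERITY[severity] >= self.min_level

    f = AlertFilter()
    print(f.should_display("duplicate-order", "info"))    # False: too minor
    print(f.should_display("bleeding-risk", "critical"))  # True: always shown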

Evaluation

In order for a CDSS to offer value, it must demonstrably improve clinical workflow or outcomes, and its value must be quantified in order to improve the system's quality and measure its effectiveness. Evaluating a CDSS is not straightforward, however. Because different CDSSs serve different purposes, there is no generic metric that applies to all such systems. Attributes like "consistency" (with itself and with experts), applied across a wide spectrum of systems, tend to be a useful starting point.[15]

The evaluation benchmark for a CDSS depends on the system's goal. For example, a diagnostic decision support system may be rated based upon the consistency and accuracy of its classification of disease (as compared to physicians or other decision support systems), while an evidence-based medicine system might be rated based upon a high incidence of patient improvement or higher financial reimbursement for care providers. More generally, studies evaluating a CDSS's effectiveness give some clues: look for connections between the CDSS and whether short-term outcomes are improved, errors are reduced, costs are decreased, and readmission rates are reduced.[16]
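
For a diagnostic system judged on classification consistency and accuracy, one minimal approach is to compare its outputs against physician reference diagnoses using raw accuracy and a chance-corrected agreement statistic such as Cohen's kappa. The sketch below uses invented labels purely to demonstrate the calculation.

    from collections import Counter

    # Compare CDSS classifications against physician reference diagnoses
    # using raw accuracy and Cohen's kappa (chance-corrected agreement).
    # The labels below are invented purely to demonstrate the arithmetic.

    physician = ["flu", "cold", "flu", "strep", "cold", "flu", "strep", "cold"]
    cdss      = ["flu", "cold", "cold", "strep", "cold", "flu", "flu", "cold"]

    n = len(physician)
    observed = sum(p == c for p, c in zip(physician, cdss)) / n  # accuracy

    # Agreement expected by chance, from each rater's label frequencies.
    p_counts, c_counts = Counter(physician), Counter(cdss)
    expected = sum((p_counts[k] / n) * (c_counts[k] / n) for k in p_counts)

    kappa = (observed - expected) / (1 - expected)
    print(f"accuracy = {observed:.2f}, kappa = {kappa:.2f}")
    # accuracy = 0.75, kappa = 0.61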

Maintenance

One of the core challenges facing CDSSs is the difficulty of incorporating the extensive quantity of clinical research being published on an ongoing basis; tens of thousands of clinical trials are published in a given year.[17] Currently, each one of these studies must be manually read, evaluated for scientific legitimacy, and incorporated into the CDSS in an accurate way. In 2004, the process of gathering clinical data and medical knowledge and putting it all into a form that computers can manipulate to assist in clinical decision support was "still in its infancy."[18] In addition to being laborious, integration of new data can sometimes be difficult to quantify or incorporate into the existing decision support schema, particularly in instances where different clinical papers appear to conflict. Ten years later, however, that process had improved somewhat. Developers and maintainers of knowledge bases now have access to special tools like knowledge acquisition systems "that allow trained individuals to enter new knowledge, and maintain or 'curate' what is already there," as well as systems that allow direct knowledge acquisition from experts.[16]
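
The curation workflow such tools support can be pictured as a knowledge base in which every entry carries provenance and can be retired rather than deleted. The following sketch is a loose illustration of that idea; all field names and the example rule are invented.

    from datetime import date

    # Sketch of a curated knowledge base: every entry carries provenance
    # (source, reviewer, review date) and is retired rather than deleted,
    # so the evidence trail stays auditable. All fields are illustrative.

    knowledge_base = {}

    def add_rule(rule_id, condition, recommendation, source, reviewer):
        """Enter new knowledge along with the metadata a curator needs."""
        knowledge_base[rule_id] = {
            "condition": condition,
            "recommendation": recommendation,
            "source": source,                   # e.g., the supporting trial
            "reviewer": reviewer,               # who vetted the evidence
            "reviewed_on": date.today().isoformat(),
            "active": True,
        }

    def retire_rule(rule_id, reason):
        """Deactivate superseded knowledge instead of deleting it."""
        knowledge_base[rule_id]["active"] = False
        knowledge_base[rule_id]["retired_reason"] = reason

    add_rule("htn-001", "systolic BP > 140 mmHg on two visits",
             "consider antihypertensive therapy",
             source="Hypothetical Trial (2014)", reviewer="curator01")
    retire_rule("htn-001", "superseded by a newer guideline")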

Integration of CDSS with other systems

In 2012, Sen et al. examined the various CDSS architectures and identified several benefits of an integrated architecture. They noted the following concerning independent and integrated models:

In the standalone category, the CDS system is separate from any other system, that is, there is no coupling. Such systems do not need standardization, require relatively low clinical knowledge, and do not need real patient data. However, these systems are quite slow and are not very practical. The integrated category, on the other hand, requires that CDS needs to be strongly coupled with other clinical information systems such as EHR and CPOE. In such systems, no new patient data need to be re-entered and alerts can be initiated. The major downside of the integrated architecture is that there is no easy way to share the systems or reuse their content.[14]

Even though the benefits of an integrated system are apparent, implementing a CDSS that is integrated with an EHR has historically required significant planning by healthcare organizations for the implementation to be successful and effective. As mentioned previously, this effectiveness can be measured, for example, by improved short-term outcomes, reduced errors, decreased costs, and reduced readmission rates.[16] As EHR adoption continues to be pushed, it also becomes more obvious that EHR functionality like e-prescribing, computerized physician order entry (CPOE), and reporting fits well with CDSS rule-base, alert, and trigger functionality.[14]

Barriers to integration

Implementing an EHR in healthcare settings incurs challenges, none more important than maintaining efficiency and safety during rollout.[19] For the implementation process to be effective, an understanding of EHR users' perspectives is key.[20]

Furthermore, adoption needs to be actively fostered through a bottom-up, clinical-needs-first approach.[21] The same can be said for CDSS.

The main areas of concern with moving into a fully integrated EHR-CDSS are[1]:

  1. Privacy
  2. Confidentiality
  3. User-friendliness
  4. Document accuracy and completeness
  5. Integration
  6. Uniformity
  7. Acceptance
  8. Alert desensitization

Additionally, key aspects of data entry need to be addressed when implementing a CDSS in order to avoid potential adverse events. These aspects include whether[6][16]:

  • correct data is being used
  • all the data has been entered into the system
  • current best practice is being followed
  • the data is evidence-based

A service-oriented architecture has been proposed as a technical means to address some of these barriers, as sketched below.[22]
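
As a loose sketch of what such an architecture might look like (the endpoint path, payload shape, and rule are all invented for this example), the following Python program exposes a drug-interaction check as a small HTTP service that an EHR or CPOE client could call, rather than embedding the rules in each system:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Minimal service-oriented CDSS sketch: rule evaluation sits behind
    # one HTTP endpoint that many clinical systems can share. The path,
    # payload shape, and rule below are invented for illustration.

    RULES = [{"if_taking": {"warfarin", "aspirin"},
              "alert": "Possible increased bleeding risk."}]

    class CDSHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/check-interactions":
                self.send_error(404)
                return
            length = int(self.headers["Content-Length"])
            meds = set(json.loads(self.rfile.read(length))["medications"])
            alerts = [r["alert"] for r in RULES if r["if_taking"] <= meds]
            body = json.dumps({"alerts": alerts}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), CDSHandler).serve_forever()

A client (for example, an EHR module) would POST a JSON body like {"medications": ["warfarin", "aspirin"]} to /check-interactions and receive any triggered alerts in the response.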

Notes

This article reuses several elements from the Wikipedia article of the same name.

References

  1. 1.0 1.1 1.2 1.3 1.4 Berner, Eta S. (ed.) (2007). Clinical Decision Support Systems: Theory and Practice (2nd ed.). Springer Science & Business Media. pp. 270. ISBN 9780387383194. https://books.google.com/books?id=t4laP7U4a-AC&pg=PA3. Retrieved 19 June 2015. 
  2. Truesdail, Roger (April 1963). "Peeps at Things to Come". The Rotarian 102 (4). https://books.google.com/books?id=EjcEAAAAMBAJ&pg=PA54. Retrieved 19 June 2015. 
  3. Syeda-Mahmood, Tanveer (March 2015). "Tanveer Syeda-Mahmood plenary talk: The Role of Machine Learning in Clinical Decision Support". SPIE Newsroom. doi:10.1117/2.3201503.29. http://spie.org/x112958.xml. Retrieved 20 June 2015. 
  4. Twain, Jack (14 April 2014). "Meaning of a neural network as a black-box?". Cross Validated. Stack Exchange, Inc. http://stats.stackexchange.com/questions/93705/meaning-of-a-neural-network-as-a-black-box. Retrieved 20 June 2015. 
  5. Wagholikar, Kavishwar; Sundararajan, V.; Deshpande, Ashok (October 2012). "Modeling Paradigms for Medical Diagnostic Decision Support: A Survey and Future Directions". Journal of Medical Systems 36 (5): 3029–49. doi:10.1007/s10916-011-9780-4. PMID 21964969. http://www.ncbi.nlm.nih.gov/pubmed/21964969. Retrieved 20 June 2015. 
  6. 6.0 6.1 6.2 Berner, Eta S. (June 2009). "Clinical Decision Support Systems: State of the Art" (PDF). Agency for Healthcare Research and Quality. pp. 26. http://healthit.ahrq.gov/sites/default/files/docs/page/09-0069-EF_1.pdf. Retrieved 20 June 2015. 
  7. "Clinical Decision Support". Health Level Seven International. 2015. http://www.hl7.org/Special/committees/dss/index.cfm. Retrieved 20 June 2015. 
  8. 8.0 8.1 Ash, Joan S. et al. (2011). "Studying the Vendor Perspective on Clinical Decision Support". AMIA Annual Symposium Proceedings Archive 2011: 80–87. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3243293/. Retrieved 20 June 2015. 
  9. Garg, Amit et al. (2005). "Effects of Computerized Clinical Decision Support Systems on Practitioner Performance and Patient Outcomes: A Systematic Review". JAMA 293 (10): 1223–38. doi:10.1001/jama.293.10.1223. PMID 15755945. http://jama.jamanetwork.com/article.aspx?articleid=200503. Retrieved 22 June 2015. 
  10. Kawamoto, Kensaku; Houlihan, Caitlin A.; Balas, E. Andrew; Lobach, David F. (2005). "Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success". BMJ 330 (7494): 765–773. doi:10.1136/bmj.38398.500764.8F. PMC 555881. PMID 15767266. http://www.bmj.com/content/330/7494/765.full.pdf+html. Retrieved 22 June 2015. 
  11. Black, A.D. et al. (2011). "The Impact of eHealth on the Quality and Safety of Health Care: A Systematic Overview". PLoS Medicine 8 (1). doi:10.1371/journal.pmed.1000387. http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1000387. Retrieved 22 June 2015. 
  12. Moja, Lorenzo et al. (December 2014). "Effectiveness of Computerized Decision Support Systems Linked to Electronic Health Records: A Systematic Review and Meta-Analysis". American Journal of Public Health 104 (12): e12–22. doi:10.2105/ajph.2014.302164. PMID 25322302. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4232126/. Retrieved 22 June 2015. 
  13. 13.0 13.1 Jao, Chiang S.; Hier, Daniel B. (2010). "Chapter 8: Clinical Decision Support Systems: An Effective Pathway to Reduce Medical Errors and Improve Patient Safety". Decision Support Systems. InTech. doi:10.5772/3453. http://www.intechopen.com/books/decision-support-systems/clinical-decision-support-systems-an-effective-pathway-to-reduce-medical-errors-and-improve-patient-#. Retrieved 22 June 2015. 
  14. 14.0 14.1 14.2 Sen, Arun; Banerjee, Amarnath; Sinha, Atish P.; Bansal, Manish (2012). "Clinical decision support: Converging toward an integrated architecture". Journal of Biomedical Informatics 45 (5): 1009–1017. doi:10.1016/j.jbi.2012.07.001. http://www.j-biomed-inform.com/article/S1532-0464%2812%2900098-6/fulltext. Retrieved 22 June 2015. 
  15. Wagholikar, Kavishwar B. et al. (July 2013). "Formative evaluation of the accuracy of a clinical decision support system for cervical cancer screening". Journal of the American Medical Informatics Association 20 (4): 749–757. doi:10.1136/amiajnl-2013-001613. http://jamia.oxfordjournals.org/content/20/4/749. Retrieved 22 June 2015. 
  16. 16.0 16.1 16.2 16.3 Greenes, Robert A. (ed.) (2014). Clinical Decision Support: The Road to Broad Adoption (2nd ed.). Academic Press. pp. 930. ISBN 9780128005422. https://books.google.com/books?id=rwrUAgAAQBAJ&printsec=frontcover. Retrieved 22 June 2015. 
  17. Gluud, Christian; Nikolova, Dimitrinka (2007). "Likely country of origin in publications on randomised controlled trials and controlled clinical trials during the last 60 years". Trials 8: 7. doi:10.1186/1745-6215-8-7. PMC 1808475. PMID 17326823. http://www.trialsjournal.com/content/8/1/7. Retrieved 22 June 2015. 
  18. Gardner, Reed M. (April 2004). "Computerized Clinical Decision-Support in Respiratory Care". Respiratory Care 49 (4): 378–388. PMID 15030611. http://rc.rcjournal.com/content/49/4/378.short. Retrieved 22 June 2015. 
  19. Spellman, Stephanie K.; Timm, Nathan; Farrell, Michael K.; Spooner, S. Andrew (May–June 2012). "Impact of electronic health record implementation on patient flow metrics in a pediatric emergency department". Journal of the American Medical Informatics Association 19 (3): 443–447. doi:10.1136/amiajnl-2011-000462. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3341791/. Retrieved 22 June 2015. 
  20. McGinn, Carrie A. et al. (September 2012). "Users' perspectives of key factors to implementing electronic health records in Canada: a Delphi study". BMC Medical Informatics & Decision Making 12 (105). doi:10.1186/1472-6947-12-105. http://www.biomedcentral.com/1472-6947/12/105. Retrieved 22 June 2015. 
  21. Rozenblum, Ronen et al. (March 2011). "A qualitative study of Canada's experience with the implementation of electronic health information technology". Canadian Medical Association Journal 183 (5): E281–E288. doi:10.1503/cmaj.100856. http://www.cmaj.ca/content/183/5/E281.abstract. Retrieved 22 June 2015. 
  22. Loya, Salvador R.; Kawamoto, Kensaku; Chatwin, Chris; Huser, Vojtech (December 2014). "Service Oriented Architecture for Clinical Decision Support: A Systematic Review and Future Directions". Journal of Medical Systems 38 (12). doi:10.1007/s10916-014-0140-z. PMID 25325996. http://www.ncbi.nlm.nih.gov/pubmed/25325996. Retrieved 22 June 2015.