Journal:Infrastructure tools to support an effective radiation oncology learning health system

|download    = [https://aapm.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/acm2.14127?download=true https://aapm.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/acm2.14127] (PDF)
}}
 
 
==Abstract==
'''Purpose''': The concept of the [[Radiation oncologist|radiation oncology]] [[Learning health systems|learning health system]] (RO‐LHS) represents a promising approach to improving the [[Quality (business)|quality]] of care by integrating clinical, dosimetry, treatment delivery, and [[research]] data in real‐time. This paper describes a novel set of tools to support the development of an RO‐LHS and the current challenges they can address.


==Background and significance==
For the past three decades, there has been growing interest in building [[learning organization]]s to address the most pressing and complex business, social, and economic challenges facing society today.<ref>{{Cite book |last=Senge |first=Peter M. |date=2006 |title=The fifth discipline: the art and practice of the learning organization |url=https://www.worldcat.org/title/mediawiki/oclc/ocm65166960 |edition=Rev. and updated |publisher=Doubleday/Currency |place=New York |isbn=978-0-385-51725-6 |oclc=ocm65166960}}</ref> For healthcare, the National Academy of Medicine has defined the concept of a [[Learning health systems|learning health system]] (LHS) as an entity where science, incentive, culture, and [[Informatics (academic field)|informatics]] are aligned for continuous innovation, with new knowledge capture and discovery as an integral part of practicing evidence-based medicine.<ref>{{Cite book |date=2007-06-01 |title=The Learning Healthcare System: Workshop Summary (IOM Roundtable on Evidence-Based Medicine) |url=http://www.nap.edu/catalog/11903 |publisher=National Academies Press |place=Washington, D.C. |doi=10.17226/11903 |isbn=978-0-309-10300-8}}</ref> The current dependency on [[Medical research|randomized controlled clinical trials]], which create scientific evidence in a controlled environment using only a small percentage (<3%) of patient [[Sample (material)|samples]], is inadequate now and may be irrelevant in the future, since these trials take too much time, are too expensive, and are fraught with questions of generalizability. The Agency for Healthcare Research and Quality has also been promoting the development of LHSs as part of a key strategy for healthcare organizations to make transformational changes to improve healthcare [[Quality (business)|quality]] and value. Large-scale healthcare systems are now recognizing the need to build infrastructure capable of [[Continual improvement process|continuous learning and improvement]] in delivering care to patients and addressing critical population health issues.<ref>{{Cite journal |last=Budrionis |first=Andrius |last2=Bellika |first2=Johan Gustav |date=2016-12 |title=The Learning Healthcare System: Where are we now? A systematic review |url=https://linkinghub.elsevier.com/retrieve/pii/S1532046416301319 |journal=Journal of Biomedical Informatics |language=en |volume=64 |pages=87–92 |doi=10.1016/j.jbi.2016.09.018}}</ref> In an LHS, data collection should be performed from various sources such as [[electronic health record]]s (EHRs), treatment delivery records, [[imaging]] records, patient-generated data records, and administrative and claims data; this aggregated data can then be analyzed to generate new insights and knowledge that can be used to improve patient care and outcomes.


However, only a few attempts at leveraging existing infrastructure tools used in routine clinical practice to transform the healthcare domain into an LHS have been made.<ref name=":0">{{Cite journal |last=Matuszak |first=Martha M. |last2=Fuller |first2=Clifton D. |last3=Yock |first3=Torunn I. |last4=Hess |first4=Clayton B. |last5=McNutt |first5=Todd |last6=Jolly |first6=Shruti |last7=Gabriel |first7=Peter |last8=Mayo |first8=Charles S. |last9=Thor |first9=Maria |last10=Caissie |first10=Amanda |last11=Rao |first11=Arvind |date=2018-10 |title=Performance/outcomes data and physician process challenges for practical big data efforts in radiation oncology |url=https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.13136 |journal=Medical Physics |language=en |volume=45 |issue=10 |doi=10.1002/mp.13136 |issn=0094-2405 |pmc=PMC6679351 |pmid=30229946}}</ref><ref>{{Cite journal |last=Mayo |first=Charles S. |last2=Kessler |first2=Marc L. |last3=Eisbruch |first3=Avraham |last4=Weyburne |first4=Grant |last5=Feng |first5=Mary |last6=Hayman |first6=James A. |last7=Jolly |first7=Shruti |last8=El Naqa |first8=Issam |last9=Moran |first9=Jean M. |last10=Matuszak |first10=Martha M. |last11=Anderson |first11=Carlos J. |date=2016-10 |title=The big data effort in radiation oncology: Data mining or data farming? |url=https://linkinghub.elsevier.com/retrieve/pii/S2452109416300550 |journal=Advances in Radiation Oncology |language=en |volume=1 |issue=4 |pages=260–271 |doi=10.1016/j.adro.2016.10.001 |pmc=PMC5514231 |pmid=28740896}}</ref> Some examples of actual implementation have emerged, but by and large these concepts have been mostly discussed as conceptual ideas and strategies in the literature. There are several data organization and [[Information management|management]] challenges that must be addressed in order to effectively implement a [[Radiation oncologist|radiation oncology]] LHS:


:1. [[Data integration]]: Radiation oncology data are generated from a variety of sources, including EHRs, imaging systems, treatment planning systems (TPSs), and clinical trials. Integration of this data into a single repository can be challenging due to differences in data formats, terminologies, and storage systems. There is often significant [[Semantics|semantic]] heterogeneity in the way that different clinicians and researchers use terminology to describe radiation oncology data. For example, different institutions may use different codes or terms to describe the same condition or treatment.
:4. Build data query tools based on the semantic meaning of the data: Since the data are currently stored in multiple RDBMSs for the specific purpose of supporting the operational aspects of patient care, extracting common semantic meaning from these data is very challenging. Common semantic meaning in healthcare data is typically achieved through the use of [[Controlled vocabulary|standardized vocabularies]] and ontologies that define concepts and the relationships between them. Developing data query tools based on semantic meaning requires a high level of expertise in both the technical and domain-specific aspects of radiation oncology. Moreover, executing complex data queries, which include tree-based queries, recursive queries, and derived data queries, requires multi-table join operations in RDBMSs, which are costly.


While we are on the cusp of an [[artificial intelligence]] (AI) revolution in [[Biomedical sciences|biomedicine]], with the fast-growing development of advanced [[machine learning]] (ML) methods that can analyze complex datasets, there is an urgent need for a scalable, intelligent infrastructure that can support these methods. The radiation oncology domain is also one of the most technically advanced medical specialties, with a long history of electronic data generation (e.g., [[Radiation therapy|radiation treatment]] (RT) simulation, treatment planning, etc.) that is modeled for each individual patient. This large volume of patient-specific real-world data captured during routine clinical practice, dosimetry, and treatment delivery makes this domain ideally suited for rapid learning.<ref name=":1">{{Cite journal |last=Etheredge |first=Lynn M. |date=2007-01 |title=A Rapid-Learning Health System: What would a rapid-learning health system look like, and how might we get there? |url=http://www.healthaffairs.org/doi/10.1377/hlthaff.26.2.w107 |journal=Health Affairs |language=en |volume=26 |issue=Suppl1 |pages=w107–w118 |doi=10.1377/hlthaff.26.2.w107 |issn=0278-2715}}</ref> Rapid learning concepts could be applied using an LHS, providing the potential to improve patient outcomes and care delivery, reduce costs, and generate new knowledge from real-world clinical and dosimetry data.


Several research groups in radiation oncology, including the University of Michigan, MD Anderson, and Johns Hopkins, have developed data gathering platforms with specific goals.<ref name=":0" /> These platforms—such as the M-ROAR platform<ref name=":1" /> at the University of Michigan, the system-wide [[electronic data capture]] platform at MD Anderson<ref>{{Cite journal |last=Pasalic |first=Dario |last2=Reddy |first2=Jay P. |last3=Edwards |first3=Timothy |last4=Pan |first4=Hubert Y. |last5=Smith |first5=Benjamin D. |date=2018-12 |title=Implementing an Electronic Data Capture System to Improve Clinical Workflow in a Large Academic Radiation Oncology Practice |url=https://ascopubs.org/doi/10.1200/CCI.18.00034 |journal=JCO Clinical Cancer Informatics |language=en |issue=2 |pages=1–12 |doi=10.1200/CCI.18.00034 |issn=2473-4276 |pmc=PMC6874007 |pmid=30652599}}</ref>, and the Oncospace program at Johns Hopkins<ref>{{Cite journal |last=McNutt |first=T.R. |last2=Evans |first2=K. |last3=Wu |first3=B. |last4=Kahzdan |first4=M. |last5=Simari |first5=P. |last6=Sanguineti |first6=G. |last7=Herman |first7=J. |last8=Taylor |first8=R. |last9=Wong |first9=J. |last10=DeWeese |first10=T. |date=2010-11 |title=Oncospace: All Patients on Trial for Analysis of Outcomes, Toxicities, and IMRT Plan Quality |url=https://linkinghub.elsevier.com/retrieve/pii/S0360301610021139 |journal=International Journal of Radiation Oncology*Biology*Physics |language=en |volume=78 |issue=3 |pages=S486 |doi=10.1016/j.ijrobp.2010.07.1139}}</ref>—have been deployed to collect and assess practice patterns, perform outcome analysis, and capture RT-specific data, including dose distributions, organ-at-risk (OAR) information, images, and outcome data. While these platforms serve specific purposes, they rely on relational database-based systems without utilizing standard ontology-based data definitions. However, [[knowledge graph]]-based systems offer significant advantages over these relational database-based systems. Knowledge graph-based systems provide a more integrated and comprehensive representation of data by capturing complex relationships, hierarchies, and semantic connections between entities. They leverage ontologies, which define standardized and structured knowledge, enabling a holistic view of the data and supporting advanced querying and analysis capabilities. Furthermore, knowledge graph-based systems promote data interoperability and integration by adopting standard ontologies, facilitating collaboration and data sharing across different research groups and institutions. As such, knowledge graph-based systems are able to help ensure that research data is more findable, accessible, interoperable, and reusable (FAIR).<ref name=":2">{{Cite journal |last=Wilkinson |first=Mark D. |last2=Dumontier |first2=Michel |last3=Aalbersberg |first3=IJsbrand Jan |last4=Appleton |first4=Gabrielle |last5=Axton |first5=Myles |last6=Baak |first6=Arie |last7=Blomberg |first7=Niklas |last8=Boiten |first8=Jan-Willem |last9=da Silva Santos |first9=Luiz Bonino |last10=Bourne |first10=Philip E. |last11=Bouwman |first11=Jildau |date=2016-03-15 |title=The FAIR Guiding Principles for scientific data management and stewardship |url=https://www.nature.com/articles/sdata201618 |journal=Scientific Data |language=en |volume=3 |issue=1 |pages=160018 |doi=10.1038/sdata.2016.18 |issn=2052-4463 |pmc=PMC4792175 |pmid=26978244}}</ref>


In this paper, we set out to contribute to the advancement of the science of LHSs by presenting a detailed description of the technical characteristics and infrastructure that were employed to design a radiation oncology LHS specifically with a knowledge graph approach. The paper also describes how we have addressed the challenges that arise when building such a system, particularly in the context of constructing a knowledge graph. The main contributions of our work are as follows:
:1. Provides an overview of the sources of data within radiation oncology (EHRs, TPS, TMS) and the mechanism to gather data from these sources in a common database.


:2. Maps the gathered data to a standardized terminology and data dictionary for consistency and interoperability. Here we describe the processing layer built for data cleaning, checking for consistency and formatting before the extract, transform, and load (ETL) procedure is performed in a common database.


:3. Adds concepts, classes, and relationships from existing ''NCI Thesaurus'' and [[SNOMED CT]] terminologies to the previously published Radiation Oncology Ontology (ROO) to fill in gaps with missing critical elements in the LHS.


:4. Presents a knowledge graph visualization that demonstrates the usefulness of the data, with nodes and relationships for easy understanding by clinical researchers.


:5. Develops an ontology-based keyword searching tool that utilizes semantic meaning and relationships to search the RDF knowledge graph for similar patients.


:6. Provides a valuable contribution to the field of radiation oncology by describing an LHS infrastructure that facilitates data integration, standardization, and utilization to improve patient care and outcomes.


==Material and methods==
===Gather data from multiple source systems in the radiation oncology domain===
The adoption of EHRs in patients' clinical management is rapidly increasing in healthcare, but the use of data from EHRs in clinical research is lagging. The utilization of patient-specific clinical data available in EHRs has the potential to accelerate learning and bring value in several key topics of research, including comparative effectiveness research, cohort identification for clinical trial matching, and quality measure analysis.<ref>{{Cite journal |last=Lambin |first=Philippe |last2=Roelofs |first2=Erik |last3=Reymen |first3=Bart |last4=Velazquez |first4=Emmanuel Rios |last5=Buijsen |first5=Jeroen |last6=Zegers |first6=Catharina M.L. |last7=Carvalho |first7=Sara |last8=Leijenaar |first8=Ralph T.H. |last9=Nalbantov |first9=Georgi |last10=Oberije |first10=Cary |last11=Scott Marshall |first11=M. |date=2013-10 |title=‘Rapid Learning health care in oncology’ – An approach towards decision support systems enabling customised radiotherapy’ |url=https://linkinghub.elsevier.com/retrieve/pii/S0167814013003393 |journal=Radiotherapy and Oncology |language=en |volume=109 |issue=1 |pages=159–164 |doi=10.1016/j.radonc.2013.07.007}}</ref><ref>{{Cite journal |last=Price |first=Gareth |last2=Mackay |first2=Ranald |last3=Aznar |first3=Marianne |last4=McWilliam |first4=Alan |last5=Johnson-Hart |first5=Corinne |last6=van Herk |first6=Marcel |last7=Faivre-Finn |first7=Corinne |date=2021-11 |title=Learning healthcare systems and rapid learning in radiation oncology: Where are we and where are we going? |url=https://linkinghub.elsevier.com/retrieve/pii/S016781402108751X |journal=Radiotherapy and Oncology |language=en |volume=164 |pages=183–195 |doi=10.1016/j.radonc.2021.09.030}}</ref> However, there is an inherent lack of interest in the use of data from the EHR for research purposes since the EHR and its data were never designed for research. Modern EHR technology has been optimized for capturing health details for clinical record keeping, scheduling, ordering, and capturing data from external sources such as [[Laboratory|laboratories]], diagnostic imaging, and capturing encounter information for billing purposes.<ref>{{Cite journal |last=Nordo |first=Amy Harris |last2=Eisenstein |first2=Eric L. 
|last3=Hawley |first3=Jeffrey |last4=Vadakkeveedu |first4=Sai |last5=Pressley |first5=Melissa |last6=Pennock |first6=Jennifer |last7=Sanderson |first7=Iain |date=2017-07 |title=A comparative effectiveness study of eSource used for data capture for a clinical research registry |url=https://linkinghub.elsevier.com/retrieve/pii/S138650561730103X |journal=International Journal of Medical Informatics |language=en |volume=103 |pages=89–94 |doi=10.1016/j.ijmedinf.2017.04.015 |pmc=PMC5942198 |pmid=28551007}}</ref> Many data elements collected in routine clinical care, which are critical for oncologic care, are neither collected as structured data elements nor with the same defined rigor as those in clinical trials.<ref>{{Cite journal |last=Coleman |first=Nathan |last2=Halas |first2=Gayle |last3=Peeler |first3=William |last4=Casaclang |first4=Natalie |last5=Williamson |first5=Tyler |last6=Katz |first6=Alan |date=2015-12 |title=From patient care to research: a validation study examining the factors contributing to data quality in a primary care electronic medical record database |url=https://bmcfampract.biomedcentral.com/articles/10.1186/s12875-015-0223-z |journal=BMC Family Practice |language=en |volume=16 |issue=1 |pages=11 |doi=10.1186/s12875-015-0223-z |issn=1471-2296 |pmc=PMC4324413 |pmid=25649201}}</ref><ref>{{Cite journal |last=Spasić |first=Irena |last2=Livsey |first2=Jacqueline |last3=Keane |first3=John A. |last4=Nenadić |first4=Goran |date=2014-09 |title=Text mining of cancer-related information: Review of current status and future directions |url=https://linkinghub.elsevier.com/retrieve/pii/S1386505614001105 |journal=International Journal of Medical Informatics |language=en |volume=83 |issue=9 |pages=605–623 |doi=10.1016/j.ijmedinf.2014.06.009}}</ref>


Given all these challenges with using data from EHRs, we have designed and built a clinical software application called Health Information Gateway Exchange (HINGE). HINGE is a web-based electronic structured data capture system that has electronic [[data sharing]] interfaces using the [[Fast Healthcare Interoperability Resources]] (FHIR) [[Health Level 7]] (HL7) standards, with the specific goal of collecting accurate, comprehensive, and structured data from EHRs.<ref>{{Cite journal |last=Vorisek |first=Carina Nina |last2=Lehne |first2=Moritz |last3=Klopfenstein |first3=Sophie Anne Ines |last4=Mayer |first4=Paula Josephine |last5=Bartschke |first5=Alexander |last6=Haese |first6=Thomas |last7=Thun |first7=Sylvia |date=2022-07-19 |title=Fast Healthcare Interoperability Resources (FHIR) for Interoperability in Health Research: Systematic Review |url=https://medinform.jmir.org/2022/7/e35724 |journal=JMIR Medical Informatics |language=en |volume=10 |issue=7 |pages=e35724 |doi=10.2196/35724 |issn=2291-9694 |pmc=PMC9346559 |pmid=35852842}}</ref> FHIR is an advanced interoperability standard introduced by the standards developing organization HL7. FHIR builds on the previous HL7 standards (versions 1 and 2) and provides a representational state transfer (REST) architecture, with an [[application programming interface]] (API) in [[Extensible Markup Language]] (XML) and JavaScript Object Notation (JSON) formats. Additionally, there have been recent regulatory and legislative changes promoting the use of FHIR standards for interoperability and interconnectivity of healthcare systems.<ref>{{Cite web |last=Centers for Medicare & Medicaid Services |date=2021 |title=Burden Reduction - Interoperability - Policies and Regulations |url=https://www.cms.gov/priorities/key-initiatives/burden-reduction/interoperability#hiig_featured_sections |accessdate=30 August 2021}}</ref> HINGE employs FHIR interfaces with EHRs to retrieve required patient details such as demographics; list of allergies; prescribed active medications; vitals; lab results; surgery, radiology, and pathology reports; active diagnoses; referrals; encounters; and survival information. We have described the design and implementation of HINGE in our previous publication.<ref>{{Cite journal |last=Kapoor |first=Rishabh |last2=Sleeman |first2=William C. |last3=Nalluri |first3=Joseph J. |last4=Turner |first4=Paul |last5=Bose |first5=Priyankar |last6=Cherevko |first6=Andrii |last7=Srinivasan |first7=Sriram |last8=Syed |first8=Khajamoinuddin |last9=Ghosh |first9=Preetam |last10=Hagan |first10=Michael |last11=Palta |first11=Jatinder R. |date=2021-07 |title=Automated data abstraction for quality surveillance and outcome assessment in radiation oncology |url=https://aapm.onlinelibrary.wiley.com/doi/10.1002/acm2.13308 |journal=Journal of Applied Clinical Medical Physics |language=en |volume=22 |issue=7 |pages=177–187 |doi=10.1002/acm2.13308 |issn=1526-9914 |pmc=PMC8292697 |pmid=34101349}}</ref> In summary, HINGE is designed to automatically capture and abstract clinical, treatment planning, and delivery data for [[cancer]] patients receiving radiotherapy. The system uses disease site-specific “smart” templates to facilitate the entry of relevant clinical information by physicians and clinical staff. The software processes the extracted data for quality and outcome assessment, using well-defined clinical and dosimetry quality measures defined by disease site experts in radiation oncology. The system connects seamlessly to the local IT/medical infrastructure via interfaces and [[Cloud computing|cloud]] services, and it provides tools to assess variations in radiation oncology practices and outcomes and to determine gaps in radiotherapy quality delivered by each provider.
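To make the FHIR-based data exchange concrete, the following is a minimal, hedged sketch of the kind of REST calls such an interface relies on. The base URL, authorization handling, and resource identifiers are placeholders for illustration and do not describe HINGE's actual implementation.

<syntaxhighlight lang="python">
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # placeholder endpoint, not an actual EHR URL
HEADERS = {"Accept": "application/fhir+json", "Authorization": "Bearer <token>"}  # placeholder token

def get_patient_demographics(patient_id):
    """Fetch a FHIR Patient resource (demographics) as JSON."""
    resp = requests.get(f"{FHIR_BASE}/Patient/{patient_id}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def get_active_medications(patient_id):
    """Search MedicationRequest resources with status 'active' for a given patient."""
    resp = requests.get(
        f"{FHIR_BASE}/MedicationRequest",
        params={"patient": patient_id, "status": "active"},
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json().get("entry", [])
</syntaxhighlight>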


We created a data pipeline from HINGE to export discrete data in a JSON-based format. These data are then fed to the extract, transform, and load (ETL) processor. An overview of the data pipeline is shown in Figure 1. ETL is a three-step process in which the data are first extracted, then transformed (i.e., cleaned and formatted), and finally loaded into an output radiation oncology clinical [[data warehouse]] (RO-CDW) repository. Since HINGE templates do not function as case report forms and are formatted around an operational data structure, the data cleaning process begins with some basic preprocessing, including checking for redundancy in the dataset and ignoring null values while making sure each data element has its supporting data elements populated. As there are several types of datasets, each dataset requires a different type of cleaning; therefore, multiple data cleaning scripts have been prepared. The following outlines some of the checks that are performed by the cleaning scripts.
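Purely as an illustrative sketch, and not the actual HINGE cleaning scripts, a basic preprocessing pass over the JSON export could be structured as follows; the field names and rules are hypothetical.

<syntaxhighlight lang="python">
import json

# Hypothetical field names used only for illustration; the actual HINGE export schema differs.
REQUIRED_SUPPORTING_FIELDS = {
    "prescription_dose_cgy": ["dose_per_fraction_cgy", "number_of_fractions"],
}

def clean_record(record):
    """Drop null values and verify each data element has its supporting elements populated."""
    cleaned = {k: v for k, v in record.items() if v not in (None, "", "NULL")}
    for element, supporting in REQUIRED_SUPPORTING_FIELDS.items():
        if element in cleaned and not all(s in cleaned for s in supporting):
            return None  # incomplete record: flag for review rather than loading into RO-CDW
    return cleaned

def clean_export(path):
    """Apply the per-record checks and drop exact duplicates from a HINGE JSON export."""
    with open(path) as fh:
        records = json.load(fh)
    seen, cleaned_records = set(), []
    for rec in records:
        cleaned = clean_record(rec)
        if cleaned is None:
            continue
        key = json.dumps(cleaned, sort_keys=True)
        if key not in seen:  # redundancy check: skip exact duplicates
            seen.add(key)
            cleaned_records.append(cleaned)
    return cleaned_records
</syntaxhighlight>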
{| border="0" cellpadding="5" cellspacing="0" width="1200px"
{| border="0" cellpadding="5" cellspacing="0" width="1200px"
  |-
  |-
   | style="background-color:white; padding-left:10px; padding-right:10px;" |<blockquote>'''Figure 1.''' Overview of the data pipeline to gather clinical data into the radiation oncology clinical data warehouse (RO-CDW). As part of this pipeline, we have built HL7/FHIR interfaces between the EHR system and HINGE database to gather pertinent information from the patient's chart. These data are stored in the HINGE database and used to auto-populate disease-site-specific smart templates that depict the clinical workflow from initial consultation to follow-up care. The providers record their clinical assessments in these templates as part of their routine clinical care. Once the templates are finalized and signed by the providers in HINGE, the data are exported in JSON format, and using an ETL process, we can load the data in our RO-CDW's relational SQL database. Additionally, we use SQL stored procedures to extract, transform, and load data from the Varian Aria data tables and extraction of dosimetry DVH curves to our RO-CDW.</blockquote>
   | style="background-color:white; padding-left:10px; padding-right:10px;" |<blockquote>'''Figure 1.''' Overview of the data pipeline to gather clinical data into the radiation oncology clinical data warehouse (RO-CDW). As part of this pipeline, we have built HL7/FHIR interfaces between the EHR system and HINGE database to gather pertinent information from the patient's chart. These data are stored in the HINGE database and used to auto-populate disease-site-specific smart templates that depict the clinical workflow from initial consultation to follow-up care. The providers record their clinical assessments in these templates as part of their routine clinical care. Once the templates are finalized and signed by the providers in HINGE, the data are exported in JSON format, and using an ETL process, we can load the data in our RO-CDW's relational SQL database. Additionally, we use SQL stored procedures to extract, transform, and load data from the Varian Aria data tables and extraction of dosimetry dose-volume histogram (DVH) curves to our RO-CDW.</blockquote>
  |-  
  |-  
|}
|}
The main purpose of this step is to ensure that the dataset is of high quality and fidelity when loaded in RO-CDW. In the data loading process, we have written SQL and .Net-based scripts to transform the data into RO-CDW-compatible schema and load them into a Microsoft SQL Server 2016 database. When the data are populated, unique identifiers are assigned to each data table entry, and interrelationships are maintained within the tables so that the investigators can use query tools to query and retrieve the data, identify patient cohorts, and analyze the data.


We have deployed a [[Free and open-source software|free, open-source]], and light-weight [[DICOM]] server known as Orthanc<ref>{{Cite web |last=Jodogne, Sébastien |title=Orthanc |url=https://www.orthanc-server.com/ |publisher=UCLouvain University}}</ref> to collect DICOM-RT datasets from any commercial TPS. Orthanc is a simple yet powerful standalone DICOM server designed to support research and provide query/retrieve functionality of DICOM datasets. Orthanc provides a RESTful API that makes it possible to program using any computer language where DICOM tags stored in the datasets can be downloaded in a JSON format. We used the [[Python (programming language)|Python]] plug-in to connect with the Orthanc database to extract the relevant tag data from the DICOM-RT files. Orthanc was able to seamlessly connect with the Varian Eclipse planning system with the DICOM DIMSE C-STORE protocol.<ref>{{Cite web |last=DICOM Standards Committee |date=2013 |title=7.5 DIMSE Services |work=DICOM PS3.7 2013 - Message Exchange |url=https://dicom.nema.org/dicom/2013/output/chtml/part07/sect_7.5.html |publisher=National Electrical Manufacturers Association}}</ref> Since the TPS conforms to the specifications listed under the Integrating the Healthcare Enterprise—Radiation Oncology (IHE-RO) profile, the DICOM-RT datasets contained all the relevant tags that were required to extract data.
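As an illustration of how DICOM-RT metadata can be pulled from Orthanc, the following hedged sketch uses Orthanc's REST API from a plain HTTP client rather than the server-side Python plug-in; the host, port, and credentials are placeholders, and the endpoint names should be verified against the Orthanc documentation for your version.

<syntaxhighlight lang="python">
import requests

ORTHANC = "http://localhost:8042"   # default Orthanc port; adjust to your deployment
AUTH = ("orthanc", "orthanc")       # placeholder credentials

def list_rtstruct_instances():
    """Return the simplified DICOM tags of every RTSTRUCT instance stored in Orthanc."""
    instance_ids = requests.get(f"{ORTHANC}/instances", auth=AUTH).json()
    rtstructs = []
    for iid in instance_ids:
        tags = requests.get(f"{ORTHANC}/instances/{iid}/simplified-tags", auth=AUTH).json()
        if tags.get("Modality") == "RTSTRUCT":
            rtstructs.append(tags)
    return rtstructs
</syntaxhighlight>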
One of the major challenges with examining patients’ DICOM-RT data is the lack of standardized organs at risk (OAR) and target names, as well as ambiguity regarding dose-volume histogram metrics, and multiple prescriptions mentioned across several treatment techniques. With the goal of overcoming these challenges, the AAPM TG 263 initiative has published their recommendations on OAR and target nomenclature. The ETL user interface deploys this standardized nomenclature and requires the importer of the data to match the deemed OARs with their corresponding standard OAR and target names. In addition, this program also suggests a matching name based on an automated process of relabeling using our published techniques (OAR labels<ref>{{Cite journal |last=Syed |first=Khajamoinuddin |last2=Sleeman IV |first2=William |last3=Ivey |first3=Kevin |last4=Hagan |first4=Michael |last5=Palta |first5=Jatinder |last6=Kapoor |first6=Rishabh |last7=Ghosh |first7=Preetam |date=2020-04-30 |title=Integrated Natural Language Processing and Machine Learning Models for Standardizing Radiotherapy Structure Names |url=https://www.mdpi.com/2227-9032/8/2/120 |journal=Healthcare |language=en |volume=8 |issue=2 |pages=120 |doi=10.3390/healthcare8020120 |issn=2227-9032 |pmc=PMC7348919 |pmid=32365973}}</ref>, radiomics features<ref>{{Cite journal |last=Sleeman, W.; Palta, J.; Ghosh, P. et al. |year=2020 |title=Relabeling Non-Standard to Standard Structure Names Using Geometric and Radiomic Information - BReP-SNAP-M-129 |url=https://w3.aapm.org/meetings/2020AM/programInfo/programSessions.php?t=specific&shid&#91;&#93;=1591&sid=8797 |journal=Medical Physics |volume=47 |issue=6 |pages=E438}}</ref>, and geometric information<ref>{{Cite journal |last=Sleeman IV |first=William C. |last2=Nalluri |first2=Joseph |last3=Syed |first3=Khajamoinuddin |last4=Ghosh |first4=Preetam |last5=Krawczyk |first5=Bartosz |last6=Hagan |first6=Michael |last7=Palta |first7=Jatinder |last8=Kapoor |first8=Rishabh |date=2020-09 |title=A Machine Learning method for relabeling arbitrary DICOM structure sets to TG-263 defined labels |url=https://linkinghub.elsevier.com/retrieve/pii/S1532046420301556 |journal=Journal of Biomedical Informatics |language=en |volume=109 |pages=103527 |doi=10.1016/j.jbi.2020.103527}}</ref>). We find that these automated approaches provide an acceptable accuracy over the standard prostate and lung structure types. In order to gather the dose volume histogram data from the DICOM-RT dose and structure set files, we have deployed a DICOM-RT dosimetry parser software. If the DICOM-RT dose file exported by the TPS contains dose-volume histogram (DVH) information, we utilize it. However, if the file lacks this information, we employ our dosimetry parser software to calculate the DVH values from the dose and structure set volume information.
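Where the exported DICOM-RT dose file does not carry a DVH sequence, a cumulative DVH can be computed from the dose grid and a structure mask. The following is a minimal, illustrative sketch of that calculation, assuming the structure mask has already been resampled onto the dose grid; it is not our dosimetry parser's actual implementation.

<syntaxhighlight lang="python">
import numpy as np

def cumulative_dvh(dose_grid, structure_mask, bin_width_cgy=10.0):
    """Compute a cumulative DVH (percent volume vs. dose) for one structure.

    dose_grid      : 3D NumPy array of dose values in cGy
    structure_mask : boolean 3D NumPy array, True inside the structure
                     (assumed non-empty and aligned with the dose grid)
    """
    doses = dose_grid[structure_mask]
    bins = np.arange(0.0, doses.max() + bin_width_cgy, bin_width_cgy)
    # Percentage of the structure volume receiving at least each dose level
    volume_pct = np.array([100.0 * np.mean(doses >= d) for d in bins])
    return bins, volume_pct

def dose_at_volume(bins, volume_pct, v_pct):
    """Return D_v: the highest dose level still covering at least v_pct percent of the volume."""
    covered = bins[volume_pct >= v_pct]
    return covered[-1] if covered.size else 0.0
</syntaxhighlight>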
===Mapping data to standardized terminology, data dictionary, and use of Semantic Web technologies===
For data to be interoperable, sharable outside a single [[hospital]] environment, and reusable for the various requirements of an LHS, the use of a standardized terminology and data dictionary is a key requirement. Specifically, clinical data should be transformed following the FAIR data principles.<ref name=":2" /> An ontology describes a domain of classes and is defined as a conceptual model of knowledge representation. The use of ontologies and Semantic Web technologies plays a key role in transforming healthcare data to be compatible with the FAIR principles. The use of ontologies enables the sharing of information between disparate systems within multiple clinical domains. An ontology acts as a layer above the standardized data dictionary and terminology, where explicit relationships (that is, predicates) are established between unique entities. Ontologies provide formal definitions of the clinical concepts used in the data sources and make explicit the implicit meaning of the relationships among the different vocabularies and terminologies of the data sources. For example, it can be determined whether two classes or data items found in different clinical databases are equivalent, or whether one is a subset of another. Semantic-level information extraction and querying are possible only with the use of ontology-based concepts of data mapping.
A rapid way to look for new information on the internet is to use a search engine such as Google. These search engines return a list of suggested web pages devoid of context and semantics, and they require human interpretation to find useful information. The Semantic Web is a core technology used to organize and search for specific contextual information on the web. The Semantic Web, also known as Web 3.0, is an extension of the current World Wide Web (WWW) via a set of W3C data standards<ref>{{Cite web |date=2023 |title=Web Standards |url=https://www.w3.org/standards/ |publisher=World Wide Web Consortium}}</ref>, with the goal of making internet data machine-readable instead of human-readable. For automatic processing of information by computers, Semantic Web extensions enable data (e.g., text, metadata on images, videos, etc.) to be represented with well-defined data structures and terminologies. To enable the encoding of semantics with the data, web technologies such as the [[Resource Description Framework]] (RDF), the Web Ontology Language (OWL), and the SPARQL Protocol and RDF Query Language (SPARQL) are used. RDF is a standard for sharing data on the web.
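As a small illustration of the triple model (subject, predicate, object) in Turtle syntax, the snippet below encodes a statement about a patient's clinical T stage. The prefixes and predicate names are illustrative stand-ins rather than the exact identifiers used in our graph; only the NCI Thesaurus code for T1 stage (C48720) is taken from the text.

<syntaxhighlight lang="turtle">
@prefix ncit: <http://ncicb.nci.nih.gov/xml/owl/EVS/Thesaurus.owl#> .
@prefix ex:   <http://example.org/rocdw/> .   # illustrative namespace, not our production URIs
@prefix roo:  <http://example.org/roo/> .     # stand-in for the Radiation Oncology Ontology namespace

# "Patient 123 has a disease whose clinical T stage is T1 (NCIT C48720)."
ex:patient123  roo:hasDisease  ex:disease123 .
ex:disease123  roo:hasTStage   ncit:C48720 .
</syntaxhighlight>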
We utilized an existing ontology known as the Radiation Oncology Ontology (ROO)<ref>{{Cite journal |last=Traverso |first=Alberto |last2=van Soest |first2=Johan |last3=Wee |first3=Leonard |last4=Dekker |first4=Andre |date=2018-10 |title=The radiation oncology ontology ( ROO ): Publishing linked data in radiation oncology using semantic web and ontology techniques |url=https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.12879 |journal=Medical Physics |language=en |volume=45 |issue=10 |doi=10.1002/mp.12879 |issn=0094-2405}}</ref>, available on the NCBO BioPortal website.<ref>{{Cite web |date=2023 |title=Welcome to BioPortal |url=https://www.bioontology.org/ |publisher=Board of Trustees of Leland Stanford Junior University}}</ref> The main role of the ROO is to define broad coverage of the main concepts used in the radiation oncology domain. The ROO currently consists of 1,183 classes with 211 predicates that are used to establish relationships between these classes. Upon inspection of this ontology, we noticed that the collection of classes and properties was missing some critical clinical elements such as smoking history, CTCAE v5 toxicity scores, diagnostic measures such as Gleason scores and prostate-specific antigen (PSA) levels, patient-reported outcome measures, Karnofsky performance status (KPS) scales, and radiation treatment modality. We utilized the ontology editor tool Protégé<ref>{{Cite journal |last=Noy |first=Natalya F. |last2=Crubezy |first2=Monica |last3=Fergerson |first3=Ray W. |last4=Knublauch |first4=Holger |last5=Tu |first5=Samson W. |last6=Vendetti |first6=Jennifer |last7=Musen |first7=Mark A. |date=2003 |title=Protégé-2000: an open-source ontology-development and knowledge-acquisition environment |url=https://pubmed.ncbi.nlm.nih.gov/14728458 |journal=AMIA ... Annual Symposium proceedings. AMIA Symposium |volume=2003 |pages=953 |issn=1942-597X |pmc=1480139 |pmid=14728458}}</ref> to add these key classes and properties to the updated ontology file. We reused entries from other published ontologies such as the [[National Cancer Institute]]'s ''NCI Thesaurus''<ref>{{Cite web |last=National Cancer Institute |date=2023 |title=NCI Thesaurus |url=https://ncithesaurus.nci.nih.gov/ncitbrowser/ |publisher=National Institutes of Health}}</ref>, the [[International Statistical Classification of Diseases and Related Health Problems|International Classification of Diseases]] version 10 (ICD-10)<ref>{{Cite web |date=2023 |title=International Statistical Classification of Diseases and Related Health Problems (ICD) |url=https://www.who.int/standards/classifications/classification-of-diseases |publisher=World Health Organization}}</ref>, and DBpedia<ref>{{Cite web |date=2023 |title=DBpedia - Global and Unified Access to Knowledge Graphs |url=https://www.dbpedia.org/ |publisher=DBpedia Association}}</ref>. We added 216 classes (categories defined in Table 1) with 19 predicate elements to the ROO. With over 100,000 terms, the ''NCI Thesaurus'' includes wide coverage of cancer terms, as well as mappings to external terminologies. The ''NCI Thesaurus'' is a product of the NCI Enterprise Vocabulary Services (EVS), and its vocabulary consists of public information on cancer, including definitions, synonyms, and other information on almost 10,000 cancers and related diseases and 17,000 single agents and related substances, as well as other topics associated with cancer. The list of high-level data categories, elements, and codes utilized in our work is included in the appendix (Appendix A2).
{|
| style="vertical-align:top;" |
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="100%"
|-
  | colspan="2" style="background-color:white; padding-left:10px; padding-right:10px;" |'''Table 1.''' Additional classes added to the Radiation Oncology Ontology (ROO) and used for mapping with our dataset.
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;" |Categories
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;" |# of classes
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Race, ethnicity
  | style="background-color:white; padding-left:10px; padding-right:10px;" |5
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Tobacco use
  | style="background-color:white; padding-left:10px; padding-right:10px;" |4
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Blood pressure + vitals
  | style="background-color:white; padding-left:10px; padding-right:10px;" |3
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Laboratory tests (e.g., creatinine, GFR, etc.)
  | style="background-color:white; padding-left:10px; padding-right:10px;" |20
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Prostate-specific diagnostic tests (e.g., Gleason score, PSA, etc.)
  | style="background-color:white; padding-left:10px; padding-right:10px;" |10
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Patient-reported outcome
  | style="background-color:white; padding-left:10px; padding-right:10px;" |8
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |CTCAE v5
  | style="background-color:white; padding-left:10px; padding-right:10px;" |152
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Therapeutic procedures (e.g., immunotherapy, targeted therapy, etc.)
  | style="background-color:white; padding-left:10px; padding-right:10px;" |6
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Radiation treatment modality (e.g., photon, electron, proton, etc.)
  | style="background-color:white; padding-left:10px; padding-right:10px;" |7
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Units (cGy)
  | style="background-color:white; padding-left:10px; padding-right:10px;" |1
|- 
|}
|}
To use and validate the defined ontology, we mapped our data housed in the clinical data warehouse relational database with the concepts and relationships listed in the ontology. This mapping process linked each component (e.g., column headers, values) of the SQL relational database to its corresponding clinical concept (e.g., classes, relationships, and properties) in the ontology. To perform the mapping, the SQL database tables are analyzed and matched with the relevant concepts and properties in the ontology. This can be achieved by identifying the appropriate classes and relationships that best represent the data elements from the SQL relational database. For example, if the SQL relational table provides information about a patient's smoking history, the mapping process would identify the corresponding class or property in the ontology that represents smoking history.
A correspondence between the table columns in the relational database and ontology entities was established using the D2RQ mapping script. An example of this mapping script is shown in Figure 2. With the use of the D2RQ mapping script, individual table columns in relational database schema were mapped to RDF ontology-based codes. This mapping script is executed by the D2RQ platform that connects to the SQL database, reads the schema, performs the mapping, and generates the output file in turtle syntax. Each SQL table column name is mapped to its corresponding class using the <tt>d2rq:ClassMap</tt> command. These classes are also mapped to existing ontology-based concept codes such as NCIT:C48720 for T1 staging. In order to define the relationships between two classes, the <tt>d2rq:refersToClassMap</tt> command is used. The properties of the different classes are defined using the <tt>d2rq:PropertyBridge</tt> command.
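For orientation, the following is a heavily trimmed, hypothetical sketch of what such a D2RQ mapping can look like; the table names, column names, and ontology terms are invented for illustration, and the database connection block is omitted. The actual mapping excerpt is shown in Figure 2.

<syntaxhighlight lang="turtle">
@prefix map:  <#> .
@prefix d2rq: <http://www.wiwiss.fu-berlin.de/suhl/bizer/D2RQ/0.1#> .
@prefix roo:  <http://example.org/roo/> .   # stand-in prefix for the Radiation Oncology Ontology

# Map rows of a (hypothetical) Patient table to instances of an ontology class
map:Patient a d2rq:ClassMap ;
    d2rq:dataStorage map:database ;                      # database connection definition omitted
    d2rq:uriPattern "patient/@@Patient.PatientID@@" ;
    d2rq:class roo:Patient .

# Relate each patient to a diagnosis row via a foreign-key join
map:patientHasDiagnosis a d2rq:PropertyBridge ;
    d2rq:belongsToClassMap map:Patient ;
    d2rq:property roo:hasDisease ;
    d2rq:refersToClassMap map:Diagnosis ;
    d2rq:join "Patient.PatientID = Diagnosis.PatientID" .
</syntaxhighlight>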
Uniform resource identifiers (URIs) are used for each entity, enabling the data to be machine-readable and linked with other RDF databases. The mapping process is specific to the structure and content of the ontology being used, in this case the ROO. It relies on the defined classes, properties, and relationships within the ontology to establish the mapping between the SQL table input data and the ontology terminology. While the mapping process is specific to the published ontology, it can potentially be generalized to other clinics or healthcare settings that utilize similar ontologies. The generalizability depends on the extent of similarity and overlap between the ontology being used and the terminologies and concepts employed in other clinics. If the ontologies share similar structures and cover similar clinical domains, the mapping process can be applied with appropriate adjustments to accommodate the specific terminologies and concepts used in the target clinic.
[[File:Fig2 Kapoor JofAppCliMedPhys2023 24-10.jpg|900px]]
{{clear}}
{|
| style="vertical-align:top;" |
{| border="0" cellpadding="5" cellspacing="0" width="900px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" |<blockquote>'''Figure 2.''' Overview of the data mapping between the relational RO-CDW database and the hierarchical graph-based structure based on the defined ontology. The top rectangle displays an example of the various classes of the ontology and their relationships, including the NCI Thesaurus and ICD-10 codes. The bottom rectangle shows the relational database table, and the solid arrows between the top and bottom rectangles display the data mapping.</blockquote>
|-
|}
|}
===Importing data into a knowledge graph-based database===
The output file from the D2RQ mapping step is in Terse RDF Triple Language (Turtle) syntax. This syntax is used for representing data as semantic triples, each comprising a subject, predicate, and object. Each item in the triple is expressed as a Web URI. In order to search data from such formatted datasets, the dataset is imported into an RDF knowledge graph database. An RDF database, also called a triplestore, is a type of graph database that stores RDF triples. The knowledge on a subject is represented in these triple formats consisting of subject, predicate, and object. An RDF knowledge graph can also be defined as a labeled multi-digraph, consisting of a set of nodes, which can be URIs or literals containing raw data, with the edges between these nodes representing the predicates.<ref>{{Cite journal |last=Urbani |first=Jacopo |last2=Jacobs |first2=Ceriel |date=2020-04-20 |title=Adaptive Low-level Storage of Very Large Knowledge Graphs |url=https://dl.acm.org/doi/10.1145/3366423.3380246 |journal=Proceedings of The Web Conference 2020 |language=en |publisher=ACM |place=Taipei Taiwan |pages=1761–1772 |doi=10.1145/3366423.3380246 |isbn=978-1-4503-7023-3}}</ref> The language used to query the data is SPARQL, the SPARQL Protocol and RDF Query Language. A triplestore also contains ontologies that serve as schema models of the database. Although SPARQL adopts various structures of the SQL query language, SPARQL uses navigational approaches on the RDF graphs to query the data, which is quite different from the table-join-based storage and retrieval methods adopted in relational databases. In our work, we utilized the Ontotext GraphDB software<ref>{{Cite web |date=2023 |title=Ontotext - Maximize the Value of Your Data |url=https://www.ontotext.com/ |publisher=ONTOTEXT AD}}</ref> as our RDF store and SPARQL endpoint.
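As an illustration of querying the resulting triplestore, the following hedged sketch issues a cohort query against a GraphDB SPARQL endpoint from Python using the SPARQLWrapper library. The repository URL and the ROO prefix and predicate names are placeholders; only the NCI Thesaurus T1 stage code (C48720) is taken from the text.

<syntaxhighlight lang="python">
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder repository name; GraphDB exposes each repository at /repositories/<name>
endpoint = SPARQLWrapper("http://localhost:7200/repositories/ro-lhs")
endpoint.setReturnFormat(JSON)

# Find patients whose disease has clinical T stage T1 (NCIT code C48720)
endpoint.setQuery("""
PREFIX ncit: <http://ncicb.nci.nih.gov/xml/owl/EVS/Thesaurus.owl#>
PREFIX roo:  <http://example.org/roo/>   # stand-in prefix for the ROO namespace

SELECT ?patient WHERE {
    ?patient roo:hasDisease ?disease .
    ?disease roo:hasTStage  ncit:C48720 .
}
""")

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["patient"]["value"])
</syntaxhighlight>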
===Ontology keyword-based searching tool===
It is common practice amongst healthcare providers to use different medical terms to refer to the same clinical concept. For example, if the user is searching for patient records that mention a “heart attack,” then besides this text word search, they should also search for synonym concepts such as “myocardial infarction,” “acute coronary syndrome,” and so on. Ontologies such as the ''NCI Thesaurus'' list synonym terms for each clinical concept. To provide an effective method to search the graph database, we built an ontology-based keyword search engine that utilizes synonym-based term matching. Another advantage of ontology-based term searching comes from the class parent-child relationships. Ontologies are hierarchical in nature, with the terms in the hierarchy often forming a directed acyclic graph (DAG). For example, if we are searching for patients in our database with clinical stage T1, the matching patient list will only comprise patients that have the T1 stage ''NCI Thesaurus'' code (NCIT: C48720) in the graph database; it will not return any patients with the T1a, T1b, and T1c sub-categories that are children of the parent T1 staging class. We therefore built the search engine so that any clinical term can be searched and matching patient records abstracted based on both parent and child classes.
The method used in this search engine is as follows. When the user wants to use the ontology to query the graph-based medical records, the only inputs necessary are the clinical query terms (q-terms) and an indication of whether synonyms should also be considered while retrieving the patient records. The user has the option to specify how many levels of child classes, and whether parent classes, should be included in the search parameters. The software then connects to the Bioportal database via REST API and performs the search to gather the matching classes for the q-terms and the options specified in the program. Using the list of matching classes, a SPARQL-based query is generated and executed against our patient graph database, and the matching patient list and the q-term-based clinical attributes are returned to the user.
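A hedged sketch of this workflow is shown below: the Bioportal REST search API is queried for classes matching a q-term, and the matched class URIs are folded into a SPARQL query against the patient graph. The endpoint URLs, response field names, and repository name are assumptions, and a Bioportal API key is required.

<syntaxhighlight lang="python">
# Hedged sketch of the q-term workflow: Bioportal class lookup, then SPARQL cohort query.
import requests

BIOPORTAL_SEARCH = "https://data.bioontology.org/search"
API_KEY = "YOUR_BIOPORTAL_API_KEY"                               # placeholder
SPARQL_ENDPOINT = "http://localhost:7200/repositories/ro-lhs"    # assumed GraphDB repository

def find_classes(q_term, ontology="NCIT"):
    """Return URIs of classes whose labels or synonyms match the q-term."""
    resp = requests.get(BIOPORTAL_SEARCH,
                        params={"q": q_term, "ontologies": ontology, "apikey": API_KEY})
    resp.raise_for_status()
    return [item["@id"] for item in resp.json().get("collection", [])]

def find_patients(class_uris):
    """Run a SPARQL query returning patients linked to any of the matched classes."""
    values = " ".join(f"<{uri}>" for uri in class_uris)
    query = f"""
    SELECT DISTINCT ?patient ?cls
    WHERE {{
      VALUES ?cls {{ {values} }}
      ?patient ?anyPredicate ?cls .
    }}"""
    resp = requests.post(SPARQL_ENDPOINT, data={"query": query},
                         headers={"Accept": "application/sparql-results+json"})
    resp.raise_for_status()
    return resp.json()["results"]["bindings"]

matches = find_classes("myocardial infarction")
for binding in find_patients(matches):
    print(binding["patient"]["value"], binding["cls"]["value"])
</syntaxhighlight>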
To find patients that have similar, rather than identical, attributes based on the search parameters, we designed a patient similarity search method. The method employed to identify similar patients based on matching knowledge graph attributes involves the creation of a text corpus by performing breadth-first search (BFS) random walks on each patient's individual knowledge graph. This process allows us to explore the graph structure and extract the necessary information for analysis. Within each patient's knowledge graph, approximately 18−25 categorical features were extracted into the text corpus. It is important to note that the number of features extracted from each patient may vary, as it depends on the available data and the complexity of the patient's profile. These features included the diagnosis; tumour, node, metastasis (TNM) staging; [[histology]]; smoking status; performance status; [[pathology]] details; radiation treatment modality; technique; and toxicity grades.
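The corpus-generation step can be sketched as follows, using a toy adjacency structure in place of a real patient knowledge graph; the node labels and graph content are illustrative only.

<syntaxhighlight lang="python">
# Minimal sketch of corpus generation: a breadth-first walk over one patient's
# graph, emitting the visited node labels as a "sentence" for the embedding models.
from collections import deque

# Toy patient graph: node -> neighbouring nodes (labels stand in for ontology classes)
patient_graph = {
    "patient_P0001": ["diagnosis_prostate_cancer", "radiation_therapy"],
    "diagnosis_prostate_cancer": ["clinical_stage_T1", "gleason_7", "adenocarcinoma"],
    "radiation_therapy": ["IMRT", "fatigue_grade_2"],
    "clinical_stage_T1": [], "gleason_7": [], "adenocarcinoma": [],
    "IMRT": [], "fatigue_grade_2": [],
}

def bfs_sentence(graph, start):
    """Breadth-first traversal returning the sequence of visited node labels."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return order

corpus = [bfs_sentence(patient_graph, "patient_P0001")]   # one sentence per patient
print(corpus[0])
</syntaxhighlight>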
This text corpus is then used to create word embeddings that can be used later to search for similar patients based on similarity and distance metrics. We utilized four vector embedding models, namely Word2Vec<ref>{{Cite journal |last=Mikolov |first=Tomas |last2=Chen |first2=Kai |last3=Corrado |first3=Greg |last4=Dean |first4=Jeffrey |date=2013 |title=Efficient Estimation of Word Representations in Vector Space |url=https://arxiv.org/abs/1301.3781 |journal=arXiv |doi=10.48550/ARXIV.1301.3781}}</ref>, Doc2Vec<ref>{{Cite journal |last=Le |first=Quoc V. |last2=Mikolov |first2=Tomas |date=2014 |title=Distributed Representations of Sentences and Documents |url=https://arxiv.org/abs/1405.4053 |journal=arXiv |doi=10.48550/ARXIV.1405.4053}}</ref>, GloVe<ref>{{Cite journal |last=Pennington |first=Jeffrey |last2=Socher |first2=Richard |last3=Manning |first3=Christopher |date=2014 |title=Glove: Global Vectors for Word Representation |url=http://aclweb.org/anthology/D14-1162 |journal=Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) |language=en |publisher=Association for Computational Linguistics |place=Doha, Qatar |pages=1532–1543 |doi=10.3115/v1/D14-1162}}</ref>, and FastText<ref>{{Cite journal |last=Bojanowski |first=Piotr |last2=Grave |first2=Edouard |last3=Joulin |first3=Armand |last4=Mikolov |first4=Tomas |date=2017-12 |title=Enriching Word Vectors with Subword Information |url=https://direct.mit.edu/tacl/article/43387 |journal=Transactions of the Association for Computational Linguistics |language=en |volume=5 |pages=135–146 |doi=10.1162/tacl_a_00051 |issn=2307-387X}}</ref> to train and generate vector embeddings. The output of word embedding models are vectors, one for each word in the training dictionary, that effectively capture relationships between words. The architecture of these word embedding models is based on a single hidden layer neural network. The description of these models is provided in Appendix A1.
The text corpus used for training is obtained from the Bioportal website, which encompasses NCIT, ICD, and SNOMED codes, as well as class definition text, synonyms, hyponym terms, parent classes, and sibling classes. We scraped 139,916 classes from the Bioportal website using API calls and used this dataset to train our word embedding models. By incorporating this diverse and comprehensive dataset, we aimed to capture the semantic relationships and contextual information relevant to the medical domain. The training process involved iterating over the training dataset for a total of 100 epochs using CPU hardware. During training, the models learned the underlying patterns and semantic associations within the text corpus, enabling them to generate meaningful vector representations for individual words, phrases, or documents. Once the models were trained, we utilized them to generate vector embeddings for the individual patient text corpus that we had previously obtained. These embeddings served as numerical representations of the patient data, capturing the semantic and contextual information contained within the patient-specific text corpus. Cosine similarity, Euclidean distance, Manhattan distance, and Minkowski distance metrics were employed to measure the distance between the matched patients and all patient feature vectors.
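As a hedged sketch of the embedding step, the example below trains a Word2Vec model on a toy corpus with the gensim library, averages the token vectors into one feature vector per patient, and compares two patients with cosine similarity; the hyperparameters and corpus are illustrative, not the values used in this study.

<syntaxhighlight lang="python">
# Illustrative sketch: Word2Vec training and patient-level cosine similarity.
import numpy as np
from gensim.models import Word2Vec

# `corpus` is a list of token lists, e.g., from the BFS walks above (toy data)
corpus = [
    ["prostate_cancer", "stage_T1", "adenocarcinoma", "IMRT", "fatigue_grade_2"],
    ["prostate_cancer", "stage_T2", "adenocarcinoma", "SBRT", "fatigue_grade_1"],
]

model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, epochs=100)

def patient_vector(tokens):
    """Average the word vectors of a patient's tokens into one feature vector."""
    return np.mean([model.wv[t] for t in tokens if t in model.wv], axis=0)

v1, v2 = patient_vector(corpus[0]), patient_vector(corpus[1])
cosine = float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
print(f"Cosine similarity between the two patients: {cosine:.3f}")
</syntaxhighlight>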
Figure 3 shows the design architecture of the software system. The main purpose of this search engine is to provide the users with a simple interface to search the patient records.
[[File:Fig3 Kapoor JofAppCliMedPhys2023 24-10.jpg|1100px]]
{{clear}}
{|
| style="vertical-align:top;" |
{| border="0" cellpadding="5" cellspacing="0" width="1100px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" |<blockquote>'''Figure 3.''' Design architecture for the ontology-based keyword search system. When the user wants to query the patient graph database to retrieve matching records, the only input necessary is the medical terms (q-terms) and an indication to include any synonym, parent, or children terminology classes in the search. The software queries the Bioportal API and retrieves all the matching ''NCI Thesaurus'', ICD-10, and SNOMED CT classes to the q-terms. A SPARQL query is generated and executed on the graph database SPARQL endpoint, and the results indicating the matching patient records and their corresponding data fields are displayed to the user. Our architecture includes the generation of text corpus from breadth-first search (BFS) of individual patient graphs and using word embedding models to generate feature vectors to identify similar patient cohorts.</blockquote>
|-
|}
|}
==Results==
===Mapping data to the ontology===
With the aim of testing out the data pipeline and infrastructure, we used our clinical database, which holds 1,660 patient clinical and dosimetry records. These records are from patients treated with radiotherapy for prostate cancer, non-small cell lung cancer, and small cell lung cancer. There are 35,303 clinical and 12,565 DVH-based data elements stored in our RO-CDW database for these patients. All these data elements were mapped to the ontology using the D2RQ mapping language, resulting in 504,180 RDF tuples. In addition to the raw data, these tuples also defined the interrelationships amongst various defined classes in the dataset. An example of the output RDF tuple file is shown in Figure 4, displaying the patient record relationship with diagnosis, TNM staging, etc. All the entities and predicates in the output RDF file have a URI, which is resolvable as a link for a computer program or human to gather more data on the entities or classes. For example, the RDF viewer would be able to resolve the address <nowiki>http://purl.obolibrary.org/obo/NCIT_48720</nowiki> to gather details on the T stage such as concept definitions, synonyms, relationships with other concepts and classes, etc.
We were able to achieve a mapping completeness of 94.19% between the records in our clinical database and RDF tuples. During the validation process, we identified several ambiguities or inconsistencies in the data housed in the relational database, such as an indication that the Eastern Cooperative Oncology Group (ECOG) instrument was used for performance status evaluation but with missing values for the ECOG performance status score, a record of T stage with the nodal and metastatic stages missing, and the delivered number of fractions missing alongside the prescribed dose information. To maintain data integrity and accuracy, the D2RQ mapping script was designed to drop these values when data were missing, incomplete, or ambiguous. Additionally, the validation process thoroughly examined the interrelationships among the defined classes in the dataset. We verified that the relationships and associations between entities in the RDF tuples accurately reflected the relationships present in the original clinical data. Any discrepancies or inconsistencies found during this analysis were identified and addressed to ensure the fidelity of the mapped data. To evaluate the accuracy of the mapping process, we conducted manual spot checks on a subset of the RDF tuples. This involved randomly selecting samples of RDF tuples and comparing the mapped values to the original data sources. Through these spot checks, we ensured that the mapping process accurately represented and preserved the information from the clinical and dosimetry data during the transformation into RDF tuples. Overall, the validation process provided assurance that the pipeline effectively transformed the clinical and dosimetry data stored in the RO-CDW database into RDF tuples while preserving the integrity, accuracy, and relationships of the original data.
[[File:Fig4 Kapoor JofAppCliMedPhys2023 24-10.jpg|900px]]
{{clear}}
{|
| style="vertical-align:top;" |
{| border="0" cellpadding="5" cellspacing="0" width="900px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" |<blockquote>'''Figure 4.''' Example of the output RDF tuple file.</blockquote>
|-
|}
|}
===Visualization of data in ontology-based graphical format===
Visualizations of ontologies play a key role in helping users understand the structure of the data and work with the dataset and its applications. This is especially appealing when exploring or verifying large and complex collections of data such as ontologies. We utilized the Allegrograph Gruff toolkit<ref>{{Cite web |date=2023 |title=AllegroGraph - Knowledge Graph + LLM Solutions |url=https://allegrograph.com/ |publisher=Franz, Inc}}</ref>, which enables users to create visual knowledge graphs that display data relationships in a clean graphical user interface (GUI). The Gruff toolkit uses simple SPARQL queries to gather the data for rendering the graph with nodes and edges. These visualizations are useful because they increase the users’ understanding of data by instantly illustrating relevant relationships amongst classes and concepts, hidden patterns, and the significance of data to outcomes. Examples of the graph-based visualization for a prostate cancer and a non-small cell lung cancer patient are shown in Figures 5 and 6. Here all the nodes stand for concepts and classes, and the edges represent relationships between these concepts. All the nodes in the graph have URIs that are resolvable as a web link for a computer program or human to gather more data on the entities or classes. The color of each node in the graph visualization is based on the node type, and the inherent properties of each node include the unique system code (e.g., ''NCI Thesaurus'' code or ICD code), synonym terms, definitions, and value type (e.g., string, integer, floating point number). The edges connecting the nodes are defined as properties and stored as predicates in the ontology data file. The use of these predicates enables the computer program to effectively find the queried nodes and their interrelationships. Each of these properties is defined with a URI that is available for gathering more detailed information on the relationship definitions. The left panel in Figures 5 and 6 shows the various property or relationship types that connect the nodes in the graph. Using the SPARQL language and Gruff visualization tools, users can query the data without any prior knowledge of the relational database structure or schema, since these SPARQL queries are based on universally published classes defined in the ''NCI Thesaurus'', Units Ontology, and ICD-10 ontologies.
[[File:Fig5 Kapoor JofAppCliMedPhys2023 24-10.jpg|1100px]]
{{clear}}
{|
| style="vertical-align:top;" |
{| border="0" cellpadding="5" cellspacing="0" width="1100px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" |<blockquote>'''Figure 5.''' Example of the graph structure of a prostate cancer patient record based on the ontology. Each node in the graph are entities that represent objects or concepts and have a unique identifier and can have properties and relationships to other nodes in the graph. These nodes are connected by directed edges, representing relationships between the information, such as the relationship between the diagnosis node and the radiation treatment node. Similarly, there are edges from the diagnosis node to the toxicity node and further to the specific CTCAE toxicity class, indicating that the patient was evaluated for adverse effects after receiving radiation therapy. The different types of edge relationships from the ontology that are used in this example are listed on the left panel of the figure. The right panel shows different types of nodes that are used in the example.</blockquote>
|-
|}
|}
[[File:Fig6 Kapoor JofAppCliMedPhys2023 24-10.jpg|1100px]]
{{clear}}
{|
| style="vertical-align:top;" |
{| border="0" cellpadding="5" cellspacing="0" width="1100px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" |<blockquote>'''Figure 6.''' Example of the graph structure of a non-small cell lung cancer (NSCLC) patient based on the ontology. This has a similar structure to the previous prostate cancer example with NSCLC content. The nodes in green and aqua blue color (highlighted in the right panel) indicate the use of NCI Thesaurus classes to represent the use of standard terminology to define the context for each node present in the graph. For simpler visualization, the NCI Thesaurus codes and URIs are not displayed with this example.</blockquote>
|-
|}
|}
Finally, these SPARQL queries can be used with commonly available programming languages like Python and [[R (programming language)|R]] via REST APIs. We also verified the data returned by the SPARQL queries against SQL queries run on the CDW database to check the accuracy of the mapping; our analysis found no difference in the resultant data from the two query techniques. The main advantage of the SPARQL method is that the data can be queried, based on the universal concepts defined in the ontology, without any prior knowledge of the original data structure. Also, data from multiple sources can be seamlessly integrated into the RDF graph database without the complex data matching techniques and schema modifications currently required with relational databases. This is only possible if all the data stored in the RDF graph database refer to published codes from the commonly used ontologies.
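As an illustration of this cross-check, the sketch below retrieves the same cohort through a SQL query and a SPARQL query and compares the returned identifiers; the table, column, file, and endpoint names are assumptions.

<syntaxhighlight lang="python">
# Hedged sketch: compare the same cohort retrieved from the relational CDW (SQL)
# and from the RDF graph (SPARQL). Names and endpoints are placeholders.
import sqlite3
import requests

SPARQL_ENDPOINT = "http://localhost:7200/repositories/ro-lhs"   # assumed

# SQL side: patients with clinical T stage T1 in the relational warehouse
conn = sqlite3.connect("ro_cdw.db")                              # stand-in for the CDW
sql_ids = {row[0] for row in conn.execute(
    "SELECT patient_id FROM staging WHERE clinical_t_stage = 'T1'")}

# SPARQL side: patients linked to the published T1 staging class
query = """
PREFIX obo: <http://purl.obolibrary.org/obo/>
SELECT DISTINCT ?patient WHERE { ?patient ?p obo:NCIT_48720 . }
"""
resp = requests.post(SPARQL_ENDPOINT, data={"query": query},
                     headers={"Accept": "application/sparql-results+json"})
sparql_ids = {b["patient"]["value"].rsplit("/", 1)[-1]
              for b in resp.json()["results"]["bindings"]}

print("Only in SQL:   ", sql_ids - sparql_ids)
print("Only in SPARQL:", sparql_ids - sql_ids)
</syntaxhighlight>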
===Searching the data using ontology-based keywords===
For effective searching of discrete data from the RDF graph database, we built an ontology-based keyword searching web tool. The public website for this tool is https://hinge-ontology-search.anvil.app. Here we are able to search the database based on keywords (q-terms). The tool is connected to the Bioportal via REST API; it finds the matching classes or concepts and renders the results, including the class name, ''NCI Thesaurus'' code, and definitions. We specifically used the ''NCI Thesaurus'' ontology for our query, which is 112 MB in size and contains approximately 64,000 terms. The search tool can find classes based on synonym term queries, where it matches the q-terms with the listed synonym terms in the classes (Figure 7a). The tool has features to search the child and parent classes of the matching q-term classes. A screenshot of the web tool with the child class search is shown in Figure 7b. The user can also specify the level of search, which indicates whether the returned classes should include children of children. In the example in Figure 7b, the q-term used for searching is “fatigue” while including the child classes up to one level, and the returned classes include the fatigue-based CTCAE class and the grade 1, 2, and 3 fatigue classes. Once all the classes used for searching are found by the tool, it searches the RDF graph database for matching patient cases with these classes. The matching patient list, including the found class in the patient's graph, is displayed to the user. This tool makes it convenient for end users to abstract cohorts of patients that have particular classes or concepts in their records without having to learn and implement the complex SPARQL query language. Based on our evaluation, we found that the average time taken to obtain results is less than five seconds per q-term if there are fewer than five child classes in the query. The maximum time taken was 11 seconds for a q-term that had 16 child classes.
[[File:Fig7 Kapoor JofAppCliMedPhys2023 24-10.jpg|1200px]]
{{clear}}
{|
| style="vertical-align:top;" |
{| border="0" cellpadding="5" cellspacing="0" width="1200px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" |<blockquote>'''Figure 7.''' Screenshot of the ontology-based keyword search portal. '''(A)''' Search performed using two q-terms returns results with definitions of the matching classes from the Bioportal and the corresponding patient records from the RDF graph database. '''(B)''' Search performed to include child class up to one level on the matching q-term class. Returned results display the matching class, child classes with Fatigue CTCAE grades, and matching patient records from the RDF graph database.</blockquote>
|-
|}
|}


For evaluating the patient similarity-based word embedding models, we evaluated the quality of the feature embedding-based vectors using the technique called t-Distributed Stochastic Neighbor Embedding (t-SNE) and cluster analysis with a predetermined number of clusters set to five, based on the diagnosis groups for our patient cohort. Our main objective was to determine the similarity between patient data that are in the same cluster based on their corresponding diagnosis groups. This method can reveal the local and global features encoded by the feature vectors and thus can be used to visualize clusters within the data. We applied t-SNE to all 1,660 patient feature-based vectors produced via the four word embedding models. The t-SNE plot is shown in Figure 8; it shows that the disease data points can be grouped into five clusters with varying degrees of separability and overlap. The analysis of patient similarity using different embedding models revealed interesting patterns. The Word2Vec model showed the highest mean cosine similarity of 0.902, indicating a relatively higher level of similarity among patient embeddings within the five diagnosis groups. In contrast, the Doc2Vec model exhibited a lower mean cosine similarity of 0.637. The GloVe model demonstrated a moderate mean cosine similarity of 0.801, while the FastText model achieved a comparable value of 0.855. Regarding distance metrics, the GloVe model displayed lower mean Euclidean and Manhattan distances, suggesting that patient embeddings derived from this model were more compact and closer in proximity. Conversely, the Doc2Vec, Word2Vec, and FastText models yielded higher mean distances, indicating greater variation and dispersion among the patient embeddings. These findings provide valuable insights into the performance of different embedding models for capturing patient similarity, facilitating improved understanding and decision-making in the clinical domain.


[[File:Fig8 Kapoor JofAppCliMedPhys2023 24-10.jpg|1200px]]
{{clear}}
{|
| style="vertical-align:top;" |
{| border="0" cellpadding="5" cellspacing="0" width="1200px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" |<blockquote>'''Figure 8.''' '''(A)''' Annotation embeddings produced by Word2Vec, Doc2Vec, GloVe, and FastText, a 2D-image of the embeddings projected down to three dimensions using the T-SNE technique. Each point indicates one patient, and color of a point indicates the cohort of the patient based on the diagnosis-based cluster. A good visualization result is that the points of the same color are near each other. '''(B)''' Results of the evaluation metrics used to measure patient similarity. The Word2Vec model had the best cosine similarity, and the GloVe model had the best Euclidean, Manhattan, and Minkowski distance, suggesting that patient embeddings derived from this model were more compact and closer in proximity.</blockquote>
|-
|}
|}
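A hedged sketch of this evaluation step is given below: patient feature vectors are projected to two dimensions with scikit-learn's t-SNE implementation and clustered with k-means into five groups, whose agreement with the diagnosis labels is then scored. The random vectors stand in for the real embeddings.

<syntaxhighlight lang="python">
# Illustrative sketch of the t-SNE projection and five-cluster analysis.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1660, 100))          # placeholder for the patient vectors
diagnosis_labels = rng.integers(0, 5, size=1660)   # placeholder for the five diagnosis groups

coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)

# Agreement between the discovered clusters and the diagnosis groups
print("Adjusted Rand index:", adjusted_rand_score(diagnosis_labels, clusters))
</syntaxhighlight>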
==Discussion==
Despite the availability of many important clinical and imaging databases such as TCIA, TCGA, and NIH data commons, clinical data science researchers still face severe technical challenges in accessing, interpreting, integrating, analyzing, and utilizing the semantic meaning of heterogeneous data and knowledge from these disparately collected and isolated data sources.<ref>{{Cite journal |last=McNutt |first=Todd R. |last2=Bowers |first2=Michael |last3=Cheng |first3=Zhi |last4=Han |first4=Peijin |last5=Hui |first5=Xuan |last6=Moore |first6=Joseph |last7=Robertson |first7=Scott |last8=Mayo |first8=Charles |last9=Voong |first9=Ranh |last10=Quon |first10=Harry |date=2018-10 |title=Practical data collection and extraction for big data applications in radiotherapy |url=https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.12817 |journal=Medical Physics |language=en |volume=45 |issue=10 |doi=10.1002/mp.12817 |issn=0094-2405}}</ref><ref>{{Cite journal |last=Mayo |first=Cs |last2=Phillips |first2=M |last3=McNutt |first3=Tr |last4=Palta |first4=J |last5=Dekker |first5=A |last6=Miller |first6=Rc |last7=Xiao |first7=Y |last8=Moran |first8=Jm |last9=Matuszak |first9=Mm |last10=Gabriel |first10=P |last11=Ayan |first11=As |date=2018-10 |title=Treatment data and technical process challenges for practical big data efforts in radiation oncology |url=https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.13114 |journal=Medical Physics |language=en |volume=45 |issue=10 |doi=10.1002/mp.13114 |issn=0094-2405 |pmc=PMC8082598 |pmid=30226286}}</ref> Even when data are available and accessible, cleaning such data for LHSs remains a formidable task because of inconsistent data formats, syntaxes, notations, and schemas across data sources. This severely hampers the consumption of data and the inherent knowledge stored in these data sources. It also requires the researcher to learn multiple software systems, configurations, and access requirements, which leads to a significant increase in time and complexity for scientific research.
Robust LHSs in radiation oncology require comprehensive clinical and dosimetry data. Furthermore, advanced ML models and AI require high fidelity and high veracity data to improve model performance. Scalable intelligent infrastructure that can provide the data from multiple data sources and can support these models is not yet prevalent.<ref>{{Cite journal |last=Jochems |first=Arthur |last2=Deist |first2=Timo M. |last3=van Soest |first3=Johan |last4=Eble |first4=Michael |last5=Bulens |first5=Paul |last6=Coucke |first6=Philippe |last7=Dries |first7=Wim |last8=Lambin |first8=Philippe |last9=Dekker |first9=Andre |date=2016-12 |title=Distributed learning: Developing a predictive model based on data from multiple hospitals without data leaving the hospital – A real life proof of concept |url=https://linkinghub.elsevier.com/retrieve/pii/S0167814016343365 |journal=Radiotherapy and Oncology |language=en |volume=121 |issue=3 |pages=459–467 |doi=10.1016/j.radonc.2016.10.002}}</ref><ref>{{Cite journal |last=Zerka |first=Fadila |last2=Barakat |first2=Samir |last3=Walsh |first3=Sean |last4=Bogowicz |first4=Marta |last5=Leijenaar |first5=Ralph T. H. |last6=Jochems |first6=Arthur |last7=Miraglio |first7=Benjamin |last8=Townend |first8=David |last9=Lambin |first9=Philippe |date=2020-11 |title=Systematic Review of Privacy-Preserving Distributed Machine Learning From Federated Databases in Health Care |url=https://ascopubs.org/doi/10.1200/CCI.19.00047 |journal=JCO Clinical Cancer Informatics |language=en |issue=4 |pages=184–200 |doi=10.1200/CCI.19.00047 |issn=2473-4276 |pmc=PMC7113079 |pmid=32134684}}</ref> Infrastructures are required to provide an integrated solution to capture data from multiple sources and then structure the data in a knowledge base with semantically interlinked entities for seamless consumption in ML methods. The use of such an infrastructure solution will allow researchers to mine novel associations from multiple heterogeneous, multi-domain sources simultaneously and gather relevant knowledge to provide feedback to clinical providers for obtaining better clinical outcomes for patients on a personalized basis, which will enhance the quality of clinical research. Table 2 provides some comparison metrics between our knowledge graph-based ontology-specific search solution and the traditional relational database-based solution from the various oncology data sources.
{|
| style="vertical-align:top;" |
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="80%"
|-
  | colspan="3" style="background-color:white; padding-left:10px; padding-right:10px;" |'''Table 2.''' Comparison between knowledge graph-based ontology-specific search solution and the traditional relational database-based solution from the various oncology data sources.
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;" |Comparison metrics
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;" |Knowledge graph-based solution
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;" |Relational database-based solution
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Data integration and interlinking
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Efficient integration of data from multiple sources and linking through semantic relationships in the knowledge graph
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Limited ability to integrate and establish relationships between data from different tables in the database
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Data discovery and accessibility
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Enhanced data discoverability and accessibility due to ontology-based indexing and semantic querying
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Relatively limited data discoverability and accessibility through traditional SQL queries
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Semantic enrichment
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Relationships among data fields are established and used for searching for the patient cohort; allows searching for synonym, hyponym terms that are not present in the dataset and gather patients that have similar attributes
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Relationships among data fields need to be manually established; each synonym and hyponym term needs to be manually annotated in the dataset; limited querying flexibility primarily based on structured SQL queries
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Scalability and performance
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Highly scalable with linking new data from future patient encounters and data from other clinical domains; is able to handle complex queries due to optimized knowledge graph traversal methods
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Performance may degrade with large datasets or complex queries due to table joins and indexing limitations
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Data analysis and visualization
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Enables advanced data analytics, visualization, and identification of trends and patterns in patient outcomes through graph-based analysis
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Limited data analysis capabilities and visualization options compared to graph-based analytics
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Data reusability and interoperability
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Supports data reusability and interoperability by adhering to FAIR principles (findable, accessible, interoperable, and reusable)
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Relational databases offer limited data reusability and interoperability without additional integration efforts
|-
|}
|}
Ontologies are used to create a more robust and interoperable LHS. The fundamental advantage of transforming the clinical and dosimetry data into standard ontologies is that it enables the transfer, reuse, and sharing of the patient data and seamless integration with other data sources.<ref>{{Cite journal |last=Kapoor |first=Rishabh |last2=Sleeman |first2=William |last3=Palta |first3=Jatinder |last4=Weiss |first4=Elisabeth |date=2023-03 |title=3D deep convolution neural network for radiation pneumonitis prediction following stereotactic body radiotherapy |url=https://aapm.onlinelibrary.wiley.com/doi/10.1002/acm2.13875 |journal=Journal of Applied Clinical Medical Physics |language=en |volume=24 |issue=3 |pages=e13875 |doi=10.1002/acm2.13875 |issn=1526-9914 |pmc=PMC10018674 |pmid=36546583}}</ref><ref>{{Cite journal |last=Kamdar |first=Maulik R. |last2=Fernández |first2=Javier D. |last3=Polleres |first3=Axel |last4=Tudorache |first4=Tania |last5=Musen |first5=Mark A. |date=2019-09-10 |title=Enabling Web-scale data integration in biomedicine through Linked Open Data |url=https://www.nature.com/articles/s41746-019-0162-5 |journal=npj Digital Medicine |language=en |volume=2 |issue=1 |pages=90 |doi=10.1038/s41746-019-0162-5 |issn=2398-6352 |pmc=PMC6736878 |pmid=31531395}}</ref><ref>{{Cite journal |last=Phillips |first=Mark H. |last2=Serra |first2=Lucas M. |last3=Dekker |first3=Andre |last4=Ghosh |first4=Preetam |last5=Luk |first5=Samuel M.H. |last6=Kalet |first6=Alan |last7=Mayo |first7=Charles |date=2020-04 |title=Ontologies in radiation oncology |url=https://linkinghub.elsevier.com/retrieve/pii/S1120179720300727 |journal=Physica Medica |language=en |volume=72 |pages=103–113 |doi=10.1016/j.ejmp.2020.03.017}}</ref> Their most important advantage is the conversion of data into a knowledge graph. We have shown the process to transform traditional clinical database schemas into a knowledge graph-based database with the use of ontologies. The main advantage of using an ontology-based graph database over a traditional relational database is that relational databases are designed to cater to a particular application and its software requirements, so the stored data are not conducive to clinical research. These databases are not suited to gathering data from multiple data sources when the structure of data, schema, and data types are unknown. On the other hand, ontology-based graph databases are schema-free and designed to store large amounts of data with defined interrelationships and definitions based on universally defined concepts, enabling any clinical researcher to query the data without understanding the inherent data structure and schema used to store data in the database. The ontology structure makes querying the data more intuitive for researchers and clinicians because it matches the logical structure of the domain knowledge.<ref>{{Cite journal |last=Min |first=Hua |last2=Manion |first2=Frank J. |last3=Goralczyk |first3=Elizabeth |last4=Wong |first4=Yu-Ning |last5=Ross |first5=Eric |last6=Beck |first6=J. Robert |date=2009-12 |title=Integration of prostate cancer clinical data using an ontology |url=https://linkinghub.elsevier.com/retrieve/pii/S1532046409000793 |journal=Journal of Biomedical Informatics |language=en |volume=42 |issue=6 |pages=1035–1045 |doi=10.1016/j.jbi.2009.05.007 |pmc=PMC2784120 |pmid=19497389}}</ref>
Each data node in the graph has a unique URI, which is useful for transforming the data according to the FAIR principles: data and information are made findable by assigning a globally unique and persistent identifier to each data field. To make the data accessible, these data can readily be shared with almost no pre- or post-processing requirements. Interoperability can be achieved by using standard ontologies to represent the data, and once the data are shared and merged with data from other domains, they can be reused for multiple applications for the benefit of patients and their care. These approaches enable the use of federated queries, where each hospital maintains its local knowledge graph that represents its specific radiation oncology data but can securely collaborate and gain insights from a collective pool of knowledge without sharing individual patient data. Federated queries involve formulating standardized queries that can be executed across multiple local knowledge graphs simultaneously. These queries leverage the common ontology-based definitions and consistent representation of data structures to retrieve relevant information from each hospital's knowledge graph. By adhering to common ontology terms and relationships, federated queries can effectively integrate data from multiple hospitals, facilitating cross-institutional analysis and knowledge sharing. Traditional methods with AI and ML techniques do not address the issues of data sharing or interoperability amongst multiple systems and institutions. With this approach, hospitals can leverage the collective intelligence within the federated knowledge graph to gain insights, identify patterns, and conduct research without compromising patient privacy and data security. Additionally, ontologies can be used to enhance data analysis by allowing for more precise querying and reasoning over the data. For example, an ontology-based query might retrieve all patients who received a certain type of radiation treatment, while an ontology-based reasoning system might infer that a certain treatment plan parameter or dose constraint is contraindicated for a certain type of cancer.<ref>{{Cite journal |last=Yan |first=Jihong |last2=Wang |first2=Chengyu |last3=Cheng |first3=Wenliang |last4=Gao |first4=Ming |last5=Zhou |first5=Aoying |date=2018-02 |title=A retrospective of knowledge graphs |url=http://link.springer.com/10.1007/s11704-016-5228-9 |journal=Frontiers of Computer Science |language=en |volume=12 |issue=1 |pages=55–74 |doi=10.1007/s11704-016-5228-9 |issn=2095-2228}}</ref>
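As an illustration, a federated query can be expressed with the SPARQL 1.1 SERVICE keyword, as in the hedged sketch below; the remote endpoint URL is a placeholder, and in practice access control and aggregation policies would govern what each site exposes.

<syntaxhighlight lang="python">
# Illustrative sketch of a federated SPARQL query counting a cohort locally
# and at a second site via the SPARQL 1.1 SERVICE keyword.
import requests

LOCAL_ENDPOINT = "http://localhost:7200/repositories/ro-lhs"      # assumed
federated_query = """
PREFIX obo: <http://purl.obolibrary.org/obo/>
SELECT (COUNT(DISTINCT ?p1) AS ?localT1) (COUNT(DISTINCT ?p2) AS ?remoteT1)
WHERE {
  { ?p1 ?x obo:NCIT_48720 . }                                     # local T1 patients
  SERVICE <https://hospital-b.example.org/sparql> {               # placeholder remote endpoint
    ?p2 ?y obo:NCIT_48720 .                                       # remote T1 patients
  }
}
"""
resp = requests.post(LOCAL_ENDPOINT, data={"query": federated_query},
                     headers={"Accept": "application/sparql-results+json"})
print(resp.json()["results"]["bindings"])
</syntaxhighlight>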
Overall, the use of ontologies and graph-based databases increases the semantic interoperability of clinical and dosimetry data in the radiation oncology domain. The overall architecture of the infrastructure is shown in Figure 9. This infrastructure can gather clinical data from EHRs using the HINGE platform, delivery data from the radiation oncology treatment management systems using the FHIR-based interfaces, and data from radiation oncology treatment planning systems using the DICOM data export. All these data are loaded into a common relational database where data mapping based on ontology and standard taxonomy definitions is performed. The mapped data are transformed into the RDF triple format and uploaded into an RDF graph-based database. The ontology-based keyword search program can then be used by clinicians and researchers to query the RDF graph database based on any keyword(s). The software can match the patient records based on the synonyms and hyponyms of the search keywords and provide a list of patient records with an exact match, as well as patients who have similar attributes in their clinical record.
We also analyzed patient similarity using four different embedding models, with the Word2Vec model achieving the highest mean cosine similarity, indicating a higher level of similarity among patient embedding vectors. This suggests that the Word2Vec model captures semantic relationships well, leading to more comparable patient representations. When examining distance metrics, the GloVe model stood out with lower mean Euclidean and Manhattan distances. This indicates that patient embeddings derived from the GloVe model are more compact and closer in proximity, signifying a more clustered distribution of similar patients. The choice of which model is better for an application depends on the specific requirements and priorities. If the ability to capture semantic relationships and identify patients with similar attributes is crucial, the Word2Vec model may be more suitable. Conversely, if compactness and clustering of similar patients are of primary importance, the GloVe model may be preferred. These findings provide valuable insights into the performance and characteristics of the different models, enabling researchers and practitioners to make informed decisions about which model best suits their specific requirements. Our designed search tool is useful for cohort identification and can potentially be used to identify patients and their inherent data for quality measure analysis, comparative effectiveness research, continuous quality improvement, and, most importantly, to support the use, training, and evaluation of ML models directly on streaming clinical data. In the future, we plan to test the scalability of the tool by measuring its performance as the size of the ontology and the number of patients in the database increase. This test can help determine whether the tool can handle large-scale datasets and ontologies. We also plan to perform cross-validation testing, which will assess the tool's ability to generalize to other ontologies and datasets by comparing the results obtained with those from a gold standard.
[[File:Fig9 Kapoor JofAppCliMedPhys2023 24-10.jpg|1100px]]
{{clear}}
{|
| style="vertical-align:top;" |
{| border="0" cellpadding="5" cellspacing="0" width="1100px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" |<blockquote>'''Figure 9.''' Overall architecture of our radiation oncology learning health system (RO-LHS) infrastructure. Here we have the data captured at care delivery from the three data sources and the informatics layer to extract, transform, and load this data based on standard taxonomy and ontologies into the RO-LHS core data repository. This repository is the RDF graph database that stores the data with established definitions and relationships based on the standard terminology and ontology. The data listed in the RO-LHS is made available for subsequent applications such as quality measure analysis, cohort identification, continuous quality improvement, and building ML models that can be applied back to the care delivery to improve care, thus completing the loop for an effective learning health system.</blockquote>
|-
|}
|}
It is important to consider the limitations of this analysis. The analysis is based solely on categorical clinical attributes; other relevant factors, such as DVH scores, which are continuous numerical variables, have not been considered in our patient similarity analysis. This is because the word embedding models require the input features to be included in their dictionary before they can generate vectors, and it is not possible to include all numerical attribute values in the training datasets for the word embedding models. Additionally, the word embedding models and cosine similarity scores have their own limitations and may not capture the full complexity of patient similarity because they do not consider the temporal aspect of the features. These results provide a starting point for exploring patient similarity and can guide further analysis and investigation. It would be valuable to validate the findings using additional patient data, evaluate the clinical significance of attribute variations, and assess the impact of patient similarity on treatment outcomes and prognosis.
As a proof of concept, the RO-LHS infrastructure system described in this paper successfully demonstrates the procedures for gathering data from multiple clinical systems and using ontology-based data integration. With this system, radiation oncology datasets become available in open, semantic, ontology-based formats, helping to facilitate interoperability and the execution of large scientific studies. This system shows that an ontology developed with domain knowledge can be used to integrate semantically based data and knowledge from multiple data sources. In this work, the ontology was constructed by merging the concepts defined in the ROO, ''NCI Thesaurus'', ICD-10, and Units Ontology.
==Appendix==
===Appendix A1===
====Description of word embedding models: Word2Vec, Doc2Vec, GloVe, and FastText====
In [[natural language processing]] (NLP) and text analysis, Word2Vec, Doc2Vec, GloVe, and FastText are popular models. Each model uses a different approach for creating embeddings for words or documents, capturing semantic relationships between words and documents. Here is a brief description of each model and its differences:
*'''Word2Vec''': Word2Vec is one of the most widely used embedding models that represents words as dense vectors in a continuous vector space. It employs two primary architectures: CBOW and Skip-gram. CBOW predicts a target word from its surrounding context words, while Skip-gram predicts the surrounding context words from a target word. Through training on substantial text data, Word2Vec effectively captures semantic relationships between words.
*'''Doc2Vec''': Doc2Vec extends Word2Vec to capture embeddings at the document level. It represents documents, such as paragraphs or entire documents, as continuous vectors in a similar way to how Word2Vec represents individual words. This model architecture is also known as Paragraph Vector, and it learns document representations by incorporating word embeddings and a unique document ID during the training process. This enables the model to capture semantic similarities between different documents.
*'''GloVe''': Global Vectors for Word Representation (GloVe) is another popular model for generating word embeddings. This model uses the global matrix factorization and local context window methods to generate the embeddings. GloVe constructs a co-occurrence matrix based on word-to-word co-occurrence statistics from a large corpus and factorizes this matrix to obtain word vectors. It considers the global statistical information of word co-occurrences, resulting in embeddings that capture both syntactic and semantic relationships between words.
*'''FastText''': FastText is a model developed by Facebook Research that extends the idea of Word2Vec by incorporating information about sub-words. Instead of treating each word as a single entity, FastText model represents words as bags of character n-grams (sub-word units). By considering sub-words, FastText can handle out-of-vocabulary words and capture morphological information. This model enables better representations for rare words, inflections, and compound words. FastText also supports efficient training and retrieval, making it useful for large-scale applications.
In summary, Word2Vec focuses on word-level embeddings, Doc2Vec extends it to capture document-level embeddings, GloVe emphasizes global word co-occurrence statistics, and FastText incorporates sub-word information for enhanced representations. The choice of model depends on the specific task, data characteristics, and requirements of the application at hand.
====Evaluation metrics for measuring patient similarity====
'''Cosine similarity'''
Cosine similarity measures the cosine of the angle between two vectors. It calculates the similarity between vectors irrespective of their magnitudes. The cosine similarity between vectors A and B is computed using the dot product of the vectors divided by the product of their magnitudes:
:<math>{Cosine~similarity} = \frac{A \cdot B}{\left\| A \right\| \times \left\| B \right\|}</math>
'''Euclidean distance'''
Euclidean distance is a popular metric to measure the straight-line distance between two points in Euclidean space. In the context of vector spaces, it calculates the distance between two vectors in terms of their coordinates. The Euclidean distance between vectors A and B with ''n'' dimensions is calculated as:
:<math>{Euclidean~distance} = \sqrt{(A\lbrack 1 \rbrack - B\lbrack 1 \rbrack)^{2} + (A\lbrack 2 \rbrack - B\lbrack 2 \rbrack)^{2} + ... + (A\lbrack n \rbrack - B\lbrack n \rbrack)^{2}}</math>
'''Manhattan distance'''
Manhattan distance—also known as city block distance or L1 distance—measures the sum of the absolute differences between the coordinates of two vectors. It represents the distance traveled along the grid-like paths in a city block. The Manhattan distance between vectors A and B with ''n'' dimensions is calculated as:
:<math>{Manhattan~distance} = |A\lbrack 1 \rbrack - B\lbrack 1 \rbrack| + |A\lbrack 2 \rbrack - B\lbrack 2 \rbrack| + ... + |A\lbrack n \rbrack - B\lbrack n \rbrack|</math>
'''Minkowski distance'''
Minkowski distance is a generalization of both Euclidean and Manhattan distances. It measures the distance between two vectors in terms of their coordinates, with a parameter ''p'' determining the degree of the distance metric. The Minkowski distance between vectors A and B with ''n'' dimensions is calculated as:
:<math>{Minkowski~distance} = \left( |A\lbrack 1 \rbrack - B\lbrack 1 \rbrack|^{p} + |A\lbrack 2 \rbrack - B\lbrack 2 \rbrack|^{p} + ... + |A\lbrack n \rbrack - B\lbrack n \rbrack|^{p} \right)^{1/p}</math>
When ''p'' = 1, it is equivalent to the Manhattan distance, and when ''p'' = 2, it is equivalent to the Euclidean distance.
These metrics provide different ways to quantify the similarity or dissimilarity between vectors, each with its own characteristics and use cases.
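A small worked example, assuming two illustrative feature vectors and SciPy's distance functions, is given below.

<syntaxhighlight lang="python">
# Worked example of the four metrics on two illustrative feature vectors,
# using SciPy's distance functions (scipy.spatial.distance).
import numpy as np
from scipy.spatial import distance

A = np.array([1.0, 0.5, 0.2, 0.9])
B = np.array([0.9, 0.4, 0.3, 1.0])

cosine_similarity = 1.0 - distance.cosine(A, B)      # SciPy returns cosine *distance*
euclidean = distance.euclidean(A, B)                  # p = 2
manhattan = distance.cityblock(A, B)                  # p = 1 (city block / L1)
minkowski_p3 = distance.minkowski(A, B, p=3)          # general p

print(f"Cosine similarity: {cosine_similarity:.3f}")
print(f"Euclidean: {euclidean:.3f}  Manhattan: {manhattan:.3f}  Minkowski(p=3): {minkowski_p3:.3f}")
</syntaxhighlight>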
===Appendix A2===
{|
| style="vertical-align:top;" |
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="60%"
|-
  | colspan="3" style="background-color:white; padding-left:10px; padding-right:10px;" |'''Table A1.''' Key data elements that are used to map between our clinical data warehouse relational database and ontology-based graph database. This table shows some examples of the codes used for the purpose of this mapping. Abbreviations: ICD-10, International Classification of Diseases, Version 10; NCIT, National Cancer Institute Thesaurus; ROO, Radiation Oncology Ontology; UO, Units Ontology.
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;" |Category
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;" |Attribute
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;" |Codes/datatypes
|- 
  | rowspan="7" style="background-color:white; padding-left:10px; padding-right:10px;" |Patient details
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Patient ID
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C16960
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Race
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C17049
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Ethnicity
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C16564
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Date of birth
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C68615
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Date of death
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C70810
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Sex at birth
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Male: NCIT: C16576; Female: NCIT: C20197
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Cause of death
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C99531
|-   
  | rowspan="8" style="background-color:white; padding-left:10px; padding-right:10px;" |Other patient details
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Vital status
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C25717; Alive: NCIT: C37987; Deceased: NCIT: C28554
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Tobacco use history
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C181760; Smoker: NCIT: C67147; Former Smoker: C67148
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Smoking pack years
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: 127063
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Patient height
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C25347
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Patient weight
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: 25208
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Blood pressure
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C54706
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Heart rate
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C49677
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Temperature
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C25206
|-   
  | rowspan="11" style="background-color:white; padding-left:10px; padding-right:10px;" |Diagnosis and staging
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Staging system
  | style="background-color:white; padding-left:10px; padding-right:10px;" |
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Diagnosis
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C15220
|-         
  | style="background-color:white; padding-left:10px; padding-right:10px;" |ICD version
  | style="background-color:white; padding-left:10px; padding-right:10px;" |ICD:10
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |ICD code
  | style="background-color:white; padding-left:10px; padding-right:10px;" |ICD 10 codes (e.g., C61)
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Histology
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Adenocarcinoma: NCIT: C2852; Ductal Carcinoma: NCIT: C36858; etc.
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Clinical TNM staging
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C48881
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Pathological TNM staging
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C48739
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Staging-T
  | style="background-color:white; padding-left:10px; padding-right:10px;" |T1: NCIT: C48720; T2 ...; etc.
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Staging-N
  | style="background-color:white; padding-left:10px; padding-right:10px;" |N0: NCIT: C48705; N1 ...; etc.
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Staging-M
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Mx: NCIT: C48704; M0 ...; etc.
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Biopsy obtained via imaging
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C17369
|-   
  | rowspan="8" style="background-color:white; padding-left:10px; padding-right:10px;" |Prostate-specific elements
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Had prostatectomy
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: 15307
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Prostatectomy margin status
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: 123560
|-       
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Primary Gleason score
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C48603
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Secondary Gleason score
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: 48604
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Tertiary Gleason score
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: 48605
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Total number of prostate tissue cores
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: 148277
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Number of positive cores
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: 148278
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Prostate-specific antigen level
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: 124827
|-   
  | rowspan="3" style="background-color:white; padding-left:10px; padding-right:10px;" |Patient reported outcome
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Patient reported outcome
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: 95401
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |PRO instruments
  | style="background-color:white; padding-left:10px; padding-right:10px;" |EPIC-26: NCIT: C127367; AUA IPSS: NCIT: C84350; IIEF: NCIT: C103521; EPIC-CP: NCIT: C127368; SHIM: NCIT: C138113
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |PRO question response
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Integer
|-     
  | rowspan="2" style="background-color:white; padding-left:10px; padding-right:10px;" |Performance score
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Scoring system
  | style="background-color:white; padding-left:10px; padding-right:10px;" |KPS: NCIT: C28013; ECOG: NCIT: C105721; ZUBROD: NCIT: C25400
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Performance score value
  | style="background-color:white; padding-left:10px; padding-right:10px;" |ECOG 1: NCIT: C105723; KPS 10: NCIT: C105718; etc.
|-       
  | rowspan="3" style="background-color:white; padding-left:10px; padding-right:10px;" |Toxicity reporting
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Coding system
  | style="background-color:white; padding-left:10px; padding-right:10px;" |CTCAE v5: NCIT: C49704; RTOG: NCIT: C19778
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Toxicity measure
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Erectile dysfunction: NCIT: C55615; Fatigue: NCIT: C146753; etc.
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Toxicity grade
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Erectile dysfunction Grade 1: NCIT: C55616; Fatigue Grade 1: NCIT: C55292; etc.
|-   
  | rowspan="3" style="background-color:white; padding-left:10px; padding-right:10px;" |Treatment procedures
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Therapy included in the treatment procedure
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Radiation Therapy: NCIT: C15313; Systemic Therapy: NCIT: C15698; Surgical Procedure: NCIT: C15329; Hormone Therapy: NCIT: C15445
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Agents used—Hormone therapy
  | style="background-color:white; padding-left:10px; padding-right:10px;" |String
|-         
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Drugs used—Chemotherapy
  | style="background-color:white; padding-left:10px; padding-right:10px;" |String
|- 
  | rowspan="11" style="background-color:white; padding-left:10px; padding-right:10px;" |RT treatment course
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Radiation treatment modality
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Photon: NCIT: C88112; Electron: NCIT: C40428; Proton: NCIT: C17024; etc.
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Radiation treatment technique
  | style="background-color:white; padding-left:10px; padding-right:10px;" |IMRT: NCIT: C16135; SBRT: NCIT: C118286; 3D CRT: NCIT: C116035; etc.
|-     
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Target volume
  | style="background-color:white; padding-left:10px; padding-right:10px;" |PTV: NCIT: C82606; CTV: NCIT: C112912; GTV: NCIT: C112913; etc.
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Prescribed radiation dose
  | style="background-color:white; padding-left:10px; padding-right:10px;" |ROO: C100013—Float
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Radiation dose units
  | style="background-color:white; padding-left:10px; padding-right:10px;" |cGy: NCIT: C64693; Gy: NCIT: C18063
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Number of prescribed fractions
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C15654—Float
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Organs at risk—structure
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Bladder: NCIT: C12414; Rectum: NCIT: C12390; Heart: NCIT: 12727; etc.
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Delivered radiation dose
  | style="background-color:white; padding-left:10px; padding-right:10px;" |ROO: C100013—Float
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Number of delivered fractions
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C15654—Float
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Start date of RT course
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Date
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;" |End date of RT course
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Date
|- 
  | rowspan="3" style="background-color:white; padding-left:10px; padding-right:10px;" |Dose volume histogram
  | style="background-color:white; padding-left:10px; padding-right:10px;" |DVH constraint
  | style="background-color:white; padding-left:10px; padding-right:10px;" |NCIT: C112816—String
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |DVH value
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Float
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |DVH value units
  | style="background-color:white; padding-left:10px; padding-right:10px;" |Gy: NCIT: C18063; cGy: NCIT: C64693; %: UO: 0000187
|- 
|}
|}


==Abbreviations, acronyms, and initialisms==


*'''AI''': artificial intelligence
*'''API''': application programming interface
*'''BFS''': breadth-first search
*'''DVH''': dose-volume histogram
*'''ECOG''': Eastern Cooperative Oncology Group
*'''EHR''': electronic health record
*'''ETL''': extract, transform, and load
*'''FAIR''': findable, accessible, interoperable, and reusable
*'''FHIR''': Fast Healthcare Interoperability Resources
*'''GUI''': graphical user interface
*'''HINGE''': Health Information Gateway Exchange
*'''HL7''': Health Level 7
*'''ICD''': International Classification of Diseases
*'''JSON''': JavaScript Object Notation
*'''KPS''': Karnofsky performance status
*'''LHS''': learning health system
*'''ML''': machine learning
*'''NCI''': National Cancer Institute
*'''NCIT''': ''NCI Thesaurus''
*'''NLP''': natural language processing
*'''NSCLC''': non-small cell lung cancer
*'''OAR''': organs at risk
*'''OWL''':  Web Ontology Language
*'''PSA''': prostate-specific antigen
*'''RDF''': Resource Description Framework
*'''RDMS''': relational database management system
*'''REST''': representational state transfer
*'''RO-CDW''': radiation oncology clinical data warehouse
*'''RO-LHS''': radiation oncology learning health system
*'''ROO''': Radiation Oncology Ontology
*'''RT''': radiation treatment or radiation therapy
*'''TMS''': treatment management system
*'''TPS''':  treatment planning system
*'''XML''': Extensible Markup Language


==Acknowledgements==
===Author contributions===
All the authors listed above have made substantial contributions to the design, build, analysis, and implementation of the system described in the manuscript. This work has been jointly carried out by teams from the US Veterans Health Administration and Virginia Commonwealth University. All the authors have made significant contributions in drafting and critically reviewing the manuscript text and figures, and all have approved the final version of the submitted manuscript.


===Conflict of interest===
The authors declare no conflicts of interest.


==References==


Full article title Infrastructure tools to support an effective radiation oncology learning health system
Journal Journal of Applied Clinical Medical Physics
Author(s) Kapoor, Rishabh; Sleeman IV, William C.; Ghosh, Preetam; Palta, Jatinder
Author affiliation(s) Virginia Commonwealth University
Primary contact rishabh dot kapoor at vcuhealth dot org
Year published 2023
Volume and issue 24(10)
Article # e14127
DOI 10.1002/acm2.14127
ISSN 1526-9914
Distribution license Creative Commons Attribution 4.0 International
Website https://aapm.onlinelibrary.wiley.com/doi/10.1002/acm2.14127
Download https://aapm.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/acm2.14127 (PDF)

Abstract

Purpose: The concept of the radiation oncology learning health system (RO‐LHS) represents a promising approach to improving the quality of care by integrating clinical, dosimetry, treatment delivery, and research data in real‐time. This paper describes a novel set of tools to support the development of an RO‐LHS and the current challenges they can address.

Methods: We present a knowledge graph‐based approach to map radiotherapy data from clinical databases to an ontology‐based data repository using FAIR principles. This strategy ensures that the data are easily discoverable, accessible, and can be used by other clinical decision support systems. It allows for visualization, presentation, and analysis of valuable data and information to identify trends and patterns in patient outcomes. We designed a search engine that utilizes ontology‐based keyword searching and synonym‐based term matching that leverages the hierarchical nature of ontologies to retrieve patient records based on parent and children classes, as well as connects to the Bioportal database for relevant clinical attributes retrieval. To identify similar patients, a method involving text corpus creation and vector embedding models (Word2Vec, Doc2Vec, GloVe, and FastText) are employed, using cosine similarity and distance metrics.

Results: The data pipeline and tool were tested with 1,660 patient clinical and dosimetry records, resulting in 504,180 RDF (Resource Description Framework) tuples and visualized data relationships using graph‐based representations. Patient similarity analysis using embedding models showed that the Word2Vec model had the highest mean cosine similarity, while the GloVe model exhibited more compact embeddings with lower Euclidean and Manhattan distances.

Conclusions: The framework and tools described support the development of an RO‐LHS. By integrating diverse data sources and facilitating data discovery and analysis, they contribute to continuous learning and improvement in patient care. The tools enhance the quality of care by enabling the identification of cohorts, clinical decision support, and the development of clinical studies and machine learning (ML) programs in radiation oncology.

Keywords: FAIR, learning health system infrastructure, ontology, Radiation Oncology Ontology, Semantic Web

Background and significance

For the past three decades, there is a growing interest in building learning organizations to address the most pressing and complex business, social, and economic challenges facing society today.[1] For healthcare, the National Academy of Medicine has defined the concept of a learning health system (LHS) as an entity where science, incentive, culture, and informatics are aligned for continuous innovation, with new knowledge capture and discovery as an integral part for practicing evidence-based medicine.[2] The current dependency on randomized controlled clinical trials that use a controlled environment for scientific evidence creation with only a small percent (<3%) of patient samples is inadequate now and may be irrelevant in the future since these trials take too much time, are too expensive, and are fraught with questions of generalizability. The Agency for Healthcare Research and Quality has also been promoting the development of LHSs as part of a key strategy for healthcare organizations to make transformational changes to improve healthcare quality and value. Large-scale healthcare systems are now recognizing the need to build infrastructure capable of continuous learning and improvement in delivering care to patients and address critical population health issues.[3] In an LHS, data collection should be performed from various sources such as electronic health records (EHRs), treatment delivery records, imaging records, patient-generated data records, and administrative and claims data, which then allows for this aggregated data to be analyzed for generating new insights and knowledge that can be used to improve patient care and outcomes.

However, only a few attempts have been made to leverage the infrastructure tools used in routine clinical practice to transform the healthcare domain into an LHS.[4][5] Some examples of actual implementations have emerged, but by and large these concepts have been discussed in the literature mostly as conceptual ideas and strategies. There are several data organization and management challenges that must be addressed in order to effectively implement a radiation oncology LHS:

1. Data integration: Radiation oncology data are generated from a variety of sources, including EHRs, imaging systems, treatment planning systems (TPSs), and clinical trials. Integration of these data into a single repository can be challenging due to differences in data formats, terminologies, and storage systems. There is often significant semantic heterogeneity in the way that different clinicians and researchers use terminology to describe radiation oncology data. For example, different institutions may use different codes or terms to describe the same condition or treatment.
2. Data stored in disparate database schemas: Presently, the EHR, TPS, and treatment management system (TMS) data are housed in a series of relational database management systems (RDMS), which have rigid database structures and varying data schemas and can include large amounts of uncoded textual data. Tumor registries also store data in their own defined schemas. Although the column names in the relational databases of two software products might be the same, the semantic meaning may be completely different depending on how each application uses the data. Changing a database schema requires substantial programming effort and code changes because of the rigid structure of the stored data, and it is generally advisable to retire old tables and build new tables with the added column definitions.
3. Episodic linking of records: Episodic linking of records refers to the process of integrating patient data from multiple encounters or episodes of care into a single comprehensive record. This record includes information about the patient's medical history, diagnosis, treatment plan, and outcomes, which can be used to improve care delivery, research, and education. Linking multiple data sources based on the patient's episodic history of care is quite challenging because these heterogeneous data sources do not normally follow any common data storage standards.
4. Building data query tools based on the semantic meaning of the data: Since the data are currently stored in multiple RDMSs designed to serve the operational aspects of patient care, extracting common semantic meaning from these data is very challenging. Common semantic meaning in healthcare data is typically achieved through the use of standardized vocabularies and ontologies that define concepts and the relationships between them. Developing data query tools based on semantic meaning requires a high level of expertise in both the technical and domain-specific aspects of radiation oncology. Moreover, executing complex data queries, including tree-based, recursive, and derived data queries, requires multiple table join operations in RDMSs, which are costly.

While we are on the cusp of an artificial intelligence (AI) revolution in biomedicine, with the fast-growing development of advanced machine learning (ML) methods that can analyze complex datasets, there is an urgent need for a scalable intelligent infrastructure that can support these methods. The radiation oncology domain is also one of the most technically advanced medical specialties, with a long history of electronic data generation (e.g., radiation treatment (RT) simulation, treatment planning, etc.) that is modeled for each individual patient. This large volume of patient-specific real-world data captured during routine clinical practice, dosimetry, and treatment delivery makes this domain ideally suited for rapid learning.[6] Rapid learning concepts could be applied using an LHS, providing the potential to improve patient outcomes and care delivery, reduce costs, and generate new knowledge from real-world clinical and dosimetry data.

Several research groups in radiation oncology, including the University of Michigan, MD Anderson, and Johns Hopkins, have developed data gathering platforms with specific goals.[4] These platforms—such as the M-ROAR platform[6] at the University of Michigan, the system-wide electronic data capture platform at MD Anderson[7], and the Oncospace program at Johns Hopkins[8]—have been deployed to collect and assess practice patterns, perform outcome analysis, and capture RT-specific data, including dose distributions, organ-at-risk (OAR) information, images, and outcome data. While these platforms serve specific purposes, they rely on relational database-based systems without utilizing standard ontology-based data definitions. However, knowledge graph-based systems offer significant advantages over these relational database-based systems. Knowledge graph-based systems provide a more integrated and comprehensive representation of data by capturing complex relationships, hierarchies, and semantic connections between entities. They leverage ontologies, which define standardized and structured knowledge, enabling a holistic view of the data and supporting advanced querying and analysis capabilities. Furthermore, knowledge graph-based systems promote data interoperability and integration by adopting standard ontologies, facilitating collaboration and data sharing across different research groups and institutions. As such, knowledge graph-based systems are able to help ensure that research data is more findable, accessible, interoperable, and reusable (FAIR).[9]

In this paper, we set out to contribute to the advancement of the science of LHSs by presenting a detailed description of the technical characteristics and infrastructure that were employed to design a radiation oncology LHS specifically with a knowledge graph approach. The paper also describes how we have addressed the challenges that arise when building such a system, particularly in the context of constructing a knowledge graph. The main contributions of our work are as follows:

1. Provides an overview of the sources of data within radiation oncology (EHRs, TPS, TMS) and the mechanism to gather data from these sources in a common database.
2. Maps the gathered data to a standardized terminology and data dictionary for consistency and interoperability. Here we describe the processing layer built for data cleaning, checking for consistency and formatting before the extract, transform, and load (ETL) procedure is performed in a common database.
3. Adds concepts, classes, and relationships from existing NCI Thesaurus and SNOMED CT terminologies to the previously published Radiation Oncology Ontology (ROO) to fill in gaps with missing critical elements in the LHS.
4. Presents a knowledge graph visualization that demonstrates the usefulness of the data, with nodes and relationships for easy understanding by clinical researchers.
5. Develops an ontology-based keyword searching tool that utilizes semantic meaning and relationships to search the RDF knowledge graph for similar patients.
6. Provides a valuable contribution to the field of radiation oncology by describing an LHS infrastructure that facilitates data integration, standardization, and utilization to improve patient care and outcomes.

Material and methods

Gather data from multiple source systems in the radiation oncology domain

The adoption of EHRs in patients' clinical management is rapidly increasing in healthcare, but the use of data from EHRs in clinical research is lagging. The utilization of patient-specific clinical data available in EHRs has the potential to accelerate learning and bring value in several key topics of research, including comparative effectiveness research, cohort identification for clinical trial matching, and quality measure analysis.[10][11] However, there is an inherent lack of interest in the use of data from the EHR for research purposes since the EHR and its data were never designed for research. Modern EHR technology has been optimized for capturing health details for clinical record keeping, scheduling, ordering, and capturing data from external sources such as laboratories, diagnostic imaging, and capturing encounter information for billing purposes.[12] Many data elements collected in routine clinical care, which are critical for oncologic care, are neither collected as structured data elements nor with the same defined rigor as those in clinical trials.[13][14]

Given all these challenges with using data from EHRs, we have designed and built a clinical software application called Health Information Gateway Exchange (HINGE). HINGE is a web-based electronic structured data capture system with electronic data sharing interfaces that use the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard, with the specific goal of collecting accurate, comprehensive, and structured data from EHRs.[15] FHIR is an advanced interoperability standard introduced by the standards development organization HL7. FHIR builds on the previous HL7 standards (versions 2 and 3) and provides a representational state transfer (REST) architecture, with an application programming interface (API) in Extensible Markup Language (XML) and JavaScript Object Notation (JSON) formats. Additionally, there have also been recent regulatory and legislative changes promoting the use of FHIR standards for interoperability and interconnectivity of healthcare systems.[16] HINGE employs FHIR interfaces with EHRs to retrieve required patient details such as demographics; list of allergies; prescribed active medications; vitals; lab results; surgery, radiology, and pathology reports; active diagnoses; referrals; encounters; and survival information. We have described the design and implementation of HINGE in our previous publication.[17] In summary, HINGE is designed to automatically capture and abstract clinical, treatment planning, and delivery data for cancer patients receiving radiotherapy. The system uses disease site-specific "smart" templates to facilitate the entry of relevant clinical information by physicians and clinical staff. The software processes the extracted data for quality and outcome assessment, using well-defined clinical and dosimetry quality measures defined by disease site experts in radiation oncology. The system connects seamlessly to the local IT/medical infrastructure via interfaces and cloud services and provides tools to assess variations in radiation oncology practices and outcomes and to determine gaps in radiotherapy quality delivered by each provider.
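To make the FHIR-based retrieval step more concrete, the following is a minimal sketch of how such queries can be issued from Python. The endpoint URL, patient identifier, and the absence of authentication are assumptions for illustration only; this is not the HINGE implementation itself.

<syntaxhighlight lang="python">
import requests

# Hypothetical FHIR R4 endpoint; a production system would point at the EHR vendor's
# FHIR server and supply OAuth2 credentials.
FHIR_BASE = "https://ehr.example.org/fhir"
PATIENT_ID = "12345"  # illustrative identifier

def get_bundle(resource_type, params):
    """Search a FHIR resource type and return the result bundle as JSON."""
    response = requests.get(
        f"{FHIR_BASE}/{resource_type}",
        params=params,
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Demographics, active medications, and laboratory observations for one patient
demographics = get_bundle("Patient", {"_id": PATIENT_ID})
medications = get_bundle("MedicationRequest", {"patient": PATIENT_ID, "status": "active"})
labs = get_bundle("Observation", {"patient": PATIENT_ID, "category": "laboratory"})

for entry in labs.get("entry", []):
    obs = entry["resource"]
    print(obs["code"]["text"], obs.get("valueQuantity", {}).get("value"))
</syntaxhighlight>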

We created a data pipeline from HINGE to export discrete data in a JSON-based format. These data are then fed to the extract, transform, and load (ETL) processor. An overview of the data pipeline is shown in Figure 1. ETL is a three-step process in which the data are first extracted, then transformed (i.e., cleaned and formatted), and finally loaded into an output radiation oncology clinical data warehouse (RO-CDW) repository. Since HINGE templates do not function as case report forms and are formatted around an operational data structure, the data cleaning step performs basic preprocessing, including cleaning, checking for redundancy in the dataset, and ignoring null values while making sure each data element has its supporting data elements populated. As there are several types of datasets, each dataset requires a different type of cleaning; therefore, multiple data cleaning scripts have been prepared. The following outlines some of the checks performed by the cleaning scripts (a minimal sketch of how such checks can be scripted follows the list).

1. Data type validation: We verified whether the column values were in the correct data types (e.g., integer, string, float). For instance, the “Performance Status Value” column in a patient record should be an integer value.
2. Cross-field consistency check: Some fields require other column values to validate their content. For example, the “Radiotherapy Treatment Start Date” should not be earlier than the “Date of Diagnosis.” We conducted a cross-field validation check to ensure that such conditions were met.
3. Mandatory element check: Certain columns in the input data file cannot be empty, such as “Patient ID Number” and “RT Course ID” in the dataset. We performed a mandatory field check to ensure that these fields were properly filled.
4. Range validation: This check ensures that the values fall within an acceptable range. For example, the “Marital Status” column should contain values between 1 and 9.
5. Format check: We verified the format of data values to ensure that they were consistent with the expected year-month-day (YYYYMMDD) format.
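The sketch below illustrates how the five kinds of checks above could be scripted with pandas. The column names ("PerformanceStatusValue", "RTStartDate", etc.) and the input file are hypothetical stand-ins; the production cleaning scripts are more extensive.

<syntaxhighlight lang="python">
import pandas as pd

df = pd.read_json("hinge_export.json")  # hypothetical HINGE JSON export
errors = []

# 1. Data type validation: performance status must be an integer value
ps = pd.to_numeric(df["PerformanceStatusValue"], errors="coerce")
errors.extend(f"Row {i}: non-integer performance status"
              for i in df.index[ps.isna() | (ps % 1 != 0)])

# 2. Cross-field consistency: RT start date must not precede the date of diagnosis
start = pd.to_datetime(df["RTStartDate"], format="%Y%m%d", errors="coerce")
dx = pd.to_datetime(df["DateOfDiagnosis"], format="%Y%m%d", errors="coerce")
errors.extend(f"Row {i}: RT start before diagnosis" for i in df.index[start < dx])

# 3. Mandatory element check: patient and course identifiers cannot be empty
for col in ("PatientID", "RTCourseID"):
    errors.extend(f"Row {i}: missing {col}" for i in df.index[df[col].isna()])

# 4. Range validation: marital status codes restricted to 1-9
errors.extend(f"Row {i}: marital status out of range"
              for i in df.index[~df["MaritalStatus"].between(1, 9)])

# 5. Format check: dates must parse as YYYYMMDD (NaT after coercion signals a bad format)
errors.extend(f"Row {i}: malformed RT start date" for i in df.index[start.isna()])

print("\n".join(errors) if errors else "All checks passed")
</syntaxhighlight>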


Fig1 Kapoor JofAppCliMedPhys2023 24-10.jpg

Figure 1. Overview of the data pipeline to gather clinical data into the radiation oncology clinical data warehouse (RO-CDW). As part of this pipeline, we have built HL7/FHIR interfaces between the EHR system and HINGE database to gather pertinent information from the patient's chart. These data are stored in the HINGE database and used to auto-populate disease-site-specific smart templates that depict the clinical workflow from initial consultation to follow-up care. The providers record their clinical assessments in these templates as part of their routine clinical care. Once the templates are finalized and signed by the providers in HINGE, the data are exported in JSON format, and using an ETL process, we can load the data in our RO-CDW's relational SQL database. Additionally, we use SQL stored procedures to extract, transform, and load data from the Varian Aria data tables and extraction of dosimetry dose-volume histogram (DVH) curves to our RO-CDW.

The main purpose of this step is to ensure that the dataset is of high quality and fidelity when loaded in RO-CDW. In the data loading process, we have written SQL and .Net-based scripts to transform the data into RO-CDW-compatible schema and load them into a Microsoft SQL Server 2016 database. When the data are populated, unique identifiers are assigned to each data table entry, and interrelationships are maintained within the tables so that the investigators can use query tools to query and retrieve the data, identify patient cohorts, and analyze the data.

We have deployed a free, open-source, and lightweight DICOM server known as Orthanc[18] to collect DICOM-RT datasets from any commercial TPS. Orthanc is a simple yet powerful standalone DICOM server designed to support research and provide query/retrieve functionality for DICOM datasets. Orthanc provides a RESTful API that can be used from virtually any programming language, allowing the DICOM tags stored in the datasets to be downloaded in JSON format. We used the Python plug-in to connect with the Orthanc database and extract the relevant tag data from the DICOM-RT files. Orthanc was able to connect seamlessly with the Varian Eclipse planning system using the DICOM DIMSE C-STORE protocol.[19] Since the TPS conforms to the specifications listed under the Integrating the Healthcare Enterprise-Radiation Oncology (IHE-RO) profile, the DICOM-RT datasets contained all the relevant tags required for data extraction.
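As a brief illustration of this kind of tag extraction, the sketch below walks the stored instances through Orthanc's REST API and pulls their DICOM tags as JSON. It assumes a default Orthanc installation listening on http://localhost:8042 with no authentication; host, port, and credentials would differ in practice.

<syntaxhighlight lang="python">
import requests

ORTHANC = "http://localhost:8042"  # default Orthanc REST endpoint (adjust as needed)

# List the Orthanc identifiers of all stored DICOM instances
instance_ids = requests.get(f"{ORTHANC}/instances", timeout=30).json()

rtstruct_files, rtdose_files = [], []
for instance_id in instance_ids:
    # "simplified-tags" returns the instance's DICOM tags as a flat JSON dictionary
    tags = requests.get(f"{ORTHANC}/instances/{instance_id}/simplified-tags", timeout=30).json()
    modality = tags.get("Modality")
    if modality == "RTSTRUCT":
        rtstruct_files.append((instance_id, tags.get("StructureSetLabel")))
    elif modality == "RTDOSE":
        rtdose_files.append((instance_id, tags.get("DoseUnits")))

print(f"Found {len(rtstruct_files)} RTSTRUCT and {len(rtdose_files)} RTDOSE instances")
</syntaxhighlight>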

One of the major challenges with examining patients' DICOM-RT data is the lack of standardized organs at risk (OAR) and target names, as well as ambiguity regarding dose-volume histogram metrics and multiple prescriptions spread across several treatment techniques. To overcome these challenges, the AAPM TG-263 initiative has published its recommendations on OAR and target nomenclature. The ETL user interface deploys this standardized nomenclature and requires the importer of the data to match each OAR with its corresponding standard OAR and target name. In addition, the program suggests a matching name based on an automated relabeling process using our published techniques (OAR labels[20], radiomics features[21], and geometric information[22]). We find that these automated approaches provide acceptable accuracy for the standard prostate and lung structure types. To gather the dose-volume histogram data from the DICOM-RT dose and structure set files, we have deployed a DICOM-RT dosimetry parser software. If the DICOM-RT dose file exported by the TPS contains dose-volume histogram (DVH) information, we utilize it. However, if the file lacks this information, we employ our dosimetry parser software to calculate the DVH values from the dose and structure set volume information.
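Conceptually, a cumulative DVH reduces to accumulating the dose values of the voxels inside a structure. The snippet below is a deliberately simplified sketch, assuming a uniform voxel volume and a binary structure mask already resampled onto the dose grid; it is not the group's dosimetry parser.

<syntaxhighlight lang="python">
import numpy as np

def cumulative_dvh(dose_grid, structure_mask, bin_width=0.1):
    """Compute a cumulative DVH (% volume vs. dose in Gy) for one structure.

    dose_grid      : 3D array of dose values in Gy
    structure_mask : boolean array of the same shape marking voxels inside the structure
    Assumes uniform voxel volume and a mask resampled onto the dose grid.
    """
    doses = dose_grid[structure_mask]
    bins = np.arange(0.0, doses.max() + bin_width, bin_width)
    # Fraction of the structure volume receiving at least each dose level
    volume_pct = np.array([(doses >= d).mean() * 100.0 for d in bins])
    return bins, volume_pct

# Toy example: derive V70Gy- and D95%-style metrics from the curve
rng = np.random.default_rng(0)
dose = rng.uniform(0, 80, size=(60, 60, 60))   # toy dose grid
mask = np.zeros_like(dose, dtype=bool)
mask[20:40, 20:40, 20:40] = True
bins, vol = cumulative_dvh(dose, mask)
v70 = vol[np.searchsorted(bins, 70.0)]          # % volume receiving >= 70 Gy
d95 = bins[np.argmax(vol <= 95.0)]              # dose covering 95% of the volume
print(f"V70Gy = {v70:.1f}%, D95% = {d95:.1f} Gy")
</syntaxhighlight>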

Mapping data to standardized terminology, data dictionary, and use of Semantic Web technologies

For data to be interoperable, sharable outside a single hospital environment, and reusable for the various requirements of an LHS, the use of a standardized terminology and data dictionary is a key requirement. Specifically, clinical data should be transformed following the FAIR data principles.[9] An ontology describes a domain of classes and is defined as a conceptual model of knowledge representation. The use of ontologies and Semantic Web technologies plays a key role in transforming healthcare data to be compatible with the FAIR principles. The use of ontologies enables the sharing of information between disparate systems across multiple clinical domains. An ontology acts as a layer above the standardized data dictionary and terminology in which explicit relationships, that is, predicates, are established between unique entities. Ontologies provide formal definitions of the clinical concepts used in the data sources and make explicit the implicit meaning of the relationships among the different vocabularies and terminologies of those sources. For example, it can be determined whether two classes or data items found in different clinical databases are equivalent or whether one is a subset of the other. Semantic-level information extraction and querying are possible only with ontology-based data mapping.

A rapid way to look for new information on the internet is to use a search engine such as Google. These search engines return a list of suggested web pages devoid of context and semantics and require human interpretation to find useful information. The Semantic Web is a core technology used to organize and search for specific contextual information on the web. The Semantic Web, also known as Web 3.0, is an extension of the current World Wide Web (WWW) via a set of W3C data standards[23], with the goal of making internet data machine-readable rather than only human-readable. For automatic processing of information by computers, Semantic Web extensions enable data (e.g., text, metadata on images, videos, etc.) to be represented with well-defined data structures and terminologies. To enable the encoding of semantics with the data, web technologies such as the Resource Description Framework (RDF), the Web Ontology Language (OWL), and SPARQL (the SPARQL Protocol and RDF Query Language) are used. RDF is a standard for sharing data on the web.

We utilized an existing ontology known as the Radiation Oncology Ontology (ROO)[24], available on the NCBO Bioportal website.[25] The main role of the ROO is to provide broad coverage of the main concepts used in the radiation oncology domain. The ROO currently consists of 1,183 classes with 211 predicates that are used to establish relationships between these classes. Upon inspection of this ontology, we noticed that the collection of classes and properties was missing some critical clinical elements such as smoking history, CTCAE v5 toxicity scores, diagnostic measures such as Gleason scores, prostate-specific antigen (PSA) levels, patient-reported outcome measures, Karnofsky performance status (KPS) scales, and radiation treatment modality. We used the ontology editor tool Protégé[26] to add these key classes and properties to the updated ontology file. We reused entries from other published ontologies such as the National Cancer Institute's NCI Thesaurus[27], the International Classification of Diseases version 10 (ICD-10)[28], and DBpedia[29]. We added 216 classes (categories defined in Table 1) with 19 predicate elements to the ROO. With over 100,000 terms, the NCI Thesaurus provides wide coverage of cancer terms, as well as mappings to external terminologies. The NCI Thesaurus is a product of the NCI Enterprise Vocabulary Services (EVS), and its vocabularies consist of public information on cancer, including definitions, synonyms, and other information on almost 10,000 cancers and related diseases, 17,000 single agents and related substances, as well as other topics associated with cancer. The list of high-level data categories, elements, and codes utilized in our work is included in the appendix (Appendix A2).

Table 1. Additional classes added to the Radiation Oncology Ontology (ROO) and used for mapping with our dataset.
{| class="wikitable"
|-
! Categories !! # of classes
|-
| Race, ethnicity || 5
|-
| Tobacco use || 4
|-
| Blood pressure + vitals || 3
|-
| Laboratory tests (e.g., creatinine, GFR, etc.) || 20
|-
| Prostate-specific diagnostic tests (e.g., Gleason score, PSA, etc.) || 10
|-
| Patient-reported outcome || 8
|-
| CTCAE v5 || 152
|-
| Therapeutic procedures (e.g., immunotherapy, targeted therapy, etc.) || 6
|-
| Radiation treatment modality (e.g., photon, electron, proton, etc.) || 7
|-
| Units (cGy) || 1
|}

To use and validate the defined ontology, we mapped our data housed in the clinical data warehouse relational database with the concepts and relationships listed in the ontology. This mapping process linked each component (e.g., column headers, values) of the SQL relational database to its corresponding clinical concept (e.g., classes, relationships, and properties) in the ontology. To perform the mapping, the SQL database tables are analyzed and matched with the relevant concepts and properties in the ontology. This can be achieved by identifying the appropriate classes and relationships that best represent the data elements from the SQL relational database. For example, if the SQL relational table provides information about a patient's smoking history, the mapping process would identify the corresponding class or property in the ontology that represents smoking history.

A correspondence between the table columns in the relational database and ontology entities was established using the D2RQ mapping script. An example of this mapping script is shown in Figure 2. With the D2RQ mapping script, individual table columns in the relational database schema were mapped to RDF ontology-based codes. The mapping script is executed by the D2RQ platform, which connects to the SQL database, reads the schema, performs the mapping, and generates the output file in Turtle syntax. Each SQL table column name is mapped to its corresponding class using the d2rq:ClassMap command. These classes are also mapped to existing ontology-based concept codes such as NCIT:C48720 for T1 staging. To define the relationship between two classes, the d2rq:refersToClassMap command is used. The properties of the different classes are defined using the d2rq:PropertyBridge command.
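For illustration only, a minimal D2RQ mapping fragment of the kind described above might look like the following. It is written here as a Python string (to keep the examples in this article in one language) that is saved as a Turtle file for the D2RQ platform; the table names, column names, connection string, and the "has clinical T stage" predicate are hypothetical and are not taken from the authors' actual mapping file.

<syntaxhighlight lang="python">
# Hypothetical fragment of a D2RQ mapping file (Turtle syntax), written out so the
# D2RQ platform can read it and dump the relational data as RDF.
D2RQ_MAPPING = """
@prefix d2rq: <http://www.wiwiss.fu-berlin.de/suhl/bizer/D2RQ/0.1#> .
@prefix map:  <#> .
@prefix obo:  <http://purl.obolibrary.org/obo/> .

map:database a d2rq:Database ;
    d2rq:jdbcDSN "jdbc:sqlserver://localhost;databaseName=RO_CDW" ;  # illustrative connection string
    d2rq:username "user" ; d2rq:password "secret" .

# Each row of the Patient table becomes a resource typed with the NCIT 'Patient' concept
map:Patient a d2rq:ClassMap ;
    d2rq:dataStorage map:database ;
    d2rq:uriPattern "patient/@@Patient.PatientID@@" ;
    d2rq:class obo:NCIT_C16960 .

# A diagnosis column whose values are already NCIT URIs (e.g., obo:NCIT_C48720 for T1)
map:patient_tstage a d2rq:PropertyBridge ;
    d2rq:belongsToClassMap map:Patient ;
    d2rq:property obo:NCIT_C48885 ;      # hypothetical 'has clinical T stage' predicate
    d2rq:uriColumn "Diagnosis.TStageURI" .
"""

with open("ro_cdw_mapping.ttl", "w") as f:
    f.write(D2RQ_MAPPING)
# The D2RQ dump-rdf tool can then be run against this mapping to emit the Turtle output file.
</syntaxhighlight>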

Uniform Resource Identifiers (URIs) are used for each entity to make the data machine-readable and to allow linking with other RDF databases. The mapping process is specific to the structure and content of the ontology being used, in this case the ROO. It relies on the defined classes, properties, and relationships within the ontology to establish the mapping between the input SQL table data and the ontology terminology. While the mapping process is specific to the published ontology, it can potentially be generalized to other clinics or healthcare settings that utilize similar ontologies. The generalizability depends on the extent of similarity and overlap between the ontology being used and the terminologies and concepts employed in other clinics. If the ontologies share similar structures and cover similar clinical domains, the mapping process can be applied with appropriate adjustments to accommodate the specific terminologies and concepts used in the target clinic.


Fig2 Kapoor JofAppCliMedPhys2023 24-10.jpg

Figure 2. Overview of the data mapping between the relational RO-CDW database and the hierarchical graph-based structure based on the defined ontology. The top rectangle displays an example of the various classes of the ontology and their relationships, including the NCI Thesaurus and ICD-10 codes. The bottom rectangle shows the relational database table, and the solid arrows between the top and bottom rectangles display the data mapping.

Importing data into a knowledge graph-based database

The output file from the D2RQ mapping step is in Terse RDF Triple Language (Turtle) syntax. This syntax is used for representing data as semantic triples, each comprising a subject, predicate, and object. Each item in the triple is expressed as a web URI. To search data from such formatted datasets, the dataset is imported into an RDF knowledge graph database. An RDF database, also called a triplestore, is a type of graph database that stores RDF triples. The knowledge on a subject is represented in these triple formats consisting of subject, predicate, and object. An RDF knowledge graph can also be defined as a labeled multi-digraph, consisting of a set of nodes, which can be URIs or literals containing raw data, with the edges between these nodes representing the predicates.[30] The language used to query the data is SPARQL, the query language for RDF. The triplestore also contains the ontologies, which serve as schema models of the database. Although SPARQL adopts various structures of the SQL query language, SPARQL uses navigational approaches on the RDF graphs to query the data, which is quite different from the table-join-based storage and retrieval methods adopted in relational databases. In our work, we utilized the Ontotext GraphDB software[31] as our RDF store and SPARQL endpoint.
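As a hedged illustration of this querying step, a cohort query could be issued against a GraphDB repository's SPARQL endpoint from Python roughly as follows. The repository name and graph structure are assumptions for the example; the query simply matches any predicate linking a patient to the NCIT T1 concept.

<syntaxhighlight lang="python">
from SPARQLWrapper import SPARQLWrapper, JSON

# GraphDB exposes each repository as a SPARQL endpoint; the repository name is illustrative.
endpoint = SPARQLWrapper("http://localhost:7200/repositories/ro-lhs")
endpoint.setReturnFormat(JSON)

# Retrieve every subject linked (by any predicate) to the NCIT T1 staging concept (C48720)
endpoint.setQuery("""
PREFIX obo: <http://purl.obolibrary.org/obo/>
SELECT DISTINCT ?patient WHERE {
    ?patient ?hasTStage obo:NCIT_C48720 .
}
""")

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["patient"]["value"])
</syntaxhighlight>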

Ontology keyword-based searching tool

It is common practice amongst healthcare providers to use different medical terms to refer to the same clinical concept. For example, if the user is searching for patient records that mention a “heart attack,” then besides this text word search, they should also search for synonym concepts such as “myocardial infarction,” “acute coronary syndrome,” and so on. Ontologies such as the NCI Thesaurus list synonym terms for each clinical concept. To provide an effective method to search the graph database, we built an ontology-based keyword search engine that utilizes synonym-based term matching. Another advantage of ontology-based term searching comes from the parent-child relationships between classes. Ontologies are hierarchical in nature, with the terms in the hierarchy often forming a directed acyclic graph (DAG). For example, if we are searching for patients in our database with clinical stage T1, the matching patient list will only comprise patients that have the T1 stage NCI Thesaurus code (NCIT: C48720) in the graph database. Such a search will not return any patients with the T1a, T1b, or T1c subcategories, which are children of the parent T1 staging class. We therefore built the search engine so that a search on any clinical term retrieves matching patient records based on both parent and child classes.

The method used in this search engine is as follows. When the user wants to use the ontology to query the graph-based medical records, the only input necessary is the clinical query terms (q-terms) and an indication of whether synonyms should also be considered while retrieving the patient records. The user has the option to specify the number of levels of child classes to search and whether parent classes should be included in the search parameters. The software then connects to the Bioportal database via its REST API and performs the search to gather the classes matching the q-terms and the options specified in the program. Using the list of matching classes, a SPARQL-based query is generated and executed against our patient graph database, and the matching patient list and the q-term-based clinical attributes are returned to the user.
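A minimal sketch of the Bioportal lookup step is shown below. The q-term, the restriction to NCIT, and the one-level child expansion are illustrative choices; a free BioPortal API key is required, shown here as a placeholder.

<syntaxhighlight lang="python">
import requests
from urllib.parse import quote

BIOPORTAL = "https://data.bioontology.org"
API_KEY = "YOUR_BIOPORTAL_API_KEY"  # the BioPortal REST API requires a (free) API key

def bioportal_get(path, **params):
    """Call a BioPortal REST endpoint and return the parsed JSON."""
    params["apikey"] = API_KEY
    r = requests.get(f"{BIOPORTAL}{path}", params=params, timeout=30)
    r.raise_for_status()
    return r.json()

# Match the q-term against NCIT classes, including synonym matches
hits = bioportal_get("/search", q="myocardial infarction",
                     ontologies="NCIT", include="prefLabel,synonym")
matching = [h["@id"] for h in hits["collection"]]

# Expand one level of child classes for the top matching class
children = []
if matching:
    encoded = quote(matching[0], safe="")
    page = bioportal_get(f"/ontologies/NCIT/classes/{encoded}/children")
    children = [c["@id"] for c in page["collection"]]

print(f"{len(matching)} matching classes, {len(children)} child classes for the top hit")
</syntaxhighlight>

The class URIs gathered this way can then be interpolated into the generated SPARQL query (for example, through a VALUES clause) to pull the matching patient records from the graph database.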

To find patients that have similar, rather than identical, attributes based on the search parameters, we have designed a patient similarity search method. The method employed to identify similar patients based on matching knowledge graph attributes involves the creation of a text corpus by performing breadth-first search (BFS) random walks on each patient's individual knowledge graph. This process allows us to explore the graph structure and extract the necessary information for analysis. Within each patient's knowledge graph, approximately 18−25 categorical features were extracted into the text corpus. It is important to note that the number of features extracted from each patient may vary, as it depends on the available data and the complexity of the patient's profile. These features included the diagnosis; tumor, node, metastasis (TNM) staging; histology; smoking status; performance status; pathology details; radiation treatment modality; technique; and toxicity grades.
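A simplified sketch of the corpus-building step is shown below. The graph is represented as a plain adjacency dictionary, and the node labels are illustrative; the actual patient graphs are the RDF structures described above, not this toy example.

<syntaxhighlight lang="python">
from collections import deque

def bfs_tokens(graph, start):
    """Breadth-first walk over a patient's knowledge graph, collecting node labels as tokens.

    graph : dict mapping a node label to a list of neighboring node labels
    start : label of the patient node to start from
    """
    visited, tokens, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        tokens.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return tokens

# Toy patient graph with a handful of categorical attributes
patient_graph = {
    "patient_001": ["prostate_carcinoma", "stage_T1c", "IMRT", "gleason_7"],
    "prostate_carcinoma": ["adenocarcinoma"],
    "stage_T1c": [], "IMRT": [], "gleason_7": [], "adenocarcinoma": [],
}

corpus = [bfs_tokens(patient_graph, "patient_001")]  # one token list ("sentence") per patient
print(corpus[0])
</syntaxhighlight>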

This text corpus is then used to create word embeddings that can later be used to search for similar patients based on similarity and distance metrics. We utilized four vector embedding models, namely Word2Vec[32], Doc2Vec[33], GloVe[34], and FastText[35], to train and generate vector embeddings. The output of a word embedding model is a set of vectors, one for each word in the training dictionary, that effectively capture relationships between words. The architecture of these word embedding models is based on a single hidden layer neural network. A description of these models is provided in Appendix A1.

The text corpus used for training is obtained from the Bioportal website, which encompasses NCIT, ICD, and SNOMED codes, as well as class definition text, synonyms, hyponyms terms, parent classes, and sibling classes. We scraped 139,916 classes from the Bioportal website using API calls and used this dataset to train our word embedding models. By incorporating this diverse and comprehensive dataset, we aimed to capture the semantic relationships and contextual information relevant to the medical domain. The training process involved iterating over the training dataset for a total of 100 epochs using CPU hardware. During training, the models learned the underlying patterns and semantic associations within the text corpus, enabling them to generate meaningful vector representations for individual words, phrases, or documents. Once the models were trained, we utilized them to generate vector embeddings for the individual patient text corpus that we had previously obtained. These embeddings served as numerical representations of the patient data, capturing the semantic and contextual information contained within the patient-specific text corpus. The Cosine similarity, Euclidean distance, Manhattan distance, and Minkowski distance metrics are employed to measure the distance between the matched patients and all patient feature vectors.
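The snippet below is a minimal sketch of the embedding and similarity step using gensim. For brevity it trains directly on a tiny toy corpus, whereas the models described above were trained on the BioPortal-derived corpus and then applied to each patient's token list; the hyperparameters are illustrative, and averaging token vectors into a single patient vector is one simple pooling choice rather than necessarily the authors' exact method.

<syntaxhighlight lang="python">
import numpy as np
from gensim.models import Word2Vec

# One token list ("sentence") per patient; see the BFS sketch above.
corpus = [
    ["prostate_carcinoma", "stage_T1c", "IMRT", "gleason_7"],
    ["prostate_carcinoma", "stage_T2a", "SBRT", "gleason_6"],
    ["nsclc", "stage_T2", "IMRT", "adenocarcinoma"],
]

model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, epochs=100)

def patient_vector(tokens):
    """Average the word vectors of a patient's tokens into a single feature vector."""
    return np.mean([model.wv[t] for t in tokens if t in model.wv], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vec_a, vec_b, vec_c = (patient_vector(p) for p in corpus)
print("prostate vs. prostate:", cosine(vec_a, vec_b))
print("prostate vs. lung:    ", cosine(vec_a, vec_c))
</syntaxhighlight>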

Figure 3 shows the design architecture of the software system. The main purpose of this search engine is to provide the users with a simple interface to search the patient records.


Fig3 Kapoor JofAppCliMedPhys2023 24-10.jpg

Figure 3. Design architecture for the ontology-based keyword search system. When the user wants to query the patient graph database to retrieve matching records, the only input necessary is the medical terms (q-terms) and an indication to include any synonym, parent, or children terminology classes in the search. The software queries the Bioportal API and retrieves all the matching NCI Thesaurus, ICD-10, and SNOMED CT classes to the q-terms. A SPARQL query is generated and executed on the graph database SPARQL endpoint, and the results indicating the matching patient records and their corresponding data fields are displayed to the user. Our architecture includes the generation of text corpus from breadth-first search (BFS) of individual patient graphs and using word embedding models to generate feature vectors to identify similar patient cohorts.

Results

Mapping data to the ontology

With the aim of testing the data pipeline and infrastructure, we used our clinical database, which contains 1,660 patient clinical and dosimetry records. These records are from patients treated with radiotherapy for prostate cancer, non-small cell lung cancer, and small cell lung cancer. There are 35,303 clinical and 12,565 DVH-based data elements stored in our RO-CDW database for these patients. All these data elements were mapped to the ontology using the D2RQ mapping language, resulting in 504,180 RDF tuples. In addition to the raw data, these tuples also define the interrelationships amongst the various defined classes in the dataset. An example of the output RDF tuple file is shown in Figure 4, displaying the patient record's relationships with diagnosis, TNM staging, etc. All the entities and predicates in the output RDF file have a URI, which is resolvable as a link for a computer program or human to gather more data on the entity or class. For example, an RDF viewer would be able to resolve the address http://purl.obolibrary.org/obo/NCIT_C48720 to gather details on the T stage, such as concept definitions, synonyms, and relationships with other concepts and classes.

We were able to achieve a mapping completeness of 94.19% between the records in our clinical database and the RDF tuples. During the validation process, we identified several ambiguities or inconsistencies in the data housed in the relational database, such as an indication that the Eastern Cooperative Oncology Group (ECOG) scale was used for performance status evaluation but with the ECOG performance status score missing, a T stage recorded with the nodal and metastatic stages missing, and the number of delivered fractions missing alongside the prescribed dose information. To maintain data integrity and accuracy, the D2RQ mapping script was designed to drop these values due to missing, incomplete, or ambiguous information. Additionally, the validation process thoroughly examined the interrelationships among the defined classes in the dataset. We verified that the relationships and associations between entities in the RDF tuples accurately reflected the relationships present in the original clinical data. Any discrepancies or inconsistencies found during this analysis were identified and addressed to ensure the fidelity of the mapped data. To evaluate the accuracy of the mapping process, we conducted manual spot checks on a subset of the RDF tuples. This involved randomly selecting samples of RDF tuples and comparing the mapped values to the original data sources. Through these spot checks, we ensured that the mapping process accurately represented and preserved the information from the clinical and dosimetry data during the transformation into RDF tuples. Overall, the validation process provided assurance that the pipeline effectively transformed the clinical and dosimetry data stored in the RO-CDW database into RDF tuples while preserving the integrity, accuracy, and relationships of the original data.


Fig4 Kapoor JofAppCliMedPhys2023 24-10.jpg

Figure 4. Example of the output RDF tuple file.

Visualization of data in ontology-based graphical format

Visualizations of ontologies play a key role in helping users understand the structure of the data and work with the dataset and its applications. This is especially appealing when exploring or verifying complex and large collections of data such as ontologies. We utilized the AllegroGraph Gruff toolkit[36], which enables users to create visual knowledge graphs that display data relationships in a clean graphical user interface (GUI). The Gruff toolkit uses simple SPARQL queries to gather the data for rendering the graph with nodes and edges. These visualizations are useful because they increase the users' understanding of the data by instantly illustrating relevant relationships amongst classes and concepts, hidden patterns, and the significance of the data to outcomes. Examples of the graph-based visualization for a prostate cancer patient and a non-small cell lung cancer patient are shown in Figures 5 and 6, respectively. Here all the nodes stand for concepts and classes, and the edges represent relationships between these concepts. All the nodes in the graph have URIs that are resolvable as web links for a computer program or human to gather more data on the entities or classes. The color of the nodes in the graph visualization is based on the node type, and each node has inherent properties that include the unique system code (e.g., NCI Thesaurus code or ICD code), synonym terms, definitions, and value type (e.g., string, integer, floating point number). The edges connecting the nodes are defined as properties and stored as predicates in the ontology data file. The use of these predicates enables the computer program to effectively find the queried nodes and their interrelationships. Each of these properties is defined with a URI that is available for gathering more detailed information on the relationship definitions. The left panel in Figures 5 and 6 shows the various property types or relationship types that connect the nodes in the graph. Using the SPARQL language and the Gruff visualization tools, users can query the data without any prior knowledge of the relational database structure or schema, since these SPARQL queries are based on universally published classes defined in the NCI Thesaurus, Units Ontology, and ICD-10 ontologies.


Fig5 Kapoor JofAppCliMedPhys2023 24-10.jpg

Figure 5. Example of the graph structure of a prostate cancer patient record based on the ontology. Each node in the graph is an entity that represents an object or concept, has a unique identifier, and can have properties and relationships to other nodes in the graph. These nodes are connected by directed edges representing relationships between the information, such as the relationship between the diagnosis node and the radiation treatment node. Similarly, there are edges from the diagnosis node to the toxicity node and further to the specific CTCAE toxicity class, indicating that the patient was evaluated for adverse effects after receiving radiation therapy. The different types of edge relationships from the ontology that are used in this example are listed in the left panel of the figure. The right panel shows the different types of nodes used in the example.

Fig6 Kapoor JofAppCliMedPhys2023 24-10.jpg

Figure 6. Example of the graph structure of a non-small cell lung cancer (NSCLC) patient based on the ontology. This has a similar structure to the previous prostate cancer example with NSCLC content. The nodes in green and aqua blue color (highlighted in the right panel) indicate the use of NCI Thesaurus classes to represent the use of standard terminology to define the context for each node present in the graph. For simpler visualization, the NCI Thesaurus codes and URIs are not displayed with this example.

Finally, these SPARQL queries can be used with commonly available programming languages like Python and R via REST APIs. We also verified data from the SPARQL queries and the SQL queries from the CDW database for accuracy of the mapping. Our analysis found no difference in the resultant data from the two query techniques. The main advantage of using the SPARQL method is that the data can be queried without any prior knowledge of the original data structure based on the universal concepts defined in the ontology. Also, the data from multiple sources can be seamlessly integrated in the RDF graph database without the use of complex data matching techniques and schema modifications, which is currently required with relational databases. This is only possible if all the data stored in the RDF graph database refers to published codes from the commonly used ontologies.

Searching the data using ontology-based keywords

For effective searching of discrete data from the RDF graph database, we built an ontology-based keyword searching web tool. The public website for this tool is https://hinge-ontology-search.anvil.app. Here we are able to search the database based on keywords (q-terms). The tool connects to Bioportal via a REST API, finds the matching classes or concepts, and renders the results, including the class name, NCI Thesaurus code, and definitions. We specifically used the NCI Thesaurus ontology for our queries, which is 112 MB in size and contains approximately 64,000 terms. The search tool can find classes based on synonym term queries, matching the q-terms with the synonym terms listed in the classes (Figure 7a). The tool has features to search the child and parent classes of the matching q-term classes. A screenshot of the web tool with the child class search is shown in Figure 7b. The user can also specify the level of search, which indicates whether the returned classes should include children of children. In the example in Figure 7b, the q-term "fatigue" is searched while including child classes up to one level, and the returned classes included the fatigue-based CTCAE class and the grade 1, 2, and 3 fatigue classes. Once all the classes used for searching are found by the tool, it searches the RDF graph database for patient cases matching these classes. The matching patient list, including the found class in each patient's graph, is displayed to the user. This tool makes it convenient for end users to abstract cohorts of patients that have particular classes or concepts in their records without having to learn and implement the complex SPARQL query language. Based on our evaluation, we found that the average time taken to obtain results is less than five seconds per q-term if there are fewer than five child classes in the query. The maximum time taken was 11 seconds for a q-term that had 16 child classes.


Fig7 Kapoor JofAppCliMedPhys2023 24-10.jpg

Figure 7. Screenshot of the ontology-based keyword search portal. (A) Search performed using two q-terms returns results with definitions of the matching classes from the Bioportal and the corresponding patient records from the RDF graph database. (B) Search performed to include child class up to one level on the matching q-term class. Returned results display the matching class, child classes with Fatigue CTCAE grades, and matching patient records from the RDF graph database.

For evaluating the patient similarity word embedding models, we assessed the quality of the feature embedding-based vectors using the t-Distributed Stochastic Neighbor Embedding (t-SNE) technique and cluster analysis, with the number of clusters set to five based on the diagnosis groups for our patient cohort. Our main objective was to determine the similarity between patient data that are in the same cluster based on their corresponding diagnosis groups. This method can reveal the local and global features encoded by the feature vectors and can thus be used to visualize clusters within the data. We applied t-SNE to all 1,660 patient feature-based vectors produced via the four word embedding models. The t-SNE plot is shown in Figure 8; it shows that the disease data points can be grouped into five clusters with varying degrees of separability and overlap. The analysis of patient similarity using different embedding models revealed interesting patterns. The Word2Vec model showed the highest mean cosine similarity of 0.902, indicating a relatively higher level of similarity among patient embeddings within the five diagnosis groups. In contrast, the Doc2Vec model exhibited a lower mean cosine similarity of 0.637. The GloVe model demonstrated a moderate mean cosine similarity of 0.801, while the FastText model achieved a similar level of 0.855. Regarding distance metrics, the GloVe model displayed lower mean Euclidean and Manhattan distances, suggesting that patient embeddings derived from this model were more compact and closer in proximity. Conversely, the Doc2Vec, Word2Vec, and FastText models yielded higher mean distances, indicating greater variation and dispersion among the patient embeddings. These findings provide valuable insights into the performance of different embedding models for capturing patient similarity, facilitating improved understanding and decision-making in the clinical domain.
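The projection and clustering step can be reproduced in outline with scikit-learn, as in the sketch below. The random feature matrix stands in for the real patient embeddings, and k-means is used here as one common clustering choice; the paper specifies only that a cluster analysis with five clusters was performed.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

# X: (n_patients, embedding_dim) matrix of patient feature vectors (random placeholder data here)
rng = np.random.default_rng(42)
X = rng.normal(size=(1660, 100))

# Project the embeddings to 2D for visualization
coords = TSNE(n_components=2, random_state=42, init="pca").fit_transform(X)

# Cluster with k = 5, matching the number of diagnosis groups
labels = KMeans(n_clusters=5, random_state=42, n_init=10).fit_predict(X)

# coords[:, 0] and coords[:, 1] can now be scattered and colored by 'labels'
print(coords.shape, np.bincount(labels))
</syntaxhighlight>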


Fig8 Kapoor JofAppCliMedPhys2023 24-10.jpg

Figure 8. (A) Annotation embeddings produced by Word2Vec, Doc2Vec, GloVe, and FastText, shown as a 2D image of the embeddings projected down to three dimensions using the t-SNE technique. Each point indicates one patient, and the color of a point indicates the patient's cohort based on the diagnosis-based cluster. A good visualization result is one in which points of the same color are near each other. (B) Results of the evaluation metrics used to measure patient similarity. The Word2Vec model had the best cosine similarity, and the GloVe model had the best Euclidean, Manhattan, and Minkowski distances, suggesting that patient embeddings derived from this model were more compact and closer in proximity.
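
The evaluation described above can be reproduced in outline with the sketch below, which projects patient feature vectors to two dimensions with scikit-learn's t-SNE and computes the mean within-group cosine similarity per diagnosis cluster. The `patient_vectors` array and `diagnosis_labels` are assumed inputs (random stand-ins here), not the study's data, and this is not the authors' analysis code.

```python
# Hedged sketch of the embedding-quality check: t-SNE projection of patient feature
# vectors plus mean within-group cosine similarity per diagnosis cluster.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import cosine_similarity

def tsne_projection(patient_vectors: np.ndarray, seed: int = 0) -> np.ndarray:
    """Reduce embeddings to two dimensions for the cluster visualization."""
    return TSNE(n_components=2, random_state=seed, init="pca").fit_transform(patient_vectors)

def mean_within_group_cosine(patient_vectors: np.ndarray, diagnosis_labels: np.ndarray) -> float:
    """Average pairwise cosine similarity among patients sharing a diagnosis group."""
    sims = []
    for group in np.unique(diagnosis_labels):
        members = patient_vectors[diagnosis_labels == group]
        pairwise = cosine_similarity(members)
        # keep only the upper triangle (distinct pairs, no self-similarity)
        iu = np.triu_indices(len(members), k=1)
        sims.extend(pairwise[iu])
    return float(np.mean(sims))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vectors = rng.normal(size=(1660, 100))   # stand-in for real patient embeddings
    labels = rng.integers(0, 5, size=1660)   # five diagnosis groups, as in the study
    xy = tsne_projection(vectors)
    print(xy.shape, mean_within_group_cosine(vectors, labels))
```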

Discussion

Despite the availability of many important clinical and imaging databases such as TCIA, TCGA, and the NIH data commons, clinical data science researchers still face severe technical challenges in accessing, interpreting, integrating, analyzing, and utilizing the semantic meaning of heterogeneous data and knowledge from these disparately collected and isolated data sources.[37][38] Even when data are available and accessible, cleaning such data for LHSs remains a formidable task because of inconsistent data formats, syntaxes, notations, and schemas across data sources. This severely hampers the consumption of the data and the inherent knowledge stored in these sources. It also requires the researcher to learn multiple software systems, configurations, and access requirements, which leads to a significant increase in the time and complexity of scientific research.

Robust LHSs in radiation oncology require comprehensive clinical and dosimetry data. Furthermore, advanced ML and AI models require high-fidelity, high-veracity data to improve model performance. Scalable, intelligent infrastructure that can provide data from multiple sources and support these models is not yet prevalent.[39][40] Infrastructures are required that provide an integrated solution to capture data from multiple sources and then structure the data in a knowledge base with semantically interlinked entities for seamless consumption by ML methods. Such an infrastructure will allow researchers to mine novel associations from multiple heterogeneous, multi-domain sources simultaneously and gather relevant knowledge to give feedback to clinical providers for obtaining better, personalized clinical outcomes for patients, thereby enhancing the quality of clinical research. Table 2 provides some comparison metrics between our knowledge graph-based, ontology-specific search solution and a traditional relational database-based solution across the various oncology data sources.

Table 2. Comparison between the knowledge graph-based, ontology-specific search solution and the traditional relational database-based solution across the various oncology data sources.
Comparison metric | Knowledge graph-based solution | Relational database-based solution
Data integration and interlinking | Efficient integration of data from multiple sources and linking through semantic relationships in the knowledge graph | Limited ability to integrate and establish relationships between data from different tables in the database
Data discovery and accessibility | Enhanced data discoverability and accessibility due to ontology-based indexing and semantic querying | Relatively limited data discoverability and accessibility through traditional SQL queries
Semantic enrichment | Relationships among data fields are established and used for searching for the patient cohort; allows searching for synonym and hyponym terms that are not present in the dataset and gathering patients that have similar attributes | Relationships among data fields need to be manually established; each synonym and hyponym term needs to be manually annotated in the dataset; limited querying flexibility, primarily based on structured SQL queries
Scalability and performance | Highly scalable, with linking of new data from future patient encounters and data from other clinical domains; able to handle complex queries due to optimized knowledge graph traversal methods | Performance may degrade with large datasets or complex queries due to table joins and indexing limitations
Data analysis and visualization | Enables advanced data analytics, visualization, and identification of trends and patterns in patient outcomes through graph-based analysis | Limited data analysis capabilities and visualization options compared to graph-based analytics
Data reusability and interoperability | Supports data reusability and interoperability by adhering to the FAIR principles (findable, accessible, interoperable, and reusable) | Relational databases offer limited data reusability and interoperability without additional integration efforts

Ontologies are used to create a more robust and interoperable LHS. The fundamental advantage of transforming clinical and dosimetry data into standard ontologies is that it enables the transfer, reuse, and sharing of patient data and seamless integration with other data sources.[41][42][43] Their most important advantage is the conversion of data into a knowledge graph. We have shown the process of transforming traditional clinical database schemas into a knowledge graph-based database with the use of ontologies. The main advantage of an ontology-based graph database over a traditional relational database is that relational databases are designed to cater to a particular application and its software requirements, and the data they store are not conducive to clinical research. These databases are not suited to gathering data from multiple sources when the structure of the data, schema, and data types are unknown. Ontology-based graph databases, on the other hand, are schema-free and designed to store large amounts of data with defined interrelationships and definitions based on universally defined concepts, which enables any clinical researcher to query the data without understanding the inherent data structure and schema used to store the data in the database. The ontology structure makes querying the data more intuitive for researchers and clinicians because it matches the logical structure of the domain knowledge.[44]
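
As a minimal sketch of this relational-to-knowledge-graph transformation, the Python/rdflib fragment below expresses one patient row as RDF triples typed against NCI Thesaurus classes (codes taken from Table A1). The predicate names and the example.org namespace are illustrative assumptions, not the schema used in our system.

```python
# Hedged sketch: map one relational patient row to RDF triples typed with NCI Thesaurus
# classes. Predicate names and the example.org namespace are assumptions for illustration.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

NCIT = Namespace("http://purl.obolibrary.org/obo/NCIT_")   # NCI Thesaurus OBO namespace
EX = Namespace("http://example.org/ro-lhs/")                # hypothetical local namespace

def patient_row_to_triples(row: dict) -> Graph:
    """Map one relational row (dict of column -> value) to an RDF graph."""
    g = Graph()
    g.bind("ncit", NCIT)
    patient = URIRef(EX[f"patient/{row['patient_id']}"])    # unique, persistent URI (FAIR)
    g.add((patient, RDF.type, NCIT["C16960"]))              # NCIT C16960, patient concept per Table A1
    diagnosis = URIRef(EX[f"diagnosis/{row['patient_id']}"])
    g.add((diagnosis, RDF.type, NCIT["C15220"]))            # NCIT C15220, diagnosis concept per Table A1
    g.add((diagnosis, EX.icd10Code, Literal(row["icd10"], datatype=XSD.string)))
    g.add((patient, EX.hasDiagnosis, diagnosis))
    g.add((patient, EX.prescribedDoseGy, Literal(row["dose_gy"], datatype=XSD.float)))
    return g

if __name__ == "__main__":
    g = patient_row_to_triples({"patient_id": "12345", "icd10": "C61", "dose_gy": 78.0})
    print(g.serialize(format="turtle"))
```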

Each data node in the graph has a unique URI, which is useful for transforming the data according to the FAIR principles; these ensure that data and information are findable by assigning a globally unique and persistent identifier to each data field. To make the data accessible, they can readily be shared with almost no pre- or post-processing requirements. Interoperability can be achieved by using standard ontologies to represent the data, and once the data are shared and merged with data from other domains, they can be reused for multiple applications for the benefit of patients and their care. These approaches enable the use of federated queries, where each hospital maintains its local knowledge graph representing its specific radiation oncology data but can securely collaborate and gain insights from a collective pool of knowledge without sharing individual patient data. Federated queries involve formulating standardized queries that can be executed across multiple local knowledge graphs simultaneously. These queries leverage the common ontology-based definitions and consistent representation of data structures to retrieve relevant information from each hospital's knowledge graph. By adhering to common ontology terms and relationships, federated queries can effectively integrate data from multiple hospitals, facilitating cross-institutional analysis and knowledge sharing. Traditional AI and ML approaches do not address the issues of data sharing or interpretability among multiple systems and institutions. With this approach, hospitals can leverage the collective intelligence within the federated knowledge graph to gain insights, identify patterns, and conduct research without compromising patient privacy and data security. Additionally, ontologies can be used to enhance data analysis by allowing for more precise querying and reasoning over the data. For example, an ontology-based query might retrieve all patients who received a certain type of radiation treatment, while an ontology-based reasoning system might infer that a certain treatment plan parameter or dose constraint is contraindicated for a certain type of cancer.[45]
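
The following sketch shows the kind of ontology-based query described above, executed with rdflib against a local export of the knowledge graph: it retrieves all patients whose record links to a given radiation treatment modality class. The file name, predicate names, and namespace are assumptions for illustration; in a federated setting, the same graph pattern could be wrapped in SPARQL 1.1 SERVICE clauses targeting each hospital's endpoint.

```python
# Hedged sketch of an ontology-based SPARQL query: find patients linked to a given
# treatment modality class. Graph file and predicate names are illustrative assumptions.
from rdflib import Graph

g = Graph()
g.parse("ro_lhs_knowledge_graph.ttl", format="turtle")   # hypothetical export of the RDF store

query = """
PREFIX ncit: <http://purl.obolibrary.org/obo/NCIT_>
PREFIX ex:   <http://example.org/ro-lhs/>

SELECT DISTINCT ?patient
WHERE {
  ?patient  a               ncit:C16960 .      # patient concept
  ?patient  ex:hasTreatment ?course .
  ?course   ex:modality     ncit:C17024 .      # NCIT C17024, proton therapy per Table A1
}
"""

for row in g.query(query):
    print(row.patient)
```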

Overall, the use of ontologies and graph-based databases increases the semantic interoperability of clinical and dosimetry data in the radiation oncology domain. The overall architecture of the infrastructure is shown in Figure 9. This infrastructure can gather clinical data from EHRs using the HINGE platform, delivery data from radiation oncology treatment management systems using FHIR-based interfaces, and planning data from radiation oncology treatment planning systems using DICOM data export. All these data are loaded into a common relational database, where data mapping based on ontology and standard taxonomy definitions is performed. The mapped data are transformed into the RDF triple format and uploaded into an RDF graph-based database. The ontology-based keyword search program can then be used by clinicians and researchers to query the RDF graph database with any keyword(s). The software can match patient records based on the synonyms and hyponyms of the search keywords and provide a list of patient records with an exact match, as well as patients who have similar attributes in their clinical records.
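
As an example of the FHIR-based interface step, the sketch below pulls procedure resources for one patient from a hypothetical FHIR endpoint exposed by the treatment management system. The base URL, the use of the generic Procedure resource, and the patient ID are assumptions; actual RT delivery data may be modeled with other FHIR profiles.

```python
# Hedged sketch: fetch treatment-related FHIR Procedure resources for one patient from a
# hypothetical FHIR endpoint. Base URL, resource choice, and patient ID are placeholders.
import requests

FHIR_BASE = "https://tms.example.org/fhir"   # hypothetical FHIR endpoint of the TMS

def fetch_procedures(patient_id: str) -> list[dict]:
    """Return FHIR Procedure resources for one patient as plain dicts."""
    resp = requests.get(
        f"{FHIR_BASE}/Procedure",
        params={"subject": f"Patient/{patient_id}"},
        headers={"Accept": "application/fhir+json"},
    )
    resp.raise_for_status()
    bundle = resp.json()                      # a FHIR searchset Bundle
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    for proc in fetch_procedures("12345"):
        print(proc.get("code", {}).get("text"), proc.get("performedDateTime"))
```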

We also analyzed patient similarity using four different embedding models, with the Word2Vec model achieving the highest mean cosine similarity, indicating a higher level of similarity among patient embedding vectors. This suggests that the Word2Vec model captures semantic relationships well, leading to more comparable patient representations. When examining distance metrics, the GloVe model stood out with lower mean Euclidean and Manhattan distances. This indicates that patient embeddings derived from the GloVe model are more compact and closer in proximity, signifying a more clustered distribution of similar patients. The choice of which model is better for an application depends on the specific requirements and priorities. If the ability to capture semantic relationships and identify patients with similar attributes is crucial, the Word2Vec model may be more suitable. Conversely, if compactness and clustering of similar patients are of primary importance, the GloVe model may be preferred. These findings provide valuable insights into the performance and characteristics of the different models, enabling researchers and practitioners to make informed decisions about which model best suits their specific requirements. The search tool we designed is useful for cohort identification and can potentially be used to identify patients and their inherent data for quality measure analysis, comparative effectiveness research, and continuous quality improvement, and, most importantly, to support the use, training, and evaluation of ML models directly on streaming clinical data. In the future, we plan to test the scalability of the tool by measuring its performance as the size of the ontology and the number of patients in the database increase. This test can help determine whether the tool can handle large-scale datasets and ontologies. We also plan to perform cross-validation testing, which will assess the tool's ability to generalize to other ontologies and datasets while comparing the results obtained with those from a gold standard.


Fig9 Kapoor JofAppCliMedPhys2023 24-10.jpg

Figure 9. Overall architecture of our radiation oncology learning health system (RO-LHS) infrastructure. Data are captured at care delivery from the three data sources, and the informatics layer extracts, transforms, and loads these data, based on standard taxonomies and ontologies, into the RO-LHS core data repository. This repository is the RDF graph database that stores the data with established definitions and relationships based on the standard terminology and ontology. The data in the RO-LHS are made available for subsequent applications such as quality measure analysis, cohort identification, continuous quality improvement, and building ML models that can be applied back to care delivery to improve care, thus completing the loop for an effective learning health system.

It is important to consider the limitations of the analysis. The analysis is based solely on categorical clinical attributes; other relevant factors, such as DVH scores, which are continuous numerical variables, have not been considered in our patient similarity analysis. This is because the word embedding models require the input features to be included in their dictionaries before they can generate the vectors, and for numerical variables it is not possible to include all possible values in the training datasets for the word embedding models. Additionally, the word embedding models and cosine similarity scores have their own limitations and may not capture the full complexity of patient similarity because they do not consider the temporal aspect of the features. These results provide a starting point for exploring patient similarity and can guide further analysis and investigation. It would be valuable to validate the findings using additional patient data, evaluate the clinical significance of attribute variations, and assess the impact of patient similarity on treatment outcomes and prognosis.

As a proof of concept, the RO-LHS infrastructure system described in this paper successfully demonstrates the procedures for gathering data from multiple clinical systems and using ontology-based data integration. With this system, radiation oncology datasets become available in open, semantic, ontology-based formats, which helps facilitate interoperability and the execution of large scientific studies. This system shows that an ontology developed with domain knowledge can be used to integrate semantic data and knowledge from multiple data sources. In this work, the ontology was constructed by merging the concepts defined in the ROO, the NCI Thesaurus, ICD-10, and the Units Ontology.

Appendix

Appendix A1

Description of word embedding models: Word2Vec, Doc2Vec, GloVe, and FastText

Word2Vec, Doc2Vec, GloVe, and FastText are popular models in natural language processing (NLP) and text analysis. Each model uses a different approach to create embeddings for words or documents, capturing semantic relationships between words and documents. Here is a brief description of each model and its differences:

  • Word2Vec: Word2Vec is one of the most widely used embedding models; it represents words as dense vectors in a continuous vector space. It employs two primary architectures: continuous bag-of-words (CBOW) and Skip-gram. CBOW predicts a target word from its surrounding context words, while Skip-gram predicts the context words from a target word. Through training on substantial text data, Word2Vec effectively captures semantic relationships between words.
  • Doc2Vec: Doc2Vec extends Word2Vec to capture embeddings at the document level. It represents documents, such as paragraphs or entire documents, as continuous vectors in a similar way to how Word2Vec represents individual words. This model architecture is also known as Paragraph Vector, and it learns document representations by incorporating word embeddings and a unique document ID during the training process. This enables the model to capture semantic similarities between different documents.
  • GloVe: Global Vectors for Word Representation (GloVe) is another popular model for generating word embeddings. This model uses the global matrix factorization and local context window methods to generate the embeddings. GloVe constructs a co-occurrence matrix based on word-to-word co-occurrence statistics from a large corpus and factorizes this matrix to obtain word vectors. It considers the global statistical information of word co-occurrences, resulting in embeddings that capture both syntactic and semantic relationships between words.
  • FastText: FastText is a model developed by Facebook Research that extends the idea of Word2Vec by incorporating information about sub-words. Instead of treating each word as a single entity, FastText model represents words as bags of character n-grams (sub-word units). By considering sub-words, FastText can handle out-of-vocabulary words and capture morphological information. This model enables better representations for rare words, inflections, and compound words. FastText also supports efficient training and retrieval, making it useful for large-scale applications.

In summary, Word2Vec focuses on word-level embeddings, Doc2Vec extends it to capture document-level embeddings, GloVe emphasizes global word co-occurrence statistics, and FastText incorporates sub-word information for enhanced representations. The choice of model depends on the specific task, data characteristics, and requirements of the application at hand.
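
A brief gensim-based sketch of how such embeddings can be produced from categorical clinical tokens is shown below. It trains Word2Vec, FastText, and Doc2Vec on toy attribute "sentences" and averages word vectors into per-patient vectors; GloVe is omitted because gensim only loads pretrained GloVe vectors. The token lists are illustrative assumptions, not the study's data.

```python
# Hedged sketch: train Word2Vec, FastText, and Doc2Vec on token sequences of categorical
# clinical attributes and build per-patient vectors. Toy data only, not the study's inputs.
import numpy as np
from gensim.models import Word2Vec, FastText
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Each patient record is reduced to a token sequence of categorical attributes (assumption).
patients = [
    ["prostate_adenocarcinoma", "T2", "N0", "IMRT", "fatigue_grade1"],
    ["prostate_adenocarcinoma", "T1", "N0", "SBRT", "erectile_dysfunction_grade1"],
    ["nsclc", "T3", "N1", "3DCRT", "fatigue_grade2"],
]

w2v = Word2Vec(sentences=patients, vector_size=50, window=5, min_count=1, sg=1, epochs=50)
ft = FastText(sentences=patients, vector_size=50, window=5, min_count=1, epochs=50)
d2v = Doc2Vec([TaggedDocument(words=p, tags=[str(i)]) for i, p in enumerate(patients)],
              vector_size=50, min_count=1, epochs=50)

def patient_vector(model, tokens):
    """Average the word vectors of a patient's attribute tokens (Word2Vec/FastText)."""
    return np.mean([model.wv[t] for t in tokens if t in model.wv], axis=0)

print(patient_vector(w2v, patients[0]).shape)   # (50,)
print(d2v.dv["0"].shape)                        # Doc2Vec gives a document vector directly
```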

Evaluation metrics for measuring patient similarity

Cosine similarity

Cosine similarity measures the cosine of the angle between two vectors. It calculates the similarity between vectors irrespective of their magnitudes. The cosine similarity between vectors A and B is computed as the dot product of the vectors divided by the product of their magnitudes:

\( \text{cosine\_similarity}(A, B) = \frac{A \cdot B}{\|A\| \, \|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2} \, \sqrt{\sum_{i=1}^{n} B_i^2}} \)

Euclidean distance

Euclidean distance is a popular metric to measure the straight-line distance between two points in Euclidean space. In the context of vector spaces, it calculates the distance between two vectors in terms of their coordinates. The Euclidean distance between vectors A and B with n dimensions is calculated as:

\( d_{\text{Euclidean}}(A, B) = \sqrt{\sum_{i=1}^{n} (A_i - B_i)^2} \)

Manhattan distance

Manhattan distance—also known as city block distance or L1 distance—measures the sum of the absolute differences between the coordinates of two vectors. It represents the distance traveled along the grid-like paths in a city block. The Manhattan distance between vectors A and B with n dimensions is calculated as:

\( d_{\text{Manhattan}}(A, B) = \sum_{i=1}^{n} |A_i - B_i| \)

Minkowski distance

Minkowski distance is a generalization of both Euclidean and Manhattan distances. It measures the distance between two vectors in terms of their coordinates, with a parameter p determining the degree of the distance metric. The Minkowski distance between vectors A and B with n dimensions is calculated as:

\( d_{\text{Minkowski}}(A, B) = \left( \sum_{i=1}^{n} |A_i - B_i|^p \right)^{1/p} \)

When p = 1, it is equivalent to the Manhattan distance, and when p = 2, it is equivalent to the Euclidean distance.

These metrics provide different ways to quantify the similarity or dissimilarity between vectors, each with its own characteristics and use cases.
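
For reference, the four metrics can be implemented directly with NumPy as in the sketch below; the assertions check that the Minkowski distance reduces to the Manhattan and Euclidean distances at p = 1 and p = 2, respectively. The example vectors are arbitrary.

```python
# Hedged sketch implementing the four similarity/distance metrics defined above with NumPy.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.sum((a - b) ** 2)))

def manhattan(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sum(np.abs(a - b)))

def minkowski(a: np.ndarray, b: np.ndarray, p: float) -> float:
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))

if __name__ == "__main__":
    a, b = np.array([1.0, 2.0, 3.0]), np.array([2.0, 1.0, 4.0])
    assert np.isclose(minkowski(a, b, 1), manhattan(a, b))   # p = 1 gives Manhattan
    assert np.isclose(minkowski(a, b, 2), euclidean(a, b))   # p = 2 gives Euclidean
    print(cosine_similarity(a, b), euclidean(a, b), manhattan(a, b), minkowski(a, b, 3))
```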

Appendix A2

Table A1. Key data elements that are used to map between our clinical data warehouse relational database and the ontology-based graph database. This table shows some examples of the codes used for this mapping. Abbreviations: ICD-10, International Classification of Diseases, 10th Revision; NCIT, National Cancer Institute Thesaurus; ROO, Radiation Oncology Ontology; UO, Units Ontology.
Category | Attribute | Codes/datatypes
Patient details | Patient ID | NCIT: C16960
Patient details | Race | NCIT: C17049
Patient details | Ethnicity | NCIT: C16564
Patient details | Date of birth | NCIT: C68615
Patient details | Date of death | NCIT: C70810
Patient details | Sex at birth | Male: NCIT: C16576; Female: NCIT: C20197
Patient details | Cause of death | NCIT: C99531
Other patient details | Vital status | NCIT: C25717; Alive: NCIT: C37987; Deceased: NCIT: C28554
Other patient details | Tobacco use history | NCIT: C181760; Smoker: NCIT: C67147; Former Smoker: C67148
Other patient details | Smoking pack years | NCIT: 127063
Other patient details | Patient height | NCIT: C25347
Other patient details | Patient weight | NCIT: 25208
Other patient details | Blood pressure | NCIT: C54706
Other patient details | Heart rate | NCIT: C49677
Other patient details | Temperature | NCIT: C25206
Diagnosis and staging | Staging system |
Diagnosis and staging | Diagnosis | NCIT: C15220
Diagnosis and staging | ICD version | ICD:10
Diagnosis and staging | ICD code | ICD 10 codes (e.g., C61)
Diagnosis and staging | Histology | Adenocarcinoma: NCIT: C2852; Ductal Carcinoma: NCIT: C36858; etc.
Diagnosis and staging | Clinical TNM staging | NCIT: C48881
Diagnosis and staging | Pathological TNM staging | NCIT: C48739
Diagnosis and staging | Staging-T | T1: NCIT: C48720; T2 ...; etc.
Diagnosis and staging | Staging-N | N0: NCIT: C48705; N1 ...; etc.
Diagnosis and staging | Staging-M | Mx: NCIT: C48704; M0 ...; etc.
Diagnosis and staging | Biopsy obtained via imaging | NCIT: C17369
Prostate-specific elements | Had prostatectomy | NCIT: 15307
Prostate-specific elements | Prostatectomy margin status | NCIT: 123560
Prostate-specific elements | Primary Gleason score | NCIT: C48603
Prostate-specific elements | Secondary Gleason score | NCIT: 48604
Prostate-specific elements | Tertiary Gleason score | NCIT: 48605
Prostate-specific elements | Total number of prostate tissue cores | NCIT: 148277
Prostate-specific elements | Number of positive cores | NCIT: 148278
Prostate-specific elements | Prostate-specific antigen level | NCIT: 124827
Patient reported outcome | Patient reported outcome | NCIT: 95401
Patient reported outcome | PRO instruments | EPIC-26: NCIT: C127367; AUA IPSS: NCIT: C84350; IIEF: NCIT: C103521; EPIC-CP: NCIT: C127368; SHIM: NCIT: C138113
Patient reported outcome | PRO question response | Integer
Performance score | Scoring system | KPS: NCIT: C28013; ECOG: NCIT: C105721; ZUBROD: NCIT: C25400
Performance score | Performance score value | ECOG 1: NCIT: C105723; KPS 10: NCIT: C105718; etc.
Toxicity reporting | Coding system | CTCAE v5: NCIT: C49704; RTOG: NCIT: C19778
Toxicity reporting | Toxicity measure | Erectile dysfunction: NCIT: C55615; Fatigue: NCIT: C146753; etc.
Toxicity reporting | Toxicity grade | Erectile dysfunction Grade 1: NCIT: C55616; Fatigue Grade 1: NCIT: C55292; etc.
Treatment procedures | Therapy included in the treatment procedure | Radiation Therapy: NCIT: C15313; Systemic Therapy: NCIT: C15698; Surgical Procedure: NCIT: C15329; Hormone Therapy: NCIT: C15445
Treatment procedures | Agents used—Hormone therapy | String
Treatment procedures | Drugs used—Chemotherapy | String
RT treatment course | Radiation treatment modality | Photon: NCIT: C88112; Electron: NCIT: C40428; Proton: NCIT: C17024; etc.
RT treatment course | Radiation treatment technique | IMRT: NCIT: C16135; SBRT: NCIT: C118286; 3D CRT: NCIT: C116035; etc.
RT treatment course | Target volume | PTV: NCIT: C82606; CTV: NCIT: C112912; GTV: NCIT: C112913; etc.
RT treatment course | Prescribed radiation dose | ROO: C100013—Float
RT treatment course | Radiation dose units | cGy: NCIT: C64693; Gy: NCIT: C18063
RT treatment course | Number of prescribed fractions | NCIT: C15654—Float
RT treatment course | Organs at risk—structure | Bladder: NCIT: C12414; Rectum: NCIT: C12390; Heart: NCIT: 12727; etc.
RT treatment course | Delivered radiation dose | ROO: C100013—Float
RT treatment course | Number of delivered fractions | NCIT: C15654—Float
RT treatment course | Start date of RT course | Date
RT treatment course | End date of RT course | Date
Dose volume histogram | DVH constraint | NCIT: C112816—String
Dose volume histogram | DVH value | Float
Dose volume histogram | DVH value units | Gy: NCIT: C18063; cGy: NCIT: C64693; %: UO: 0000187

Abbreviations, acronyms, and initialisms

  • AI: artificial intelligence
  • API: application programming interface
  • BFS: breadth-first search
  • DVH: dose-volume histogram
  • ECOG: Eastern Cooperative Oncology Group
  • EHR: electronic health record
  • ETL: extract, transform, and load
  • FAIR: findable, accessible, interoperable, and reusable
  • FHIR: Fast Healthcare Interoperability Resources
  • GUI: graphical user interface
  • HINGE: Health Information Gateway Exchange
  • HL7: Health Level 7
  • ICD: International Classification of Diseases
  • JSON: JavaScript Object Notation
  • KPS: Karnofsky performance status
  • LHS: learning health system
  • ML: machine learning
  • NCI: National Cancer Institute
  • NCIT: NCI Thesaurus
  • NLP: natural language processing
  • NSCLC: non-small cell lung cancer
  • OAR: organs at risk
  • OWL: Web Ontology Language
  • PSA: prostate-specific antigen
  • RDF: Resource Description Framework
  • RDMS: relational database management system
  • REST: representational state transfer
  • RO-CDW: radiation oncology clinical data warehouse
  • RO-LHS: radiation oncology learning health system
  • ROO: Radiation Oncology Ontology
  • RT: radiation treatment or radiation therapy
  • TMS: treatment management system
  • TPS: treatment planning system
  • XML: Extensible Markup Language

Acknowledgements

Author contributions

All the authors listed above have made substantial contributions to the design, build, analysis, and implementation of the system described in the manuscript. This work has been jointly carried out by the team from the US Veterans Health Administration and Virginia Commonwealth University. All the authors have made significant contributions in drafting and critically reviewing the manuscript text and figures. All the authors have approved the final version of the submitted manuscript.

Conflict of interest

The authors declare no conflicts of interest.

References

  1. Senge, Peter M. (2006). The fifth discipline: the art and practice of the learning organization (Rev. and updated ed.). New York: Doubleday/Currency. ISBN 978-0-385-51725-6. OCLC ocm65166960. https://www.worldcat.org/title/mediawiki/oclc/ocm65166960. 
  2. The Learning Healthcare System: Workshop Summary (IOM Roundtable on Evidence-Based Medicine). Washington, D.C.: National Academies Press. 1 June 2007. doi:10.17226/11903. ISBN 978-0-309-10300-8. http://www.nap.edu/catalog/11903. 
  3. Budrionis, Andrius; Bellika, Johan Gustav (1 December 2016). "The Learning Healthcare System: Where are we now? A systematic review" (in en). Journal of Biomedical Informatics 64: 87–92. doi:10.1016/j.jbi.2016.09.018. https://linkinghub.elsevier.com/retrieve/pii/S1532046416301319. 
  4. 4.0 4.1 Matuszak, Martha M.; Fuller, Clifton D.; Yock, Torunn I.; Hess, Clayton B.; McNutt, Todd; Jolly, Shruti; Gabriel, Peter; Mayo, Charles S. et al. (1 October 2018). "Performance/outcomes data and physician process challenges for practical big data efforts in radiation oncology" (in en). Medical Physics 45 (10). doi:10.1002/mp.13136. ISSN 0094-2405. PMC PMC6679351. PMID 30229946. https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.13136. 
  5. Mayo, Charles S.; Kessler, Marc L.; Eisbruch, Avraham; Weyburne, Grant; Feng, Mary; Hayman, James A.; Jolly, Shruti; El Naqa, Issam et al. (1 October 2016). "The big data effort in radiation oncology: Data mining or data farming?" (in en). Advances in Radiation Oncology 1 (4): 260–271. doi:10.1016/j.adro.2016.10.001. PMC PMC5514231. PMID 28740896. https://linkinghub.elsevier.com/retrieve/pii/S2452109416300550. 
  6. 6.0 6.1 Etheredge, Lynn M. (1 January 2007). "A Rapid-Learning Health System: What would a rapid-learning health system look like, and how might we get there?" (in en). Health Affairs 26 (Suppl1): w107–w118. doi:10.1377/hlthaff.26.2.w107. ISSN 0278-2715. http://www.healthaffairs.org/doi/10.1377/hlthaff.26.2.w107. 
  7. Pasalic, Dario; Reddy, Jay P.; Edwards, Timothy; Pan, Hubert Y.; Smith, Benjamin D. (1 December 2018). "Implementing an Electronic Data Capture System to Improve Clinical Workflow in a Large Academic Radiation Oncology Practice" (in en). JCO Clinical Cancer Informatics (2): 1–12. doi:10.1200/CCI.18.00034. ISSN 2473-4276. PMC PMC6874007. PMID 30652599. https://ascopubs.org/doi/10.1200/CCI.18.00034. 
  8. McNutt, T.R.; Evans, K.; Wu, B.; Kahzdan, M.; Simari, P.; Sanguineti, G.; Herman, J.; Taylor, R. et al. (1 November 2010). "Oncospace: All Patients on Trial for Analysis of Outcomes, Toxicities, and IMRT Plan Quality" (in en). International Journal of Radiation Oncology*Biology*Physics 78 (3): S486. doi:10.1016/j.ijrobp.2010.07.1139. https://linkinghub.elsevier.com/retrieve/pii/S0360301610021139. 
  9. 9.0 9.1 Wilkinson, Mark D.; Dumontier, Michel; Aalbersberg, IJsbrand Jan; Appleton, Gabrielle; Axton, Myles; Baak, Arie; Blomberg, Niklas; Boiten, Jan-Willem et al. (15 March 2016). "The FAIR Guiding Principles for scientific data management and stewardship" (in en). Scientific Data 3 (1): 160018. doi:10.1038/sdata.2016.18. ISSN 2052-4463. PMC PMC4792175. PMID 26978244. https://www.nature.com/articles/sdata201618. 
  10. Lambin, Philippe; Roelofs, Erik; Reymen, Bart; Velazquez, Emmanuel Rios; Buijsen, Jeroen; Zegers, Catharina M.L.; Carvalho, Sara; Leijenaar, Ralph T.H. et al. (1 October 2013). "‘Rapid Learning health care in oncology’ – An approach towards decision support systems enabling customised radiotherapy’" (in en). Radiotherapy and Oncology 109 (1): 159–164. doi:10.1016/j.radonc.2013.07.007. https://linkinghub.elsevier.com/retrieve/pii/S0167814013003393. 
  11. Price, Gareth; Mackay, Ranald; Aznar, Marianne; McWilliam, Alan; Johnson-Hart, Corinne; van Herk, Marcel; Faivre-Finn, Corinne (1 November 2021). "Learning healthcare systems and rapid learning in radiation oncology: Where are we and where are we going?" (in en). Radiotherapy and Oncology 164: 183–195. doi:10.1016/j.radonc.2021.09.030. https://linkinghub.elsevier.com/retrieve/pii/S016781402108751X. 
  12. Nordo, Amy Harris; Eisenstein, Eric L.; Hawley, Jeffrey; Vadakkeveedu, Sai; Pressley, Melissa; Pennock, Jennifer; Sanderson, Iain (1 July 2017). "A comparative effectiveness study of eSource used for data capture for a clinical research registry" (in en). International Journal of Medical Informatics 103: 89–94. doi:10.1016/j.ijmedinf.2017.04.015. PMC PMC5942198. PMID 28551007. https://linkinghub.elsevier.com/retrieve/pii/S138650561730103X. 
  13. Coleman, Nathan; Halas, Gayle; Peeler, William; Casaclang, Natalie; Williamson, Tyler; Katz, Alan (1 December 2015). "From patient care to research: a validation study examining the factors contributing to data quality in a primary care electronic medical record database" (in en). BMC Family Practice 16 (1): 11. doi:10.1186/s12875-015-0223-z. ISSN 1471-2296. PMC PMC4324413. PMID 25649201. https://bmcfampract.biomedcentral.com/articles/10.1186/s12875-015-0223-z. 
  14. Spasić, Irena; Livsey, Jacqueline; Keane, John A.; Nenadić, Goran (1 September 2014). "Text mining of cancer-related information: Review of current status and future directions" (in en). International Journal of Medical Informatics 83 (9): 605–623. doi:10.1016/j.ijmedinf.2014.06.009. https://linkinghub.elsevier.com/retrieve/pii/S1386505614001105. 
  15. Vorisek, Carina Nina; Lehne, Moritz; Klopfenstein, Sophie Anne Ines; Mayer, Paula Josephine; Bartschke, Alexander; Haese, Thomas; Thun, Sylvia (19 July 2022). "Fast Healthcare Interoperability Resources (FHIR) for Interoperability in Health Research: Systematic Review" (in en). JMIR Medical Informatics 10 (7): e35724. doi:10.2196/35724. ISSN 2291-9694. PMC PMC9346559. PMID 35852842. https://medinform.jmir.org/2022/7/e35724. 
  16. Centers for Medicare & Medicaid Services (2021). "Burden Reduction - Interoperability - Policies and Regulations". https://www.cms.gov/priorities/key-initiatives/burden-reduction/interoperability#hiig_featured_sections. Retrieved 30 August 2021. 
  17. Kapoor, Rishabh; Sleeman, William C.; Nalluri, Joseph J.; Turner, Paul; Bose, Priyankar; Cherevko, Andrii; Srinivasan, Sriram; Syed, Khajamoinuddin et al. (1 July 2021). "Automated data abstraction for quality surveillance and outcome assessment in radiation oncology" (in en). Journal of Applied Clinical Medical Physics 22 (7): 177–187. doi:10.1002/acm2.13308. ISSN 1526-9914. PMC PMC8292697. PMID 34101349. https://aapm.onlinelibrary.wiley.com/doi/10.1002/acm2.13308. 
  18. Jodogne, Sébastien. "Orthanc". UCLouvain University. https://www.orthanc-server.com/. 
  19. DICOM Standards Committee (2013). "7.5 DIMSE Services". DICOM PS3.7 2013 - Message Exchange. National Electrical Manufacturers Association. https://dicom.nema.org/dicom/2013/output/chtml/part07/sect_7.5.html. 
  20. Syed, Khajamoinuddin; Sleeman IV, William; Ivey, Kevin; Hagan, Michael; Palta, Jatinder; Kapoor, Rishabh; Ghosh, Preetam (30 April 2020). "Integrated Natural Language Processing and Machine Learning Models for Standardizing Radiotherapy Structure Names" (in en). Healthcare 8 (2): 120. doi:10.3390/healthcare8020120. ISSN 2227-9032. PMC PMC7348919. PMID 32365973. https://www.mdpi.com/2227-9032/8/2/120. 
  21. Sleeman, W.; Palta, J.; Ghosh, P. et al. (2020). "Relabeling Non-Standard to Standard Structure Names Using Geometric and Radiomic Information - BReP-SNAP-M-129". Medical Physics 47 (6): E438. https://w3.aapm.org/meetings/2020AM/programInfo/programSessions.php?t=specific&shid%5B%5D=1591&sid=8797. 
  22. Sleeman IV, William C.; Nalluri, Joseph; Syed, Khajamoinuddin; Ghosh, Preetam; Krawczyk, Bartosz; Hagan, Michael; Palta, Jatinder; Kapoor, Rishabh (1 September 2020). "A Machine Learning method for relabeling arbitrary DICOM structure sets to TG-263 defined labels" (in en). Journal of Biomedical Informatics 109: 103527. doi:10.1016/j.jbi.2020.103527. https://linkinghub.elsevier.com/retrieve/pii/S1532046420301556. 
  23. "Web Standards". World Wide Web Consortium. 2023. https://www.w3.org/standards/. 
  24. Traverso, Alberto; van Soest, Johan; Wee, Leonard; Dekker, Andre (1 October 2018). "The radiation oncology ontology (ROO): Publishing linked data in radiation oncology using semantic web and ontology techniques" (in en). Medical Physics 45 (10). doi:10.1002/mp.12879. ISSN 0094-2405. https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.12879. 
  25. "Welcome to BioPortal". Board of Trustees of Leland Stanford Junior University. 2023. https://www.bioontology.org/. 
  26. Noy, Natalya F.; Crubezy, Monica; Fergerson, Ray W.; Knublauch, Holger; Tu, Samson W.; Vendetti, Jennifer; Musen, Mark A. (2003). "Protégé-2000: an open-source ontology-development and knowledge-acquisition environment". AMIA ... Annual Symposium proceedings. AMIA Symposium 2003: 953. ISSN 1942-597X. PMC 1480139. PMID 14728458. https://pubmed.ncbi.nlm.nih.gov/14728458. 
  27. National Cancer Institute (2023). "NCI Thesaurus". National Institutes of Health. https://ncithesaurus.nci.nih.gov/ncitbrowser/. 
  28. "International Statistical Classification of Diseases and Related Health Problems (ICD)". World Health Organization. 2023. https://www.who.int/standards/classifications/classification-of-diseases. 
  29. "DBedia - Global and Unified Access to Knowledge Graphs". DBpedia Association. 2023. https://www.dbpedia.org/. 
  30. Urbani, Jacopo; Jacobs, Ceriel (20 April 2020). "Adaptive Low-level Storage of Very Large Knowledge Graphs" (in en). Proceedings of The Web Conference 2020 (Taipei Taiwan: ACM): 1761–1772. doi:10.1145/3366423.3380246. ISBN 978-1-4503-7023-3. https://dl.acm.org/doi/10.1145/3366423.3380246. 
  31. "Ontotext - Maximize the Value of Your Data". ONTOTEXT AD. 2023. https://www.ontotext.com/. 
  32. Mikolov, Tomas; Chen, Kai; Corrado, Greg; Dean, Jeffrey (2013). "Efficient Estimation of Word Representations in Vector Space". arXiv. doi:10.48550/ARXIV.1301.3781. https://arxiv.org/abs/1301.3781. 
  33. Le, Quoc V.; Mikolov, Tomas (2014). "Distributed Representations of Sentences and Documents". arXiv. doi:10.48550/ARXIV.1405.4053. https://arxiv.org/abs/1405.4053. 
  34. Pennington, Jeffrey; Socher, Richard; Manning, Christopher (2014). "Glove: Global Vectors for Word Representation" (in en). Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (Doha, Qatar: Association for Computational Linguistics): 1532–1543. doi:10.3115/v1/D14-1162. http://aclweb.org/anthology/D14-1162. 
  35. Bojanowski, Piotr; Grave, Edouard; Joulin, Armand; Mikolov, Tomas (1 December 2017). "Enriching Word Vectors with Subword Information" (in en). Transactions of the Association for Computational Linguistics 5: 135–146. doi:10.1162/tacl_a_00051. ISSN 2307-387X. https://direct.mit.edu/tacl/article/43387. 
  36. "AllegroGraph - Knowledge Graph + LLM Solutions". Franz, Inc. 2023. https://allegrograph.com/. 
  37. McNutt, Todd R.; Bowers, Michael; Cheng, Zhi; Han, Peijin; Hui, Xuan; Moore, Joseph; Robertson, Scott; Mayo, Charles et al. (1 October 2018). "Practical data collection and extraction for big data applications in radiotherapy" (in en). Medical Physics 45 (10). doi:10.1002/mp.12817. ISSN 0094-2405. https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.12817. 
  38. Mayo, Cs; Phillips, M; McNutt, Tr; Palta, J; Dekker, A; Miller, Rc; Xiao, Y; Moran, Jm et al. (1 October 2018). "Treatment data and technical process challenges for practical big data efforts in radiation oncology" (in en). Medical Physics 45 (10). doi:10.1002/mp.13114. ISSN 0094-2405. PMC PMC8082598. PMID 30226286. https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.13114. 
  39. Jochems, Arthur; Deist, Timo M.; van Soest, Johan; Eble, Michael; Bulens, Paul; Coucke, Philippe; Dries, Wim; Lambin, Philippe et al. (1 December 2016). "Distributed learning: Developing a predictive model based on data from multiple hospitals without data leaving the hospital – A real life proof of concept" (in en). Radiotherapy and Oncology 121 (3): 459–467. doi:10.1016/j.radonc.2016.10.002. https://linkinghub.elsevier.com/retrieve/pii/S0167814016343365. 
  40. Zerka, Fadila; Barakat, Samir; Walsh, Sean; Bogowicz, Marta; Leijenaar, Ralph T. H.; Jochems, Arthur; Miraglio, Benjamin; Townend, David et al. (1 November 2020). "Systematic Review of Privacy-Preserving Distributed Machine Learning From Federated Databases in Health Care" (in en). JCO Clinical Cancer Informatics (4): 184–200. doi:10.1200/CCI.19.00047. ISSN 2473-4276. PMC PMC7113079. PMID 32134684. https://ascopubs.org/doi/10.1200/CCI.19.00047. 
  41. Kapoor, Rishabh; Sleeman, William; Palta, Jatinder; Weiss, Elisabeth (1 March 2023). "3D deep convolution neural network for radiation pneumonitis prediction following stereotactic body radiotherapy" (in en). Journal of Applied Clinical Medical Physics 24 (3): e13875. doi:10.1002/acm2.13875. ISSN 1526-9914. PMC PMC10018674. PMID 36546583. https://aapm.onlinelibrary.wiley.com/doi/10.1002/acm2.13875. 
  42. Kamdar, Maulik R.; Fernández, Javier D.; Polleres, Axel; Tudorache, Tania; Musen, Mark A. (10 September 2019). "Enabling Web-scale data integration in biomedicine through Linked Open Data" (in en). npj Digital Medicine 2 (1): 90. doi:10.1038/s41746-019-0162-5. ISSN 2398-6352. PMC PMC6736878. PMID 31531395. https://www.nature.com/articles/s41746-019-0162-5. 
  43. Phillips, Mark H.; Serra, Lucas M.; Dekker, Andre; Ghosh, Preetam; Luk, Samuel M.H.; Kalet, Alan; Mayo, Charles (1 April 2020). "Ontologies in radiation oncology" (in en). Physica Medica 72: 103–113. doi:10.1016/j.ejmp.2020.03.017. https://linkinghub.elsevier.com/retrieve/pii/S1120179720300727. 
  44. Min, Hua; Manion, Frank J.; Goralczyk, Elizabeth; Wong, Yu-Ning; Ross, Eric; Beck, J. Robert (1 December 2009). "Integration of prostate cancer clinical data using an ontology" (in en). Journal of Biomedical Informatics 42 (6): 1035–1045. doi:10.1016/j.jbi.2009.05.007. PMC PMC2784120. PMID 19497389. https://linkinghub.elsevier.com/retrieve/pii/S1532046409000793. 
  45. Yan, Jihong; Wang, Chengyu; Cheng, Wenliang; Gao, Ming; Zhou, Aoying (1 February 2018). "A retrospective of knowledge graphs" (in en). Frontiers of Computer Science 12 (1): 55–74. doi:10.1007/s11704-016-5228-9. ISSN 2095-2228. http://link.springer.com/10.1007/s11704-016-5228-9. 

Notes

This presentation is faithful to the original, with only a few minor changes to presentation, though grammar and word usage were substantially updated for improved readability. In some cases, important information was missing from the references, and that information was added.