Full article title: The GAAIN Entity Mapper: An active-learning system for medical data mapping
Journal: Frontiers in Neuroinformatics
Author(s): Ashish, N.; Dewan, P.; Toga, A.W.
Author affiliation(s): University of Southern California at Los Angeles
Primary contact: Email: nashish@loni.usc.edu
Editors: Van Ooyen, A.
Year published: 2016
Volume and issue: 9
Page(s): 30
DOI: 10.3389/fninf.2015.00030
ISSN: 1662-5196
Distribution license: Creative Commons Attribution 4.0 International
Website: http://journal.frontiersin.org/article/10.3389/fninf.2015.00030/full
Download: http://journal.frontiersin.org/article/10.3389/fninf.2015.00030/pdf (PDF)

Abstract

This work is focused on mapping biomedical datasets to a common representation, as an integral part of data harmonization for integrated biomedical data access and sharing. We present GEM, an intelligent software assistant for automated data mapping across different datasets or from a dataset to a common data model. The GEM system automates data mapping by providing precise suggestions for data element mappings. It leverages the detailed metadata about elements in associated dataset documentation, such as the data dictionaries that typically accompany biomedical datasets. It employs unsupervised text mining techniques to determine similarity between data elements, and machine-learning classifiers to identify element matches. It further provides an active-learning capability that optimizes the process of training the GEM system. Our experimental evaluations show that the GEM system provides highly accurate data mappings (over 90 percent accuracy) for real datasets in the Alzheimer's disease research domain containing thousands of data elements each. Further, the effort required to train the system for new datasets is optimized. We are currently employing the GEM system to map Alzheimer's disease datasets from around the globe into a common representation, as part of a global Alzheimer's disease integrated data sharing and analysis network called GAAIN. GEM achieves significantly higher data mapping accuracy for biomedical datasets compared to other state-of-the-art tools for database schema matching that have similar functionality. With the use of active-learning capabilities, the user effort in training the system is minimal.

Keywords: data mapping, machine learning, active learning, data harmonization, common data model

Background and significance

This paper describes a software solution for biomedical data harmonization. Our work is in the context of the GAAIN project, in the domain of Alzheimer's disease data; however, the solution is applicable to biomedical and clinical data harmonization in general. GAAIN, the Global Alzheimer's Association Interactive Network, is a federated data sharing network of Alzheimer's disease datasets from around the globe. The aim of GAAIN is to create a network of Alzheimer's disease data, researchers, analytical tools, and computational resources to better our understanding of this disease. A key capability of this network is to provide investigators with access to harmonized data across multiple, independently created Alzheimer's datasets.

Our primary interest is in biomedical data sharing, and specifically harmonized data sharing. Harmonized data from multiple data providers has been curated to a unified representation after reconciling the different formats, representations, and terminologies from which it was derived.[1][2] The process of data harmonization can be resource-intensive and time-consuming; the present work describes a software solution to significantly automate that process. Data harmonization is fundamentally about data alignment: the establishment of correspondence between related or identical data elements across different datasets. Consider the very simple example of a data element capturing the gender of a subject that is defined as “SEX” in one dataset, “GENDER” in another, and “M/F” in yet another. When harmonizing data, a unified element is needed to capture this gender concept and to link (align) the individual elements in the different datasets with this unified element. This unified element is the “G.GENDER” element, as illustrated in Figure 1.


Figure 1. Data element mapping. (Image: Fig1 Ashish FrontInNeuroinformatics2016 9.jpg)

The data mapping problem can be approached in two ways, as illustrated in Figure 1. We could map elements across two datasets, for instance matching the element “GENDER” from one data source (DATA SOURCE 1 in Figure 1) to the element “SEX” in a second source (DATA SOURCE 2). We could also map elements from one dataset to elements of a common data model, a uniform representation which all data sources or providers in a data sharing network agree to adopt. The fundamental mapping task is the same in both cases. Moreover, the task of data alignment is inevitable regardless of the data sharing model one employs. In a centralized data sharing model[3], where we create a single unified store of data from multiple data sources, the data from every data source must be mapped and transformed to the unified representation of the central repository. In federated or mediated approaches to data sharing[1], individual data sources (such as databases) must be mapped to a “global” unified model through mapping rules. The common data model approach, which is the GAAIN approach, likewise requires us to map and transform every dataset to the (GAAIN) common data model. This kind of data alignment or mapping can be labor-intensive in biomedical and clinical data integration case studies.[4] A single dataset typically has thousands of distinct data elements, of which a large subset must be accurately mapped. It is widely acknowledged that data sharing and integration processes need to be simplified and made less resource-intensive if they are to become a viable solution for the medical and clinical data sharing domain, as well as for the more general enterprise information integration domain.[5] The GEM system is designed to achieve this by providing developers with automated assistance for such data alignment or mapping.
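To make the mapping target concrete, the following is a minimal, hypothetical sketch (in Python) of what an element mapping to a common data model might look like once it has been established, using the gender example from Figure 1. The source names, the "G.GENDER" target element, and the value recodings are illustrative assumptions, not part of GAAIN's actual common data model or tooling.

  # Illustrative only: hand-written mappings from source data elements to a
  # hypothetical common data model element "G.GENDER" (cf. Figure 1).
  ELEMENT_MAPPINGS = {
      ("DATA_SOURCE_1", "GENDER"): "G.GENDER",
      ("DATA_SOURCE_2", "SEX"): "G.GENDER",
      ("DATA_SOURCE_3", "M/F"): "G.GENDER",
  }

  # Per-element value recodings into the unified vocabulary (assumed codes).
  VALUE_MAPPINGS = {
      "G.GENDER": {"M": "Male", "F": "Female", "1": "Male", "2": "Female"},
  }

  def harmonize_record(source, record):
      """Map one source record into common data model elements."""
      unified = {}
      for element, value in record.items():
          target = ELEMENT_MAPPINGS.get((source, element))
          if target is None:
              continue  # element not (yet) mapped to the common data model
          recoding = VALUE_MAPPINGS.get(target, {})
          unified[target] = recoding.get(str(value), value)
      return unified

  print(harmonize_record("DATA_SOURCE_2", {"SEX": "F"}))  # {'G.GENDER': 'Female'}

Establishing mappings like these by hand, for thousands of elements per dataset, is precisely the effort GEM is intended to reduce.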

The GEM data mapping approach is centered on exploiting the information in the data documentation, typically in the form of the data dictionaries associated with the data. The importance of data dictionary documentation, for Alzheimer's data in particular, has been articulated by Morris et al.[6] These data dictionaries contain detailed descriptive information and metadata about each data element in the dataset. Our solution is based on extracting this rich metadata from data dictionaries, developing element similarity measures based on text mining of the element descriptions, and employing machine-learning classifiers to meaningfully combine multiple factors indicative of an element match.
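As a rough illustration of this kind of pipeline, and not of GEM's actual implementation, the sketch below computes a text-mining similarity between element descriptions drawn from two hypothetical data dictionaries and combines it with a name-similarity feature in a simple classifier that predicts whether two elements match. The dictionary entries, the two features, the toy training labels, and the use of scikit-learn are all assumptions made for the example.

  # Sketch: description-based element matching via TF-IDF cosine similarity
  # plus a classifier over simple features. Illustrative, not the GEM system.
  from difflib import SequenceMatcher

  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.metrics.pairwise import cosine_similarity

  # Hypothetical data dictionary entries: element name -> description.
  dict_a = {"GENDER": "Gender of the participant (male or female)",
            "EDUC": "Years of education completed by the participant"}
  dict_b = {"SEX": "Participant sex, coded male/female",
            "VISITMO": "Month of the clinic visit"}

  def pair_features(name_a, name_b):
      """Features for a candidate pair: description and name similarity."""
      tfidf = TfidfVectorizer().fit_transform([dict_a[name_a], dict_b[name_b]])
      desc_sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
      name_sim = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
      return [desc_sim, name_sim]

  # Toy labeled pairs (1 = match, 0 = non-match) standing in for user-supplied
  # training mappings; GEM's active learning selects which pairs to label.
  pairs = [("GENDER", "SEX", 1), ("GENDER", "VISITMO", 0),
           ("EDUC", "SEX", 0), ("EDUC", "VISITMO", 0)]
  X = [pair_features(a, b) for a, b, _ in pairs]
  y = [label for _, _, label in pairs]

  clf = LogisticRegression().fit(X, y)
  print(clf.predict_proba([pair_features("GENDER", "SEX")])[0, 1])  # match probability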

References

  1. Doan, A.; Halevy, A.; Ives, Z. (2012). Principles of Data Integration (1st ed.). Elsevier. 520 pp. ISBN 9780123914798. 
  2. Ohmann, C.; Kuchinke, W. (2009). "Future developments of medical informatics from the viewpoint of networked clinical research: Interoperability and integration". Methods of Information in Medicine 48 (1): 45–54. doi:10.3414/ME9137. PMID 19151883. 
  3. "National Database for Autism Research". NIH. 2015. https://ndar.nih.gov/. 
  4. Ashish, N.; Ambite, J.L.; Muslea, M.; Turner, J.A. (2010). "Neuroscience data integration through mediation: An (F)BIRN case study". Frontiers in Neuroinformatics 4: 118. doi:10.3389/fninf.2010.00118. PMC PMC3017358. PMID 21228907. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3017358. 
  5. Halevy, A.Y.; Ashish, N.; Bitton, D. et al. (2005). "Enterprise information integration: Successes, challenges and controversies". Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data: 778–787. doi:10.1145/1066157.1066246. 
  6. Morris, J.C.; Weintraub, S.; Chui, H.C. et al. (2006). "The Uniform Data Set (UDS): Clinical and cognitive variables and descriptive data from Alzheimer Disease Centers". Alzheimer Disease and Associated Disorders 20 (4): 210–6. doi:10.1097/01.wad.0000213865.09806.92. PMID 17132964. 

Notes

This presentation is faithful to the original, with only a few minor changes to presentation. In some cases important information was missing from the references, and that information was added. References are in order of appearance rather than alphabetical order (as the original was). Some grammar, punctuation, and minor wording issues have been corrected.