User:Shawndouglas/sandbox/sublevel1

| type      = notice
| style     = width: 960px;
| text      = This is sublevel1 of my sandbox, where I play with features and test MediaWiki code. If you wish to leave a comment for me, please see [[User_talk:Shawndouglas|my discussion page]] instead.<p></p>
}}


==Sandbox begins below==
{{Infobox journal article
|name         = 
|alt          = <!-- Alternative text for images -->
|caption      = 
|title_full   = A data quality strategy to enable FAIR, programmatic access across large,<br />diverse data collections for high performance data analysis
|journal      = ''Informatics''
|authors      = Evans, Ben; Druken, Kelsey; Wang, Jingbo; Yang, Rui; Richards, Clare; Wyborn, Lesley
|affiliations = Australian National University
|contact      = Email: Jingbo dot Wang at anu dot edu dot au
|editors      = Ge, Mouzhi; Dohnal, Vlastislav
|pub_year     = 2017
|vol_iss      = '''4'''(4)
|pages        = 45
|doi          = [https://doi.org/10.3390/informatics4040045 10.3390/informatics4040045]
|issn         = 2227-9709
|license      = [http://creativecommons.org/licenses/by/4.0/ Creative Commons Attribution 4.0 International]
|website      = [http://www.mdpi.com/2227-9709/4/4/45/htm http://www.mdpi.com/2227-9709/4/4/45/htm]
|download     = [http://www.mdpi.com/2227-9709/4/4/45/pdf http://www.mdpi.com/2227-9709/4/4/45/pdf] (PDF)
}}
{{ombox
| text      = This article should not be considered complete until this message box has been removed. This is a work in progress.
}}
==Abstract==
To ensure seamless, programmatic access to data for high-performance computing (HPC) and [[Data analysis|analysis]] across multiple research domains, it is vital to have a methodology for standardization of both data and services. At the Australian National Computational Infrastructure (NCI) we have developed a data quality strategy (DQS) that currently provides processes for: (1) consistency of data structures needed for a high-performance data (HPD) platform; (2) [[quality control]] (QC) through compliance with recognized community standards; (3) benchmarking cases of operational performance tests; and (4) [[quality assurance]] (QA) of data through demonstrated functionality and performance across common platforms, tools, and services. By implementing the NCI DQS, we have seen progressive improvement in the quality and usefulness of the datasets across different subject domains, and demonstrated the ease by which modern programmatic methods can be used to access the data, either ''in situ'' or via web services, and for uses ranging from traditional analysis methods through to emerging machine learning techniques. To help increase data re-usability by broader communities, particularly in high-performance environments, the DQS is also used to identify the need for any extensions to the relevant international standards for interoperability and/or programmatic access.
'''Keywords''': data quality, quality control, quality assurance, benchmarks, performance, data management policy, netCDF, high-performance computing, HPC, FAIR data
==Introduction==
The National Computational Infrastructure (NCI) manages one of Australia's largest and most diverse repositories (10+ petabytes) of research data collections, spanning datasets from climate, coasts, oceans, and geophysics through to astronomy, [[bioinformatics]], and the social sciences.<ref name="WangLarge14">{{cite journal |title=Large-Scale Data Collection Metadata Management at the National Computation Infrastructure |journal=Proceedings from the American Geophysical Union, Fall Meeting 2014 |author=Wang, J.; Evans, B.J.K.; Bastrakova, I. et al. |pages=IN14B-07 |year=2014}}</ref> Within these domains, data can be of different types such as gridded, ungridded (i.e., line surveys, point clouds), and raster image types, as well as having diverse coordinate reference projections and resolutions. NCI has been following the FORCE11 FAIR data principles to make data findable, accessible, interoperable, and reusable.<ref name="F11FAIR">{{cite web |url=https://www.force11.org/group/fairgroup/fairprinciples |title=The FAIR Data Principles |publisher=Force11 |accessdate=23 August 2017}}</ref> These principles provide guidelines for a research data repository to enable data-intensive science, and enable researchers to address questions such as whether they can trust the scientific quality of the data and whether the data is usable with their software platforms and tools.


To ensure broader reuse of the data and enable transdisciplinary integration across multiple domains, as well as enabling programmatic access, a dataset must be usable and of value to a broad range of users from different communities.<ref name="EvansExtend16">{{cite journal |title=Extending the Common Framework for Earth Observation Data to other Disciplinary Data and Programmatic Access |journal=Proceedings from the American Geophysical Union, Fall General Assembly 2016 |author=Evans, B.J.K.; Wyborn, L.A.; Druken, K.A. et al. |pages=IN22A-05 |year=2016}}</ref> Therefore, a set of standards and "best practices" for ensuring the quality of scientific data products is a critical component in the life cycle of data management. We undertake both QC through compliance with recognized community standards (e.g., checking file headers for compliance with community conventions) and QA of data through demonstrated functionality and performance across common platforms, tools, and services (e.g., verifying that the data works with designated software and libraries).
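As a concrete illustration of the QC step, the following sketch checks a netCDF file header for a CF-style <code>Conventions</code> attribute and a small set of expected global attributes. This is a minimal sketch only, assuming the netCDF4 Python library and a hypothetical local file; NCI's actual QC relies on fuller community compliance checkers rather than a hand-rolled check like this.

<syntaxhighlight lang="python">
from netCDF4 import Dataset

# Illustrative subset of global attributes a CF/ACDD-style header is expected to carry.
REQUIRED_ATTRS = ["Conventions", "title", "institution", "source"]

def check_header(path):
    """Report which expected global attributes are missing from a netCDF header."""
    problems = []
    with Dataset(path, mode="r") as nc:
        attrs = set(nc.ncattrs())
        for name in REQUIRED_ATTRS:
            if name not in attrs:
                problems.append("missing global attribute: " + name)
        # CF compliance requires Conventions to declare a CF version, e.g., "CF-1.6".
        conventions = getattr(nc, "Conventions", "")
        if "CF-" not in str(conventions):
            problems.append("Conventions does not declare CF: " + repr(conventions))
    return problems

# "example_dataset.nc" is a hypothetical file name used for illustration.
for issue in check_header("example_dataset.nc"):
    print("QC:", issue)
</syntaxhighlight>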


The Earth Science Information Partners (ESIP) Information Quality Cluster (IQC) was established to collect such standards and best practices, to assist data producers in implementing them, and to help users take advantage of them.<ref name="RamapriyanEnsuring17">{{cite journal |title=Ensuring and Improving Information Quality for Earth Science Data and Products |journal=D-Lib Magazine |author=Ramapriyan, H.; Peng, G.; Moroni, D.; Shie, C.-L. |volume=23 |issue=7/8 |year=2017 |doi=10.1045/july2017-ramapriyan}}</ref> ESIP considers four different aspects of [[information]] quality in close relation to different stages of data products in their four-stage life cycle<ref name="RamapriyanEnsuring17" />: (1) define, develop, and validate; (2) produce, access, and deliver; (3) maintain, preserve, and disseminate; and (4) enable use, provide support, and service.


Science teams or data producers are responsible for managing data quality during the first two stages, while data publishers are responsible for the latter two. As NCI is both a digital repository, which manages the storage and distribution of reference data for a range of users, and the provider of high-end compute and data analysis platforms, its data quality processes focus on the latter two stages. A check on scientific correctness is considered part of the first two stages and is not included in the definition of "data quality" described in this paper.


==NCI's data quality strategy (DQS)==
NCI developed a DQS to establish a level of assurance, and hence confidence, for our user community and key stakeholders as an integral part of service provision.<ref name="AtkinTotal05">{{cite book |chapter=Chapter 8: Service Specifications, Service Level Agreements and Performance |title=Total Facilities Management |author=Atkin, B.; Brooks, A. |publisher=Wiley |isbn=9781405127905}}</ref> It is also a step on the pathway to meet the technical requirements of a trusted digital repository, such as the CoreTrustSeal certification.<ref name="CTSData">{{cite web |url=https://www.coretrustseal.org/why-certification/requirements/ |title=Data Repositories Requirements |publisher=CoreTrustSeal |accessdate=24 October 2017}}</ref> As meeting these requirements involves the systematic application of agreed policies and procedures, our DQS provides a suite of guidelines, recommendations, and processes for: (1) consistency of data structures suitable for the underlying high-performance data (HPD) platform; (2) QC through compliance with recognized community standards; (3) benchmarking performance using operational test cases; and (4) QA through demonstrated functionality and benchmarking across common platforms, tools, and services.


NCI’s DQS was developed iteratively: a review of other approaches to managing data QC and QA (e.g., Ramapriyan ''et al.''<ref name="RamapriyanEnsuring17" /> and Stall<ref name="StallAGU16">{{cite web |url=https://www.scidatacon.org/2016/sessions/100/ |title=AGU's Data Management Maturity Model |work=Auditing of Trustworthy Data Repositories |author=Stall, S.; Downs, R.R.; Kempler, S.J. |publisher=SciDataCon 2016 |date=2016}}</ref>) first established the DQS methodology, which was then applied to selected NCI use cases that captured existing and emerging requirements, particularly those relating to HPC.


Our approach is consistent with the American Geophysical Union (AGU) Data Management Maturity (DMM)SM model<ref name="StallAGU16" /><ref name="StallTheAmerican16">{{cite journal |title=The American Geophysical Union Data Management Maturity Program |journal=Proceedings from the eResearch Australasia Conference 2016 |author=Stall, S.; Hanson, B.; Wyborn, L. |pages=72 |year=2016 |url=https://eresearchau.files.wordpress.com/2016/03/eresau2016_paper_72.pdf}}</ref>, which was developed in partnership with the Capability Maturity Model Integration (CMMI) Institute by adapting its DMMSM<ref name="CMMIDataMan">{{cite web |url=https://cmmiinstitute.com/store/data-management-maturity-(dmm) |title=Data Management Maturity (DMM) |publisher=CMMI Institute LLC}}</ref> model for applications in the Earth and space sciences. The AGU DMMSM model aims to provide guidance on how to improve data quality and consistency and facilitate reuse in the data life cycle. It enables both producers of data and repositories that store data to ensure that datasets are "fit-for-purpose," repeatable, and trustworthy. The Data Quality Process Areas in the AGU DMMSM model define a collaborative approach for receiving, assessing, cleansing, and curating data to ensure "fitness" for intended use in the scientific community.


After several iterations, the NCI DQS was established as part of the formal data publishing process and is applied throughout the cycle, from submission of data to the NCI repository through to its final publication. The approach is also being adopted by data producers, who now engage with the process from the preparation stage, prior to ingestion onto the NCI data platform. Early consultation and feedback have greatly improved both the quality of the data and the timeliness of publication. To improve efficiency further, one of our major data suppliers is including our DQS requirements in their data generation processes to ensure data quality is considered earlier in data production.


The technical requirements and implementation of our DQS are described below in terms of four major, related components: structure, QC, benchmarking, and QA.


===Data structure===
NCI's research data collections are particularly focused on enabling programmatic access, required by: (1) NCI core services such as the NCI supercomputer and NCI cloud-based capabilities; (2) community virtual [[Laboratory|laboratories]] and virtual research environments; (3) those that require remote access through established scientific standards-based protocols that use data services; and (4), increasingly, international data federations. To enable these different types of programmatic access, datasets must be registered in the central NCI catalogue<ref name="NCIDataPortal">{{cite web |url=https://geonetwork.nci.org.au/geonetwork/srv/eng/catalog.search#/home |title=NCI Data Portal |publisher=National Computational Infrastructure}}</ref>, which records their location for access both on the filesystems and via data services.
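To make this kind of programmatic catalogue access concrete, the sketch below issues a standard OGC CSW (Catalogue Service for the Web) <code>GetRecords</code> request. The endpoint path and the CQL constraint are assumptions based on typical GeoNetwork deployments, not a documented NCI API; the KVP parameters themselves follow the CSW 2.0.2 specification.

<syntaxhighlight lang="python">
import requests

# Hypothetical CSW endpoint; GeoNetwork commonly exposes one alongside its search UI.
CSW_ENDPOINT = "https://geonetwork.nci.org.au/geonetwork/srv/eng/csw"

params = {
    "service": "CSW",
    "version": "2.0.2",
    "request": "GetRecords",
    "typeNames": "csw:Record",
    "elementSetName": "summary",
    "resultType": "results",
    "constraintLanguage": "CQL_TEXT",
    "constraint_language_version": "1.1.0",
    "constraint": "AnyText LIKE '%CMIP5%'",  # free-text match across catalogue records
    "maxRecords": "5",
}

response = requests.get(CSW_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
print(response.text[:2000])  # raw catalogue XML; parse with lxml or OWSLib in practice
</syntaxhighlight>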


It is thus obvious that police uses of DNA data providing information about individuals' characteristics raise novel politic-ethical issues.<ref name="M'charekSilent08">{{cite journal |title=Silent witness, articulate collective: DNA evidence and the inference of visible traits |journal=Bioethics |author=M'charek, A. |volume=22 |issue=9 |pages=519-28 |year=2008 |doi=10.1111/j.1467-8519.2008.00699.x |pmid=18959734}}</ref><ref name="MacLeanForensic14">{{cite journal |title=Forensic DNA phenotyping in criminal investigations and criminal courts: Assessing and mitigating the dilemmas inherent in the science |journal=Recent Advances in DNA and Gene Sequences |author=MacLean, C.E.; Lamparello, A. |volume=8 |issue=2 |pages=104-12 |year=2014 |pmid=25687339}}</ref> In particular, it brings into play the issue of what constitutes private data<ref name="ToomApproaching16">{{cite journal |title=Approaching ethical, legal and social issues of emerging forensic DNA phenotyping (FDP) technologies comprehensively: Reply to 'Forensic DNA phenotyping: Predicting human appearance from crime scene material for investigative purposes' by Manfred Kayser |journal=Forensic Science International Genetics |author=Toom, V.; Wienroth, M.; M'charek, A. et al. |volume=22 |pages=e1–e4 |year=2016 |doi=10.1016/j.fsigen.2016.01.010 |pmid=26832996}}</ref>—for certain geneticists, where “DNA Photofits” are concerned, externally visible characteristics do not fall into this category because they are visible.<ref name="KayserForensic15" /> Generally, as stated by some professionals during interviews, the question is “to know until where to go. And where to stop.“ Regarding the FNAEG and French law, in a case heard in June 2017, the European Court of Human Rights (ECHR) ruled that “interference with the applicant's right to respect for his private life had been disproportionate.”{{efn|Case of Aycaguer V. France, 22 June 2017, 8806/12, ECHR, Court (Fifth Section)}} The ECHR judgment ruled against France and underscored that French law regarding DNA date storage should be differentiated “according to the nature and seriousness of the offence committed."{{efn|See legal summary, available at [https://goo.gl/FcyuUM https://hudoc.echr.coe.int/eng#{%22itemid%22:[%22002-11703%22]} }}
This requires the data to be well-organized and compliant with uniform, professionally managed standards and consistent community conventions wherever possible. For example, the climate community's Coupled Model Intercomparison Project (CMIP) experiments use the Data Reference Syntax (DRS)<ref name="TaylorCMIP12">{{cite web |url=https://pcmdi.llnl.gov/mips/cmip5/docs/cmip5_data_reference_syntax.pdf |format=PDF |title=CMIP5 Data Reference Syntax (DRS) and Controlled Vocabularies |author=Taylor, K.E.; Balaji, V.; Hankin, S. et al. |publisher=Program for Climate Model Diagnosis & Intercomparison |date=13 June 2012}}</ref>, whilst the National Aeronautics and Space Administration (NASA) recommends a specific naming convention for Landsat satellite image products.<ref name="USGSLandsat">{{cite web |url=https://landsat.usgs.gov/what-are-naming-conventions-landsat-scene-identifiers |title=What are the naming conventions for Landsat scene identifiers? |publisher=U.S. Geological Survey |accessdate=23 August 2017}}</ref> The NCI data collection catalogue manages the details of each dataset through a uniform application of ISO 19115:2003<ref name="ISO19115">{{cite web |url=https://www.iso.org/standard/53798.html |title=ISO 19115-1:2014 Geographic information -- Metadata -- Part 1: Fundamentals |publisher=International Organization for Standardization |date=April 2014 |accessdate=25 May 2016}}</ref>, an international schema used for describing geographic information and services. Essentially, each catalogue entry points to the location of the data within the NCI data infrastructure. The catalogue entries also point to service endpoints such as a standard data download point, a data subsetting interface, and Open Geospatial Consortium (OGC) Web Mapping Service (WMS) and Web Coverage Service (WCS) interfaces. NCI can publish data through several different servers, and as such the specific endpoint for each of these service capabilities is listed.
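As an illustration of checking conformance with a naming scheme such as the DRS, this sketch validates a CMIP5-style filename of the form <code>variable_table_model_experiment_ensemble[_period].nc</code>. The field order follows the published CMIP5 DRS document cited above, but the checker itself is a simplified assumption rather than NCI's production tooling.

<syntaxhighlight lang="python">
import re

# Simplified CMIP5 DRS filename pattern:
#   <variable>_<mip_table>_<model>_<experiment>_<ensemble>[_<temporal_subset>].nc
DRS_FILENAME = re.compile(
    r"^(?P<variable>\w+)_"
    r"(?P<table>\w+)_"
    r"(?P<model>[\w.-]+)_"
    r"(?P<experiment>[\w-]+)_"
    r"(?P<ensemble>r\d+i\d+p\d+)"
    r"(?:_(?P<period>\d{6,8}-\d{6,8}))?"
    r"\.nc$"
)

def parse_drs_filename(name):
    """Return the DRS components of a CMIP5-style filename, or None if it does not conform."""
    match = DRS_FILENAME.match(name)
    return match.groupdict() if match else None

print(parse_drs_filename("tas_Amon_ACCESS1-0_historical_r1i1p1_185001-200512.nc"))
# {'variable': 'tas', 'table': 'Amon', 'model': 'ACCESS1-0',
#  'experiment': 'historical', 'ensemble': 'r1i1p1', 'period': '185001-200512'}
</syntaxhighlight>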


NCI has developed a catalogue and directory policy, which provides guidelines for organizing datasets within the concepts of data collections and data subcollections and includes a comprehensive definition for each hierarchical layer. The definitions are as follows (a short code sketch of the resulting hierarchy appears after the list):


* A ''data collection'' is the highest in the hierarchy of data groupings at NCI. It comprises either an exclusive grouping of data subcollections, or a tiered structure with an exclusive grouping of lower-tiered data collections, where the lowest-tier data collection will only contain data subcollections.


* A ''data subcollection'' is an exclusive grouping of datasets (i.e., each dataset belongs to only one subcollection) where the constituent datasets are tightly managed. Responsibility for the underlying management of its constituent datasets must sit within one organization. A data subcollection constitutes a strong connection between the component datasets, and is organized coherently around a single scientific element (e.g., model, instrument). A subcollection must have compatible licenses such that constituent datasets do not need different access arrangements.


* A ''dataset'' is a compilation of data that constitutes a programmable data unit and that has been collected and organized using a self-contained process. For this purpose it must have a named data owner; a single license; one set of semantics, ontologies, and vocabularies; and a single data format and internal data convention. A dataset must include its version.


* A ''dataset granule'' is used for some scientific domains that require a finer level of granularity (e.g., in satellite Earth Observation datasets). A granule refers to the smallest aggregation of data that can be independently described, inventoried, and retrieved as defined by NASA.<ref name="NASAGlossary">{{cite web |url=https://earthdata.nasa.gov/user-resources/glossary#ed-glossary-g |title=Granule |work=EarthData Glossary |accessdate=23 August 2017}}</ref> Dataset granules have their own metadata and support values associated with the additional attributes defined by parent datasets.


In addition, we use the term "data category" to identify common contents/themes across all levels of the hierarchy.


* A ''data category'' allows a broad spectrum of options to encode relationships between data. A data category can be anything that weakly relates datasets, with key terms (e.g., keywords, attributes, vocabularies, ontologies) serving as the primary means of discovering groupings within the data. Datasets are not exclusive to a single category.
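To make the hierarchy concrete, the following sketch models the definitions above as Python dataclasses. The class and field names are illustrative assumptions for this article, not an NCI schema.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetGranule:
    """Smallest aggregation of data that can be independently described and retrieved."""
    identifier: str
    metadata: dict = field(default_factory=dict)

@dataclass
class Dataset:
    """Programmable data unit: one named owner, one license, one format, and a version."""
    name: str
    owner: str
    license: str
    data_format: str
    version: str
    granules: List[DatasetGranule] = field(default_factory=list)
    categories: List[str] = field(default_factory=list)  # weak, non-exclusive groupings

@dataclass
class DataSubcollection:
    """Tightly managed, exclusive grouping of datasets around a single scientific element."""
    name: str
    managing_organization: str
    datasets: List[Dataset] = field(default_factory=list)

@dataclass
class DataCollection:
    """Top of the hierarchy: an exclusive grouping of subcollections (or lower-tier collections)."""
    name: str
    subcollections: List[DataSubcollection] = field(default_factory=list)

# Illustrative use, loosely modeled on the CMIP5 example discussed below.
tas = Dataset(name="tas", owner="example-owner", license="CC-BY-4.0",
              data_format="netCDF", version="v20120115", categories=["climate"])
cmip5 = DataCollection(
    name="CMIP5",
    subcollections=[DataSubcollection(name="ACCESS1-0",
                                      managing_organization="example-org",
                                      datasets=[tas])])
print(cmip5.subcollections[0].datasets[0].version)  # -> v20120115
</syntaxhighlight>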


====Organization of data within the data structure====
NCI has organized data collections according to this hierarchical structure, both on the filesystem and within our catalogue system. Figure 1 shows how these datasets are organized. Figure 2 provides an example of how the CMIP5 data collection demonstrates the hierarchical directory structure.
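The sketch below shows how a DRS-style directory path maps onto levels of this hierarchy. The level ordering follows the published CMIP5 DRS document cited earlier; the exact on-disk layout at NCI is an assumption here.

<syntaxhighlight lang="python">
from pathlib import PurePosixPath

# CMIP5 DRS directory ordering (simplified from the DRS specification).
DRS_LEVELS = ["activity", "product", "institute", "model", "experiment",
              "frequency", "modeling_realm", "mip_table", "ensemble",
              "version", "variable"]

def describe_drs_path(path):
    """Map each directory component of a DRS-style path to its DRS level name."""
    parts = PurePosixPath(path).parts
    return dict(zip(DRS_LEVELS, parts))

# Hypothetical path following the CMIP5 DRS ordering.
example = "CMIP5/output1/CSIRO-BOM/ACCESS1-0/historical/mon/atmos/Amon/r1i1p1/v20120115/tas"
for level, value in describe_drs_path(example).items():
    print(f"{level:>14}: {value}")
</syntaxhighlight>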


[[File:Fig1 Evans Informatics2017 4-4.png|700px]]
{{clear}}
{|
| STYLE="vertical-align:top;"|
{| border="0" cellpadding="5" cellspacing="0" width="700px"
|-
  | style="background-color:white; padding-left:10px; padding-right:10px;"| <blockquote>'''Figure 1.''' Illustration of the different levels of metadata and community standards used for each</blockquote>
|-  
|}
|}


==References==
{{Reflist}}


==Notes==
This presentation is faithful to the original, with only a few minor changes to presentation. In some cases important information was missing from the references, and that information was added. Several URLs from the original were dead, and more current URLs were substituted.


<!--Place all category tags here-->
[[Category:LIMSwiki journal articles (added in 2018)]]
[[Category:LIMSwiki journal articles (all)]]
[[Category:LIMSwiki journal articles on data quality]]
[[Category:LIMSwiki journal articles on informatics]]
