Journal:Risk assessment for scientific data

Full article title: Risk assessment for scientific data
Journal: Data Science Journal
Author(s): Mayernik, Matthew S.; Breseman, Kelsey; Downs, Robert R.; Duerr, Ruth; Garretson, Alexis; Hou, Chung-Yi; EDGI and ESIP Data Stewardship Committee[a]
Author affiliation(s): National Center for Atmospheric Research; Environmental Data & Governance Initiative; Columbia University; Ronin Institute for Independent Scholarship; George Mason University
Primary contact: Email: mayernik at ucar dot edu
Year published: 2020
Volume and issue: 19(1)
Article #: 10
DOI: 10.5334/dsj-2020-010
ISSN: 1683-1470
Distribution license: Creative Commons Attribution 4.0 International
Website: https://datascience.codata.org/articles/10.5334/dsj-2020-010/
Download: https://datascience.codata.org/articles/10.5334/dsj-2020-010/galley/944/download/ (PDF)

Abstract

Ongoing stewardship is required to keep data collections and archives in existence. Scientific data collections may face a range of risk factors that could hinder, constrain, or limit current or future data use. Identifying such risk factors to data use is a key step in preventing or minimizing data loss. This paper presents an analysis of data risk factors that scientific data collections may face, and a data risk assessment matrix to support data risk assessments to help ameliorate those risks. The goals of this work are to inform and enable effective data risk assessment by: a) individuals and organizations who manage data collections, and b) individuals and organizations who want to help to reduce the risks associated with data preservation and stewardship. The data risk assessment framework presented in this paper provides a platform from which risk assessments can begin, and a reference point for discussions of data stewardship resource allocations and priorities.

Keywords: risk assessment, data preservation, data stewardship, metadata

Introduction

At the “Rescue of Data At Risk” workshop held in Boulder, Colorado on September 8 and 9, 2016[b], participants were asked the following question: “How would you define ‘at-risk’ data?” Discussions on this point ranged widely and touched on several challenges, including lack of funding or personnel support for data management, natural and political disasters, and metadata loss. One participant’s organization’s definition of risk, however, stood out: “data were considered to be at-risk unless they had a dedicated plan to not be at-risk.” This simple statement vividly captures the point that data are at risk by default. In other words, ongoing stewardship is required to keep data collections and archives in existence.

The risk factors that a given data collection or archive may face vary, depending on the data’s characteristics, the data’s current environment, and the priorities and resources available at the time. Many risks can be reduced or eliminated by following best practices codified as certifications and guidelines, such as the CoreTrustSeal Data Repository Certification[1], as well as the ISO 16363:2012 standard, which defines audit and certification procedures for trustworthy digital repositories.[2] Both the CoreTrustSeal certification and ISO 16363:2012 are based on the ISO 14721:2012 standard that defines the reference model for an open archival information system (OAIS).[3] But these certifications can be large and complex. Additionally, many of the organizations that hold valuable scientific data collections may not be aware of these standards, even if the organizations are potentially resourced to tackle the challenge.[4] Further, the attainment of such certifications does not necessarily reduce the risks to data that are outside of the scope of a particular certification instrument.

This paper presents an analysis of data risk factors that stakeholders of scientific data collections and archives may face, and a matrix to support data risk assessments to help ameliorate those risks. The three driving questions for this analysis are:

  • How do stakeholders assess what data are at risk?
  • How do stakeholders characterize what risk factors data collections and/or archives face?
  • How do stakeholders make the associated risks more transparent, internally and/or externally?

The goals of this work are to inform and enable effective data risk assessment by: a) individuals and organizations who manage data collections, and b) individuals and organizations who want to help to reduce the risks associated with data preservation and stewardship. Stakeholders for these two activities include producers, stewards, sponsors, and users of data, as well as the management and staff of the institutions that employ them.

Background

This project was coordinated through the Data Stewardship Committee within the Earth Science Information Partners (ESIP), a non-profit organization that exists to support collection, stewardship, and use of earth science data, information, and knowledge.[c] The immediate motivation for the project stemmed from the Data Stewardship Committee members engaging with groups who were undertaking grass-roots “data rescue” initiatives after the 2016 U.S. presidential election. At that time, a number of loosely organized and coordinated efforts were initiated to duplicate data from U.S. government organizations to prevent potential politically motivated data deletion or obfuscation.[5][6] In many cases, these initiatives specifically focused on duplicating government-hosted earth science data.

ESIP Data Stewardship Committee members wrote a white paper to provide the earth science data centers’ perspective on these grassroots “data rescue” activities.[7] That document described essential considerations within the day-to-day work of existing federal and federally-funded earth science data archiving organizations, including data centers’ constant focus on documentation, traceability, and persistence of scientific data. The white paper also provided suggestions for how those grassroots efforts might productively engage with the data centers themselves.

One point that was emphasized in the white paper was that the actual risks faced by the data collections may not be transparent from the outside. In other words, “data rescue” activities may have in fact been duplicating data that were at minimal risk of being lost.[8] This point, and the white paper in general, was well received by people inside and outside of these grass-roots initiatives.[9][10] Questions then came back to the ESIP Data Stewardship Committee about how to understand what data held by government agencies were actually at-risk.

The analysis presented in this paper was initiated in response to these questions. Since then, these grassroots “data rescue” initiatives have had mixed success in sustaining and formalizing their efforts.[11][12][13] The intention of our paper is to enable more effective data risk assessment broadly. Rescuing data after they have been corrupted, deleted, or lost can be time- and effort-intensive, and in some cases it may be impossible.[14] Thus, we aim to provide guidelines to any individual or organization that manages and provides access to scientific data. In turn, these individuals and organizations can better assess the risks that their data face and characterize those risks.

When discussing risk and, in particular, data risk, it is useful to ask "what is the objective that is being challenged by the possible risk factors?" With regard to data, in general, discussions of risk might presume that “risks” threaten the current or future access to data by the potential data users. Currently, continuing public access to and use of scientific data is particularly relevant in light of recent open data and open science initiatives. In this regard, risks for scientific data include factors that could hinder, constrain, or limit current or future data use. Identifying such data use risk factors offers further analysis opportunities to prevent, mitigate, or eliminate the risks.

Data risk assessment

Risk assessment is a regular activity within many organizations. In a general sense, risk management plans are complementary to project management plans (Cervone 2006). Organizational assessment of digital data and information collections is likewise not new (Maemura, Moles & Becker 2017). The analysis presented in this paper builds on prior work in a number of areas: 1) research on data risks, 2) data rescue initiatives within government agencies and specific disciplines, 3) CODATA and RDA working groups and meetings, 4) trusted repository certifications, and 5) knowledge and experience of the ESIP Data Stewardship Committee members. Table 1 summarizes data risk factors that emerge from these knowledge bases. The list of risk factors shown in Table 1 is not meant to be exhaustive. Rather, it provides a useful illustration of the diverse ways in which data sets, collections, and archives might encounter risks to data usability and accessibility. The rest of this section details further key insights from the five areas of prior work noted above.

Table 1. Risk factors for scientific data collections
Risk factor Description
1. Lack of use Data are rarely accessed, deemed "unwanted," and consequently discarded.
2. Loss of funding for archive The whole archive loses its funding source.
3. Loss of funding for specific datasets Specific datasets lose their funding source.
4. Loss of knowledge around context or access Data owners lose individuals—e.g., due to retirement or death—who know how to access the data or know the metadata associated with these data that make the data useable to others.
5. Lack of documentation and metadata Data cannot be interpreted due to lack of contextual knowledge.
6. Data mislabeling Data are lost because they are poorly identified (either physically or digitally).
7. Catastrophes Fires, floods, wars, human conflicts, etc. destroy data and/or their owners.
8. Poor data governance Uncertain or unknown decision making processes impede effective data management.
9. Problems with legal status for data ownership and use Uncertain, unknown, or restrictive legal status limits the possible uses of data.
10. Media deterioration Physical media deterioration prevents data from being accessed (paper, tape, or digital media).
11. Missing files Data files are lost without any known reason.
12. Overdependence on a single service provider Problems arise from having a single point of failure, particularly if a vital service provider goes out of business.
13. Accidental deletion Data are accidentally deleted by staff error.
14. Lack of planning Lack of planning puts data collections at risk of being susceptible to unexpected events.
15. Cybersecurity breach Data are intentionally deleted or corrupted via a security breach, e.g., via malware.
16. Overabundant data Difficulty dealing with too much data results in a reduction in value or quality of whole collections.
17. Political interference Data are deleted or made inaccessible due to uncontrollable political decisions.
18. Lack of provenance information Data cannot be trusted or understood because of a lack of information about data processing steps, or about data stewardship chains of trust.
19. File format obsolescence Data cannot be accessed due to lack of knowledge, equipment, or software for reading a specific file format.
20. Storage hardware breakdown Data are lost due to a sudden and catastrophic malfunction of storage hardware.
21. Bit rot and data corruption Digital data on storage hardware gradually become corrupted due to an accumulation of non-critical failures (bits flipping) in a data storage device.

Research on data risks

A range of studies have explored the kinds of risks that scientific data may face, and potential ways to mitigate specific risk factors. Many of these studies touch on practices that are typical of scientific data archives. Metadata, for example, can be considered both a risk factor and a mitigation strategy. Insufficient metadata is itself a potential factor that can reduce the discoverability, usability, and preservability of data, particularly in situations where direct human knowledge of the data is absent.[15] In fact, many data rescue projects find that the “rescue” efforts must be targeted much more toward metadata than data.[16][17] This might be the case for a couple of reasons. First, insufficient or missing metadata might prevent data from being usable regardless of the condition of the data themselves. Examples include missing column headers in tabular data that prevent a user from knowing what the data are representing, and insufficient provenance metadata that prevent users from trusting the data due to lack of context about data collection and quality control. Second, metadata are also central to documenting and mitigating risks as they manifest, while preventing risks from becoming problematic in the future (Anderson et al. 2011). For example, documenting data ownership and usage rights is an essential step in mitigating risk factor #9, “Problems with legal status for data ownership and use,” from Table 1.

Different kinds of metadata might be necessary to reduce specific data risks. For example, specifications of file format structures are a critical type of metadata for mitigating risks associated with digital file format obsolescence. Open specifications complement other critical mitigation practices and tools related to file format obsolescence. As one example, keeping rendering software available is an important way to retain access to particular file formats, but this typically also requires maintaining documentation of how the rendering software works.[18]

Other risk factors (listed in Table 1) relate to the sustainability and transparency of the archiving organization. These factors are important in ensuring the accessibility of the data and the trustworthiness of the archive. As Yakel et al.[19] note, “[t]rust in the repository is a separate and distinct factor from trust in the data.” For people outside of the repository, “institutional reputation appears to be the strongest structural assurance indicator of trust.”[19] In essence, effective communication about data risks and steps taken to eliminate problems is helpful in assuring users that the archive is trustworthy.[20]

Data that face extreme or unusual risks, however, may not be manageable via typical data curation workflows. Downs and Chen[21] note that dealing with some data risk factors “may well require divergence from regular data curation procedures, as tradeoffs may be necessary.” For example, Gallaher et al.[22] undertook an extensive project to recover, reconstruct, and reprocess data from early satellite missions into modern formats that are usable by modern scientists. This project involved dealing with degrading and fragile magnetic tapes, extracting data from the tapes’ unusual format, and recreating documentation for the data. Additionally, natural disasters, fires, and floods also present unpredictable risk factors to data collections of all kinds. While these kinds of events can be planned for and steps can be taken to prevent the occurrence of some of them (e.g., fires), they can still cause major data loss and/or require significant recovery effort.

Mitigating risks, of whatever kind, takes effort and resources. The time required to create metadata, re-format files, create contingency plans, and communicate these efforts to user communities can be considerable. This time investment can be the greatest barrier to performing risk assessment and mitigation activities.[23] Putting focus on assessment of data risk factors may mean that “certain priorities need to be re-ordered, new skills acquired and taught, resources redirected, and new networks constructed.”[24] It can be possible to automate some components of risk assessment[25], but most of the steps require human effort. This intensive effort is vividly illustrated by the many data rescue initiatives that have taken place within government agencies and other kinds of organizations over the past few decades.

Data rescue initiatives within government agencies and specific disciplines

Legacy data are data collected in the past with different technologies and data formats than those used today. These data often face the largest number of risk factors that could lead to data loss. A wide range of government agencies and other organizations have conducted legacy data rescue initiatives to modernize data and make them more accessible and usable for today’s science. Each data rescue project typically faces many different kinds of data risks. For example, a recent satellite data rescue effort had to address the “loss of datasets, reconciliation of actual media contents with metadata available, deviation of the actual data format from expectations or documentation, and retiring expertise.”[26] Data rescue projects typically involve work to prevent future risk factors from manifesting, in addition to modernizing data for accessibility and usability. For example, data rescue projects migrate data to less endangered data formats, and create new metadata and quality control documentation.[27]

CODATA/RDA working groups & meetings

Relevant professional organizations, including the International Council for Science (ICSU) Committee on Data for Science and Technology (CODATA) and the Research Data Alliance (RDA), also have been actively identifying improvements for data stewardship practices that can reduce potential risks to data. For example, the former CODATA Data At Risk Task Group (DAR-TG) raised awareness about the value of heritage data and described the benefits obtained from several data rescue projects.[24] This group also organized the 2016 “Rescue of Data At Risk” workshop mentioned in the introduction of this paper. That workshop led to a document titled “Guidelines to the Rescue of Data At Risk.”[28] Subsequently, the RDA Data Rescue Interest Group[29], spawned from the CODATA DAR-TG, has continued efforts to increase awareness of data rescue projects.

Repository certifications and maturity assessment

Many data repositories have conducted self-assessments and external assessments to document their compliance with the standards for trusted repositories and attain certification of their capabilities and practices for managing data. In addition to emphasizing organizational issues, repository certification instruments, such as ISO 16363[2] and CoreTrustSeal[1] certification, also focus on digital object management and infrastructure capabilities. Engaging in such assessments offers benefits to repositories and their stakeholders. A key benefit is the identification of areas where improvements have been completed or need to be completed to reduce risks to data.[1] In an examination of perceptions of repository certification, Donaldson et al.[26] found that process improvement was often reported by repository staff as a benefit of repository certification.

In addition to (or complementary to) formal certifications, data repositories may conduct data stewardship maturity assessment exercises to help in identifying data risks and informing data risk mitigation strategies.[30] “Maturity” is used in the sense presented by Peng et al.[31], referring to the level of performance attained to ensure preservability, accessibility, usability, transparency/traceability, and sustainability of data, along with the level of performance in data quality assurance, data quality control/monitoring, data quality assessment, and data integrity checks. Maturity at the institutional (or archive) level in areas such as policy, funding, and infrastructure does not necessarily translate to comprehensive maturity at the dataset level.[32] Data stewardship maturity assessment should therefore be performed both at the institutional level and at the dataset level. It is recognized that performing stewardship maturity assessments can be time-consuming and resource-intensive. However, stewardship organizations are encouraged to perform self-assessment using a “stage by stage” or “a la carte” approach.[33] Ultimately, both formal certifications and informal maturity assessments help organizations not only gain self-awareness, but also identify better solutions for their data that might be at risk of being lost or rendered unusable.

Developing a data risk assessment matrix

Risk assessment is a well-established field, with 30 to 40 years of history.[34][35] However, the practice of applying risk assessment methodologies to scientific data collections is less formally established, though regular audits and reviews of data management systems are common in some organizations.[36]

The starting point for this project was to establish a process for categorizing the data risk factors shown in Table 1. The initial idea of our effort was that if data risk factors could be categorized into a logical structure, it would allow data managers to assess the risks to their data collections via a set of predefined and consistent categories. To develop a logical categorization, we held a session to conduct a “card sorting” exercise at the 2018 ESIP Summer Meeting, which took place in July 2018 in Tucson, Arizona. “Card sorting” is an established method for developing categorizations of concepts, vocabulary terms, or web sites.[37][38] Following the card sorting methodology, participants in the 2018 ESIP meeting session were provided the list of data risks in Table 1 and asked to complete the following task: “Looking at the list of data risk factors, how would you group these factors, based on the categories you would define?”

Approximately 15 attendees engaged in the exercise. We used a combination of an online card sorting tool and hand-written recommendations to collect the completed card sorting categorizations. Following the completion of the exercise, the results were displayed in front of the session participants and a group discussion took place. The outcome of the card sorting exercise and subsequent discussion was a clear recognition that there could be many valid and useful ways of categorizing data risks. No single method for categorizing the risk factors would be sufficient to cover the diverse organizations and situations within which data collections exist. Depending on the situation(s) a data curation organization or individual is facing, they may need to categorize data risks in different ways. This characteristic is common in risk assessments generally, as risk prioritization and categorizations are dependent on the phenomena being assessed, the characteristics of the situation, and the goals of the organizations or people performing the assessment.[39]

Through subsequent discussion and analysis of the data risk assessment literature noted above, we identified at least ten different ways that data risk factors could be assessed. Many of these categorization methods are applicable to risk assessments of any kind.[40] The list below is not meant to be exhaustive, and some methods are likely related. Data risk factors could be categorized or prioritized according to the methods listed in Table 2.

Table 2. Methods for categorizing data risks
Categorization method Description
Severity of risk How much impact could this risk factor have on the data itself, regardless of the current importance of data to the user?
Likelihood of occurrence How likely is the risk factor to occur?
Length of recovery time How long would it take to recover data or re-establish data accessibility?
Impact on user How significantly would data users be impacted by data loss or loss of data accessibility?
Who is responsible for addressing the problem Who has the expertise and responsibility to mitigate or respond to particular risk factors?
Cause of problem What caused a data risk factor to occur?
Degree of control How much control does an organization or individual have over whether a risk factor is present or will occur?
Proactive vs. reactive response Should risk factors be mitigated via preventative measures, or should they be responded to upon occurrence?
Nature of mitigation What steps must be taken or processes put in place to prevent a risk, or mitigate a risk after it has occurred?
Resources required for mitigation What time, money, or personnel resources will be necessary to mitigate risk factors?

The lists shown in Tables 1 and 2 offer characteristics on which data risk assessments can be built. Combining the categorization methods from Table 2 with the selected risk factors from Table 1 leads to a risk assessment matrix, as shown in Table 3. The table shows an example of a selection of specific data risk factors and the categorization methods. Depending on the situation or data collection being assessed, different risk factors and/or categorization methods may be more applicable than the ones shown in Table 3. Those conducting a data risk assessment can then use the matrix as a way to organize, prioritize, or potentially quantify the selected risks according to the categorization methods that are most relevant for the specific case at hand. (The next section provides more detailed illustrations of the use of the data risk assessment matrix. Appendix I, at the end of this article, shows the full data risk assessment template, with all risks and categorization methods from Tables 1 and 2.)

Table 3. Example of a blank data risk assessment matrix, after selection of specific risk factors and categorization methods of interest
Risk factors (rows): Lack of use; Loss of knowledge; Lack of docs and metadata; Catastrophes; Poor data governance; Media deterioration
Categorization methods (columns): Severity of risk; Likelihood of occurrence; Cause of problem; Required mitigation resources
(Cells are left blank, to be filled in during an assessment.)
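The structure of the blank matrix in Table 3 can also be represented programmatically, for example as a dictionary of dictionaries keyed by risk factor and categorization method. This is a minimal illustrative sketch, not part of the original paper; the factor and method names are taken from Table 3, and the sample cell value is invented.

```python
# Minimal sketch of the blank risk assessment matrix from Table 3.
# Rows are risk factors (from Table 1), columns are categorization
# methods (from Table 2); cells start empty and are filled in
# during an assessment.

RISK_FACTORS = [
    "Lack of use",
    "Loss of knowledge",
    "Lack of docs and metadata",
    "Catastrophes",
    "Poor data governance",
    "Media deterioration",
]

CATEGORIZATION_METHODS = [
    "Severity of risk",
    "Likelihood of occurrence",
    "Cause of problem",
    "Required mitigation resources",
]

def blank_matrix(factors, methods):
    """Build an empty assessment matrix: one row per risk factor,
    one (initially None) cell per categorization method."""
    return {f: {m: None for m in methods} for f in factors}

matrix = blank_matrix(RISK_FACTORS, CATEGORIZATION_METHODS)

# A cell can hold a numeric scale value or free-text description,
# depending on the categorization method (hypothetical example):
matrix["Lack of use"]["Severity of risk"] = 3  # e.g., on a 1-3 scale
```

Because the matrix imposes no cell format, each organization can decide per column whether numeric scales or text descriptions fit best, as the case studies in the paper illustrate.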

Application of the data risk assessment matrix

Three case studies are described below, in which the data risk assessment matrix was used to develop a better understanding of data risks for particular resources. These cases enable evaluation of the data risk assessment framework presented in this paper, clarifying its strengths and weaknesses, and pinpointing the situations in which it can be most useful.[41]

Case 1 – NCAR Library analog data collection

The National Center for Atmospheric Research (NCAR) Library maintains an analog data collection that consists of about 300 datasets in support of atmospheric and meteorological research conducted by NCAR scientists. These assets are largely compilations of measurements and statistics published by national and international meteorological services and other kinds of government entities. Many of these assets have been in the NCAR Library’s collections for decades, and most were minimally cataloged when they were first brought into the collection. As such, the current usage of the collection is minimal. A prior assessment done by the NCAR Library and a student assistant sought to identify individual assets that were of higher potential value and interest for current science. This assessment effort resulted in a modernization prioritization based on a geographic and temporal framework, and improved metadata records for about five percent of the collection.[42] This effort did not, however, include any kind of risk assessment related to the physical assets themselves.

The data risk assessment matrix was therefore helpful in conducting a second-level priority analysis for these NCAR Library analog data assets. We used the matrix as a way to identify which risk factors were most important for these materials, and to characterize the mitigation efforts that were needed for each risk factor. In particular, we focused the risk assessment on the data assets that were previously identified as having high geospatial and temporal interest. The NCAR Library use of the matrix involved a series of steps:

Step 1 – A number of risk factors listed in the matrix were identified as being of most importance, with the focus being on factors that prevented or impeded the use of these data within current scientific studies. The most immediate risk factors were identified to be the “lack of use” and the “lack of documentation/metadata” for these assets. Other risks that were secondary in immediacy, but still potentially important, were data mislabeling, the questionable legal status for ownership and use, media deterioration, lack of planning, and poor data governance.
Step 2 – The second step was to identify which categorization methods shown in the matrix were most applicable/appropriate for the NCAR Library’s management and maintenance of this collection. The methods selected were: a) length of recovery time, b) who is responsible for addressing the problem, c) nature of mitigation, and d) resources required for mitigation.
Step 3 – The third step was to fill in the boxes in the matrix for the risk factors and categorization methods. For example, for the “length of recovery time” question, we used a simple 1–3 scale to indicate relative differences in how long it would take to mitigate the two most important risk factors: “lack of use” and the “lack of documentation/metadata.” As one example, some data assets were published by international agencies and therefore have title pages and documentation that are not in English. In turn, due to the lack of relevant foreign language expertise in the NCAR Library staff, developing new metadata for these resources will take more effort than for those assets that were published by English-speaking countries. For the “resources required for mitigation” categorization method, a numerical scale was not as appropriate. Instead, we filled in the matrix with text descriptions of the resources required to mitigate the risk factors. An example entry under the “lack of documentation and metadata” risk factor was: “We would need to create new metadata for the library catalog, then transform to ISO for inclusion in NCAR DASH Search, with the added challenge of needing to look at microfilm files (no current working reader in Library).”

In summary, the matrix was very useful as “something to think with.” In other words, it jump-started the process for doing the risk assessment because the NCAR Library staff did not need to spend time developing a comprehensive list of risk factors that may apply for these data, or brainstorm about how to categorize those risks. The risk factor matrix provided a ready-made starting point for the assessment. Because the matrix does not dictate how the cells should be filled in, the NCAR Library staff made decisions about how to apply the matrix for each categorization method that was chosen. The matrix structure could potentially be applied or customized to create a prioritization rubric, by supporting the creation of a numeric scoring process for categories where that is appropriate.
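The prioritization rubric mentioned above can be sketched as a simple scoring function: where categorization methods use numeric scales (as with the 1–3 "length of recovery time" scale), per-factor scores can be summed to rank mitigation priorities. This is a hypothetical illustration; the score values below are invented for the example and are not taken from the NCAR assessment.

```python
# Hypothetical prioritization sketch: rank risk factors by summing
# numeric cell values across scale-based categorization methods.
# All score values here are invented for illustration.

assessment = {
    "Lack of use":               {"Length of recovery time": 2, "Severity of risk": 3},
    "Lack of docs and metadata": {"Length of recovery time": 3, "Severity of risk": 3},
    "Media deterioration":       {"Length of recovery time": 1, "Severity of risk": 2},
}

def prioritize(assessment):
    """Return (risk factor, total score) pairs, highest score first."""
    totals = {factor: sum(scores.values()) for factor, scores in assessment.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

ranking = prioritize(assessment)
# The first entry in `ranking` is the highest-priority risk factor.
```

Text-valued columns such as "resources required for mitigation" would simply be excluded from the sum, or mapped onto a scale first if a numeric rubric is desired.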

Case 2 – Mohonk Preserve Daniel Smiley Research Library

Mohonk Preserve is a land trust and nature preserve in New Paltz, New York covering more than 8,000 acres of a northern section of the Appalachian Mountains known as the Shawangunk Mountains. Mohonk Preserve’s conservation science division, the Daniel Smiley Research Center (DSRC), is affiliated with the Organization of Biological Field Stations (OBFS) and acts as a NOAA Climate Observation Center. DSRC staff and citizen scientists carry out a variety of long-term monitoring projects and manage an extensive archive of historical observations. The archive houses 60,000 physical items, 9,000 photographs, 86 years of natural history observations, 123 years of daily weather data, and a research library of legacy titles. The physical items include more than 3,000 herbarium specimens, 107 bird specimens, 140 butterfly specimens, 139 mammal specimens, 400 arthropod specimens, and over 14,000 index cards with handwritten and typed observations. Digitization of the archive holdings is ongoing, and packaging and publishing datasets through the Environmental Data Initiative is a priority.[43][44][45] These data and natural history collections underpin the Mohonk Preserve’s land management and stewardship and have been crucial to an increasing number of scientific publications[46][47][48][49], but the collections remain underutilized.


Footnotes

  1. We list EDGI and the ESIP Data Stewardship Committee as authors due to the contributions of many individuals from both organizations to the work described in this paper. The named authors are the individuals involved in each organization who contributed directly to the paper’s text.
  2. The workshop was organized under the auspices of the Research Data Alliance (RDA) and the Committee on Data (CODATA) within the International Science Council.
  3. See https://wiki.esipfed.org/Preservation_and_Stewardship.

References

  1. 1.0 1.1 1.2 CoreTrustSeal Standards and Certification Board (2020). "CoreTrustSeal". https://www.coretrustseal.org/. 
  2. 2.0 2.1 "ISO 16363:2012 - Space data and information transfer systems — Audit and certification of trustworthy digital repositories". International Organization for Standardization. February 2012. https://www.iso.org/standard/56510.html. 
  3. "ISO 14721:2012 - Space data and information transfer systems — Open archival information system (OAIS) — Reference model". International Organization for Standardization. September 2012. https://www.iso.org/standard/56510.html. 
  4. Maemura, E.; Moles, N.; Becker, C. (2017). "Organizational assessment frameworks for digital preservation: A literature review and mapping". JASIST 68 (7): 1619–37. doi:10.1002/asi.23807. 
  5. Dennis, B. (13 December 2016). "Scientists are frantically copying U.S. climate data, fearing it might vanish under Trump". The Washington Post. https://www.washingtonpost.com/news/energy-environment/wp/2016/12/13/scientists-are-frantically-copying-u-s-climate-data-fearing-it-might-vanish-under-trump/. 
  6. Varinsky, D. (11 February 2017). "Scientists across the US are scrambling to save government research in 'Data Rescue' events". Business Insider. https://www.businessinsider.com/data-rescue-government-data-preservation-efforts-2017-2. 
  7. Mayernik, M.S.; Downs, R. R.; Duerr, R. et al. (4 April 2017). "Stronger together: The case for cross-sector collaboration in identifying and preserving at-risk data". FigShare. https://esip.figshare.com/articles/journal_contribution/Stronger_together_the_case_for_cross-sector_collaboration_in_identifying_and_preserving_at-risk_data/4816474/1. 
  8. Lamdan, S. (2018). "Lessons from DataRescue: The Limits of Grassroots Climate Change Data Preservation and the Need for Federal Records Law Reform". University of Pennsylvania Law Review Online 166 (1). https://scholarship.law.upenn.edu/penn_law_review_online/vol166/iss1/12. 
  9. Cornelius, K.B.; Pasquetto, I.V. (2018). "‘What Data?’ Records and Data Policy Coordination During Presidential Transitions". Proceedings from iConference 2018: Transforming Digital Worlds: 155–63. doi:10.1007/978-3-319-78105-1_20. 
  10. McGovern, N.Y. (2017). "Data rescue: Observations from an archivist". ACM SIGCAS Computers and Society 47 (2): 19–26. doi:10.1145/3112644.3112648. 
  11. Allen, L.; Stewart, C.; Wright, S. (2017). "Strategic open data preservation: Roles and opportunities for broader engagement by librarians and the public". College & Research Libraries News 78 (9): 482. doi:10.5860/crln.78.9.482. 
  12. Chodacki, J. (2017). "Data Mirror-Complementing Data Producers". Against the Grain 29 (6): 13. doi:10.7771/2380-176X.7877. 
  13. Janz, M.M. (2017). "Maintaining Access to Public Data: Lessons from Data Refuge". Against the Grain 29 (6): 11. doi:10.7771/2380-176X.7875. 
  14. Pienta, A.M.; Lyle, J. (2017). "Retirement in the 1950s: Rebuilding a Longitudinal Research Database". IASSIST Quarterly 42 (1): 12. doi:10.29173/iq19. 
  15. Michener, W.K.; Brunt, J.W.; Helly, J.J. et al. (1997). "Nongeospatial metadata for the ecological sciences". Ecological Applications 7 (1): 330–42. doi:10.1890/1051-0761(1997)007[0330:NMFTES]2.0.CO;2. 
  16. Knapp, K.R.; Bates, J.J.; Barkstrom, B. et al. (2007). "Scientific Data Stewardship: Lessons Learned from a Satellite–Data Rescue Effort". Bulletin of the American Meteorological Society 88 (9): 1359–62. doi:10.1175/BAMS-88-9-1359. 
  17. Hsu, L.; Lehnert, K.A.; Goodwillie, A. et al. (2015). "Rescue of long-tail data from the ocean bottom to the Moon: IEDA Data Rescue Mini-Awards". GeoResJ 6: 108–114. doi:10.1016/j.grj.2015.02.012. 
  18. Ryan, H. (2014). "Occam’s Razor and File Format Endangerment Factors" (PDF). Proceedings of the 11th International Conference on Digital Preservation: 179–88. https://www.nla.gov.au/sites/default/files/ipres2014-proceedings-version_1.pdf. 
  19. 19.0 19.1 Yakel, E.; Faniel, I.; Krisberg, A. et al. (2013). "Trust in Digital Repositories". International Journal of Digital Curation 8 (1): 143–56. doi:10.2218/ijdc.v8i1.251. 
  20. Yoon, A. (2016). "Data reusers' trust development". JASIST 68 (4): 946–956. doi:10.1002/asi.23730. 
  21. Downs, R.R.; Chen, R.S. (2017). "Chapter 12: Curation of Scientific Data at Risk of Loss: Data Rescue and Dissemination". Curating research data - Volume one: Practical strategies for your digital repository. Association of College and Research Libraries. pp. 263–77. doi:10.7916/D8W09BMQ. 
  22. Gallaher, D.; Campbell, G.G.; Meier, W. et al. (2015). "The process of bringing dark data to light: The rescue of the early Nimbus satellite data". GeoResJ 6: 124–34. doi:10.1016/j.grj.2015.02.013. 
  23. Thompson, C.A.; Robertson, D.; Greenberg, J. (2014). "Where Have All the Scientific Data Gone? LIS Perspective on the Data-At-Risk Predicament". College & Research Libraries 75 (6): 842–861. doi:10.5860/crl.75.6.842. 
  24. 24.0 24.1 Griffin, R.E.; CODATA Task Group ‘Data At Risk’ (DAR-TG) (2015). "When are Old Data New Data?". GeoResJ 6: 92–97. doi:10.1016/j.grj.2015.02.004. 
  25. Graf, R.; Ryan, H.M.; Houzanme, T. et al. (2016). "A Decision Support System to Facilitate File Format Selection for Digital Preservation". Libellarium 9 (2): 267–74. doi:10.15291/libellarium.v9i2.274. 
  26. 26.0 26.1 Poli, P.; Dee, D.P.; Saunders, R. et al. (2017). "Recent Advances in Satellite Data Rescue". Bulletin of the American Meteorological Society 98 (7): 1471–1484. doi:10.1175/BAMS-D-15-00194.1. 
  27. Levitus, S. (2012). "The UNESCO-IOC-IODE "Global Oceanographic Data Archeology and Rescue" (GODAR) Project and "World Ocean Database" Project". Data Science Journal 11: 46–71. doi:10.2481/dsj.012-014. 
  28. Research Data Alliance (24 March 2017). "Guidelines to the Rescue of Data At Risk". https://www.rd-alliance.org/guidelines-rescue-data-risk. 
  29. Research Data Alliance (14 August 2019). "Data Rescue IG". https://rd-alliance.org/groups/data-rescue.html. 
  30. Faundeen, J. (2017). "Developing Criteria to Establish Trusted Digital Repositories". Data Science Journal 16: 22. doi:10.5334/dsj-2017-022. 
  31. Peng, G.; Privette, J.L.; Kearns, E.J. et al. (2015). "A Unified Framework for Measuring Stewardship Practices Applied to Digital Environmental Datasets". Data Science Journal 13: 231–53. doi:10.2481/dsj.14-049. 
  32. Peng, G. (2018). "The State of Assessing Data Stewardship Maturity – An Overview". Data Science Journal 17: 7. doi:10.5334/dsj-2018-007. 
  33. Peng, G.; Milan, A.; Ritchey, N.A. et al. (2019). "Practical Application of a Data Stewardship Maturity Matrix for the NOAA OneStop Project". Data Science Journal 18 (1): 41. doi:10.5334/dsj-2019-041. 
  34. National Research Council (1983). Risk Assessment in the Federal Government: Managing the Process. National Academies Press. doi:10.17226/366. ISBN 9780309033497. https://www.nap.edu/catalog/366/risk-assessment-in-the-federal-government-managing-the-process. 
  35. Aven, T. (2016). "Risk assessment and risk management: Review of recent advances on their foundation". European Journal of Operational Research 253 (1): 1–13. doi:10.1016/j.ejor.2015.12.023. 
  36. Ramapriyan, H.K. (29 July 2017). "NASA's EOSDIS, Trust and Certification". FigShare. https://esip.figshare.com/articles/presentation/NASA_s_EOSDIS_Trust_and_Certification/5258047/1. 
  37. Zimmerman, D.E.; Akerelrea, C. (2002). "A group card sorting methodology for developing informational Web sites". Proceedings of the 2002 IEEE International Professional Communication Conference: 437–445. doi:10.1109/IPCC.2002.1049127. 
  38. "Card Sorting". Usability.gov. U.S. General Services Administration. 2019. https://www.usability.gov/how-to-and-tools/methods/card-sorting.html. 
  39. Slovic, P. (1999). "Trust, emotion, sex, politics, and science: Surveying the risk-assessment battlefield". Risk Analysis 19: 689–701. doi:10.1023/A:1007041821623. 
  40. Cervone, H.F. (2006). "Project risk management". OCLC Systems & Services: International digital library perspectives 22 (4): 256–262. doi:10.1108/10650750610706970. 
  41. Becker, C.; Maemura, E.; Moles. N. (2020). "The Design and Use of Assessment Frameworks in Digital Curation". JASIST 71 (1): 55–68. doi:10.1002/asi.24209. 
  42. Mayernik, M.S.; Huddle, J.; Hou, C.-Y. et al. (2017). "Modernizing Library Metadata for Historical Weather and Climate Data Collections". Journal of Library Metadata 17 (3–4): 219–39. doi:10.1080/19386389.2018.1440927. 
  43. Mohonk Preserve; Belardo, C.; Feldsine, N. et al. (2 August 2018). "History of Acid Precipitation on the Shawangunk Ridge: Mohonk Preserve Precipitation Depths and pH, 1976 to Present". Environmental Data Initiative Data Portal. doi:10.6073/pasta/734ea90749e78613452eacec489f419c. https://portal.edirepository.org/nis/mapbrowse?packageid=edi.225.2. 
  44. Mohonk Preserve; Forester, A.; Huth, P. et al. (2 August 2018). "Mohonk Preserve Ground Water Springs Data, 1991 to Present". Environmental Data Initiative Data Portal. doi:10.6073/pasta/928feed7ee748509ab065de7e3791966. https://portal.edirepository.org/nis/mapbrowse?packageid=edi.230.1. 
  45. Mohonk Preserve; Feldsine, N.; Forester, A. et al. (10 July 2019). "Mohonk Preserve Amphibian and Water Quality Monitoring Dataset at 11 Vernal Pools from 1931-Present". Environmental Data Initiative Data Portal. doi:10.6073/pasta/864aea25998b73c5d1a5b5f36cb6583e. https://portal.edirepository.org/nis/mapbrowse?packageid=edi.398.1. 
  46. Cook, B.I.; Cook, E.R.; Huth, P.C. et al. (2008). "A cross‐taxa phenological dataset from Mohonk Lake, NY and its relationship to climate". International Journal of Climatology 28 (10): 1369–1383. doi:10.1002/joc.1629. 
  47. Cook, B.I.; Cook, E.R.; Anchukaitis, K.J. et al. (2010). "A Homogeneous Record (1896–2006) of Daily Weather and Climate at Mohonk Lake, New York". Journal of Applied Meteorology and Climatology 49 (3): 544–555. doi:10.1175/2009JAMC2221.1. 
  48. Charifson, D.M.; Huth, P.C.; Thompson, J.E. et al. (2015). "History of Fish Presence and Absence Following Lake Acidification and Recovery in Lake Minnewaska, Shawangunk Ridge, NY". Northeastern Naturalist 22 (4): 762–781. doi:10.1656/045.022.0411. 
  49. Richardson, D.C.; Charifson, D.M.; Stanson, V.J. et al. (2017). "Reconstructing a trophic cascade following unintentional introduction of golden shiner to Lake Minnewaska, New York, USA". Inland Waters 6 (1): 29–33. doi:10.5268/IW-6.1.915. 

Notes

This presentation is faithful to the original, with only a few minor changes to presentation. In some cases important information was missing from the references, and that information was added. The original article lists references in alphabetical order; however, this version lists them in order of appearance, by design.