Journal:Wrangling environmental exposure data: Guidance for getting the best information from your laboratory measurements

Full article title Wrangling environmental exposure data: Guidance for getting the best information from your laboratory measurements
Journal Environmental Health
Author(s) Udesky, Julia O.; Dodson, Robin E.; Perovich, Laura J.; Rudel, Ruthann A.
Author affiliation(s) Silent Spring Institute
Primary contact Email: Use journal website to contact
Year published 2019
Volume and issue 18
Article # 99
DOI 10.1186/s12940-019-0537-8
ISSN 1476-069X
Distribution license Creative Commons Attribution 4.0 International
Website https://ehjournal.biomedcentral.com/articles/10.1186/s12940-019-0537-8
Download https://ehjournal.biomedcentral.com/track/pdf/10.1186/s12940-019-0537-8 (PDF)

Abstract

Background: Environmental health and exposure researchers can improve the quality and interpretation of their chemical measurement data, avoid spurious results, and improve analytical protocols for new chemicals by closely examining lab and field quality control (QC) data. Reporting QC data along with chemical measurements in biological and environmental samples allows readers to evaluate data quality and appropriate uses of the data (e.g., for comparison to other exposure studies, association with health outcomes, use in regulatory decision-making). However, many studies do not adequately describe or interpret QC assessments in publications, leaving readers uncertain about the level of confidence in the reported data. One potential barrier to both QC implementation and reporting is that guidance on how to integrate and interpret QC assessments is often fragmented and difficult to find, with no centralized repository or summary. In addition, existing documents are typically written for regulatory scientists rather than environmental health researchers, who may have little or no experience in analytical chemistry.

Objectives: We discuss approaches for implementing quality assurance/quality control (QA/QC) in environmental exposure measurement projects and describe our process for interpreting QC results and drawing conclusions about data validity.

Discussion: Our methods build upon existing guidance and years of practical experience collecting exposure data and analyzing it in collaboration with contract and university laboratories, as well as the Centers for Disease Control and Prevention. With real examples from our data, we demonstrate problems that would not have come to light had we not engaged with our QC data and incorporated field QC samples in our study design. Our approach focuses on descriptive analyses and data visualizations that have been compatible with diverse exposure studies, with sample sizes ranging from tens to hundreds of samples. Future work could incorporate additional statistically grounded methods for larger datasets with more QC samples.

Conclusions: This guidance, along with example table shells, graphics, and some sample R code, provides a useful set of tools for getting the best information from valuable environmental exposure datasets and enabling valid comparison and synthesis of exposure data across studies.

Keywords: exposure science, environmental epidemiology, environmental chemicals, environmental monitoring, quality assurance/quality control (QA/QC), data validation, exposure measurement, measurement error

Background

Chemical measurements play a critical role in the study of links between the environment and health, yet many researchers in this field receive little if any training in analytical chemistry. The growing interest in measuring and evaluating health effects of co-exposure to a multitude of chemicals[1][2] makes this gap in training increasingly problematic, as the task at hand becomes ever-more complicated (i.e., analyzing for more and for new chemicals of concern). If steps are not taken throughout sample collection and analysis to minimize and characterize likely sources of measurement error, the impact on the interpretation of these valuable measurements can vary along the spectrum from false negative to false positive, as we will illustrate with real examples from our own data.

Some important considerations when measuring and interpreting environmental chemical exposures have been discussed in other peer-reviewed articles or official guidance documents. For example, a recent document from the Environmental Protection Agency (EPA) provides citizen scientists with guidance on how to develop a field measurement program, including planning for the collection of quality control (QC) samples.[3] The Centers for Disease Control and Prevention (CDC) also gives guidance related to collection, storage, and shipment of biological samples for analysis of environmental chemicals or nutritional factors.[4] To assess the quality of already-collected data, LaKind et al. (2014) developed a tool to evaluate epidemiologic studies that use biomonitoring data on short-lived chemicals, with a focus on critical elements of study design such as choice of analytical and sampling methods.[5] The tool was recently incorporated into “ExpoQual,” a framework for assessing suitability of both measured and modeled exposure data for a given use (“fit-for-purpose”).[6] Other useful guidance has been published, for example on automated quality assurance/quality control (QA/QC) processes for sensors collecting continuous streams of environmental data[7] and for establishing an overall data management plan, including documentation of metadata and strategies for data storage.[8]

Despite these helpful documents, there is still a lack of readily accessible, practical guidance on how to interpret and use the results of both field and laboratory QC checks to qualify exposure datasets (i.e., flag results for certain compounds or certain samples that are imprecise, estimated, or potentially over- or under-reported), and this gap is reflected in the environmental health literature. While the vast majority of environmental health studies report robust findings based on high-quality measurements, questions about measurement validity have led to confusion and lack of confidence in some topic areas. For example, a number of studies have measured rapidly metabolized chemicals such as phthalates and bisphenol A (BPA) in blood or other non-urine matrices, despite the fact that urine is the preferred matrix for these chemicals. Phthalates and BPA are present at higher levels in urine and, when the proper metabolites are measured, there is less concern about contamination from external sources, including contamination from plastics during specimen collection.[9]

More commonly, however, exposure studies simply do not adequately report on QA/QC or describe how QC results informed reporting and interpretation of the data. In the context of systematic review and weight of evidence approaches, not reporting on QA/QC may result in a study being given less weight. For example, the risk of bias tool employed in case studies of the Navigation Guide method for systematic review includes reporting of certain QA/QC results in its criteria for a “low risk of bias” rating (e.g., reference Lam et al.[10]). When we applied the Navigation Guide's QA/QC criterion to 30 studies of biological or environmental measurements that we included in a recent review of environmental exposures and breast cancer[11], we found that more than half either did not report QA/QC details that were required for a “low risk of bias” assessment, or if they did report QA/QC, they did not interpret or use them adequately to inform the analysis (e.g., reported poor precision but did not discuss how/whether this could affect findings) (see Additional file 1 for details). Similarly, when LaKind et al. applied their study quality assessment tool to epidemiologic literature on BPA and neurodevelopmental and respiratory health, they found that QA/QC issues related to contamination and analyte stability were not well-reported.[12] Of note, several of the studies in our breast cancer review that did not provide adequate QA/QC information had their samples analyzed at the CDC Environmental Health Laboratory. It is helpful to include summaries of QA/QC assessments in published work, even if researchers are using a well-established lab, because this provides a useful standard for comparing QA/QC in other studies.

Over many years of collecting and interpreting environmental exposure data, we have developed a standard approach for (1) using field and laboratory QA/QC to validate and qualify chemical measurement data for environmental samples and (2) presenting our QC findings in our research publications (e.g., reference Rudel et al.[13]). These methods are based on data validation procedures from the EPA, Army Corps of Engineers, and U.S. Geological Survey[14][15][16][17], as well as the guidance of the many experienced chemists with whom we have collaborated. In this commentary, we compile our methods into a practical guide, focusing on how to use the information to make decisions about data usability and how to make the information transparent in publications. Our guide is organized in three sections, presenting questions to consider during study design, implementation, and data analysis. We describe key elements of QA/QC, including for assessing precision, accuracy, and sample contamination, and we include suggested graphics (Additional files 2 and 4) and table shells (Additional file 2) that clearly present QC data, emphasizing how they may affect interpretation of study measurements. Minimizing and characterizing potential errors requires close collaboration between the researchers who may have designed the study and plan to analyze the data and the chemists performing the analysis. As such, our guidance also includes example correspondence (Additional file 2) to help establish this relationship at the start of a project.

We present a detailed approach based on our own studies, acknowledging that this is an example, not a one-size-fits-all approach. Every study is unique and some will require specialized quality assessment not covered here. Still, we anticipate that many environmental health scientists will find this example to be a useful framework for building their own processes.

About the wrangling guide

Our guide is organized by a series of questions that we ask when we start a new study, and then ask again when we receive measurement data from the lab. Key QA/QC concepts are introduced in the section on study design, and they are more thoroughly addressed in sections concerning study implementation and data interpretation.

Not every question is relevant to every study; for example, researchers working with a lab to develop a new analytical method will need to focus more on method validation and quality control than those using a well-established method and credentialed lab. Still, controlling for issues related to sample collection and transport remains important in the latter scenario, as does accounting for variation in method performance and/or sources of contamination when samples are analyzed at the laboratory in multiple batches. Our guidance is most relevant to targeted organic chemical analyses, which use liquid or gas chromatography, often in combination with mass spectrometry, to determine whether a pre-defined set of chemicals is present in samples. QA/QC approaches for non-targeted methods, where tentative identities are established by matching to a library of mass spectra such as the National Institute of Standards and Technology (NIST) database[18], are addressed elsewhere.[19]

This guide is not a set of rules, but rather establishes a framework for evaluating and reporting QC data for chemical measurements in environmental or biological samples. While it may be most useful to environmental health scientists who have little or no experience in analytical chemistry, we hope that researchers with a range of experience will find it helpful to consult our approach for evaluating and presenting QC data in publications.

Because the number of QC samples available is often limited by budgetary constraints, many of the methods we use rely on visualization and conservative action (i.e., removing chemicals from our dataset or qualifying their interpretation unless there is evidence that the analytical method was accurate and precise) rather than on statistical methods. Whether statistical methods are incorporated or not, tabulating, visualizing, and communicating about QA/QC for environmental exposure measurements is important in order to reveal systematic error in the laboratory[20] or in the field, supporting future use of the data.[6]

Study design

What can we measure and how?

One of our first priorities when designing a new study is to consult with a chemist to establish an analyte list and method for analysis.

Chemical identities

Given the complexity of chemical synonyms, it is helpful to be as specific as possible when communicating about the chemicals to be analyzed. One approach is to send the lab a list of the chemical names (avoiding the use of trade names, which can be imprecise), Chemical Abstracts Service (CAS) numbers, and configurations (e.g., branched or linear, if relevant) of all desired analytes (see Additional file 1 for example correspondence). For biomonitoring, it is also important to determine if the parent chemical or metabolites will be targeted.
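As a concrete illustration, an analyte request list of the kind described above can be assembled programmatically before sending it to the lab. This is a minimal Python sketch; the three analytes and the choice of CSV as the exchange format are illustrative assumptions (the article's own example correspondence is in the additional files):

```python
import csv
import io

# Illustrative analyte request list: chemical name, CAS number, and
# configuration where relevant (e.g., branched vs. linear).
analytes = [
    ("Bisphenol A", "80-05-7", ""),
    ("Di(2-ethylhexyl) phthalate", "117-81-7", ""),
    ("Nonylphenol", "84852-15-3", "branched"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["chemical_name", "cas_number", "configuration"])
writer.writerows(analytes)
print(buf.getvalue())
```

Sending a machine-readable list like this, rather than trade names in an email, reduces the chance that the lab targets the wrong isomer or congener.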

Matrix

Another consideration in developing the analyte list is what type of samples are available (if working with stored samples) or will be collected. As discussed previously, certain biological matrices are preferred over others for measurement, depending on the chemicals (e.g., reference Calafat et al.[9]). Matrix type is also relevant for environmental samples; for example, physical chemical properties like the octanol air partitioning coefficient inform whether an analyte is more likely to be found in air or dust.[21]

Method

The process of determining a final list of analytes will differ depending on whether the lab has an established method or is developing a new method, and whether it is targeted to a few chemicals with similar structure versus many chemicals with different properties (different polarities, solubilities, etc.). Targeting a broad suite of chemicals may limit the degree of precision and accuracy that can be achieved for each individual chemical, and the lab may need to invest substantial effort to develop a multi-residue method—that is, a method that can analyze for many chemicals at once—and determine a final list of target chemicals with acceptable method performance. In any case, a new method should be validated to characterize performance measures—precision, accuracy, expected quantitation and method detection limits, and the range of concentrations that can be quantitated with demonstrated precision and accuracy—before analyzing study samples. If the lab already has an established method for the chemicals of interest, the research team should review method performance measures to ensure they are consistent with study objectives.

Quantification method

The method of quantification affects the types of QC data that are expected from the lab. Three common approaches include external calibration, internal calibration and isotope dilution (a form of internal calibration). External calibration, where the response (i.e., chromatogram peak) from the sample is compared to the response from calibration standards containing known amounts of the analytes of interest, is a simple method that can be used for a variety of different analyses. However, results can be influenced by interference from other chemicals present in the sample matrix and resulting fluctuations in the analytical instrument response.[22] With internal calibration, on the other hand, one or more labeled compounds—either one of the targeted analytes or a closely related compound—are added to each of the samples just before they are injected into the instrument for analysis and used to correct for variation in the instrument response. The internal standard must be similar to the target compounds in physical chemical properties (e.g., a labeled polychlorinated biphenyl should not be used to represent a brominated diphenyl ether). Finally, for isotope dilution methods—which are the most accurate—labeled isotopes for each of the target compounds are added to samples prior to extraction. Additional internal standards are added to the samples just prior to injection to monitor loss of the labeled isotopes, and the analytical software then corrects for loss during sample extraction and for effects of the sample matrix (e.g., presence of other compounds in the sample that interfere with the analysis).[22] Many laboratories that analyze chemical levels in blood, urine, or tissues (e.g., the CDC National Exposure Research Laboratory) use isotope dilution quantification. However, isotopically labeled standards are not available for every compound and may be cost-prohibitive. 
If quantification is by internal or external calibration, researchers will likely need to review and report more extensive QC data from the lab compared to when using isotope dilution, as discussed in the section on study implementation.
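To make the internal-standard arithmetic concrete, the following Python sketch shows one-point internal calibration. Real methods use multi-point calibration curves, and all peak areas and concentrations below are hypothetical:

```python
# One-point internal calibration sketch. A relative response factor (RRF)
# is computed from a calibration standard, then used to quantify a sample;
# normalizing to the internal standard (IS) corrects for instrument drift.

def relative_response_factor(area_analyte, area_is, conc_analyte, conc_is):
    """RRF from a calibration standard: analyte response normalized to the IS."""
    return (area_analyte / area_is) / (conc_analyte / conc_is)

def quantify(area_analyte, area_is, conc_is, rrf):
    """Sample concentration, corrected for instrument response via the IS."""
    return (area_analyte / area_is) * conc_is / rrf

# Calibration standard: 50 ng/mL analyte with 100 ng/mL IS (hypothetical areas)
rrf = relative_response_factor(area_analyte=12000, area_is=30000,
                               conc_analyte=50.0, conc_is=100.0)

# Field sample spiked with the same 100 ng/mL IS just before injection
conc = quantify(area_analyte=9000, area_is=28000, conc_is=100.0, rrf=rrf)
print(round(conc, 1))  # ng/mL
```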

Sensitivity of the method

Another important factor in selecting a method is to make sure it is sensitive enough to detect the anticipated concentrations in the field samples (samples submitted to the lab) down to levels that are relevant to the research question. For example, commercial labs measuring environmental chemicals may establish reporting limits to meet the needs of occupational or regulatory safety compliance testing; these limits may be much higher than levels that are meaningful for research questions about general population exposure and could result in most data being reported as non-detect or qualified as estimated and imprecise. On the other hand, lower reporting limits generally translate to more expensive testing, so researchers have the opportunity to balance sensitivity and cost.

How to minimize sample contamination?

There are ample opportunities for sample contamination during collection, storage, shipment, and analysis, especially when targeting ubiquitous chemicals commonly encountered in consumer products, home and office furnishings, or laboratory equipment. An important aspect of method validation is to check for contamination of samples during field activities, from collection containers, during transport and storage, and during laboratory extraction and analysis (see discussion of blanks in the section on study implementation). The CDC’s guidance on sample collection and management identifies some possible sources of contamination when analyzing for common chemicals like plastics chemicals, antimicrobials, and preservatives in blood or urine. Key considerations, depending on the particular chemicals being targeted, include selecting appropriate collection containers (e.g., glass containers if analyzing for plastics chemicals), avoiding the use of urine preservatives (e.g., when analyzing for parabens, BPA), and providing adequate instructions to participants collecting their own samples (e.g., avoid using antimicrobial soaps or wipes during collection).[4] As noted previously, contamination can also be minimized in biomonitoring of some chemicals by measuring a metabolite rather than parent chemical, and possibly by measuring a conjugated rather than free form of the metabolite.[9] In some cases, the lab may need to pre-screen collection containers or other sampling materials to see if they contain any target chemicals. For example, when we used polyurethane foam (PUF) sorbent to collect air samples for analysis of flame retardants, plastics chemicals, and preservatives, we asked the lab to pre-screen the PUF matrix for target analytes. Another important precaution was to ship the samplers wrapped in aluminum foil that had been baked in a muffle furnace to ensure it was clean and uncoated.

How will the lab report the data?

Three key elements of data typically reported by the lab are the identity of the chemical, the reporting limit for each chemical and sample, and how much of each chemical is present in each sample. Sometimes an additional measure is needed to normalize mass of chemical per sample, for example, grams of urinary creatinine, urine specific gravity, grams of serum lipid, or cubic meters of air (see work by LaKind et al.[5] for discussion of issues related to matrix adjustment and presentation of measurements).
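As one example of the normalization mentioned above, urinary concentrations are often expressed per gram of creatinine. The unit conversion is easy to get wrong, so this Python sketch spells out the arithmetic with hypothetical values:

```python
# Creatinine adjustment sketch, assuming the lab reports the analyte in
# ng/mL and urinary creatinine in mg/dL (common reporting units).

def creatinine_adjusted(conc_ng_per_ml, creatinine_mg_per_dl):
    """Convert an ng/mL urinary concentration to µg analyte per g creatinine."""
    # mg/dL -> g/mL: divide by 100 (dL -> mL) and by 1000 (mg -> g)
    creatinine_g_per_ml = creatinine_mg_per_dl / 100_000
    # ng -> µg: divide by 1000
    return (conc_ng_per_ml / 1000) / creatinine_g_per_ml

# Hypothetical sample: 5.0 ng/mL analyte, 120 mg/dL creatinine
print(round(creatinine_adjusted(5.0, 120.0), 2))  # µg/g creatinine
```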

Chemical identities

It is helpful to request in advance that the lab report CAS numbers and configurations (if relevant) along with chemical names (see Additional files 2 and 3 for example reporting requests).

Reporting limits

Common terms used by laboratories to discuss reporting limits include "instrument detection limit" (IDL), "method detection limit" (MDL) and "limit of quantitation" (LOQ). The IDL and MDL are both related to the level of an analyte that can be detected with confidence that it is truly present. The IDL captures the smallest true signal (change in instrument response when an analyte is present) that can be distinguished from background noise (variation in the instrument response to blank samples), while the MDL takes into account additional sources of error introduced during sample preparation (e.g., the extraction process, possible concentration or dilution of samples) and thus is higher than the IDL. The MDL is also often referred to as the limit of detection (LOD) or detection limit (DL). The LOQ, on the other hand, describes the lowest mass or concentration that can be detected with confidence in the amount detected. The reporting limit (RL) or method reporting limit (MRL), which is either the lowest value that the lab will report or the lowest value that the lab will report without flagging the data as estimated, is often (but not always) the same as the quantitation limit or LOQ.
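The MDL concept can be made concrete with the widely used EPA procedure, which estimates the MDL from replicate low-level spiked samples: the standard deviation of the replicates is multiplied by the one-sided 99% Student's t value. The replicate results in this Python sketch are hypothetical:

```python
# MDL estimate from seven spiked replicates (EPA-style procedure).
import statistics

replicate_spikes = [2.1, 1.8, 2.4, 2.0, 1.7, 2.2, 1.9]  # ng/g, n = 7

# One-sided 99% Student's t value for n - 1 = 6 degrees of freedom
T_99_DF6 = 3.143

mdl = T_99_DF6 * statistics.stdev(replicate_spikes)
print(round(mdl, 2))  # estimated method detection limit, ng/g
```

Because the MDL depends on the spread of replicate results, it can differ between batches or matrices, which is one reason to ask the lab whether reporting limits will be consistent across samples.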

Before submitting samples for analysis, it is helpful to find out (1) the methods and terminology that the laboratory will use to describe reporting limits (LOD, LOQ, etc.) and (2) whether reporting limits will be consistent within a chemical or whether limits could vary between samples or batches. Equally critical is to clarify how the lab will report non-detects. Several different values could appear in the amount or concentration fields for non-detects, including but not limited to zeroes, the detection limit, the reporting limit, or “ND.”

Amount

Another important point to discuss in advance with the laboratory is how they will report values for compounds with a confirmed identity but measured at levels below what can be accurately quantitated. For example, when measuring chemicals of emerging interest, we ask laboratories to report estimated values below the RL and we flag them during data analysis. This practice has some limitations[23] but is preferable to falsely reducing variance in the dataset by treating estimated values below the RL as equivalent to non-detects below the detection limit. Non-detects can present significant data analysis challenges, and while a discussion of the best available methods and the problems with common approaches such as substituting the RL, RL/2, or zero for non-detects is beyond the scope of this commentary, it is a critical issue, and we refer the reader to several helpful resources.[24][25][26] Reporting estimated values is not standard practice for many laboratories, so it is important to raise this issue early on (see Additional file 2 for example correspondence). If the lab reports data qualifier flags, it may be necessary to clarify the interpretation of those flags, including but not limited to which flags distinguish non-detects from detects above the MRL and estimated values. In other words, it is best not to make assumptions.
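The flagging practice described above can be sketched as a small qualification routine. The limits, flag codes, and treatment of non-detects here are illustrative assumptions, not lab-reported values:

```python
# Qualify reported results, assuming the lab reports estimated values below
# the RL. Hypothetical limits for one analyte:
MDL = 0.5   # method detection limit
RL = 2.0    # reporting limit (quantitation limit)

def qualify(value):
    """Return (value_for_analysis, flag) for one reported result."""
    if value is None or value < MDL:
        return (None, "ND")   # non-detect; handle with censored-data methods
    elif value < RL:
        return (value, "J")   # detected, but estimated (below the RL)
    return (value, "")        # quantified with demonstrated precision/accuracy

results = [0.3, 1.2, 4.7, None]
print([qualify(v) for v in results])
```

Keeping the flag in its own field, rather than overwriting concentrations with substituted values, preserves the information needed for censored-data methods later.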

Study implementation

What QA/QC is needed?

QA/QC occurs both inside and outside the analytical laboratory (see Table 1). Field QC samples, namely blanks and duplicates, capture the sum of contamination and measurement error from collection, storage, transport, and laboratory sources. We base the number of QC samples we collect in the field on budget and our sample size, generally aiming for at least 20% QC samples (e.g., if collecting 80 field samples, then collect 16 field QC samples), though a higher percentage is needed in small studies. Lab analysts should be blinded to the identity of field QC samples whenever possible. Maintaining blinding can be challenging, so it is worth putting some thought into sample names (e.g., QC samples should not have obviously different IDs from those of other samples, and should not be labeled with a “D” for duplicate or “B” for blank). Logs retained at the site must contain sufficient information to allow the data analysts to identify field QC samples and sample types.
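The 20% rule of thumb above is a one-line calculation; rounding up matters for small studies. The even split between blanks and duplicates in the comment is an illustrative assumption, not a fixed requirement:

```python
import math

def n_field_qc(n_field_samples, qc_fraction=0.20):
    """Minimum number of field QC samples; round up so small studies
    are not shortchanged."""
    return math.ceil(qc_fraction * n_field_samples)

print(n_field_qc(80))  # 16: e.g., 8 field blanks and 8 field duplicates
print(n_field_qc(15))  # 3: rounding up, not down
```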

Table 1. Summary of QC sample types, interpretation, and possible actions

QA/QC concept: Accuracy

Measure:
• Lab control sample recoveries and/or matrix spike recoveries
• Certified reference material
• Isotope dilution quantification

Interpretation: Measures whether the analytical method produces accurate quantification for each compound. Matrix spike recovery evaluates matrix effects on accuracy, such as interferences. Isotope dilution is the most rigorous approach to generating accurate measurements in biomonitoring.

Possible actions:
• Drop compounds with inaccurate quantification from the data analysis; discuss with lab whether improvements can be made for future analyses.
• If problems are modest and batch-specific, include batch as a covariate in regression model.
References

  1. Wild, C.P. (2005). "Complementing the genome with an "exposome": the outstanding challenge of environmental exposure measurement in molecular epidemiology". Cancer Epidemiology, Biomarkers & Prevention 14 (8): 1847–50. doi:10.1158/1055-9965.EPI-05-0456. PMID 16103423. 
  2. Carlin, D.J.; Rider, C.V.; Woychik, R. et al. (2013). "Unraveling the health effects of environmental mixtures: An NIEHS priority". Environmental Health Perspectives 121 (1): A6–8. doi:10.1289/ehp.1206182. PMC PMC3553446. PMID 23409283. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3553446. 
  3. U.S. Environmental Protection Agency (March 2019). "Quality Assurance Handbook and Guidance Documents for Citizen Science Projects". U.S. Environmental Protection Agency. https://www.epa.gov/citizen-science/quality-assurance-handbook-and-guidance-documents-citizen-science-projects. Retrieved 28 May 2019. 
  4. 4.0 4.1 Centers for Disease Control and Prevention (March 2018). "Improving the Collection and Management of Human Samples Used for Measuring Environmental Chemicals and Nutrition Indicators" (PDF). Centers for Disease Control and Prevention. https://www.cdc.gov/biomonitoring/pdf/Human_Sample_Collection-508.pdf. Retrieved 28 May 2019. 
  5. 5.0 5.1 LaKind, J.S.; Sobus, J.R.; Goodman, M. et al. (2014). "A proposal for assessing study quality: Biomonitoring, Environmental Epidemiology, and Short-lived Chemicals (BEES-C) instrument". Environment International 73: 195–207. doi:10.1016/j.envint.2014.07.011. PMC PMC4310547. PMID 25137624. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4310547. 
  6. 6.0 6.1 LaKind, J.S.; O'Mahony, C.; Armstrong, T. et al. (2019). "ExpoQual: Evaluating measured and modeled human exposure data". Environmental Research 171: 302-312. doi:10.1016/j.envres.2019.01.039. PMID 30708234. 
  7. Campbell, J.L.; Rustad, L.E.; Porter, J.H. et al. (2013). "Quantity is Nothing without Quality: Automated QA/QC for Streaming Environmental Sensor Data". BioScience 63 (7): 574–585. doi:10.1525/bio.2013.63.7.10. 
  8. Michener, W.K. (2015). "Ten Simple Rules for Creating a Good Data Management Plan". PLoS Computational Biology 11 (10): e1004525. doi:10.1371/journal.pcbi.1004525. PMC PMC4619636. PMID 26492633. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4619636. 
  9. 9.0 9.1 9.2 Calafat, A.M.; Longnecker, M.P.; Koch, H.M. et al. (2015). "Optimal Exposure Biomarkers for Nonpersistent Chemicals in Environmental Epidemiology". Environmental Health Perspectives 123 (7): A166–8. doi:10.1289/ehp.1510041. PMC PMC4492274. PMID 26132373. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4492274. 
  10. Lam, J.; Koustas, E.; Sutton, P. et al. (2014). "The Navigation Guide - Evidence-based medicine meets environmental health: Integration of animal and human evidence for PFOA effects on fetal growth". Environmental Health Perspectives 122 (10): 1040-51. doi:10.1289/ehp.1307923. PMC PMC4181930. PMID 24968389. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4181930. 
  11. Rodgers, K.M.; Udesky, J.O.; Rudel, R.A. et al. (2018). "Environmental chemicals and breast cancer: An updated review of epidemiological literature informed by biological mechanisms". Environmental Research 160: 152–82. doi:10.1016/j.envres.2017.08.045. PMID 28987728. 
  12. LaKind, J.S.; Goodman, M.; Barr, D.B. et al. (2015). "Lessons learned from the application of BEES-C: Systematic assessment of study quality of epidemiologic research on BPA, neurodevelopment, and respiratory health". Environment International 80: 41–71. doi:10.1016/j.envint.2015.03.015. PMID 25884849. 
  13. Rudel, R.A.; Dodson, R.E.; Perovich, L.J. et al. (2010). "Semivolatile endocrine-disrupting compounds in paired indoor and outdoor air in two northern California communities". Environmental Science & Technology 44 (17): 6583-90. doi:10.1021/es100159c. PMC PMC2930400. PMID 20681565. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2930400. 
  14. Wentworth, N. (28 September 2001). "Review of Guidance on Data Quality Indicators (EPA QA/G-5i)" (PDF). U.S. Environmental Protection Agency. http://colowqforum.org/pdfs/whole-effluent-toxicity/documents/g5i-prd.pdf. Retrieved 28 May 2019. 
  15. U.S. Environmental Protection Agency (December 1994). "Laboratory Data Validation Functional Guidelines For Evaluating Organics Analyses". U.S. Environmental Protection Agency. https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=20012TGE.TXT. Retrieved 28 May 2019. 
  16. U.S. Army Corps of Engineers (30 June 2005). "Guidance for Evaluating Performance-Based Chemical Data" (PDF). U.S. Army Corps of Engineers. https://www.publications.usace.army.mil/Portals/76/Publications/EngineerManuals/EM_200-1-10.pdf?ver=2013-09-04-070852-230. Retrieved 28 May 2019. 
  17. Geboy, N.J.; Engle, M.A. (7 September 2011). "Quality Assurance and Quality Control of Geochemical Data: A Primer for the Research Scientist". Open-File Report 2011–1187. U.S. Geological Survey. https://pubs.usgs.gov/of/2011/1187/. Retrieved 28 May 2019. 
  18. NIST (19 June 2014). "NIST Standard Reference Database 1A v17". https://www.nist.gov/srd/nist-standard-reference-database-1a-v17. Retrieved 28 May 2019. 
  19. Ulrich, E.M.; Sobus, J.R.; Grulke, C.M. et al. (2019). "EPA's non-targeted analysis collaborative trial (ENTACT): Genesis, design, and initial findings". Analytical and Bioanalytical Chemistry 411 (4): 853–66. doi:10.1007/s00216-018-1435-6. PMID 30519961. 
  20. Lötsch, J. (2017). "Data visualizations to detect systematic errors in laboratory assay results". Pharmacology Research & Perspectives 5 (6): e00369. doi:10.1002/prp2.369. PMC PMC5723702. PMID 29226627. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5723702. 
  21. Dodson, R.E.; Camann, D.E.; Morello-Frosch, R. et al. (2015). "Semivolatile organic compounds in homes: Strategies for efficient and systematic exposure measurement based on empirical and theoretical factors". Environmental Science & Technology 49 (1): 113-22. doi:10.1021/es502988r. PMC PMC4288060. PMID 25488487. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4288060. 
  22. 22.0 22.1 U.S. Environmental Protection Agency (December 2015). "Method 8000D: Determinative Chromatographic Separations" (PDF). U.S. Environmental Protection Agency. https://www.epa.gov/sites/production/files/2015-12/documents/8000d.pdf. Retrieved 16 August 2019. 
  23. Helsel, D.R. (2012). Statistics for Censored Environmental Data Using Minitab and R (2nd ed.). Wiley. ISBN 9780470479889. 
  24. Helsel, D. (2010). "Much ado about next to nothing: Incorporating nondetects in science". Annals of Occupational Hygiene 54 (3): 257-62. doi:10.1093/annhyg/mep092. PMID 20032004. 
  25. Newton, E.; Rudel, R. (2007). "Estimating correlation with multiply censored data arising from the adjustment of singly censored data". Environmental Science & Technology 41 (1): 221–8. doi:10.1021/es0608444. PMC PMC2565512. PMID 17265951. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2565512. 
  26. Shoari, N.; Dubé, J.S. (2018). "Toward improved analysis of concentration data: Embracing nondetects". Environmental Toxicology and Chemistry 37 (3): 643-656. doi:10.1002/etc.4046. PMID 29168890. 

Notes

This presentation is faithful to the original, with only a few minor changes to presentation, spelling, and grammar. We also added PMCID and DOI when they were missing from the original reference.