Journal:Use of middleware data to dissect and optimize hematology autoverification

From LIMSWiki
Full article title Use of middleware data to dissect and optimize hematology autoverification
Journal Journal of Pathology Informatics
Author(s) Starks, Rachel D.; Merrill, Anna E.; Davis, Scott R.; Voss, Dena R.; Goldsmith, Pamela J.; Brown, Bonnie S.; Kulhavy, Jeff; Krasowski, Matthew D.
Author affiliation(s) University of Iowa Hospitals and Clinics
Year published 2021
Volume and issue 12
Page(s) 19
DOI 10.4103/jpi.jpi_89_20
ISSN 2153-3539
Distribution license Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
Website https://www.jpathinformatics.org/text.asp?2021/12/1/19/313145
Download https://www.jpathinformatics.org/temp/JPatholInform12119-643471_175227.pdf (PDF)

Abstract

Background: Hematology analysis comprises some of the highest-volume testing run in clinical laboratories. Autoverification of hematology results using computer-based rules reduces turnaround time for many specimens while strategically targeting specimen review by a technologist or pathologist.

Methods: Autoverification rules had been developed over a decade at an 800-bed tertiary/quaternary care academic medical center central laboratory serving both adult and pediatric populations. In the process of migrating to newer hematology instruments, we analyzed the rates of the autoverification rules/flags most commonly associated with triggering manual review. We were particularly interested in rules that on their own often led to manual review in the absence of other flags. Prior to the study, autoverification rates were 87.8% (out of 16,073 orders) for the complete blood count (CBC) ordered as a panel and 85.8% (out of 1,940 orders) for CBC components ordered individually (not as the panel).

Results: Detailed analysis of rules/flags that frequently triggered indicated that the immature granulocyte (IG) flag (an instrument parameter) and rules that reflexed platelet by impedance method (PLT-I) to platelet by fluorescent method (PLT-F) represented the two biggest opportunities to increase autoverification. The IG flag threshold had previously been validated at 2%, a setting that resulted in this flag alone preventing autoverification in 6.0% of all samples. The IG flag threshold was raised to 5% after detailed chart review; this was also the instrument vendor's default recommendation for the newer hematology analyzers. Analysis also supported switching to PLT-F for all platelet analysis. Autoverification rates increased to 93.5% (out of 91,692 orders) for CBC as a panel and 89.8% (out of 11,982 orders) for individual components after changes in rules and laboratory practice.

Conclusions: Detailed analysis of autoverification of hematology testing at an academic medical center clinical laboratory that had been using a set of autoverification rules for over a decade revealed opportunities to optimize the parameters. The data analysis was challenging and time-consuming, highlighting opportunities for improvement in software tools that allow for more rapid and routine evaluation of autoverification parameters.

Keywords: algorithms, clinical laboratory information system, hematology, informatics, middleware

Introduction

In the realm of laboratory information system (LIS) and/or middleware software, autoverification refers to the use of computer-based rules to determine the appropriate release of laboratory test results. With the expansion of data management systems in the lab, autoverification is now a routine practice in core clinical laboratories[1][2][3][4], where the use of well-designed autoverification rules improves both quality and efficiency.[1][2][4] Over the years, autoverification rules have been described in detail for clinical chemistry, blood gas, and coagulation analysis, often achieving autoverification rates of >90%.[5][6][7][8][9][10][11][12]

In contrast, published studies regarding the application of autoverification in hematopathology are more limited.[13][14] Zhao et al. describe the implementation of autoverification rules in hematology analysis in a multicenter setting with 76%–85% autoverification rates.[14] The necessity of manual review of peripheral blood smears precludes achieving the high autoverification rates seen in clinical chemistry. On the other hand, high rates of manual review may place a strain on limited laboratory resources and delay turnaround time without adding clinical value. In 2005, the International Consensus Group for Hematology (ICGH) issued guidelines to establish a uniform set of criteria for manual review of automated hematology testing.[15][16][17][18] The proposed criteria for manual review include quantitative and qualitative parameters. Pratumvinit et al. optimized the ICGH guidelines to significantly reduce their review rates and increase autoverification.[18] The basic qualitative criteria used for manual review are well-established; however, the specific quantitative cutoffs to trigger manual review are largely set by the individual laboratory, with some recommendations for individual parameters provided by instrument vendors or published literature.[7][15][16][19][20][21] Individual laboratories ideally should optimize their own set of rules to maintain both quality and efficiency within their own context of instrumentation, staffing, and patient population. However, data analysis on specific flags and their clinical impact may be quite challenging to perform.

In this study, we evaluated autoverification rules at an 800-bed tertiary/quaternary academic medical center core clinical laboratory for a complete blood count (CBC) with white blood cell (WBC) count differential (Diff) and the “a la carte” ordering of individual CBC components. The laboratory had developed and validated autoverification protocols over a decade. Feedback from laboratory staff suggested that some rules were resulting in manual review without clear clinical benefit. We therefore sought opportunities for improvement by assessing the flags that most frequently held specimens for manual review. Our analysis also illustrates some of the data analytical challenges associated with evaluating hematology autoverification.

Methods

Institutional details

The present study was performed at an approximately 800-bed tertiary/quaternary care academic medical center. The medical center services included pediatric and adult inpatient units, multiple intensive care units (ICUs), a level I trauma-capable emergency treatment center, and outpatient services. Pediatric and adult hematology/oncology services include both inpatient and outpatient populations. For the purpose of this study, patients 18 years and older were classified as adults, with pediatric patients defined as those < 18 years old. The data in the study were collected as part of a retrospective study approved by the university Institutional Review Board (protocol #201801719) covering the time period from January 1, 2018, to July 31, 2018. This study was carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki).

Data extraction and analysis

The electronic health record (EHR) throughout the retrospective study period was Epic (Epic Systems, Inc., Madison, Wisconsin, USA), which has been in place since May 2009. The middleware software was Data Innovations (DI) Instrument Manager (DI, Burlington, Vermont, USA) version 8.14, with the autoverification rules predominantly contained within the DI middleware.[5][22] The LIS was Epic Beaker Clinical Pathology.[23] Data were extracted from DI using Microsoft Open Database Connectivity (Microsoft Corporation, Redmond, Washington, USA) and analyzed using Microsoft Excel. Instrument flag data were retrieved from the analyzer and required extensive data cleanup and manual review to ensure integrity. One major challenge is that error messages concatenate with one another in a variety of combinations. Additional File 1 (see Notes at bottom) shows an example of the data, de-identified to remove identifying data fields related to accession number, dates/times, and personnel performing the testing. The flag fields are not transmitted to Epic Beaker Clinical Pathology[23], nor are the operator identification numbers that specify who reviewed, released, and rejected results. These fields would be needed to calculate percent autoverification in the LIS if that were a goal.
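As an illustration of the cleanup step, a minimal Python sketch is shown below; the column name and flag delimiter are assumptions for illustration, not the actual DI export format. It splits a concatenated flag field into discrete flags and tallies their frequency across specimens:

```python
from collections import Counter

# Hypothetical sketch: the export column name ("Flags") and the delimiter
# between concatenated flag messages (";") are assumptions, not DI's
# actual format, which required more extensive manual cleanup.
FLAG_DELIMITER = ";"

def tally_flags(rows, flag_field="Flags"):
    """Count how often each discrete flag appears across specimens."""
    counts = Counter()
    for row in rows:
        raw = row.get(flag_field, "") or ""
        for flag in raw.split(FLAG_DELIMITER):
            flag = flag.strip()
            if flag:
                counts[flag] += 1
    return counts

# Toy rows standing in for de-identified middleware export records
rows = [
    {"Flags": "IG Present; WBC Abn Scattergram"},
    {"Flags": "IG Present"},
    {"Flags": ""},  # most samples carry no flag at all
]
print(tally_flags(rows))
```

In practice, each discrete flag must first be normalized (spelling variants, trailing codes) before counting, which is where most of the manual cleanup effort went.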

Instrument flags

In our laboratory, instrument flags are generated either by the automated hematology instrument manufacturer (Sysmex America) or by our own laboratory-validated rules built in middleware (summarized in Table 1, which also indicates the origin of each rule). These flags are either global (i.e., applied to every sample) or patient-specific (e.g., a patient known to have previous samples that required special handling or analysis). When a sample triggers a flag, several outcomes are possible: (1) automatically release the CBC component results but hold the WBC Diff for manual review, (2) hold both the CBC and WBC Diff for manual review, or (3) release all results to the LIS/EHR without manual review (assuming no other flags intervene). For example, the flag for the presence of immature granulocytes (IG) above a set percentage will hold only the WBC Diff and release the CBC, while the thrombocytopenia flag will hold both the CBC and WBC Diff for manual review. IGs on manual review include metamyelocytes, myelocytes, and promyelocytes. Critical value flags, in the absence of other flags, do not preclude autoverification; notification of the clinical services for critical values is by telephone per protocol.
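The three dispositions described above can be sketched as a simple rule function. The flag names and their grouping here are illustrative simplifications drawn from the examples in the text, not the laboratory's exact middleware rule set:

```python
# Hedged sketch of the three possible outcomes for a flagged sample.
# Flag names and their grouping are illustrative, not the actual rules.
HOLD_BOTH = {"Thrombocytopenia", "PLT Clumps", "PLT Delta Failure"}
HOLD_DIFF_ONLY = {"IG Present", "Left Shift"}

def review_action(flags):
    """Map a specimen's set of triggered flags to one of three dispositions."""
    if flags & HOLD_BOTH:
        return "hold CBC and WBC Diff for manual review"
    if flags & HOLD_DIFF_ONLY:
        return "release CBC, hold WBC Diff for manual review"
    return "autoverify: release all results"

print(review_action({"IG Present"}))
print(review_action({"Thrombocytopenia", "IG Present"}))
print(review_action(set()))
```

The ordering matters: a flag that holds both results takes precedence over one that only holds the differential, mirroring the most restrictive outcome winning.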


Tab1 Starks JPathInfo2021 12.jpg

Table 1. Flags for manual review of complete blood cell count and white blood cell count differential tests

Automated analyzers

Automated hematology testing was performed by a Sysmex XN-9000 hematology analyzer with a fully automated hematology slide preparation and staining system (Sysmex America, Inc., Lincolnshire, Illinois, USA). This instrument performs platelet (PLT) enumeration either by disruption of electrical current (PLT-I) or by a flow cytometric method using a fluorescent oxazine dye (PLT-F). Briefly, for the PLT-F method, the dye binds to platelet organelles, is then irradiated by laser beam, and the corresponding forward-scattered light and side-scattered fluorescence are plotted.[24] The PLT-F method better distinguishes between platelets and fragmented red blood cells.[24][25][26] During the timeframe for the present study, PLT-F used higher cost reagents than PLT-I (approximately 50% more at onset of project).

Results

Volume of testing and frequency of flags

Over a six-month period, a total of 132,432 specimens had CBC with or without WBC Diff or an a la carte order for individual CBC components (PLT, hemoglobin, and hematocrit). Manual review by a technologist was performed on 10,314 of those specimens (7.8%). During this period, a total of 53,396 instrument flags were triggered (note that an individual specimen may trigger up to 15 flags), with 80.3% of samples not associated with any flag. Overall, 9.7% of specimens triggered a single flag, 5.0% triggered two flags, and < 1% of samples triggered five or more flags (Figure 1a).

Pediatric ICUs (including both neonatal and pediatric units) had the highest percentage of flagged samples, with one or more flags on 52.5% of specimens (Figure 1b). Adult and pediatric non-ICU inpatient units had at least one flag on 29.6% and 28.4% of samples, respectively. Adult hematology/oncology services, which include both an inpatient bone marrow transplant unit and outpatient clinics, had a 28.8% rate of samples with one or more flags. The rate of sample flags was much lower in outpatient (excluding hematology/oncology), emergency department, and operating room locations, at approximately 10% or less in both adult and pediatric populations.


Fig1 Starks JPathInfo2021 12.jpg

Figure 1. The number of samples during a six-month period without an associated flag (80.3%) or with one to four flags are shown in (a). The distribution of samples by patient care area for adult and pediatric patients is shown in (b). Heme/Onc: Hematology/Oncology, ICU: Intensive care unit, ED: Emergency department, OR: Operating room

Frequently triggered flags

To analyze the patterns of flags that frequently triggered manual review for both WBC and PLT parameters, we began by reviewing WBC parameters. This was limited to a 30-day period of analysis due to the extensive nature of data cleanup and manual review for the middleware and instrument data. We looked at two outcomes: (1) flags that would release the CBC while holding the WBC Diff for manual review and (2) flags that would hold both the CBC and WBC Diff for manual review. In the first category, the IG present flag was triggered on 9.6% of samples during the 30-day review period (1,980 flags among 20,576 samples) (Figure 2a). The next most frequently triggered flag was the WBC abnormal scattergram at 5.3% (1,087 flags), followed by the abnormal lymphocytes or blasts flag at 4.7% (962 flags) (Figure 2a). These top three most frequently triggered flags are instrument flags, with the ≥2% IG cutoff specified by the laboratory (discussed in more detail below).

For platelets, the PLT-I method was the main methodology used to generate a platelet count, with PLT-F used in certain circumstances. Samples were run for PLT-F based on the following flags: (1) PLT-I <70 k/mm3 (“thrombocytopenia”), (2) a 50% change in either direction within the last seven days (“delta failure”), (3) pediatric inpatients and pediatric hematology/oncology clinic patients (due to a known higher rate of red blood cell fragmentation and other specimen challenges), and/or (4) the platelet abnormal distribution flag on the hematology analyzer. During the review period, PLT-I <70 k/mm3 accounted for the largest share of flags holding both the CBC and WBC Diff for re-run by PLT-F, triggering on 8.0% of samples (1,637 flags among 20,576 samples) (Figure 2b). The next most frequently triggered flags to hold the CBC and WBC Diff for manual review were PLT clumps (2.2%, 460 flags) and PLT delta failure (1.7%, 349 flags) (Figure 2b).
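The four reflex criteria above can be sketched as a predicate; the function signature and inputs are simplified assumptions (the actual middleware rules also track the seven-day delta window and patient location codes):

```python
# Illustrative sketch of the four PLT-I -> PLT-F reflex criteria.
# Inputs are simplified assumptions; platelet counts are in k/mm3.
def needs_plt_f(plt_i, prior_plt=None, pediatric=False, abn_distribution=False):
    """Return True if a specimen should reflex from PLT-I to PLT-F."""
    if plt_i < 70:                       # (1) thrombocytopenia
        return True
    if prior_plt is not None:            # (2) 50% delta in either direction
        if abs(plt_i - prior_plt) >= 0.5 * prior_plt:
            return True
    if pediatric:                        # (3) pediatric inpatient/heme-onc policy
        return True
    if abn_distribution:                 # (4) abnormal distribution instrument flag
        return True
    return False

print(needs_plt_f(65))                   # thrombocytopenia
print(needs_plt_f(150, prior_plt=100))   # 50% rise, delta failure
print(needs_plt_f(150))                  # no criteria met
```

Note the delta check is symmetric: a drop from 100 to 50 and a rise from 100 to 150 both trigger the reflex.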


Fig2 Starks JPathInfo2021 12.jpg

Figure 2. The most frequently triggered flags that resulted in manual review of WBC differential while automatically releasing the CBC during a 30-day period are shown in (a), with IG Present as the only flag triggered in 9.6% of samples. In (b), the six most frequently triggered flags that hold both the CBC and WBC differential for manual review are shown, with the most frequently triggered being Thrombocytopenia, Rerun PLT-F (8.0%). IG: Immature granulocytes, Abn WBCs: Abnormal white blood cells, Abn Lymphs/Blasts: Abnormal lymphocytes or blasts, Lymphs: Lymphocytes, MCV: Mean corpuscular volume, PLT: Platelet, HGB: Hemoglobin

Most frequently triggered single flag

Next, we examined the samples during a six-month period that had only a single flag. By far, the IG flag (intended to detect metamyelocytes, myelocytes, and promyelocytes) was the most frequently triggered single flag, representing 6.0% of flags (3,200 samples) (Figure 3a). The left shift and abnormal lymphocyte/blasts flags each represented 0.80% (425 flags each), while 0.37% of single flags (199 flags) were due to the WBC abnormal scattergram (Figure 3a). All four flags are generated by instrument rules. The left shift flag primarily detects bands and metamyelocytes. In 1.1% of samples, the IG and left shift flags occurred together and were the only flags present (608 flags) (Figure 3a).


Fig3 Starks JPathInfo2021 12.jpg

Figure 3. When a single flag for manual review was triggered, the four most frequent rules identified are shown, including a potential overlap of parameters in IG Present and Left Shift in (a). Shown in (b) is the difference in manual review rates when the IG cutoff is changed from ≥2% (804 samples) to ≥5% (234 samples). IG: Immature granulocytes, Abn Lymphs or Blasts: Abnormal lymphocytes or blasts, WBC Abn Scatter: White blood cell abnormal scattergram

Optimization of immature granulocyte flag

The IG flag data prompted us to perform a more detailed review of the clinical utility of this flag. The IG flag had been set for ≥2% based on a validation study performed on an earlier generation of hematology analyzer used in the laboratory. The instrument vendor recommended a default trigger for the IG flag at 5%, while a range of 3–5% IG has been reported in the literature.[27][28][29] In order to assess the effect on our patient population if we changed the IG parameter to ≥5%, we performed detailed chart review on CBC samples that had only the IG rule triggered.

In a 30-day period, 804 samples underwent manual review due solely to the IG flag with the rule set to trigger at ≥2%; of those reviewed, only 29.1% (234 samples) had an IG of ≥5% (Figure 3b, above). Of the 570 samples with ≥2% but <5% IG, most came from inpatient units, with a breakdown of 412 inpatients (72.3%), 145 outpatients (25.4%), and 13 emergency department patients (2.3%). Within these 570 samples, manual chart review identified promyelocytes (0.9–2.0%) in 4.7% of samples (27 unique patients), and one sample with blasts (0.9%). All of these samples were from patients on inpatient or adult hematology/oncology services and were follow-up specimens from patients already worked up and being followed for hematologic issues. Fourteen of the 27 patients with promyelocytes were positive for malignancy, six of whom were simultaneously receiving chemotherapy. Seventeen of the 27 were receiving daily CBCs during an inpatient encounter. The data were then analyzed to see how the instrument's IG estimate compared with a technologist's identification of metamyelocytes, myelocytes, and promyelocytes in these specimens. Manual review of the 570 samples yielded a lower %IG in 91.1% of samples and a higher %IG in only 8.6% of samples. Thus, the IG flag appears to overestimate %IG relative to manual slide review.

Extrapolating from the one month of data, samples with <5% IG but ≥2% comprise an estimated 6,840 samples per year. Given that chart review of this subset did not identify any case where the manual review led to the identification of promyelocytes or blasts that had not already been identified in previous laboratory studies, we made a decision to raise the IG threshold to 5% to match the manufacturer recommendation. Thereafter, the IG parameter, if present as the only flag, only triggered manual review if 5% or greater. The change in this threshold did not impact measurement of other flags.
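The extrapolation above can be checked with simple arithmetic from the figures reported for the 30-day review:

```python
# Back-of-envelope check of the threshold change using the study's figures:
# 804 single-flag IG reviews per 30 days at the >=2% cutoff, of which only
# 234 would still trigger at >=5%.
reviews_at_2pct = 804
reviews_at_5pct = 234

avoided_per_month = reviews_at_2pct - reviews_at_5pct  # 570 reviews avoided
avoided_per_year = avoided_per_month * 12              # 6,840, matching the text

print(avoided_per_month, avoided_per_year)
```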

Decreased review and re-running of complete blood counts with PLT-F

Based on the data and support from the published literature, the laboratory decided to switch from the PLT-I method to the PLT-F method for all platelet counts. Similar to the change in IG threshold, the switch to the PLT-F method had the highest impact on inpatient samples, with a breakdown of 59.2% inpatient (15.1% of which were from ICUs), 31.0% outpatient, and 9.8% emergency department samples during the period of the study. The biggest impact on autoverification resulted from no longer needing to reflex to PLT-F for PLT-I <70 k/mm3.

Overall impact of changes

In combination with the above-mentioned change in IG threshold, autoverification rates increased. Figure 4 compares the autoverification rates before and after the changes in PLT-F and IG threshold. The increase in autoverification was 5.7% for CBC as a panel and 4.0% for individual CBC components. This translates to an estimated absolute reduction in manual review of 13,266 CBC panels and 1,248 CBC individual components per year. This has a substantial impact on turnaround time for individual samples, since the average turnaround time for a manual differential is about 90 minutes, depending on staffing levels and competing workload. The average time to actually perform a manual differential depends on the complexity of pathologic findings and technologist experience but is typically five to fifteen minutes. Using ten minutes as an approximate average time for review, the reduction would translate to roughly a full-time equivalent position (approximately 2,400 hours/year, or nearly 300 eight-hour shifts).
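The workload arithmetic can be reproduced directly from the figures above, assuming ten minutes per manual review:

```python
# Reproducing the workload estimate: reviews avoided per year times an
# assumed ten minutes per manual review, converted to hours and shifts.
panels_avoided = 13266       # CBC panels no longer requiring manual review
components_avoided = 1248    # individually ordered CBC components
minutes_per_review = 10      # assumed average review time

hours_saved = (panels_avoided + components_avoided) * minutes_per_review / 60
shifts_saved = hours_saved / 8  # eight-hour shifts

print(round(hours_saved), round(shifts_saved))  # ~2419 hours, ~302 shifts
```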


Fig4 Starks JPathInfo2021 12.jpg

Figure 4. Comparison of platelet-related flags with the switch to universal use of platelet by fluorescent method (PLT-F) method shown in (a) and autoverification rates for complete blood count and individually ordered complete blood count-components in (b)

Discussion

There is a growing body of literature related to the development and optimization of autoverification rules in hematopathology.[13][14] This complements investigations of autoverification for clinical chemistry, blood gas, and coagulation analysis.[5][6][7][8][9][10][11][12] Hematopathology presents particular challenges for autoverification in that rules are intended for a range of purposes, including review of abnormal cells that might be misidentified or missed by instruments (e.g., blasts, Sezary cells), detection of phenomena that can distort analysis (e.g., RBC agglutination and platelet clumping), and unusual changes in quantitative parameters (e.g., dramatic decrease or increase in hemoglobin/hematocrit).[13][14][30] Some of the flags are associated with phenomena that might be a pre-analytical sample issue or a pathological process in the patient.[14][18][31][32][33][34] A primary challenge for autoverification in hematopathology is to balance efficiency and turnaround time while performing manual review for samples where the review is likely to provide clinical benefit.[4][14][31][33][35] This is especially a challenge for laboratories that analyze a high percentage of samples from patients with hematologic abnormalities, especially those who undergo repeated laboratory analysis over time.

In the present study, we evaluated autoverification rules that had been developed over years in our core clinical laboratory. In this process, we were confronted with rules that had been adopted per manufacturer recommendation (especially instrument flags) and those that had been developed and validated over years into an autoverification rule set. We were particularly looking for “low hanging fruit”: rules and thresholds that generate flags at high frequency but provide low clinical value.

A central challenge identified in our study is the difficulty in extracting and analyzing specific data for autoverification. Our laboratory uses middleware software for most of the autoverification rules. Data retrieval required running a third-party application every month to capture middleware data prior to off-site archival (where the extraction would be more difficult). As described in the methods, the data required extensive cleanup and formatting to be able to drill down to specific flags for patient specimens.

Our analysis facilitated operational improvements. The two main changes implemented based on the autoverification analysis were to increase the IG flag cutoff requiring manual review from 2% to 5% and to switch to the PLT-F method for all PLT counts. Ironically, the default manufacturer recommendation of 5% for the IG flag was the choice that minimized unnecessary manual intervention, as we did not identify any clear clinical advantage to the lower threshold that had been set based on experience with an earlier generation of hematology analyzer. The autoverification analysis related to platelets demonstrated the improved efficiency and lower rerun rates of the PLT-F method, which better distinguishes between platelets and fragmented RBCs.[24][25][36][37][38] Given that our laboratory receives many pediatric samples, including from hematology/oncology patients, use of PLT-F minimized repeat analysis for specimens that often contain low sample volumes. The rule changes reported in the present study have since remained in place, and we are not aware of any clinical issues arising from them.

Future directions to pursue include the development of software that more easily enables analysis of autoverification rates and the impact of specific rules and flags. This may involve commercial vendor software and/or home-grown software development. A data warehouse is a possibility. In the present study, such a warehouse would need to be able to access the DI database, or the DI database would need to be regularly duplicated to a different server. To allow for reliable evaluation of autoverification, the data warehouse would ideally have discrete data for specimen comments/flags and operator identification (which could distinguish manual from automated verification). One practical challenge to this approach would be avoiding latency issues on the production server. Given limited resources and competing informatics projects, we have not yet pursued such a project. For laboratories seeking to further increase autoverification rates, even identifying one or two rules associated with a high rate of triggering manual review may allow for a significant increase in autoverification while maintaining high-quality patient care.

Acknowledgements

The authors would like to thank staff within the University of Iowa Hospitals and Clinics Department of Pathology core laboratory and the University of Iowa Health Care Information Systems who helped provide the support for autoverification, middleware, and laboratory information system issues over the years.

Financial support

None.

Conflict of interest

There are no conflicts of interest.

References

  1. 1.0 1.1 Crolla, Lawrence J.; Westgard, James O. (1 September 2003). "Evaluation of rule-based autoverification protocols". Clinical leadership & management review: the journal of CLMA 17 (5): 268–272. ISSN 1527-3954. PMID 14531220. https://pubmed.ncbi.nlm.nih.gov/14531220. 
  2. 2.0 2.1 Jones, Jay B. (1 March 2013). "A strategic informatics approach to autoverification". Clinics in Laboratory Medicine 33 (1): 161–181. doi:10.1016/j.cll.2012.11.004. ISSN 1557-9832. PMID 23331736. https://pubmed.ncbi.nlm.nih.gov/23331736. 
  3. Pearlman, Eugene S.; Bilello, Leonard; Stauffer, Joseph; Kamarinos, Andonios; Miele, Rudolph; Wolfert, Marc S. (1 July 2002). "Implications of autoverification for the clinical laboratory". Clinical leadership & management review: the journal of CLMA 16 (4): 237–239. ISSN 1527-3954. PMID 12168427. https://pubmed.ncbi.nlm.nih.gov/12168427. 
  4. 4.0 4.1 4.2 Torke, Narayan; Boral, Leonard; Nguyen, Tracy; Perri, Angelo; Chakrin, Alan (1 December 2005). "Process improvement and operational efficiency through test result autoverification". Clinical Chemistry 51 (12): 2406–2408. doi:10.1373/clinchem.2005.054395. ISSN 0009-9147. PMID 16306113. https://pubmed.ncbi.nlm.nih.gov/16306113. 
  5. 5.0 5.1 5.2 Krasowski, Matthew D.; Davis, Scott R.; Drees, Denny; Morris, Cory; Kulhavy, Jeff; Crone, Cheri; Bebber, Tami; Clark, Iwa et al. (2014). "Autoverification in a core clinical chemistry laboratory at an academic medical center". Journal of Pathology Informatics 5 (1): 13. doi:10.4103/2153-3539.129450. ISSN 2229-5089. PMC 4023033. PMID 24843824. https://pubmed.ncbi.nlm.nih.gov/24843824. 
  6. 6.0 6.1 Sediq, Amany Mohy-Eldin; Abdel-Azeez, Ahmad GabAllahm Hala (1 September 2014). "Designing an autoverification system in Zagazig University Hospitals Laboratories: preliminary evaluation on thyroid function profile". Annals of Saudi Medicine 34 (5): 427–432. doi:10.5144/0256-4947.2014.427. ISSN 0975-4466. PMC 6074554. PMID 25827700. https://pubmed.ncbi.nlm.nih.gov/25827700. 
  7. 7.0 7.1 7.2 Onelöv, Liselotte; Gustafsson, Elisabeth; Grönlund, Eva; Andersson, Helena; Hellberg, Gisela; Järnberg, Ingela; Schurow, Sara; Söderblom, Lisbeth et al. (1 October 2016). "Autoverification of routine coagulation assays in a multi-center laboratory". Scandinavian Journal of Clinical and Laboratory Investigation 76 (6): 500–502. doi:10.1080/00365513.2016.1200135. ISSN 1502-7686. PMID 27400327. https://pubmed.ncbi.nlm.nih.gov/27400327. 
  8. 8.0 8.1 Randell, Edward W.; Short, Garry; Lee, Natasha; Beresford, Allison; Spencer, Margaret; Kennell, Marina; Moores, Zoë; Parry, David (1 June 2018). "Strategy for 90% autoverification of clinical chemistry and immunoassay test results using six sigma process improvement". Data in Brief 18: 1740–1749. doi:10.1016/j.dib.2018.04.080. ISSN 2352-3409. PMC 5998219. PMID 29904674. https://pubmed.ncbi.nlm.nih.gov/29904674. 
  9. 9.0 9.1 Randell, Edward W.; Short, Garry; Lee, Natasha; Beresford, Allison; Spencer, Margaret; Kennell, Marina; Moores, Zoë; Parry, David (1 May 2018). "Autoverification process improvement by Six Sigma approach: Clinical chemistry & immunoassay". Clinical Biochemistry 55: 42–48. doi:10.1016/j.clinbiochem.2018.03.002. ISSN 1873-2933. PMID 29518383. https://pubmed.ncbi.nlm.nih.gov/29518383. 
  10. 10.0 10.1 Wu, Jie; Pan, Meichen; Ouyang, Huizhen; Yang, Zhili; Zhang, Qiaoxin; Cai, Yingmu (1 December 2018). "Establishing and Evaluating Autoverification Rules with Intelligent Guidelines for Arterial Blood Gas Analysis in a Clinical Laboratory". SLAS technology 23 (6): 631–640. doi:10.1177/2472630318775311. ISSN 2472-6311. PMID 29787327. https://pubmed.ncbi.nlm.nih.gov/29787327. 
  11. 11.0 11.1 Randell, Edward W.; Yenice, Sedef; Khine Wamono, Aye Aye; Orth, Matthias (1 November 2019). "Autoverification of test results in the core clinical laboratory". Clinical Biochemistry 73: 11–25. doi:10.1016/j.clinbiochem.2019.08.002. ISSN 1873-2933. PMID 31386832. https://pubmed.ncbi.nlm.nih.gov/31386832. 
  12. 12.0 12.1 Wang, Zhongqing; Peng, Cheng; Kang, Hui; Fan, Xia; Mu, Runqing; Zhou, Liping; He, Miao; Qu, Bo (3 July 2019). "Design and evaluation of a LIS-based autoverification system for coagulation assays in a core clinical laboratory". BMC medical informatics and decision making 19 (1): 123. doi:10.1186/s12911-019-0848-2. ISSN 1472-6947. PMC 6609390. PMID 31269951. https://pubmed.ncbi.nlm.nih.gov/31269951. 
  13. 13.0 13.1 13.2 Fu, Qiang; Ye, Congxiu; Han, Bo; Zhan, Xiaoxia; Chen, Kang; Huang, Fuda; Miao, Lisao; Yang, Shanhong et al. (1 April 2020). "Designing and Validating Autoverification Rules for Hematology Analysis in Sysmex XN-9000 Hematology System". Clinical Laboratory 66 (4). doi:10.7754/Clin.Lab.2019.190726. ISSN 1433-6510. PMID 32255287. https://pubmed.ncbi.nlm.nih.gov/32255287. 
  14. 14.0 14.1 14.2 14.3 14.4 14.5 Zhao, X.; Wang, X. F.; Wang, J. B.; Lu, X. J.; Zhao, Y. W.; Li, C. B.; Wang, B. H.; Wei, J. et al. (1 April 2016). "Multicenter study of autoverification methods of hematology analysis". Journal of Biological Regulators and Homeostatic Agents 30 (2): 571–577. ISSN 0393-974X. PMID 27358150. https://pubmed.ncbi.nlm.nih.gov/27358150. 
  15. 15.0 15.1 Buoro, Sabrina; Mecca, Tommaso; Seghezzi, Michela; Manenti, Barbara; Azzarà, Giovanna; Ottomano, Cosimo; Lippi, Giuseppe (1 July 2017). "Validation rules for blood smear revision after automated hematological testing using Mindray CAL-8000". Journal of Clinical Laboratory Analysis 31 (4). doi:10.1002/jcla.22067. ISSN 1098-2825. PMC 6817000. PMID 27709664. https://pubmed.ncbi.nlm.nih.gov/27709664. 
  16. 16.0 16.1 Froom, Paul; Havis, Rosa; Barak, Mira (2009). "The rate of manual peripheral blood smear reviews in outpatients". Clinical Chemistry and Laboratory Medicine 47 (11): 1401–1405. doi:10.1515/CCLM.2009.308. ISSN 1437-4331. PMID 19778287. https://pubmed.ncbi.nlm.nih.gov/19778287. 
  17. Palmer, L.; Briggs, C.; McFadden, S.; Zini, G.; Burthem, J.; Rozenberg, G.; Proytcheva, M.; Machin, S. J. (1 June 2015). "ICSH recommendations for the standardization of nomenclature and grading of peripheral blood cell morphological features". International Journal of Laboratory Hematology 37 (3): 287–303. doi:10.1111/ijlh.12327. ISSN 1751-553X. PMID 25728865. https://pubmed.ncbi.nlm.nih.gov/25728865. 
  18. Pratumvinit, Busadee; Wongkrajang, Preechaya; Reesukumal, Kanit; Klinbua, Cherdsak; Niamjoy, Patama (1 March 2013). "Validation and optimization of criteria for manual smear review following automated blood cell analysis in a large university hospital". Archives of Pathology & Laboratory Medicine 137 (3): 408–414. doi:10.5858/arpa.2011-0535-OA. ISSN 1543-2165. PMID 23451752. https://pubmed.ncbi.nlm.nih.gov/23451752. 
  19. Barnes, P. W. (2005). "Comparison of performance characteristics between first- and third-generation hematology systems". Laboratory Hematology: Official Publication of the International Society for Laboratory Hematology 11 (4): 298–301. doi:10.1532/lh96.05037. ISSN 1080-2924. PMID 16475477. https://pubmed.ncbi.nlm.nih.gov/16475477. 
  20. Barth, David (1 February 2012). "Approach to peripheral blood film assessment for pathologists". Seminars in Diagnostic Pathology 29 (1): 31–48. doi:10.1053/j.semdp.2011.07.003. ISSN 0740-2570. PMID 22372204. https://pubmed.ncbi.nlm.nih.gov/22372204. 
  21. Rabizadeh, Esther; Pickholtz, Itay; Barak, Mira; Froom, Paul (1 August 2013). "Historical data decrease complete blood count reflex blood smear review rates without missing patients with acute leukaemia". Journal of Clinical Pathology 66 (8): 692–694. doi:10.1136/jclinpath-2012-201423. ISSN 1472-4146. PMID 23505267. https://pubmed.ncbi.nlm.nih.gov/23505267. 
  22. Grieme, Caleb V.; Voss, Dena R.; Davis, Scott R.; Krasowski, Matthew D. (1 March 2017). "Impact of Endogenous and Exogenous Interferences on Clinical Chemistry Parameters Measured on Blood Gas Analyzers". Clinical Laboratory 63 (3): 561–568. doi:10.7754/Clin.Lab.2016.160932. ISSN 1433-6510. PMID 28271676. https://pubmed.ncbi.nlm.nih.gov/28271676. 
  23. Krasowski, Matthew D.; Wilford, Joseph D.; Howard, Wanita; Dane, Susan K.; Davis, Scott R.; Karandikar, Nitin J.; Blau, John L.; Ford, Bradley A. (2016). "Implementation of Epic Beaker Clinical Pathology at an academic medical center". Journal of Pathology Informatics 7: 7. doi:10.4103/2153-3539.175798. ISSN 2229-5089. PMC 4763507. PMID 26955505. https://pubmed.ncbi.nlm.nih.gov/26955505. 
  24. Tanaka, Yuzo; Tanaka, Yumiko; Gondo, Kazumi; Maruki, Yoshiko; Kondo, Tamiaki; Asai, Satomi; Matsushita, Hiromichi; Miyachi, Hayato (1 September 2014). "Performance evaluation of platelet counting by novel fluorescent dye staining in the XN-series automated hematology analyzers". Journal of Clinical Laboratory Analysis 28 (5): 341–348. doi:10.1002/jcla.21691. ISSN 1098-2825. PMC 6807536. PMID 24648166. https://pubmed.ncbi.nlm.nih.gov/24648166. 
  25. Schoorl, Margreet; Schoorl, Marianne; Oomes, Jeanette; van Pelt, Johannes (1 October 2013). "New fluorescent method (PLT-F) on Sysmex XN2000 hematology analyzer achieved higher accuracy in low platelet counting". American Journal of Clinical Pathology 140 (4): 495–499. doi:10.1309/AJCPUAGGB4URL5XO. ISSN 1943-7722. PMID 24045545. https://pubmed.ncbi.nlm.nih.gov/24045545. 
  26. Wada, Atsushi; Takagi, Yuri; Kono, Mari; Morikawa, Takashi (2015). "Accuracy of a New Platelet Count System (PLT-F) Depends on the Staining Property of Its Reagents". PLoS One 10 (10): e0141311. doi:10.1371/journal.pone.0141311. ISSN 1932-6203. PMC 4619826. PMID 26496387. https://pubmed.ncbi.nlm.nih.gov/26496387. 
  27. Eilertsen, Heidi; Hagve, Tor-Arne (1 October 2014). "Do the flags related to immature granulocytes reported by the Sysmex XE-5000 warrant a microscopic slide review?". American Journal of Clinical Pathology 142 (4): 553–560. doi:10.1309/AJCP4V4EXYFFOELL. ISSN 1943-7722. PMID 25239424. https://pubmed.ncbi.nlm.nih.gov/25239424. 
  28. Fernandes, Bernard; Hamaguchi, Yukio (1 September 2007). "Automated enumeration of immature granulocytes". American Journal of Clinical Pathology 128 (3): 454–463. doi:10.1309/TVGKD5TVB7W9HHC7. ISSN 0002-9173. PMID 17709320. https://pubmed.ncbi.nlm.nih.gov/17709320. 
  29. Maenhout, Thomas M.; Marcelis, Ludo (1 July 2014). "Immature granulocyte count in peripheral blood by the Sysmex haematology XN series compared to microscopic differentiation". Journal of Clinical Pathology 67 (7): 648–650. doi:10.1136/jclinpath-2014-202223. ISSN 1472-4146. PMID 24668849. https://pubmed.ncbi.nlm.nih.gov/24668849. 
  30. Lantis, Kay L.; Harris, R. Jayne; Davis, Gerald; Renner, Nancy; Finn, William G. (1 May 2003). "Elimination of instrument-driven reflex manual differential leukocyte counts. Optimization of manual blood smear review criteria in a high-volume automated hematology laboratory". American Journal of Clinical Pathology 119 (5): 656–662. doi:10.1309/VH1K-MV8W-B7GB-7R14. ISSN 0002-9173. PMID 12760283. https://pubmed.ncbi.nlm.nih.gov/12760283. 
  31. Comar, Samuel Ricardo; Malvezzi, Mariester; Pasquini, Ricardo (1 October 2017). "Evaluation of criteria of manual blood smear review following automated complete blood counts in a large university hospital". Revista Brasileira De Hematologia E Hemoterapia 39 (4): 306–317. doi:10.1016/j.bjhh.2017.06.007. ISSN 1516-8484. PMC 5693276. PMID 29150102. https://pubmed.ncbi.nlm.nih.gov/29150102. 
  32. Ike, Samuel O.; Nubila, Thomas; Ukaejiofo, Ernest O.; Nubila, Imelda N.; Shu, Elvis N.; Ezema, Ifeyinwa (23 April 2010). "Comparison of haematological parameters determined by the Sysmex KX-21N automated haematology analyzer and the manual counts". BMC Clinical Pathology 10: 3. doi:10.1186/1472-6890-10-3. ISSN 1472-6890. PMC 2873444. PMID 20416068. https://pubmed.ncbi.nlm.nih.gov/20416068. 
  33. Lou, Amy H.; Elnenaei, Manal O.; Sadek, Irene; Thompson, Shauna; Crocker, Bryan D.; Nassar, Bassam A. (1 October 2017). "Multiple pre- and post-analytical lean approaches to the improvement of the laboratory turnaround time in a large core laboratory". Clinical Biochemistry 50 (15): 864–869. doi:10.1016/j.clinbiochem.2017.04.019. ISSN 1873-2933. PMID 28457964. https://pubmed.ncbi.nlm.nih.gov/28457964. 
  34. Sandhaus, Linda M.; Wald, David N.; Sauder, Kenan J.; Steele, Erica L.; Meyerson, Howard J. (1 March 2007). "Measuring the clinical impact of pathologist reviews of blood and body fluid smears". Archives of Pathology & Laboratory Medicine 131 (3): 468–472. doi:10.5858/2007-131-468-MTCIOP. ISSN 1543-2165. PMID 17516750. https://pubmed.ncbi.nlm.nih.gov/17516750. 
  35. Novis, David A.; Walsh, Molly; Wilkinson, David; St Louis, Mary; Ben-Ezra, Jonathon (1 May 2006). "Laboratory productivity and the rate of manual peripheral blood smear review: a College of American Pathologists Q-Probes study of 95,141 complete blood count determinations performed in 263 institutions". Archives of Pathology & Laboratory Medicine 130 (5): 596–601. doi:10.5858/2006-130-596-LPATRO. ISSN 1543-2165. PMID 16683868. https://pubmed.ncbi.nlm.nih.gov/16683868. 
  36. Schapkaitz, E.; Raburabu, S. (1 March 2018). "Performance evaluation of the new measurement channels on the automated Sysmex XN-9000 hematology analyzer". Clinical Biochemistry 53: 132–138. doi:10.1016/j.clinbiochem.2018.01.014. ISSN 1873-2933. PMID 29374555. https://pubmed.ncbi.nlm.nih.gov/29374555. 
  37. Tantanate, Chaicharoen; Khowawisetsut, Ladawan; Pattanapanyasat, Kovit (1 June 2017). "Performance Evaluation of Automated Impedance and Optical Fluorescence Platelet Counts Compared With International Reference Method in Patients With Thalassemia". Archives of Pathology & Laboratory Medicine 141 (6): 830–836. doi:10.5858/arpa.2016-0222-OA. ISSN 1543-2165. PMID 28402168. https://pubmed.ncbi.nlm.nih.gov/28402168. 
  38. Tantanate, Chaicharoen; Khowawisetsut, Ladawan; Sukapirom, Kasama; Pattanapanyasat, Kovit (1 May 2019). "Analytical performance of automated platelet counts and impact on platelet transfusion guidance in patients with acute leukemia". Scandinavian Journal of Clinical and Laboratory Investigation 79 (3): 160–166. doi:10.1080/00365513.2019.1576100. ISSN 1502-7686. PMID 30761915. https://pubmed.ncbi.nlm.nih.gov/30761915. 

Notes

This presentation is faithful to the original, with only a few minor changes to presentation, spelling, and grammar. In some cases, important information missing from the references was added. The original document mentions an "Additional File 1"; however, that file does not appear to be included with the original. The reference to Additional File 1 is retained in this version; contact the authors to obtain the file.