
Use of key performance indicators in histological dissection
Matthew Griffiths1, Rachel Gillibrand2
1 School of Science and Technology, Nottingham Trent University, Nottingham, UK
2 School of Health and Social Sciences, University of the West of England, Bristol, UK
Correspondence to Dr Matthew Griffiths, Department of Biosciences, Nottingham Trent University, Erasmus Darwin, Clifton Campus, College Drive, Clifton, Nottingham NG11 8NS, UK; matthew.griffiths@ntu.ac.uk

Abstract

Aims Reports into standards in the National Health Service and quality in pathology have focused on the way we work in pathology and how to provide assurance that this is of a high standard. There are a number of external quality assurance schemes covering pathology and histopathology specifically; however, there is no scheme covering the process of histological surgical dissection. This is an area undergoing development, emerging from the sole preserve of medically qualified pathologists to a field populated by a number of highly trained biomedical scientists, but remains without any formal quality assurance.

Methods This work builds on Barnes, taking the guidance of the Royal College of Pathologists (RCPath) and the Institute of Biomedical Science (IBMS) to form a series of key performance indicators relating to dissection. These were developed for use as an indicator of individual practice, highlighting areas of variation, weakness or strength. Once identified, a feedback event provided opportunities to address errors and omissions, or to enable areas of strength to be shared.

Results The data obtained from the checklists demonstrate a large variation in practice at the outset of this study. The use of the checklists alone served to reduce this variation in practice; the addition of the training event produced a further reduction. The combination of these two tools was an effective method for enhancing standardisation of practice.

Conclusions The results of this work show that training events serve to reduce variation in practice by, and between, dissectors, driving up standards in dissection—directly addressing the needs of the modern pathology service.

  • histopathology
  • diagnostics
  • quality assurance
  • quality control


Introduction

There have been many reviews of practice and standards in the National Health Service in recent years,1–4 the first and last of which are of most importance to the histopathology laboratory. The review into practices at King’s Mill Hospital4 ultimately found no misdiagnosis; however, a number of serious concerns were identified regarding the funding and organisation of the laboratory, the attitude of the trust management towards the pathology department, and training and equipment. The most significant concern related to the External Quality Assurance (EQA) scheme that gave rise to the original alarm. The review reported that the EQA scheme did not adequately consider the sample size and the statistical safety of extrapolating from the data, nor were the results sufficiently meaningful to give direction and specificity to its concerns. The report also detailed concerns regarding the methods used by an external agency appointed to investigate the issue: its apparent identification of ‘weak positivity’ of oestrogen receptor (ER) staining, claimed to have been erroneously reported as negative, was in fact a set of false positives; the original reports of ER negativity were correct. This gave rise to another investigation, this time by Dr Barnes.

The Barnes Report1 covered quality control in pathology and was exhaustive in nature. Barnes noted the reports from Francis and Sherwood Forest, and echoed a number of their findings, calling for open, transparent, individual quality data and standardisation of practice in pathology. The national EQA scheme in histopathology is run by the UK National External Quality Assessment Service (UKNEQAS)5; the results generated by this scheme are fed back to the individual laboratory along with regional average rates. It is not possible to draw direct parallels between laboratories, nor is it possible for non-members of the scheme to view the data. Where a laboratory fails to meet the minimum standard, a letter is sent to the laboratory’s nominated ‘technical head’.5 This clearly does not fit within a framework of openness, accountability and transparency.

This work goes some way to addressing this failing. Barnes1 noted that quality procedures in pathology were no longer fit for purpose and called for individually identifiable, evidence-based key performance indicators (KPIs). This work examines the evidence base for practices in surgical dissection in diagnostic histopathology, and establishes the premise of using KPIs to identify deviations in practice. It looks at how to feed back these data to the personnel involved, using methods that encourage a collaborative team approach to determining best practice and optimising performance. Further, as biomedical scientist (BMS)-led specimen dissection is an emerging field, this work seeks to establish an objective standard of practice by providing comparison with current practice as performed by consultant histopathologists.

This paper investigates the feasibility of developing KPIs to measure adherence to a specified process of histopathological surgical dissection. Data collected using these KPIs will be used to highlight variation in practice by and between individuals working in a histopathology laboratory/setting. Some reflection on the implications of this process and how to address any variation will then be considered.

KPIs may be an unfamiliar concept within pathology; however, they are already in use, albeit informally. Histopathologists reporting colorectal cancer specimens are encouraged to self-audit their reports on an annual basis against the following criteria (a computational sketch of such a self-audit follows the list):

  1. The median number of lymph nodes examined should be greater than 12.

  2. The frequency of serosal involvement should be at least 20% for colonic cancers and 10% for rectal cancers.

  3. The frequency of venous invasion, including intramural (submucosal and intramuscular) and extramural, should be at least 30%.6
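
The arithmetic behind this self-audit is simple enough to sketch in code. The following Python fragment is purely illustrative and not part of the study; the record structure, field names and example values are all hypothetical.

```python
from statistics import median

# Hypothetical per-case summaries from one year of colorectal cancer
# reports; the field names and values are illustrative only.
cases = [
    {"site": "colon",  "nodes": 15, "serosal": True,  "venous": False},
    {"site": "rectum", "nodes": 11, "serosal": False, "venous": True},
    {"site": "colon",  "nodes": 18, "serosal": False, "venous": True},
    {"site": "colon",  "nodes": 14, "serosal": True,  "venous": False},
]

def frequency(subset, key):
    """Proportion of cases in which the given feature was reported."""
    return sum(c[key] for c in subset) / len(subset)

colon = [c for c in cases if c["site"] == "colon"]
rectum = [c for c in cases if c["site"] == "rectum"]

# Criterion 1: median lymph node yield should exceed 12.
print("median nodes:", median(c["nodes"] for c in cases))
# Criterion 2: serosal involvement >=20% (colon) and >=10% (rectum).
print("serosal, colon:", frequency(colon, "serosal"))
print("serosal, rectum:", frequency(rectum, "serosal"))
# Criterion 3: venous invasion (intramural and extramural) >=30%.
print("venous invasion:", frequency(cases, "venous"))
```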

While this guidance has been in place since at least 2006, in the second edition of the colorectal cancer reporting dataset guideline from the Royal College of Pathologists (RCPath),7 there is little evidence that this self-audit has been carried out. There is certainly no transparency around the activity or its results.

A more recent and more structured attempt to make use of KPIs in pathology again comes from the RCPath. Reacting to Barnes1 and his calls for quality data, the RCPath produced a response detailing the sort of KPIs that could be used to assess the quality of the pathology service.8 This work specified a number of areas of activity, generally at a departmental level, such as turnaround times, rather than standards of individual practice. The RCPath has continued to develop this, building these many KPIs into a performance dashboard that monitors multiple criteria rather than attempting to reduce them to a single datum.9

An important question is what steps, if any, are, can be or should be taken when good or poor performance is detected. Within pathology, we have a long tradition of professional independence; this has not ensured universally high quality.10 The intention of the published dashboards is to allow public scrutiny and to provide information to Clinical Commissioning Groups (CCGs). While the possibility of patients choosing to use other hospitals, and the risk of CCGs electing to purchase services elsewhere, is a strong motivator, it is an indirect one.

A more direct investigation of methods to reward or sanction good and poor practice is outside the scope of this work, but is certainly needed.

The objectives of this research are to:

  • create a set of histopathological dissection KPIs based on the best available professional and scientific guidance;

  • collect performance data in relation to the KPIs from a number of dissectors working in one histopathology setting.

Materials and methods

Four checklists were created to assess the dissection quality of appendix, gallbladder, uterus and colon specimens, based on the work of Pronovost et al.11 The items on each checklist were those mandated by the RCPath tissue pathway or RCPath minimum dataset, or required by the local Standard Operating Procedure (SOP).

These checklists were deployed in several rounds of data collection, with the introduction of a training event after the collection of initial baseline data. Approximately 50 checklists for each specimen type were completed, and the data were tabulated against an anonymous unique identifier for the dissector.
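
A minimal sketch of this tabulation step follows, assuming each completed checklist is reduced to a list of pass/fail items recorded against the anonymous identifier; the identifiers, rounds and records below are hypothetical, not the study data.

```python
from collections import defaultdict

# Each record: (anonymous dissector ID, specimen type, round number,
# pass/fail result for each checklist item). All values are illustrative.
records = [
    ("D1", "appendix", 0, [True, True, False, True]),
    ("D1", "appendix", 1, [True, True, True, True]),
    ("D2", "appendix", 0, [True, False, False, True]),
    ("D2", "appendix", 1, [True, True, False, True]),
]

# Conformance = fraction of checklist items satisfied, averaged over all
# checklists completed by a dissector for a specimen type in a round.
scores = defaultdict(list)
for dissector, specimen, rnd, items in records:
    scores[(dissector, specimen, rnd)].append(sum(items) / len(items))

for (dissector, specimen, rnd), vals in sorted(scores.items()):
    print(f"{dissector} {specimen} round {rnd}: "
          f"{sum(vals) / len(vals):.0%} conformance")
```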

A training event was created to feed back and review the findings of the checklists. The training event comprised a short case review by a consultant pathologist highlighting areas of good and poor practice.

The checklists were deployed in a number of rounds with the training event running on a weekly basis. The training event was removed at two points (months 5 and 7) to enable the effectiveness of the checklists to be assessed in isolation.

Results

The checklists were tabulated to demonstrate how each dissector conformed to each point on the list. The data are presented for each specimen type in figure 1.

Figure 1

Mean of average conformance to Standard Operating Procedure (SOP), by specimen type, for all dissectors, all parts. This figure demonstrates the change in means for adherence to SOP for all dissectors in each round, broken down by specimen type. The error bars indicate 1 SD, demonstrating the range of values. The chart shows that overall conformance to SOP increases through each round, and the shrinking error bars indicate a reduction in variance between dissectors.
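
To make the construction of a chart like figure 1 concrete, the sketch below plots mean conformance per round with 1 SD error bars. The numbers are invented placeholders purely to illustrate the plotting step, not the study's results, and only two of the four specimen types are shown.

```python
import matplotlib.pyplot as plt

# Placeholder means and SDs of conformance per round; NOT the study data.
rounds = [1, 2, 3, 4, 5]
placeholder = {
    "appendix":    ([0.60, 0.74, 0.83, 0.91, 0.97], [0.20, 0.14, 0.10, 0.06, 0.03]),
    "gallbladder": ([0.55, 0.70, 0.80, 0.88, 0.95], [0.22, 0.16, 0.11, 0.07, 0.04]),
}

for specimen, (means, sds) in placeholder.items():
    # Error bars of 1 SD show the spread between dissectors, as in figure 1.
    plt.errorbar(rounds, means, yerr=sds, marker="o", capsize=3, label=specimen)

plt.xlabel("Data collection round")
plt.ylabel("Mean conformance to SOP")
plt.ylim(0, 1.05)
plt.legend()
plt.show()
```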

The appendix and gallbladder were included as they are relatively simple organs, with a well-established and agreed protocol for handling, description and sampling. Despite this, figure 1 shows tremendous variation in adherence to protocol at the outset. The uterus samples comprised a range of specimens, from simple atrophic hysterectomy to complex malignancy. Again, the protocols for these are well developed.

Changes over time for all specimen types

The mean for all SOP conformance across all specimen types is plotted (figure 2) for each dissector. The high level of variation seen in the individual parts is evident. The overall trend that can be discerned is that the checklists have the greatest effect on the greatest number of dissectors. When the checklists are withdrawn and other interventions run in their place, average conformance drops. The use of diagrams alone saw average conformance drop for one of the BMSs and two of the pathologists. Reintroducing the checklists arrests the drop, and reintroducing the training event brings most dissectors to an average of 100% for all KPIs.

Figure 2

This graph plots the overall conformance to Standard Operating Procedure (SOP) for each dissector (as assessed by the checklists and report review), for each stage of the study. While a great deal of variance is seen, a clear trend towards 100% conformance emerges over time for the majority of practitioners. Some variation returns, and for some dissectors the amount of variation is itself quite variable. BMS, biomedical scientist.

In table 1, we can compare the changes in mean for each round against the baseline, and consider the SD. Also included is a column showing the t-test statistic comparing each round against the baseline data.

Table 1

Mean of mean conformance to Standard Operating Procedure (SOP), mean of SD within these data and SD of the means—all dissectors, all specimen types, all rounds. The table demonstrates the increase in conformance to SOP over time, and a reduction in SD over time
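
The comparison underlying table 1 can be sketched as follows. The paper does not state which variant of the t-test was used, so an independent two-sample test is assumed here (a paired test would be equally plausible, given repeated measures on the same dissectors); the scores are illustrative placeholders, not the published data.

```python
from statistics import mean, stdev

from scipy import stats

# Placeholder per-dissector mean conformance scores; NOT the study data.
baseline = [0.55, 0.62, 0.48, 0.71, 0.60]
rounds = {
    "round 1": [0.72, 0.78, 0.70, 0.81, 0.75],
    "round 2": [0.88, 0.92, 0.85, 0.95, 0.90],
}

for name, scores in rounds.items():
    # Test each round's scores against the baseline distribution.
    t, p = stats.ttest_ind(scores, baseline)
    print(f"{name}: mean={mean(scores):.2f}, SD={stdev(scores):.2f}, "
          f"t={t:.2f}, p={p:.4f}")
```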

Discussion

The baseline data show considerable variation in the way that individuals handle and describe specimens. The appendix analysis demonstrated a clear split in the BMS group, with different sampling practices between the two groups. Such differences highlight variation in practice, which leads to greater error rates and worse patient outcomes.12–15 The data from the pathologists demonstrated broad conformity; one pathologist was noted to show the same variance in sampling seen in one BMS group. Investigation revealed that the discrepancy was due to a change in the SOP, made over a year previously to bring the sampling in line with RCPath guidelines, of which some staff members had remained unaware.

The results from the gallbladder show a similar split in sampling and description across two groups. This variance between the dissectors reflected a difference in the training of the BMS staff: BMS 1, 2 and 5 had been trained by one consultant, and BMS 3 and 4 by another. The difference related to the site of sampling. The fundus is the area that should be sampled, as it is considered by the lead gastrointestinal pathologist to be the most likely site of any incidental pathology; BMS 1, 2 and 5 had been doing this, whereas BMS 3 and 4 had been taking a transverse section through the body of the gallbladder. This triggered a review of the SOP, in which the rationale for the preferred sampling site was made explicit. The updated SOP, and the reasoning behind it, was then communicated to the dissectors in a group meeting.

The colorectal data most clearly demonstrate the effect of the checklists and feedback sessions. There are far more data points for colorectal specimens than for the other types, reflecting their greater complexity; the wide spread seen in the first round of data collection shows how much variation in practice there was. All dissectors should be working in the same way, yet the data clearly show that this is not the case. Even among the BMS dissectors trained by the same people, there is still variation. This variation completely disappeared from the BMS group with the introduction of the checklists, and the change persisted with the introduction of the training intervention. Two of the pathologists showed the same response; the others showed a more variable one. The two BMSs are both in training and working towards an advanced-level dissection examination; as such, they are both invested in their training and professional development. The pathologists are used to working under their own direction and making individual judgements rather than following a protocol, so it is perhaps unsurprising that their data output is more variable. While there is no direct indication that these variations affected diagnosis, they do show unnecessary variation in practice, and extending this to malignant resections would be likely to show a diagnostic impact.

The variations are clear to see, as are the improvements with the introduction of the checklists. Comparison between the baseline and first round shows a distinct reduction in variation: the introduction of the checklists resulted in standardisation of practice. Having the checklists available at the point of dissection appears to have focused attention on the process of dissection and allowed dissectors to think more specifically about the specimen without distraction. Discussion with several of the dissectors indicated that reading the checklists highlighted requirements they had not been aware of, and that some used the checklist as a memory prompt. The checklists also made dissectors aware of what information was being collected, prompting them to ask questions and check the requirements of the SOP. One of the pathologists felt that the checklists enabled the dissector to focus on the complexities specific to the individual specimen, while ensuring that all mandatory operations were performed and the demands of the protocol satisfied.

The combination of the checklists running alongside the training event appears to be the most effective, both in the level of reduction in variation and in the level of engagement from the dissectors. Using the checklists to generate anonymous data demonstrates what is actually occurring rather than relying on assumptions; this hard evidence provides an intellectual ‘hook’ upon which the individual can hang new ideas and practices. The training event satisfies the BMSs’ desire for feedback, and creates a closer working relationship with the pathologists. As pathologists seldom attend training sessions run by another pathologist, the event provides an opportunity for the BMS to make a pathologist aware of conflicting requirements, for example where one pathologist has indicated that a procedure should be performed in a certain way while a colleague has indicated that it must not be. This has sparked discussion between the pathologists about their intent and the motivation behind those decisions, ultimately resulting in a standard approach.

One of the aims at the outset of this research was to investigate the use of KPIs in dissection and to see whether they could be applied to create a data set indicating performance standards. This has been achieved: the dissection checklists are based on the recommendations of the RCPath and local protocols, with best-practice guidance from surgery used as a starting point. The specific data points on the checklist are less important than the checklist itself. As demonstrated in the study by Pronovost et al,11 checklists can be modified to suit the specific requirements of an individual environment while still remaining a powerful tool for regional or national data collection. If this work forms the basis of a national framework to address the deficiencies highlighted by Barnes,1 a consensus on a set of key data points would need to be reached. Once these were agreed, the checklist might contain some optional points, and space for local protocol to add points that were important to a specific laboratory or region. As Bosk et al16 take great pains to point out, there was not one single checklist for the Keystone study11; there were in fact over 100 versions. The checklists used in that study covered one procedure, the insertion of a central line catheter, and contained five key items. The intensive care unit departments taking part were each encouraged to customise and develop the checklist to address the issues and culture within their own environment. In the present study, four distinct checklists have been used for four different specimen types. Clearly, there is room for further development of the dissection checklists, and enabling users in other laboratories to customise them according to their own needs is a feature that has been considered from the outset.
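
One way such an ‘agreed national core plus local additions’ checklist could be structured is sketched below. The class design and item wording are hypothetical illustrations, not an implementation used in this study.

```python
from dataclasses import dataclass, field

@dataclass
class DissectionChecklist:
    """A checklist with a fixed national core and optional local items."""
    specimen_type: str
    core_items: list[str]                                  # nationally agreed
    local_items: list[str] = field(default_factory=list)   # laboratory-specific

    def items(self) -> list[str]:
        # Local items extend, but never replace, the agreed core.
        return self.core_items + self.local_items

# A hypothetical national template for one specimen type.
national_appendix = DissectionChecklist(
    specimen_type="appendix",
    core_items=[
        "Specimen measured in three dimensions",
        "Serosal surface described",
        "Tip and resection margin sampled",
    ],
)

# A laboratory customises the template to reflect its own SOP.
local_appendix = DissectionChecklist(
    specimen_type=national_appendix.specimen_type,
    core_items=national_appendix.core_items,
    local_items=["Photograph taken if perforation suspected (local SOP)"],
)

print(local_appendix.items())
```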

This work has relied heavily on the identification of substandard performance by Barnes.1 It is fitting then to consider how this work seeks to meet some of his recommendations.

2.22. Overall, the quality assurance framework in pathology lacks several key factors without which we cannot say the best interests of the patient are being served.

This work focuses extensively on demonstrating the use of KPIs, providing the facility for transparency and oversight.

4.28. Further consideration must be given to the ways in which individual performance can be assessed, monitored and competence assured.

This work clearly sets out the basis for such a system, and proof of its effectiveness.

Further work ought to consider this variation and how it might be addressed. While this work clearly demonstrates substantial variation in practice, and shows a decrease in variation with the introduction of the checklists, this is only the first step. Another of Barnes’s recommendations1 was that there ought to be a framework allowing reward and sanction for good and poor performance. Such a mechanism is much needed and, if integrated with the use of the dissection KPIs, would meet many of Barnes’s requirements.

Take home messages

  • Histopathological dissection lacks a suitable form of quality control.

  • Key performance indicators (KPI) can be used to demonstrate good and poor performance.

  • Feedback of the KPI data increases standardisation.

  • KPIs identify good practice, which can then be shared within teams.


Footnotes

  • This work has been previously submitted as part of a doctoral thesis.

  • Handling editor Runjan Chetty

  • Contributors MG performed the bulk of the work for this investigation. RG provided supervision and extensive critical input to the planning and the writing of this work.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.
