Benchmarking and the laboratory
  M Galloway1, L Nadin2

  1 South Durham Health Care NHS Trust, Bishop Auckland General Hospital, Cockton Hill Road, Bishop Auckland, County Durham DL14 6AD, UK
  2 Clinical Management Unit, Centre for Health Planning and Management, Suite 1.18 Darwin Building, Keele University, Staffordshire ST5 5SP, UK

  Correspondence to: Dr Galloway mike.galloway{at}


This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University, with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists' programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK are illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. The benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service.

  • benchmarking
  • pathology


In the white paper “The new NHS: modern, dependable”,1 the government indicated that it wished to see an alignment of clinical and financial responsibility within the National Health Service (NHS). For the first time, chief executives are accountable for both the quality and cost effectiveness of the services provided by their trust. In an attempt to improve the cost effectiveness of services there is now a requirement for trusts to publish and benchmark performance on cost and productivity although, at present, this information is not specific to pathology. Clear incentives are being developed to improve performance and efficiency. For example, the efficiency target for NHS trusts for 2000/2001 is directly related to each trust's reference costs—the higher the reference cost the higher the efficiency target. Similarly, on the clinical side, with the development of clinical governance, standards of clinical care are to be developed by the National Institute for Clinical Excellence and the National Service Frameworks.2 The Commission for Health Improvement will then regularly review the performance of trusts against these standards. It is clearly going to be essential for trusts to demonstrate the cost effectiveness and quality of the services that are provided. Benchmarking is one approach that can facilitate this objective.

Benchmarking is the process of measuring products, services, and practices against leaders in a field, allowing the identification of best practices that will lead to sustained and improved performance. Performance may be compared either in a generic way, in which there is a comparison of a process regardless of the industry, or in a functional way, in which there are comparisons within the same industry. An example of a generic benchmarking process would be to compare the speed with which a laboratory reception answers incoming telephone calls compared with the best (or worst!) call centres. For the purpose of this article, we will review two functional benchmarking schemes, namely the Clinical Benchmarking Company's (CBC) Pathology Report and the College of American Pathologists' Q-Probe scheme to illustrate the value of benchmarking to a pathology laboratory.

Clinical Benchmarking Company

CBC is a company jointly owned by the NHS Confederation and Newchurch & Company. The pathology part of CBC is now in its fifth year and its organisation has been described previously.3 The pathology benchmarking team consists of nominees of the Royal College of Pathologists, the Association of Clinical Biochemists, and the Institute of Biomedical Science, together with a representative of CBC and two members of staff from Keele University. Each specialty in pathology has its own expert panel drawn from these nominations, which is responsible for the questionnaire, information collection, and subsequent report. In view of the increasing trend towards multidisciplinary working, a separate combined biochemistry and haematology report is also produced. Great effort has been made to ensure that laboratories participating in the CBC scheme contribute to the development of the process. The feedback/development meeting is a key part of the annual cycle (fig 1), giving participating laboratories an opportunity to contribute to the improvement and refinement of the questionnaire and report. In the 1998 report a standard definition of hospital size was introduced (table 1). This allowed comparisons to be made between laboratories of similar size and type. Four categories (clusters) of hospitals were defined: teaching hospitals (teaching hospital cluster), and large (cluster A), average (cluster B), and small (cluster C) non-teaching hospitals. Participating hospitals are also given the opportunity to confirm that they have been allocated to the correct category or cluster. The process of defining the type of hospital is still developing and a further cluster of specialist paediatric hospitals will be included in the report for 2000. Each department that participates receives a copy of the report, an extremely detailed analysis of 10 areas of laboratory performance (table 2) that gives a profile of the cost effectiveness of that department. It is not possible to give a detailed analysis of the whole report here, so five areas will be discussed.

Table 1

Clinical Benchmarking Company Pathology Report: definition of size of hospitals

Table 2

Areas of laboratory performance covered in the Clinical Benchmarking Company Pathology Report

Figure 1

Clinical Benchmarking Company Pathology Report: annual cycle for data collection and analysis.


Quality

There has been considerable debate at the annual user group meetings to agree on what data should be collected on quality issues within pathology. Although in the initial years there was great enthusiasm to collect data on quality, it was soon apparent that the more data that were collected the greater the workload for participating laboratories. Therefore, over the past two or three years the quality section has become more standardised and several key areas of quality have been included in the questionnaire.


Accreditation

One of the early debates when the CBC scheme started was whether laboratories with high productivity levels would be those providing a poorer quality of service. This has largely been answered by the fact that approximately 95% of departments participating in the study are accredited by Clinical Pathology Accreditation (UK) Ltd. However, this leaves a small number of unaccredited laboratories, and further research could determine whether the laboratories with particularly high productivity levels are those that have not been accredited.

It has been noted over the past two or three years that laboratories are increasingly applying for other forms of accreditation such as Investors in People, Health Quality Service Accreditation (previously called the Kings Fund Organisational Audit Programme), and accreditation by the Council for Professions Supplementary to Medicine.


Audit

The analysis of audit activities shows that audit was undertaken at laboratory rather than departmental level. Approximately one third of laboratories in the 1999 report declared that laboratory wide audit meetings were held regularly, with meetings occurring on average every six weeks.

Quality standards with purchasers

Sixty per cent of laboratories now have some quality standards written into contracts with purchasing authorities and general practitioners. These quality standards often relate to participation in external quality assurance, turnaround times, and accreditation. Less often, laboratories also have standards for participation in audit, the availability of telephone advice from clinical staff, and the use of electronic data interchange.

Quality control

The participation of departments in external quality assurance schemes is also documented. It is an increasing requirement for laboratories to declare this participation as part of their clinical governance returns within their trust. Further analysis of expenditure in biochemistry and haematology shows that expenditure on external quality assurance seems to be fairly constant across all clusters. The consequence for smaller hospitals is that expenditure on external quality assurance therefore forms a larger percentage of total expenditure. The amount that departments spend on internal quality control is more variable and does not seem to be related to the volume of work.

Complaints from general practitioners

Ninety per cent of laboratories have a named person at laboratory level who is responsible for dealing with complaints from general practitioners. Of these, half recorded such complaints at laboratory level and the other half at departmental level.


Test to request ratios

An important part of the CBC study has been to standardise the definition of tests and requests. In 1996, all participants were asked to document their own definition of what constitutes a test and a request. Following feedback at the annual feedback/development meeting it was clear that participants wanted a standard definition of tests and requests. This has led to uniformity in data collection and has facilitated comparisons between departments. These definitions have been in use for each subsequent year and have been reviewed annually at the feedback/development meeting. Departments unable to conform to these definitions have been asked to give the numbers of tests and requests nearest to the set definition and to explain the variation between the counting methods used in the laboratory and those defined by CBC. Since 1998/1999 there has been an important development in report production, whereby departments unable to count using the standard definition of tests and requests can be identified on the benchmarking charts by a code number without breaking confidentiality. This is particularly important when studying outliers on the test to request ratio charts. This process has been applied to all specialties within pathology, except for histopathology and cytology, where only definitions of requests have been developed (specimens, slides, and blocks make up the test equivalent units). Continuity in data collection is central to the value of the benchmarking process because it makes year on year analysis of departments participating in consecutive years possible.

Within biochemistry other developments have occurred within the definition of tests and requests. In 1997, profiles of tests (for example, liver function tests, urea, and electrolytes) were introduced. For histopathology and cytology, as described above, the key units of measurement are blocks and slides rather than tests. Through discussion at the annual feedback/development meeting this has been refined to break down categories of slides into routine, levels cut, specials, wet preparations, unstained, etc. Unstained slides are not used in the slide to request ratio but are used in measuring medical laboratory scientific officer (MLSO) workload. This method has been adopted as a standard and is now in its third year. The greatest variation in definition of tests has occurred within microbiology. It has taken a great deal of effort on behalf of the expert panel and the annual feedback/development meeting to establish a consistent means of counting workload in terms of tests. Another development that has occurred since 1998 is the more detailed recognition of workload in relation to blood products and components that is now included in the haematology report. Following on from this detailed work of standardising tests and requests it is now possible to undertake some analysis of test to request ratios.

Requests are the demand placed on the laboratory by the users and the laboratory responds with the production of results from a number of tests.3 Therefore, the test to request ratio may be used to assess the demand/supply equation. For example, the user may request a diagnostic investigation, such as liver function test, and this is the demand placed on the laboratory. The laboratory then responds to this demand by supplying the results of several tests as part of the liver function test profile. Any variation in practice can be analysed by looking at the test to request ratios between different laboratories. The variations in test to request ratios (fig 2) that are seen in each of the main disciplines within pathology may be the result of several factors. First, it is possible that the variation results from different methods in defining and counting tests and requests by participating laboratories. However, because most departments now use a standard definition of tests and requests it would be reasonable to look at other areas to explain differences in test to request ratios. Another reason for variation in test to request ratios is the different case mix in each hospital. Again, this can be overcome to a large extent by analysing on a like for like basis using the cluster analysis. The variation in test to request ratios could result from the variation in clinical practice between laboratories. Therefore, what evidence do we have that the variations in test to request ratios are the result of variation in clinical practice?

Figure 2

Test to request ratios for (A) biochemistry, (B) haematology, and (C) microbiology. Each point represents a single department.

The first example of variation in clinical practice is the preparation of blood films. Within the haematology report, the percentage of blood films as a proportion of full blood counts varies between 5% and 30%, irrespective of the category of hospital. The criteria that a department uses for preparing blood films have not yet been studied in this report. Variations in practice have also been noted in the biochemistry report. For example, when laboratories were asked to declare the number of tests performed on a request for a urea and electrolyte profile, the responses varied from three to seven tests for each profile; for a liver function profile, the responses varied from three to eight tests. A similar analysis in histopathology has shown that the number of slides prepared for each request varies fourfold, and there is a correlation between the number of slides prepared for each request and the cost of each request.3

Therefore, there does seem to be some evidence that the variation in the test to request ratio is, at least in part, explained by a variation in clinical practice in the laboratory. Further work is needed to see whether those laboratories that have a high test to request ratio as a result of the number of tests generated in response to a request provide better patient care in terms of improved patient outcome than those laboratories with a lower test request ratio. Clearly, there are important economic consequences in view of the high volume of laboratory tests in having high test to request ratios, particularly if they have no impact on patient care. In addition, those laboratories that have high test to request ratios might have high productivity, but the economic consequences of any unnecessary testing will have an adverse effect on the total expenditure of that department. Therefore, it is essential that when assessing the cost effectiveness of a laboratory several areas of the pathology report must be analysed together and not in isolation.


Productivity

Labour productivity can be analysed in several ways; for the purpose of this paper two areas will be considered. The first is the productivity of medical consultants. Each laboratory is asked to declare the number of sessions that consultants work within the laboratory. This is to overcome the difficulty of correctly allocating sessions for those consultants who have a high clinical workload outside the pathology directorate. This is particularly true of haematologists, but it is increasingly a factor when analysing productivity in biochemistry and microbiology. Figure 3 shows an analysis of the productivity of medical consultants in haematology. As can be seen, seven haematology departments have a workload of over 400 000 requests/year/whole time equivalent consultant haematologist. This translates into 2381 reported requests each working day for every whole time equivalent consultant. These figures are at the extreme end of this workload analysis and seem to indicate that it would not be possible to provide an appropriate clinical input into that service. Similar analyses of productivity have been undertaken for microbiology, histopathology, and biochemistry. It should be possible from this information to derive workload norms for the other specialties, similar to those produced by the Royal College of Pathologists for histopathology.4 If a clinical input is to be maintained in each of the subspecialties in pathology, an appropriate number of clinical staff must clearly be provided at departmental level.
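The arithmetic behind these figures can be sketched as follows. Note that 400 000 requests/year corresponds to 2381 requests per working day only if roughly 168 working days per year are assumed; that assumption is ours, chosen to reproduce the figure quoted above, and is not stated in the report.

```python
# Consultant productivity: annual requests per whole time
# equivalent (WTE) consultant, and the implied daily load.
# The 168 working days/year is an assumption made here to
# reproduce the 2381 requests/day quoted in the text.

def requests_per_wte(annual_requests: int, wte: float) -> float:
    return annual_requests / wte

def daily_load(per_wte_per_year: float, working_days: int = 168) -> float:
    return per_wte_per_year / working_days

per_year = requests_per_wte(annual_requests=400_000, wte=1.0)
print(round(daily_load(per_year)))  # 2381
```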

Figure 3

Medical productivity in haematology: requests for each whole time equivalent medical consultant. Each point represents a single department.

Although the CBC report focuses primarily on consultants' workload in relation to the number of tests and requests performed, other areas of work are also included in the analysis. For example, the frequency of the on call commitments of haematologists has been analysed. Out of 95 departments, 32 declared that their consultants were working a one in two rota, and a further 22 trusts indicated that their haematologists were on a one in three rota. Again, one would have to question this on call intensity, particularly if prospective cover for annual leave was also part of the rota. Another example of non-test related productivity has been studied in histopathology, where the number of necropsies performed for each whole time equivalent histopathologist has been analysed. In microbiology, an analysis of the workload in relation to infection control has begun over the past year; the provisional analysis indicates that a median of three sessions for each medical whole time equivalent is allocated to this work. However, it is recognised that there are areas of the medical consultant's workload that are not assessed in the CBC report, including clinical liaison with both general practitioners and medical staff within the hospital.

Productivity is also analysed for non-medical staff and fig 4 shows an analysis of requests for each whole time equivalent MLSO and requests for each whole time equivalent MLSO/medical laboratory assistant (MLA) in microbiology. Clinical scientists have been excluded from this analysis. As can be seen from this graph, there are wide variations in productivity. This is less so when MLAs are included in the analysis. The aim of this analysis is to encourage those laboratories that are extreme outliers to reflect on why their laboratory is in that position.

Figure 4

Non-medical productivity in microbiology: requests for each non-medical/non-clinical scientist whole time equivalent—that is, for each whole time equivalent medical laboratory scientific officer/medical laboratory assistant (open bars) and for each whole time equivalent medical laboratory scientific officer (closed diamonds). Each bar and point represents a department.


Skill mix

MLAs were introduced into pathology in 1989 after the publication of a new salary and grade structure for laboratory staff.5 One of the aims of introducing this new grade was to change the skill mix in the laboratory so that routine tasks previously performed by trained MLSOs could be performed by MLAs. It was assumed that the change in skill mix would reduce overall costs within pathology. Figure 5 shows the percentage of MLAs compared with MLSOs for each of the major pathology disciplines. As can be seen, there is a similar range of percentages of MLAs irrespective of cluster or discipline within pathology. For each discipline in pathology, a further analysis has been undertaken to look at the correlation between the percentage of MLAs and the cost for each test and for each request. There was no correlation in any of the four disciplines. Therefore, increasing the proportion of MLAs in a department does not reduce staffing costs. Further work has been undertaken to try to explain this observation. As part of the questionnaire the pathology panel has asked participating laboratories to document the roles of the MLA and the MLSO in routine working. There now seems to be a large overlap between the roles of these two groups of staff. Therefore, it seems likely that as MLSOs have been replaced by MLAs, staffing levels have effectively been increased by using the full budget for MLSOs to employ more than one whole time equivalent MLA.
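The correlation analysis described above (percentage of MLAs against cost per test) can be sketched with a Pearson correlation coefficient computed from first principles. The data below are invented purely to show the calculation and are not taken from the report.

```python
# Pearson correlation between percentage of MLAs and cost per
# test, computed from first principles. Data are hypothetical.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pct_mla = [10, 20, 30, 40, 50]                   # % MLAs per department
cost_per_test = [1.10, 0.95, 1.20, 1.05, 1.15]   # hypothetical costs

r = pearson(pct_mla, cost_per_test)
print(round(r, 2))  # 0.33: a weak correlation for these invented data
```

An r close to zero across departments, as the report found, is what supports the conclusion that a higher proportion of MLAs does not reduce cost per test.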

Figure 5

Percentage of medical laboratory assistants to medical laboratory scientific officers for each of the main disciplines within pathology: (A) biochemistry, (B) haematology, (C) microbiology, and (D) histology. Each bar represents a department.


Working hours

The analysis of working hours shows that histopathology and microbiology have largely retained the traditional working hours of 09.00 to 17.00. Within biochemistry and haematology, however, there has been a great increase in the number of laboratories working a partial or full shift system that provides extended opening hours for normal working. These changes are shown in fig 6 and have occurred irrespective of hospital size. Laboratories wishing to change their working hours can clearly learn from best practice elsewhere. As part of the CBC system, each laboratory is asked to declare whether it is willing to share a contact name and telephone number with other departments in the same cluster group. This allows participant meetings to be held so that best practice can be shared.

Figure 6

Total weekly normal working hours in categories 1 (open bars: the department is open for its full repertoire of services with staff working their weekly contractual hours) and 2 (closed bars: the department is open for a restricted service with fewer staff but who are working their weekly contractual hours). (A) Biochemistry, (B) haematology.


Q-Probes

Q-Probes are part of the College of American Pathologists' (CAP) programme of studies in quality assurance. The data collection process for each Q-Probe is similar to that described for CBC and is summarised in fig 7. For each study, laboratories are asked to collect data over a specified length of time. These data are then submitted to the Q-Probes office at CAP, which performs the analysis and prepares both an individual laboratory report and a summary of the whole study. This summary is normally published in the Archives of Pathology and Laboratory Medicine. Each laboratory's performance assessment is based on benchmarks provided through external peer comparisons, the external peers being hospitals of equivalent size and workload. Linked to Q-Probes, CAP has now developed Q-Tracks, which aim to introduce continuous quality improvement by providing laboratories with regular reports on their performance. The main difference between the two is that Q-Probes are, on the whole, one off audits that allow laboratories to compare their performance with other laboratories at a particular point in time, whereas Q-Tracks depend on the laboratory submitting data on a monthly or quarterly basis, followed by the production of monthly or quarterly reports by CAP with trend analysis. CAP also organises a benchmarking scheme similar to the CBC Pathology Report, the Laboratory Management Index Program.

Figure 7

Q-Probes information cycle.

Until recently, there were few laboratories in the UK participating in the Q-Probes programme. There is no equivalent of the Q-Probes programme in the UK. To assess the usefulness of this programme for laboratories in the UK a grant was obtained by the Royal College of Pathologists from the Department of Health to pilot the use of Q-Probes in the UK.6 This pilot study was based on previous work undertaken by the Leicestershire Pathology Service that had first enrolled in the Q-Probes programme in 1995.7

Each annual Q-Probe programme is organised into clinical pathology studies and anatomic pathology studies. Table 3 shows the Q-Probes that were offered in 1999. One of the reasons for the Royal College of Pathologists obtaining a grant was to allow for a larger number of laboratories from the UK to participate. The CAP Q-Probe organisation will provide nation specific subsets of data when seven or more laboratories participate. The laboratory at Bishop Auckland has now participated in the Q-Probe scheme over the past four years.

Table 3

Q-Probes offered in 1999

For each Q-Probe study, laboratories can act as a pilot site to resolve any problems with the data collection proforma. A pack is then sent to each of the participating laboratories. For example, the routine outpatient test turnaround time Q-Probe was aimed at comparing turnaround times for full blood count, biochemical profile, and thyroid stimulating hormone. The guidance notes detail clear inclusion criteria for the collection of the data, with detailed guidance about which samples should be included to ensure that representative samples are taken throughout the day. Data were also collected about where the sample was taken (for example, outpatient department, phlebotomy room, a room in the laboratory, etc), how the test was ordered, and the method of delivering the sample to the laboratory. For this study, data were collected over a four week period and a total of 622 laboratories submitted data. Figure 8 summarises the results from the laboratory at Bishop Auckland compared with equivalent participating laboratories. For this Q-Probe, sufficient UK laboratories participated for an individual UK report to be prepared (fig 9). The results show the intralaboratory 90% completion turnaround times, defined as the turnaround time achieved for 90% of tests from that laboratory. As can be seen, in the overall report the performance at Bishop Auckland showed a fairly average turnaround time compared with other laboratories, which were based primarily in the USA. This is in contrast to the UK individual report, where the turnaround times at Bishop Auckland were the fastest in each of the areas studied. This highlights one potential difficulty with Q-Probes: clinical practice can differ in the USA. In particular, the organisation of outpatient clinics in the USA can mean that a much faster turnaround time is required.
However, this difference in practice between the UK and the USA is not always the case, and another Q-Probe studied the accuracy of requests for outpatient tests (fig 10). Clearly, there should be no difference in practice between countries in this Q-Probe.
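The "90% completion turnaround time" used in these reports can be expressed as an empirical percentile. The sketch below illustrates the idea; the sample turnaround times are invented for illustration.

```python
# 90% completion turnaround time: the smallest time within which
# 90% of results were reported. Sample TATs (minutes) are invented.
from math import ceil

def completion_tat(tats, pct=0.90):
    """Smallest turnaround time t such that `pct` of results
    have a turnaround time <= t (inclusive empirical percentile)."""
    ordered = sorted(tats)
    k = ceil(pct * len(ordered)) - 1  # index of the pct-th value
    return ordered[max(k, 0)]

tats = [12, 15, 18, 20, 22, 25, 28, 30, 35, 60]
print(completion_tat(tats))  # 35: nine of ten results back within 35 min
```

Note how the one slow outlier (60 minutes) does not affect the 90% figure, which is why a percentile is a more robust benchmark than the maximum or the mean.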

Figure 8

Q-Probes 1997: routine outpatient test turnaround time (TAT), individual laboratory report. The closed diamond shape indicates the results from the laboratory at Bishop Auckland.

Figure 9

Q-Probes 1997: routine outpatient test turnaround time (TAT), UK report. The closed diamond shape indicates the results from the laboratory at Bishop Auckland.

Figure 10

Q-Probes 1998: outpatient test order accuracy, individual laboratory report. The closed diamond shape indicates the results from the laboratory at Bishop Auckland.

The final part of each Q-Probe is an analysis of the factors that can lead to improved performance. For the routine outpatient turnaround time study, the factors found to produce longer turnaround times included the use of hand written test requests rather than computerised ordering, and transport of the samples to an off site laboratory, which is clearly an important issue when laboratory services are being reviewed. Similarly, for the Q-Probe on the accuracy of test ordering, the factors associated with a higher overall error rate included hand written requests or requests communicated to the laboratory by telephone, a lack of regular auditing of the accuracy of data entry into the laboratory computer, and a higher proportion of occupied beds (a busier hospital).


Conclusions

It is clear that the government intends to improve both the quality and cost effectiveness of services within the NHS. Benchmarking is one way in which trusts in general, and laboratories in particular, can demonstrate the cost effectiveness and quality of their services. However, it is important to emphasise that benchmarking does not necessarily give the right answer. For example, the laboratories with the highest productivity levels might not provide the ideal benchmark for all laboratories to aim at, because there may be concerns about the quality of service they can provide. An example has been given above regarding the difficulty of providing adequate clinical input to a haematology service with a very large workload. Nevertheless, the benchmarking results in these areas give a laboratory the opportunity to reflect on how it compares with other laboratories of similar size and, if it is an outlier, to understand why. Conversely, a laboratory with low productivity levels may be providing an appropriate onsite service, and this may be an adequate explanation for its low productivity.

However, benchmarking might lead to questions being asked that could indicate what the right answer is. For example, variation in clinical practice may be identified through benchmarking, and this may lead to agreement on what best practice should be. Furthermore, using the data derived from the CBC report it should be possible to derive workload norms for each specialty in pathology to ensure that an appropriate level of clinical input into the service can be achieved at departmental level. Currently, the CBC pathology scheme is the only programme available at a national level in the UK. We have reviewed the literature and attempted to identify other benchmarking schemes of a similar size. Although there have been smaller studies reporting variation in clinical practice in the laboratory, only studies large enough to ensure that their results are generally applicable should be used to determine best practice. The only available study that fits this category is the Audit Commission's analysis of pathology services.8,9 Although this was a national study producing a large amount of data, its value was limited because it was a one off report that has not been repeated. This contrasts with the CBC study, which has an annual cycle to ensure that participating laboratories are using up to date and comparative data. The scheme is now in its fifth year and the methods of data collection and analysis have been refined and validated over this time. Although the CBC scheme does highlight the cost effectiveness of services, it also raises questions regarding variation in clinical practice. This has led to attempts to identify evidence to support particular strategies in laboratory diagnostic testing. It remains to be seen whether this can reduce variations in clinical practice in the laboratories that participate.

In contrast to CBC, the Q-Probes scheme studies only the quality of laboratory services. At present, the scheme is somewhat limited by the small number of UK laboratories that have participated. However, it is to be hoped that, following the successful pilot study undertaken by the Royal College of Pathologists, a national initiative can be undertaken so that UK laboratories can participate in a coordinated way. The only alternative would be to establish a similar type of scheme in the UK, but a considerable infrastructure would be needed to support such a development. One possible option would be for the UK National External Quality Assessment Scheme (UK NEQAS) to take on some of the work undertaken by Q-Probes.

In summary, as a result of government policy benchmarking is here to stay. The recent round of NHS reforms has increased accountability and pathologists will need to demonstrate that they are providing a cost effective and high quality service. Benchmarking schemes described in this article are one way in which this objective can be achieved.


We are grateful to the Clinical Benchmarking Company and Professor R Dyson for allowing us to reproduce data from the biochemistry, haematology, histopathology, and microbiology reports.

Contact details

Clinical Benchmarking Company, 25 Christopher Street, London EC2A 2BS, UK. Tel: 020 7422 0220. Q-Probes, College of American Pathologists.

Conflict of interest

The authors are members of the Clinical Benchmarking Company pathology panel.