
Managing demand for laboratory tests: a laboratory toolkit
  1. Anthony A Fryer1,
  2. W Stuart A Smellie2
  1. 1Department of Clinical Biochemistry, Keele University School of Medicine, University Hospital of North Staffordshire NHS Trust, Stoke-on-Trent, Staffordshire, UK
  2. 2Department of Biochemistry, Bishop Auckland General Hospital, Bishop Auckland, UK
  1. Correspondence to Dr Tony Fryer, Department of Clinical Biochemistry, Keele University Institute of Science and Technology in Medicine, University Hospital of North Staffordshire NHS Trust, Hartshill, Stoke-on-Trent, ST4 7PX, UK; anthony.fryer@uhns.nhs.uk

Abstract

Healthcare budgets worldwide are facing increasing pressure to reduce costs and improve efficiency, while maintaining quality. Laboratory testing has not escaped this pressure, particularly since pathology investigations cost the National Health Service £2.5 billion per year. Indeed, the Carter Review, a UK Department of Health-commissioned review of pathology services in England, estimated that 20% of this could be saved by improving pathology services, despite an average annual increase of 8%–10% in workload. One area of increasing importance is managing the demand for pathology tests and reducing inappropriate requesting. The Carter Review estimated that 25% of pathology tests were unnecessary, representing a huge potential waste. Certainly, the large variability in levels of requesting between general practitioners suggests that inappropriate requesting is widespread. Understanding the causes of this variation and implementing measures to reduce inappropriate requesting would have major implications for patients and healthcare resources alike. This article reviews the approaches to demand management. Specifically, it aims to (a) define demand management and inappropriate requesting, (b) assess the drivers for demand management, (c) examine the various approaches used, illustrating the potential of electronic requesting, and (d) provide a wider context. It will cover issues such as educational approaches, information technology opportunities and challenges, vetting, duplicate request identification and management, the role of key performance indicators, profile composition and assessment of the downstream impact of inappropriate requesting. Currently, many laboratories are exploring demand management using a plethora of disparate approaches. Hence, this review seeks to provide a ‘toolkit’ with a view to allowing laboratories to develop a standardised demand management strategy.

  • Education
  • Evidence Based Pathology
  • Health Services Research
  • Information Technology
  • Laboratory Tests


Introduction

Like other sectors of the health economy, laboratory medicine is under increasing pressure to remove inefficiencies and reduce costs, while maintaining or indeed improving standards. In the UK, this is reflected in a Department of Health drive to make £500 million in savings in laboratory medicine through a package of measures, including large-scale laboratory reorganisation.1 Attention has focused on laboratory medicine as a potential source of savings, presumably because its costs are perceived as being easily identifiable and quantifiable, despite the fact that expenditure on this area accounts for only 3%–4% of the UK national health budget. However, in an attempt to meet targets such as those outlined in the Carter Report,1 laboratories are increasingly looking at demand management as a means of reducing the cost of unnecessary pathology investigations.2–7 Estimates of inappropriate requesting vary greatly between studies,8, 9 though the Carter Report gave an overall estimate of around 25%.1 This review examines the drivers for demand management, investigates how we define and detect an inappropriate request, and provides tools to limit such requests. It will also discuss the wider impact of implementation of demand management approaches. It provides a series of specific recommendations with a view to helping the reader develop a wide-ranging demand management strategy.

What is demand management?

In addressing these aspects, we must first define what we mean by ‘demand management’. It is certainly not a term restricted to laboratory medicine, nor is the concept new; it also applies to other areas outside laboratory medicine, such as referral rates and Accident and Emergency (A&E) attendances.10–12 In laboratory medicine circles, it is often confused with the term ‘demand control’. Demand control refers to the use of approaches to reduce the volume of requests, while demand management focuses on ensuring appropriate requesting. Hence, the latter has an inbuilt quality aspect and may result in increased, as well as decreased, testing (ie, it aims to reduce overordering, underordering and misordering of tests).7

Why is it necessary?

The reasons why service users request tests ‘inappropriately’ are manifold and include laboratory as well as patient, requestor and systemic factors. For instance, laboratories may perform tests that were not requested (such as by reflex testing), provide poor turnaround of test results, fail to review their repertoire, collect incorrect samples (where laboratories provide phlebotomy services), or indeed give the impression that testing is easy. From a patient perspective, some patients are unable (or unwilling) to attend for phlebotomy due to factors such as access to such services at suitable times/places, needle-phobia or lack of awareness or understanding of their importance. Similarly, healthcare professionals may inappropriately request tests for a wide range of non-evidence-based reasons, including fear of litigation (risk avoidance), lack of experience, uncertainty, lack of awareness of guidelines or of the cost of investigations, protocol-based requesting, patient or peer/supervisor pressure, or lack of awareness of recommended repeat testing intervals.13 Systemic reasons include the inability to access previous results, limitations of laboratory and/or hospital/surgery information technology (IT) systems, and changes to contracts between requestors and the laboratory.2, 3, 14–17 Consequently, a robust demand management strategy must address each of these aspects.

Drivers for demand management

The drivers for demand management are addressed in more detail elsewhere.2, 6 However, one aspect is key: given the above definition, demand management should go beyond the national and local drive to reduce pathology costs.1, 2 If we are to improve standards by ensuring more appropriate testing, surely this will impact on the wider patient pathway, even if it does result in more testing. Indeed, this should be implicit in the evidence base underpinning guidance on testing. Thus, improvements in appropriateness of requesting should be reflected in (a) savings in the wider health economy, (b) improved clinical outcomes, (c) better patient quality of life (from reduction in unnecessary phlebotomy episodes and improved clinical outcomes) and (d) wider societal benefits such as fewer lost working days.18 However, engaging commissioners in accepting the benefits in these latter areas will require a clearer evidence base. Hence, it is incumbent on all clinical laboratories to develop a broad-based demand management strategy with a view to providing this evidence. It is hoped that this review will facilitate this process.

What is an inappropriate request?

The first stage in implementing demand management strategies is to categorise, define and quantify the ‘inappropriate request’: a request (implying what is ordered by the requestor) that is made outside some form of agreed guidance (including those requested too late). This definition should be distinct from (a) an ‘inappropriate test’: a test (implying what is performed by the laboratory) that may also result from a laboratory error where the incorrect test is performed on a correct request, and (b) an ‘unnecessary request’: a term that excludes those tests that may be inappropriately late compared with an agreed testing frequency.

As noted above, the definition of ‘inappropriate’ implies reference to some agreed guidance, and this itself can create challenges. Indeed, in some cases, there is marked variability in when some tests should be used, both among laboratories and between requestors, even in identical clinical scenarios.19 Thus, the first step in developing a demand management strategy is to agree appropriate use criteria.

Recommendation 1 : Agree criteria with stakeholders regarding appropriate test utilisation and definitions for inappropriate use.

In its broadest sense, an inappropriate request is one that should not be processed, generally because it is requested in the wrong patient, at the wrong time, in the wrong way, or is for the wrong test.2 Table 1 lists some examples of each of these aspects. While some such examples are clear-cut, others may need reference to published (and hopefully evidence-based) guidance, such as that from the National Institute for Health and Clinical Excellence (NICE) on retest intervals for glycosylated haemoglobin (HbA1c).20, 21 Others may be agreed locally with clinical colleagues, based on laboratory and/or clinical consensus (which may include a degree of pragmatism), ideally underpinned by evidence such as literature reviews, information on analyte half-life, reference change values and so on. However, it should be stated that, although guidance may exist in some of these cases, it is frequently spread among a range of literature sources and often directed at laboratory specialists rather than test requestors.22 In many of the more common situations, the laboratory does not have sufficient clinical information to determine whether a diagnostic test is or is not appropriate. However, whatever the basis for the definitions, it is critical that they are agreed locally with service users.

Table 1

Examples of inappropriate requesting

Recommendation 2 : Devise a strategy for reviewing new guidance as it is published. Consider using a short proforma to identify impacts on the laboratory.

The key, then, is to determine how we might detect such tests and establish their prevalence, to assess their suitability as targets for demand management. In this regard, availability of local workload trends is extremely useful for identifying potential targets, but is by no means the only starting point. Other sources of such information are listed in table 2, along with the possible causes of the inappropriate request. Assessment of variability in practice, either between requestors within a local health economy23–25 or between laboratories26 (or even between countries,27 though differences in healthcare structures need to be taken into account), may provide valuable data on potential inappropriate requesting. Certainly, the UK National Pathology Benchmarking Scheme is a useful place for identifying requesting patterns relative to other laboratories,28 and also for assessment of consensus on the composition of common profiles (test panels).

Table 2

Tools for identifying targets for demand management
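
To illustrate how data from sources such as those in table 2 might be screened in practice, the short sketch below ranks tests by the variability in requesting rates across general practices. It is illustrative only: the practice names, counts, list sizes and the use of the coefficient of variation as the metric are assumptions rather than part of any published scheme.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Illustrative workload extract: (requesting practice, test code, annual requests, registered list size)
workload = [
    ("Practice A", "HBA1C", 1450, 9800),
    ("Practice B", "HBA1C", 760, 10200),
    ("Practice C", "HBA1C", 2100, 9500),
    ("Practice A", "TFT", 900, 9800),
    ("Practice B", "TFT", 880, 10200),
    ("Practice C", "TFT", 910, 9500),
]

def requesting_rates(rows):
    """Return {test code: [requests per 1000 registered patients, one value per practice]}."""
    rates = defaultdict(list)
    for practice, test, count, list_size in rows:
        rates[test].append(1000 * count / list_size)
    return rates

def rank_by_variability(rates):
    """Rank tests by the coefficient of variation of requesting rate across practices.
    High variability flags candidate targets for audit; it is not proof of inappropriate requesting."""
    ranked = []
    for test, values in rates.items():
        cv = pstdev(values) / mean(values) if mean(values) else 0.0
        ranked.append((test, round(cv, 2)))
    return sorted(ranked, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for test, cv in rank_by_variability(requesting_rates(workload)):
        print(f"{test}: coefficient of variation = {cv}")
```

Tests at the top of such a ranking would then be reviewed against the possible causes in table 2 before deciding whether they are genuine targets for demand management.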

Recommendation 3 : Develop a consistent strategy for reviewing workload figures.

Obtaining prevalence data can be relatively straightforward when laboratory workload figures are available and easily translated into relevant parameters. Others will require audit data, as illustrated by the work of McDonnell et al 29 and Walker and Crook30 on the appropriateness of tumour marker requests.

Recommendation 4 : Develop an audit schedule that includes assessment of inappropriate requesting.

Demand management tools

The starting point: within-laboratory management of requests

This section examines the simpler options available for the laboratory to manage demand and will look at repertoire review, assessment of duplicate request frequency, vetting requests and appending information to reports.

Repertoire review

The repertoire for the typical clinical laboratory should, theoretically, be reasonably standardised. However, local demographic factors, provision of specialist services (by the laboratory as a reference laboratory, and/or to the laboratory in terms of local clinical specialties) and workload/expertise for more specialist tests make this unrealistic. Consequently, each laboratory will need to review its repertoire constantly to respond to local needs, national changes and new developments.31 For instance, the relatively recent implementation of the bowel cancer screening programme, along with accumulating evidence questioning the benefit of faecal occult blood testing in symptomatic patients,32 has resulted in many laboratories withdrawing this test. Similarly, older cardiac enzyme assays have been progressively replaced by troponin tests, with newer candidates already on the horizon.33

Repertoire review should also cover repatriation of tests referred to other laboratories (or vice versa) in response to local patient population changes, capacity/cost review, network formation and so on. While this will not necessarily change the volume of testing, it may improve turnaround time and/or value for money for the local population, thereby aligning itself with the drive of demand management to reduce inconvenience for patients.34

Recommendation 5 : Review repertoire on a systematic basis, perhaps once every 6 months.

Vetting and within-laboratory alteration of requests

More centrally allied to demand management is the option to vet high cost/low volume tests, often those referred to specialist laboratories, on an individual basis. Using vetting, we have successfully reduced the number and cost of urine toxicology screens from 30 to <5 requests accepted per month (figure 1). Furthermore, through dialogue with the requestor, we discovered that most requests that were accepted required a ‘drugs of abuse’ profile (at a tenth of the cost) rather than a full toxicology screen, resulting in a saving of around £30 000 a year (and requestors receiving the correct test profile). While there is relatively little published work on the cost or clinical effectiveness of this approach (though several laboratories in the UK have evaluated this locally, with some success), Crispin et al 35 have advocated this from a quality improvement perspective in transfusion services.

Figure 1

Change in number of tests sent for urine toxicology screening (including drugs of abuse screens) per month following implementation of request vetting (two phases: October 2005 and June 2006).
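
A vetting step of this kind can be supported by a simple hold-and-review rule in the laboratory information system. The sketch below is a minimal illustration, assuming a locally agreed ‘vetting list’ with indicative costs; the test codes, threshold and figures are placeholders rather than the actual rules used in the example above.

```python
from dataclasses import dataclass

# Referred tests held for duty-biochemist review (illustrative list; costs in £).
VETTED_TESTS = {"URINE_TOX_SCREEN": 300.0, "DRUGS_OF_ABUSE": 30.0}
COST_THRESHOLD = 100.0  # hold anything above this referred-test cost for individual vetting

@dataclass
class Request:
    test_code: str
    clinical_details: str

def triage(request: Request) -> str:
    """Return 'hold for vetting' or 'process' for a referred-test request."""
    cost = VETTED_TESTS.get(request.test_code)
    if cost is None:
        return "process"           # not on the vetting list
    if cost <= COST_THRESHOLD:
        return "process"           # cheap enough to release automatically
    return "hold for vetting"      # duty biochemist contacts the requestor before send-away

print(triage(Request("URINE_TOX_SCREEN", "?overdose, drug history unknown")))
print(triage(Request("DRUGS_OF_ABUSE", "monitoring on methadone programme")))
```

In the experience described above, the dialogue triggered by such a hold often converted a full toxicology screen into the much cheaper drugs-of-abuse profile.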

Recommendation 6 : Systematically review the top 20 most expensive tests (cost per test) to determine whether they would benefit from vetting.

Many laboratories alter requests internally, either by using ‘reflex testing’ protocols, or following clinical assessment of the results (‘reflective testing’).36 This is not a new concept. In 1988, Finn et al 37 showed that alteration of thyroid function test requests after they had been made, by either clerical staff implementing computer-based ordering menus or by laboratory staff utilising knowledge-based rules, significantly improved the appropriateness of test requesting. More recently, Srivastava et al 36 have used a combination of these approaches to improve the diagnosis of hypovitaminosis D, hypomagnesaemia, hypothyroidism, hyperthyroidism and haemochromatosis, a method used elsewhere to improve the clinical utility of laboratory services.38, 39
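
Reflex testing protocols of this type usually reduce to condition–action rules held in the laboratory IT system. The sketch below shows the general pattern only; the analytes, cut-offs and added tests are illustrative placeholders, not the rules used in the studies cited.

```python
# Each rule: (trigger analyte, predicate on its result, test to add automatically)
REFLEX_RULES = [
    ("TSH", lambda tsh: tsh > 10.0, "FT4"),                       # confirm suspected hypothyroidism
    ("CALCIUM_ADJ", lambda ca: ca < 2.10, "MAGNESIUM"),           # low calcium -> check magnesium
    ("FERRITIN", lambda ferr: ferr > 1000.0, "TRANSFERRIN_SAT"),  # ?iron overload
]

def reflex_additions(results: dict) -> list:
    """Given a dict of first-line results, return the extra tests to add under the rules."""
    added = []
    for analyte, predicate, add_test in REFLEX_RULES:
        value = results.get(analyte)
        if value is not None and predicate(value) and add_test not in results:
            added.append(add_test)
    return added

print(reflex_additions({"TSH": 14.2, "CALCIUM_ADJ": 2.35}))  # -> ['FT4']
```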

Recommendation 7 : Explore options to utilise IT to apply reflex testing. Examine the possible added clinical value of reflective testing. Evaluate the effectiveness of such systems to ensure improved test appropriateness.

Duplicate requests and retest intervals

Of increasing utility in clinical laboratories is the use of ‘minimum retest intervals’ to identify duplicate requests. This will be discussed further below in the section on electronic requesting, but is not limited to this approach. Many laboratory computer systems now allow the identification of a repeat request for specific tests within a user-defined time window. For instance, UK NICE guidance for Type I and Type II diabetes mellitus recommends HbA1c testing at 2–6-monthly intervals in patients with unstable diabetes, with a measurement made at an interval of less than 3 months being used as an indicator of direction of change rather than as a new steady state.20, 21 In those with stable diabetic control on unchanging therapy, intervals of 6–12 months are recommended. Such data can be used to identify patients on whom repeat testing is requested within 2 months. We have recently examined the prevalence of such requests and showed that 21.3% of HbA1c repeat requests were received less than 2 months after the previous request.25

In addition to utilising published guidance to define recommended retest intervals, local agreements, analytical data (such as analyte half-life or reference change value) and consensus statements can be utilised.40 These data then allow duplicate requests to be automatically highlighted for laboratory review (either automatically rejected, or subject to individual vetting). Such intervals are either applied universally, or to selected requesters/locations. Table 3 shows some commonly used retest intervals from a recent survey by the UK West Midlands Association for Clinical Biochemistry Demand Management Forum. While this approach may appear straightforward, differences arise between primary and secondary care as routine tests performed in primary care can be simplistically divided into those used for monitoring or diagnosis, whereas, in the acute phase, the minimum retest interval is very dependent on the clinical state of the patient in addition to previous results. Nevertheless, this is a promising area and one which could certainly reduce the tendency to repeat tests at close intervals which may be inconsistent with the kinetics of the test in the disease in question.

Table 3

Common minimum retest intervals

While half of UK laboratories admit to using their laboratory computer system to identify such requests,41 and studies advocate this as an approach to demand management2, 8 and to estimation of its prevalence,25, 42, 43 there is a lack of published data on recommended minimum retest intervals or their effectiveness. An exception is the Guidelines & Audit Implementation Network (GAIN) Guidelines on the use of the Laboratory, published in 2008,44 a collaborative venture based in Northern Ireland. These guidelines provide recommended minimum retest intervals for a range of clinical laboratory tests spanning a number of laboratory medicine disciplines, though the underpinning evidence base specifically for the retest intervals is unclear. The Clinical Practice Section of the Association for Clinical Biochemistry is currently collating data from UK laboratories to obtain consensus data on retest intervals. At the time of writing, these data are being circulated for comment and are due to be published in late 2012. This consensus, while useful in clinical practice, highlights that further research on the evidence for such intervals, and their impact on clinical and other patient outcomes, is required.
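
In computational terms, a duplicate-request check is simply a comparison of the new request date with the date of the last accepted test against an agreed minimum retest interval. The sketch below illustrates the principle; the interval values are placeholders standing in for locally or nationally agreed figures (such as those in table 3 or the GAIN guidance) and are not recommendations.

```python
from datetime import date, timedelta

# Placeholder minimum retest intervals (days); substitute locally agreed values.
MIN_RETEST_INTERVAL = {"HBA1C": 61, "LIPIDS": 365, "TSH": 365}

def assess_repeat(test_code: str, last_tested: date, requested: date) -> str:
    """Classify a repeat request against the agreed minimum retest interval."""
    interval = MIN_RETEST_INTERVAL.get(test_code)
    if interval is None or last_tested is None:
        return "accept"
    if requested - last_tested < timedelta(days=interval):
        # Held for vetting or rejected with a comment giving the previous result and its date.
        return "flag as potential duplicate"
    return "accept"

print(assess_repeat("HBA1C", date(2012, 1, 10), date(2012, 2, 20)))  # within 61 days -> flagged
print(assess_repeat("HBA1C", date(2011, 10, 1), date(2012, 2, 20)))  # outside interval -> accept
```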

Recommendation 8 : Implement laboratory computer-based IT solutions to maximise the identification of duplicate requests within nationally agreed minimum retest intervals. Where possible, application of acceptance/rejection criteria and comments should be automated.

Prevention is better than cure: stopping inappropriate requests before they reach the laboratory

While laboratory-based interventions to curb inappropriate requests are important, they do not have a patient focus; they do not prevent unnecessary phlebotomy episodes, lost work days, car parking fees and so on; nor do they reduce waste of phlebotomy consumables, phlebotomist time, consultations and so on. As the ideal aim of demand management is to optimise appropriate requesting, preventative strategies, largely focused on education, should be the goal. Indeed, this should be central to the clinical laboratory's raison d’être.3 Such education may come in many forms—verbal, written, electronic, to name a few—and may range from simple memoranda and informal discussions to best-practice guidance development, master classes and interpretative comments on test reports. This section will look at the options and their potential usefulness.

Education

There is a wealth of literature on the effect of a wide range of educational interventions on laboratory test utilisation, encompassing both verbal and written initiatives. These may range from the general clinical liaison that is critical for the day-to-day operation of the clinical laboratory, simple memoranda and laboratory newsletters/bulletins, to formal educational material and/or training sessions, curriculum development and best-practice guidance.2, 6, 45, 46 However, the literature suggests that the effectiveness of such measures is variable.3, 8 Table 4 lists some examples of educational initiatives, though this is by no means exhaustive. While these suggest that most (but not all) are successful, publication bias might mean that the effectiveness of educational approaches is more variable than the literature indicates.

Table 4

Examples of educational interventions used in demand management initiatives

However, there is evidence of effectiveness in specific circumstances. For instance, we have previously shown that implementation of a locally agreed procedure, arising out of regular discussions with clinical teams, resulted in an 85% reduction in C-reactive protein requests from emergency admissions wards,58 though interestingly, this has been difficult to maintain following changes to the structure of the emergency portals in recent months. Wang et al 48 showed that development of guidelines on test-requesting by a multidisciplinary team in a coronary care unit resulted in a reduction in test utilisation for a range of parameters that was still evident a year after the intervention. However, while national guidance exists for many patient pathways, including advice on laboratory testing, its aim is not specifically to limit inappropriate testing.3 This, therefore, necessitates the development of locally agreed (and, therefore, potentially not standardised) guidelines.59 The effectiveness of such initiatives may thus be limited to specific local circumstances due to variation in local expertise, structures and even personalities. Their reproducibility across many laboratories is therefore difficult to ascertain from the literature, and formal meta-analyses are required.

It is also important not to neglect the ‘right process’ component of demand management education. Ensuring samples are collected in the correct manner is an essential facet of the laboratory's duty. The laboratory handbook is often used to address this issue by providing users with information on correct sample containers and volume, special conditions (eg, fasting, time of day, transport arrangements, etc) and correct phlebotomy procedure (to prevent haemolysis, ensure correct order of draw, etc). However, the number of incorrectly collected samples continues to be a major problem in many laboratories, suggesting that the laboratory handbook is more useful for laboratory staff and for satisfying regulatory authorities than for influencing practice. Jacobsz et al 60 showed that 1.5% of samples were rejected, only half of which were repeated correctly, and 5% of these had results with critical values. Hence, it is important that further work be carried out on exploring more effective means of reducing this form of waste.

As with any educational intervention, guidance on requesting also needs to be repeatedly reinforced and kept up to date.

The overall view of these initiatives suggests a number of key messages:

  1. Sustainability remains a challenge, though it can be achieved with regular reinforcement.52 It is, however, not an inconsiderable task given the range of scenarios where laboratory tests are required.3

  2. Successful schemes more commonly use a combination of approaches.6 ,61 Solomon et al 45 performed a literature review of educational approaches and highlighted that interventions based on ‘multiple behavioural factors’ had a higher rate of success.

  3. Joint initiatives from laboratories and service users (including patients) are more likely to lead to success, indicating that close laboratory–clinical liaison is critical (see also ref 3).

Recommendation 9 : Cultivate links with all key specialties (including primary care) to allow establishment of suitable educational forums and ensure educational interventions have the backing of clinical teams.

Recommendation 10 : Develop a local educational strategy for requesting, drawing on national guidance and literature-based evidence where possible. Establish an ongoing plan to review its relevance.

Recommendation 11 : Utilise training programmes to address correct sample collection, timing, transport and so on.

Request form design

The design of the request form has long been a potential target for demand management strategies.3, 8, 62–64 However, this has largely been superseded by the issues arising from the implementation of electronic test requesting, which is addressed in more detail later.

Profiles and test combinations

Allied to request form redesign is the area of the conventional ‘test profile’. Historically, these have been organ-based profiles such as liver, kidney and thyroid, which provide a set of tests that offer information about the state or functioning of the organ system. Not all of these are specific to the system concerned, and profiles vary considerably. The 2011 report from the National Pathology Benchmarking Service,41 which included approximately 50 UK laboratories, demonstrated that 12 different ‘liver function test’ profiles were used. A consensus report65 has recently been put together proposing to harmonise these different profiles, as the additional tests included in some are not only a source of extra cost but also a potential source of confusion for doctors moving between different hospitals. This is, however, potentially only the start of a process which could move more towards diagnosis-based testing rather than organ-specific profiles. Many patients presenting with non-specific symptoms, for instance, are investigated for a range of possible diagnoses which may span different organ systems. In addition, monitoring tests could be considerably streamlined—there is, for instance, no clinical need to measure all the tests currently included in a ‘liver function test’ profile if the aim is to detect a change in a liver transaminase in a patient taking a drug which carries a risk of hepatocellular toxicity. This potentially reduces the marginal cost of these tests significantly by removing clinically unnecessary analytes. While these costs are measured in pence per test for most routine tests, the numbers involved make the potential savings from redundant tests worthy of consideration, though, in some cases, the continued requirement for analyser maintenance, internal quality control, external quality assurance and so on needs to be taken into account when calculating savings. Winkelman66 showed that a 10% reduction in utilisation of a single high-volume, low-cost test released only 1.32% of overall costs.67 Hence, the real savings associated with reduced test volumes are often significantly less than initially envisaged.66, 68
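
The disproportion between volume reduction and cost reduction can be illustrated with a simple fixed-versus-marginal cost calculation. The figures below are purely illustrative and are not drawn from Winkelman's analysis; they simply show that cutting test volume releases only the consumable (marginal) element of the cost, while analyser leasing, maintenance, quality control and staffing remain largely unchanged.

```python
# Illustrative annual figures for one high-volume, low-cost test (all values assumed).
annual_requests = 500_000
marginal_cost_per_test = 0.15      # reagents/consumables released if a test is not done (£)
fixed_costs = 180_000.0            # analyser lease, maintenance, IQC/EQA, staffing (£, largely unchanged)
volume_reduction = 0.10            # a 10% fall in requests

total_cost = fixed_costs + annual_requests * marginal_cost_per_test
saving = volume_reduction * annual_requests * marginal_cost_per_test

print(f"Total annual cost: £{total_cost:,.0f}")
print(f"Saving from a 10% volume reduction: £{saving:,.0f} "
      f"({100 * saving / total_cost:.1f}% of total cost)")
```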

Doing this successfully, however, requires more refined requesting systems than can be offered by traditional paper-based request forms. Far more clinical testing scenarios exist, for instance, in primary care than could ever be listed on a paper request form, but it is not difficult to imagine how this could be approached by an electronic selection menu. This is a promising opportunity for electronic requesting systems to offer a default test selection which can be supplemented if necessary, rather than a larger profile being selected by default. As yet, very little has been published on this subject.

A more immediate opportunity exists in hospitals with acute admissions, where the list of common reasons for admission is shorter and is compatible with paper request forms. Again, very little has been published on this subject, although the concept of the admission profile was described 30 years ago.69 This, again, offers the opportunity to select the desired tests, ensuring first that important tests are not omitted and second, by offering a default set of investigations, that the likelihood of inappropriate tests being added is reduced. Locally, this has been a problem with requesting of D-dimers in situations in which venous thromboembolic disease is not suspected, but in which the D-dimers are raised for other reasons, producing clinically false positive results.

Recommendation 12 : Examine the possibility of standardising common profiles in line with national benchmarking data.

Recommendation 13 : Consider the potential of implementing symptom-specific profiles.

Test report-based education

While many studies have advocated various approaches to this in an attempt to prevent inappropriate requests, the simplest means within the control of the laboratory is to provide education on appropriate testing either at request (see section on electronic requesting below), or via the final report. For instance, provision of interpretative comments on test reports is not only welcomed by users but has been suggested to influence requesting behaviour and, indeed, patient outcome.70 ,71 Vasikaran72 stated ‘Interpretative commenting should go hand in hand with regular contact with clinicians to develop a dialogue about appropriate testing…’. We have used such reports to identify and reject duplicate requests for HbA1c,25 for instance, and also provide data on when the previous test was performed, its level, and on the laboratory minimum retest interval. Such high-volume tests are amenable to automated comments, while more specialist tests warrant individualised information on appropriateness.71 While, in this latter case, it is difficult to quantify its impact, it is hard to argue that it has no beneficial effect on future requesting behaviour. However, some studies suggest that appending educational messages to reports, while effective, affects requesting behaviour to only a moderate extent,16 unless they interrupt the requestor's routine.3 Furthermore, the variability in content and appropriateness of interpretative comments themselves has led to the development of a peer-assessed external quality assessment scheme in an attempt to improve the comments themselves.73
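
For high-volume tests, such comments can be generated automatically from the previous result and the agreed minimum retest interval. The sketch below is illustrative only; the wording, function name and the 61-day HbA1c interval are assumptions used to show the principle rather than the comment text used in any particular laboratory.

```python
from datetime import date

def duplicate_request_comment(test_name: str, previous_value: str,
                              previous_date: date, min_interval_days: int) -> str:
    """Compose an automated report comment for a request received within the
    minimum retest interval, giving the requestor the earlier result."""
    return (
        f"{test_name} not analysed: a result of {previous_value} was reported on "
        f"{previous_date:%d %b %Y}. The agreed minimum retest interval is "
        f"{min_interval_days} days. Please contact the laboratory if earlier "
        f"retesting is clinically indicated."
    )

print(duplicate_request_comment("HbA1c", "52 mmol/mol", date(2012, 1, 10), 61))
```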

Recommendation 14 : Develop a strategy to review all automated interpretative comments to include information on test appropriateness.

Feedback on test usage and costs

Various studies have been published showing the effects of feeding back either cost or volume information to users of laboratory services (table 4).6, 51, 74–76 The impact of feedback of test costs is likely to depend on the requesting physician's direct responsibility for laboratory expenditure. While it is possible that providing cost information would encourage physicians to concentrate on the quality of their investigations, it is unlikely that this information alone will be the key driver. As stated by Hoey et al,76 ‘A large proportion of tests are ordered because of clinically absolute reasons, which may be insensitive to price’. Hence, while generating comparative test volume or cost data (benchmarking) can be useful to highlight differences between requesting physicians and to identify areas of potentially high or low use, it is not, in itself, sufficient to effect change.77 Furthermore, the complexity and, hence, cost of implementing regular feedback systems may make the resultant benefits insufficiently large to justify them.78 A combination of educational initiatives with facilitating mechanisms has been shown to be one of the more effective ways of influencing demand.14, 45 Feeding back volume information is relatively straightforward in a primary care setting, in which overall case mix between general practices is relatively homogeneous and differences in testing activity are too great to reflect practice demographics.23 This is, however, less straightforward on a hospital basis, given the specialties and subspecialties of different consultants and the multitude of clinical staff that may rotate through departments. In primary care, a group of approximately 20 tests (or profiles) makes up the great majority of tests performed, most of which fall into the low-cost, high-volume category. In hospitals, however, the test mix is considerably larger, and test costs may differ by a factor of several hundred, so a more targeted approach is needed.
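
One simple form such comparative feedback can take is a table of requesting rates per 1000 registered patients set against the peer median. The sketch below is illustrative; the practice names, request counts and list sizes are invented, and a real report would cover the full basket of high-volume tests and a longer time series.

```python
from statistics import median

# Illustrative annual HbA1c workload: practice -> (requests, registered list size)
activity = {
    "Practice A": (1450, 9800),
    "Practice B": (760, 10200),
    "Practice C": (2100, 9500),
}

# Requests per 1000 registered patients, compared with the peer median to harness peer pressure.
rates = {p: 1000 * n / size for p, (n, size) in activity.items()}
peer_median = median(rates.values())

print(f"Peer median: {peer_median:.0f} requests per 1000 patients")
for practice, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    pct_diff = 100 * (rate - peer_median) / peer_median
    print(f"{practice}: {rate:.0f} per 1000 ({pct_diff:+.0f}% vs peer median)")
```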

Recommendation 15 : Consider providing feedback on cost and volume data (and where possible, inappropriate request data), particularly to primary care using comparative data (benchmarking) to utilise peer pressure. Consider primary and secondary care feedback on a longitudinal basis to evaluate trends.

Financial incentives and penalties

With the advent of almost universal savings targets for healthcare agencies, the incentive to save on laboratory costs is an inevitable consequence. However, this is not a new phenomenon. For many years, the US model that paid a fixed tariff to physicians based on ‘episodes of care’, or other finance-based contracts between service users and laboratories, gave rise to increased financial awareness, though the savings resulting from this were often seen in ‘big ticket’ cost elements, such as referral rates, rather than laboratory test requesting.6, 61 A wider review of financial control initiatives is provided elsewhere.6

In the UK, the introduction of the primary care Quality and Outcomes Framework (QOF) has provided general practitioners with financial incentives for performing certain tests in specific groups of patients. While this may potentially lead to increased testing, the argument is that it improves quality and equity of care. However, in terms of test appropriateness, there is sometimes a disconnect between other national guidance and QOF indicators. For instance, QOF guidance on testing frequency for HbA1c in patients with diabetes focuses on ensuring that patients have been tested within the previous 15 months,79 while NICE guidance suggests more frequent testing, especially in those with changing treatment or poor control.20, 21 Hence, as expected, when we examined the impact of QOF guidance on testing outside NICE guidance for HbA1c, no impact was observed.25 This has not been the case for other examples, such as urine albumin:creatinine ratio testing, which doubled following the introduction of the quality indicators from the Northern Ireland General Practice QOF for patients with diabetic nephropathy in 2004.80

Recommendation 16 : Consider aligning demand management initiatives to local/national financial incentives and/or penalties for requestors.

Specialty/staff grade limitations

A further way to prevent inappropriate requesting, particularly of the specialist tests, is to limit requesting to specific specialties and/or more senior staff. This approach is allied to the vetting approach described above, and while potentially effective, it is often difficult to police. Smith et al 81 implemented a scheme whereby out-of-hours requests were processed only following a discussion between the requesting consultant and consultant-level laboratory staff. This reduced out-of-hours requests by 40%. A similar approach by Morgan et al 82 reduced the number of out-of-hours calls to the laboratory by 80%, though evidencing the cost-effectiveness of these approaches, particularly given the increase in test volumes in the decades since this was trialled, may be a challenge. In our study on C-reactive protein requesting in the emergency portals, our approach included processing requests for tests only if a consultant's signature was supplied on the request form.58 Since the advent of electronic requesting, specialty- and staff grade-specific requesting is potentially more easily applied, and is therefore addressed in more detail below. However, it should be stated that while this approach can prevent tests being inappropriately requested, if it is not set up and policed carefully, it can result in the opposite effect.

Recommendation 17 : Review specialist tests to determine whether consultant-only and/or specialty-specific requesting would be appropriate.

Guideline and protocol development

National guidance (eg, NICE) provides some information on appropriate testing, though in some cases this is somewhat hidden among other management and treatment instructions in large documents, spread among a range of sources and/or aimed at laboratory specialists rather than requestors.22 Hence, it may have little impact on appropriateness of requesting.4 For instance, we have shown that guidance on testing frequency for HbA1c in diabetes, though present in NICE documents for both Type I and Type II diabetes, has had little impact on HbA1c undertesting or overtesting.25 Hence, local guidance, or targeted protocols, may be more effective, though this then potentially introduces variability in practice and possible conflicting messages. It is essential, therefore, that such local guidance is based on national data and involves all stakeholders in its development and implementation.48 Ideally, as illustrated by Wang et al 48 and van Wijk et al,64 guideline development will comprise a component of a wider educational strategy. Despite its potential, influencing testing via the development of clinical protocols3 requires long-term commitment and careful liaison between key stakeholders. This contrasts with laboratory-based protocols (eg, test repertoire, reflex testing procedures, rejection parameters, etc), which are much easier to control but do not prevent unnecessary phlebotomy episodes. However, there are some suggestions that the effectiveness of clinical guidelines in influencing clinical practice depends on the way in which the guidelines are implemented.83 Furthermore, protocol-driven requesting can, if not implemented and managed carefully, result in an increase rather than a decrease in inappropriate requesting.61, 84

Whatever approach to education is used, a critical factor in ensuring success is achieving a long-term, sustained effect so as to prevent regression to original patterns,14 including taking into account healthcare staff turnover.85 Without careful planning, monitoring and reinforcement, the general impression is that long-term effectiveness is somewhat limited.46 Many studies suggest short-term effects,45 though this may reflect some degree of publication bias. However, this opens the door for opportunities to explore the role of electronic requesting as a means to reinforce educational messages and sustain their effects.

Recommendation 18 : Integrate a systematic review of effectiveness into the demand management strategy. Focus on educational approaches that can be monitored and, if necessary, re-emphasised on a regular basis to inform new staff.

Recommendation 19 : Develop local best-practise guidance, allied to national data, on appropriate testing. Implement an ongoing review to ensure that requestors are aware of the guidance.

The right process: ensuring that the correct sample is collected

Collection of samples into the incorrect sample container represents one of the most widespread reasons for request rejection.60 Traditionally, correct sample collection was dependent on phlebotomist knowledge of sample requirements in the context of an ever-changing test repertoire, or worse still, doctors and nurses regularly reviewing laboratory handbooks on sample requirements (assuming that these themselves are kept up to date). This is challenging, particularly when complex combinations of tests are requested.

In order to ensure that the correct procedure for sample collection is followed (as opposed to the request itself being appropriate), local guidance is essential. Much of this is often provided in the form of a laboratory handbook, though raising awareness of its existence can sometimes be a challenge, particularly on an ongoing basis due to high levels of staff turnover and/or rotation. Laboratory-to-laboratory variation in such protocols may also be a potential cause of inappropriate sample collection, though standardisation in the UK is helped to some degree by the work of the Pathology Harmony group.86

In this aspect of reducing waste, electronic requesting offers useful opportunities. For instance, it may allow requests to automatically include details on the number and types of sample containers required. It also enables the risk of test rejection because of insufficient patient identifier information to be minimised.

Allied to this is the utility of electronic requesting to provide details of correct sampling procedure at the time of request. For instance, the software could be configured to specify that requests for digoxin should be taken 6 h postdose and at steady-state (and give an option to include dose information). In the case of xanthochromia requests, it could also specify order of draw, state that samples should be protected from sunlight during transport, and indicate that concomitant samples for serum total protein and bilirubin are also taken, as recommended in the guidelines.87
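
Within an electronic requesting system, these container and collection details can be derived directly from the tests selected. The mapping below is a simplified illustration; the tube types, test codes and instructions are placeholders that would in practice be drawn from the local laboratory handbook.

```python
# Placeholder mapping of test codes to container and collection requirements.
SAMPLE_REQUIREMENTS = {
    "U_E":     {"container": "SST (gold top)", "instructions": []},
    "DIGOXIN": {"container": "SST (gold top)",
                "instructions": ["Collect at least 6 h post-dose, at steady state",
                                 "Record dose and time of last dose"]},
    "XANTHO":  {"container": "Fluoride oxalate + plain tube",
                "instructions": ["Collect last in the order of draw",
                                 "Protect from light during transport",
                                 "Send concomitant serum total protein and bilirubin"]},
}

def collection_summary(test_codes):
    """Consolidate the containers and instructions needed for a set of requested tests."""
    containers, instructions = set(), []
    for code in test_codes:
        req = SAMPLE_REQUIREMENTS.get(code, {"container": "Contact laboratory", "instructions": []})
        containers.add(req["container"])
        instructions.extend(req["instructions"])
    return {"containers": sorted(containers), "instructions": instructions}

print(collection_summary(["U_E", "DIGOXIN"]))
```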

Recommendation 20 : Ensure that electronic requesting (where available) includes accurate information on sample requirements, patient identifiers, and criteria for sampling, storage and transport. Regularly review that this is consistent with other local sources of information, such as the laboratory handbook.

The potential of electronic requesting

The widespread introduction of electronic requesting (also referred to as ‘order communications’) provides a range of opportunities to manage demand for laboratory tests. However, as yet, there is very limited published evidence on the impact of introduction of such initiatives. Nevertheless, the potential of the approach has been investigated in the context of imaging. Bairstow et al 88 proposed that an ideal system linking electronic ordering of imaging requests to best-practice diagnostic pathways represents the way forward to maximising appropriate referrals. The same could also be suggested for pathology tests.

In the context of minimum retest intervals, electronic requesting has an advantage over laboratory-based interventions, as it prevents such requests at source, prior to phlebotomy (with its inconvenience for patients, etc), though a smaller number of laboratories are utilising this approach (37%, compared with 50% for laboratory-based identification41). Electronic requesting can also provide information on repeat testing frequency, either directly at request, or via links to external sources such as BetterTesting.com.89

Recommendation 21 : Utilise electronic requesting, wherever possible, to provide information on retest intervals and previous results with a view to preventing unnecessary phlebotomy. Ensure that laboratory information system and electronic requesting retest intervals are consistent.

Electronic requesting also provides additional opportunities for clinical education, giving the requestor information on best practice around testing at the time of requesting. Similarly, electronic requesting may provide requestors with information regarding the need for certain prerequisites to be met in order to assist in interpretation of the subsequent results, such as the need to wait for a urinary tract infection to resolve before testing for prostate-specific antigen.90 While requesting tests electronically using these tools may require requestors to answer specific questions during the process (often via pop-up boxes), the level of detail and the frequency of additional information requests should be kept to a minimum to prevent frustration on the part of the requestor. Again, engagement with healthcare professionals in this regard is the key.
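
At the point of order entry, such checks amount to a small set of rules that either release the request, display an educational message or ask the requestor a confirmatory question. The sketch below is a generic illustration; the PSA and HbA1c rules, thresholds and wording are assumptions used to show the pattern, not rules taken from any specific system.

```python
from datetime import date, timedelta
from typing import Optional, Tuple

def order_entry_check(test_code: str, answers: dict, last_tested: Optional[date],
                      today: date) -> Tuple[str, str]:
    """Return (action, message), where action is 'accept', 'warn' or 'confirm'."""
    if test_code == "PSA" and answers.get("current_uti", False):
        return ("confirm", "PSA may be falsely raised during a urinary tract infection; "
                           "consider deferring until the infection has resolved.")
    if test_code == "HBA1C" and last_tested and today - last_tested < timedelta(days=61):
        return ("warn", f"HbA1c last measured on {last_tested:%d %b %Y}; "
                        "a result within 2 months mainly indicates direction of change.")
    return ("accept", "")

print(order_entry_check("PSA", {"current_uti": True}, None, date(2012, 3, 1)))
print(order_entry_check("HBA1C", {}, date(2012, 2, 1), date(2012, 3, 1)))
```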

Recommendation 22 : Explore a strategy to utilise electronic requesting as an educational tool. Consider any prerequisites to acceptance of requests.

Recommendation 23 : Involve all stakeholders in determining the content and implementation of electronic requesting educational tools. Evaluate requestor attitudes to any changes made.

An increasing area linking electronic requesting to other more traditional demand management approaches is that of defining combinations of tests (order sets) based on specific symptoms or for particular specialties. While older request form-based approaches allow for limited numbers of profiles (urea and electrolytes, liver function tests, etc), electronic requesting-based systems allow multiple sets to be developed for each specialty, based either on symptoms or, more usefully, on routine monitoring test sets. However, as with protocol requesting described above, implementation of such profiles needs to be monitored closely to prevent misuse and increased inappropriate test usage.61, 84
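
Order sets of this kind are essentially named collections of default tests with a limited list of permitted add-ons, maintained per specialty or presentation. The structure below is a minimal illustration; the set names and their contents are placeholders that would need to be agreed with the relevant clinical teams.

```python
# Placeholder symptom- and specialty-specific order sets: default tests plus permitted add-ons.
ORDER_SETS = {
    "acute chest pain (admission)": {
        "default": ["FBC", "U_E", "TROPONIN", "GLUCOSE"],
        "optional": ["CRP", "TFT"],
    },
    "statin monitoring (primary care)": {
        "default": ["ALT"],
        "optional": ["LIPIDS", "CK"],
    },
}

def build_request(set_name, extras=()):
    """Return the tests to request: the default set plus any permitted extras.
    Extras outside the agreed optional list are held for review rather than added silently."""
    order_set = ORDER_SETS[set_name]
    allowed_extras = [t for t in extras if t in order_set["optional"]]
    rejected = [t for t in extras if t not in order_set["optional"]]
    return {"tests": order_set["default"] + allowed_extras, "needs_review": rejected}

print(build_request("acute chest pain (admission)", extras=["CRP", "D_DIMER"]))
```

In this illustrative example, an attempt to add a D-dimer to the chest pain set is held for review, echoing the local problem described earlier in this section.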

Recommendation 24 : Explore the potential of electronic requesting in the application of symptom-specific and specialty-specific test sets. Implement a monitoring strategy to ensure appropriate use.

While the above uses of electronic requesting are being increasingly implemented in laboratory medicine, the effectiveness of these interventions, in terms of (a) changing unnecessary or incorrect requesting behaviour, (b) reducing healthcare costs, (c) improving (or at least not being detrimental to) clinical outcome and (d) improving the patient experience, is not documented in the scientific literature. However, though the objective evidence regarding the value of electronic requesting in the context of demand management is lacking, it is difficult to argue that it will not improve the quality of requesting.

Does it make a difference? Assessing the impact of interventions

This section examines the effectiveness of demand management strategies. While many clinical laboratories have developed at least some initiatives to manage demand,41 as stated above, there is very limited data on the outcome of such initiatives outside the laboratory. This latter aspect is the major focus of this section.

Laboratory impacts

Most studies use request numbers (sometimes translated into costs) as the endpoint to define success, or otherwise, of an intervention.3, 8, 45 Mostly, this is examined using a ‘before versus after’ study design, though some use a randomised controlled trial format.16 A systematic review of the effectiveness of individual approaches on request numbers is outside the scope of this article, though such a review would be welcome. Solomon et al 45 provided such a review, though this is in need of updating to take account of the potential of electronic requesting systems. Furthermore, while this type of study is valuable in assessing effectiveness at reducing laboratory workload, it often neglects the wider impacts of inappropriate testing.
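
At its simplest, routine monitoring of an intervention can compare monthly request counts before and after the change, with a basic control-chart limit to separate real shifts from random variation. The sketch below is illustrative; the counts are invented, and a formal evaluation would also need to consider underlying trends, seasonality and case mix.

```python
from statistics import mean, stdev

# Invented monthly request counts for a single test, before and after an intervention.
before = [310, 295, 322, 301, 315, 308]
after = [205, 198, 214, 190, 201, 209]

reduction = 100 * (mean(before) - mean(after)) / mean(before)
# Simple rule of thumb: flag a real change if the post-intervention mean lies more than
# three standard deviations below the pre-intervention mean (a basic control-chart limit).
sustained = mean(after) < mean(before) - 3 * stdev(before)

print(f"Mean monthly requests: {mean(before):.0f} before vs {mean(after):.0f} after "
      f"({reduction:.0f}% reduction); outside control limits: {sustained}")
```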

Recommendation 25 : Review the clinical and cost effectiveness of demand management strategies. At a local level, ensure that effectiveness can be regularly monitored.

Patient impacts

While laboratories strive to reduce test costs in this age of austerity, there is often little thought of the patients’ views and outcomes. Certainly, reduction in inappropriate requesting will reduce the need for some phlebotomy episodes, and thereby reduce the associated discomfort, inconvenience (time off work, transport costs, etc) and potential anxiety for these patients.2 It is also recognised that inappropriate testing will impact on further follow-up. Thus, it may lead to false positive results and, subsequently, unnecessary further interventions (referrals, investigations, etc),4 which are often associated with significantly greater discomfort, inconvenience and anxiety than the phlebotomy episode alone.

Another key challenge to reducing inappropriate requesting relates to patient attitudes to testing. Thus, with the widespread availability of health information on the internet, patients may put pressure on clinicians to perform tests based on their own ‘research’. In some cases, clinicians may perform tests with limited clinical value so as to be seen to be doing something, or to reassure the patient. Discussion of repeat testing with local patients showed, in our experience, that some patients feel reluctant to reduce testing frequency because such testing gives the perception of ongoing active care. Changing such behaviour through patient education is challenging, with studies showing limited impact in a minority of patients, though use of patient brochures was shown to improve patient self-monitoring.4 Perhaps there is an opportunity to provide data on appropriate testing (eg, frequency of testing, correct procedures for testing, etc) on national patient information websites, such as Lab Tests Online,91 to provide patients with information on what the tests are used for, and on how often they should be tested.

Recommendation 26 : Integrate assessment of patient views into the laboratory demand management strategy. Consider patient-focused leaflets as a tool to address their concerns.

Clinical outcomes and changes to management

A further consideration, which has attracted little attention in the literature, is the impact of demand management strategies on clinical outcome and the management of the patient.2, 92 However, Wang et al 48 made some assessment of the impact on outcomes of a multicomponent initiative to reduce unnecessary testing in a coronary care unit, providing reassurance that reduced testing did not adversely affect measures such as length of stay, readmission rates and mortality, though they acknowledge that the power of the study to detect changes in outcomes was limited.

Tests that are requested correctly but not reviewed and acted upon could also be perceived as contributing to waste of laboratory resources.14, 93 A systematic review by Callen et al 94 showed that failure to follow up test results is both common and can have major clinical implications for the patient. Miyakis et al 14 showed that 68% of tests in patients admitted to one unit were not considered to have contributed to patient management, while Sandler95 showed that only one-third of emergency tests helped in treatment, and less than one-third helped in diagnosis. In separate studies, Sandler84 and Hampton et al 19 also highlighted that many diagnostic tests are requested even when the diagnosis has already been made.

Given that demand management should focus on inappropriate undertesting as well as overtesting, missed tests may have a significant impact on patient outcomes. Hence, there is a need for data on the impact of testing frequency on such outcomes. In a study looking at the effect of HbA1c requesting frequency, Fu et al 96 showed that testing frequency is inversely associated with diabetic control. This study suggested that the optimal testing frequency to achieve HbA1c below a target of 7% (53 mmol/mol) was four times per year. Turchin et al 97 also showed that frequent HbA1c testing in patients with diabetes resulted in a shorter time to target HbA1c level, independently of confounders such as initial HbA1c level, treatment-associated factors, frequency of encounters with healthcare professionals and patient demographics. In contrast, O'Kane et al 24 showed that there was no correlation between the number of HbA1c tests performed and the proportion of patients in general practices with HbA1c of <58 mmol/mol (<7.5%), raising questions over the impact of testing frequency on management. Such data providing clear links between appropriateness of testing and clinical outcomes are needed for demand management interventions if they are to be widely accepted by clinical staff.

Clinical outcomes-based demand management ties in with the current UK Department of Health drive to measure performance using outcome measures. This is not limited to laboratory test requesting. Brogan et al 12 highlighted the importance of a whole-systems approach for demand management across the health service. They warn of the dangers of focussing on demand management to limit healthcare expenditure alone, as this may be at the expense of clinical outcomes and unmet need, and may represent a false economy. For instance, in our study on HbA1c testing in diabetes, ensuring appropriate testing (ie, removing tests requested too soon and adding in all missed tests) would result in 3.8% more tests, but may result in better diabetic control, reduced complication rates, fewer admissions/referrals to secondary care, improved patient quality of life and, therefore, much larger savings both to the whole healthcare and national economies.23

Recommendation 27 : Consider the wider impacts of demand management strategies. Explore options to audit effects on clinical outcomes, or develop research projects to investigate this.

Patient pathways

Laboratory medicine is an integral part of the whole patient pathway, and it is incumbent on laboratory professionals, managers and clinical staff alike to ensure that patients do not undergo unnecessary investigations, but also do not miss out on critical tests. While this may result in increased overall costs for some tests, the savings to the wider healthcare economy are likely to offset this, and the patient experience will be significantly improved. Hence, studies on inappropriate requesting need to encompass these wider aspects as part of a whole-systems approach.2, 12

Concluding thoughts

In conclusion, we propose the following:

  1. Clinical laboratories need to develop a systematic approach to demand management.2, 6, 98, 99 In this regard, the recommendations in this review are aimed at helping laboratories formulate such a strategy. Ideally, this strategy should be, where possible, standardised and nationally supported through professional bodies.

  2. Even for individual demand management targets, using a multifaceted approach appears to be most effective.5, 7, 14, 58, 61, 83, 100 In this regard, the tools suggested in this review may form a useful starting point. Readers may also wish to utilise the excellent recent report from the Australian National Coalition of Public Pathology, a project funded under the Australian Government's Quality Use of Pathology Program, or the work by Barth on quality indicators in laboratory medicine, which address many of the issues raised here.5, 100

  3. Laboratories need to consider a more holistic approach to laboratory testing, liaising closely with clinical colleagues to integrate diagnostics within patient pathways. It is no longer sufficient to use ad hoc, in-house approaches without consideration of all stakeholders, including patients.

There is an urgent need for more research on the effectiveness of demand management strategies (and on repeat testing intervals) on clinical and patient outcomes, as well as on wider cost effectiveness across the whole patient pathway.

While current financial constraints have raised the profile of demand management with the aim of reducing the pressure on laboratory budgets, there is a major opportunity to improve patient care while simultaneously reducing overall healthcare costs.2 However, this will require dialogue between all the key stakeholders, and a national drive to ensure standardisation of approaches. Indeed, there is evidence that this national drive is gathering momentum, with the UK Royal College of Pathologists proposing that the implementation of a system of demand management at a local laboratory level, assessable by the national Clinical Pathology Accreditation process, form one of the performance indicators for laboratories.99 This process is certainly not limited to the UK.5, 7

Acknowledgments

The authors are grateful to colleagues at both sites with whose help some of the initiatives described in this review were made possible.

References

Footnotes

  • Contributors AAF developed the initial concept and instigated the writing of the review. Both authors contributed to the content and agreed on the final version.

  • Competing interests None.

  • Provenance and peer review Commissioned; externally peer reviewed.