

Internal quality control: best practice
  1. Helen Kinns1,
  2. Sarah Pitkin2,
  3. David Housley1,
  4. Danielle B Freedman1
  1. 1 Department of Clinical Biochemistry, Luton and Dunstable University Hospital NHS Trust, Luton, UK
  2. 2 Department of Clinical Biochemistry, Barts Health NHS Trust, London, UK
  1. Correspondence to Dr Helen Kinns, Department of Clinical Biochemistry, Luton and Dunstable University Hospital NHS Trust, Lewsey Road, Luton, LU4 0DZ, UK; helen.kinns{at}



Laboratory errors have a significant effect on the quality of patient care.1 Indeed, studies have estimated a 2.7%–13% risk of an adverse event occurring following a laboratory error.2 More broadly, such errors can also affect customer confidence, resulting in deterioration of laboratory reputation.2 It is often emphasised that the majority of errors occur within the preanalytical and postanalytical phases. However, 7%–13% of errors in the total testing process occur within the analytical phase.2 Robust internal quality control (IQC) practices are fundamental to detecting errors in the analytical phase and thus improving the quality of patient care.

In 1981, the World Health Organisation defined IQC as “a set of procedures for continuously assessing laboratory work and the emergent results”.3 The main objective of IQC is to ensure day-to-day consistency of an analytical process and thus help to determine whether patient results are reliable enough to be released. The assumption is that any error should be evident in quality control sample analysis to the same extent as it is in patient samples. By definition, this is not completely true for random errors. However, the error detection rate of an IQC system can be maximised by using (i) appropriate IQC material analysed at (ii) appropriate intervals and interpreted using (iii) appropriate IQC ranges with (iv) appropriate IQC rules. Patient data algorithms (such as delta checks and absurd value recognition) should be used as adjuncts to IQC to aid detection of random error.

An audit of IQC practice conducted in 2006 showed a wide variation in the laboratory approach to implementation, review and troubleshooting of IQC across the UK.4 This variation in practice needs to be addressed so that IQC procedures are compatible with the progression towards pathology harmonisation and the formation of laboratory networks. This article sets out practical guidelines to promote best practice and limit variation at all stages of IQC management.

Selection of IQC material

The success of a quality control procedure depends on the selection of appropriate IQC material. First, in order to minimise matrix effects on the measurement of analytes, IQC material should mimic the composition of patient samples as closely as possible. In the UK, this is more frequently achieved for serum than for non-serum analytes.4 To optimise the detection of analytical errors, variation in the IQC material also needs to be minimised. Therefore, IQC materials with long-term stability and low vial-to-vial variability should be selected. Commercially produced IQC materials provide the desired long-term stability, but it should be recognised that commutability of results between patient and IQC materials may be affected by pretreatment procedures.5,6 Patient pools have the advantage of commutability with patient samples, but the material may have limited stability plus significant batch-to-batch variation.

In order to provide an independent assessment of performance, commercial IQC should be obtained from a different source to the manufacturer of the assay and should not be the same as the calibrator material.7 The IQC material should also be treated in an identical manner to patient samples. Ouweland et al.8 provided a clear example of how IQC can fail to identify an out-of-control situation when key steps in the process are not included in the IQC procedure. In this example, a defect within the dilution phase of HbA1c analysis was not identified by IQC since the quality control material was not subject to this process.

The levels of IQC used should also span the clinically relevant concentrations of the analyte.6 If a decision point is used for a particular analyte, such as the 99th centile cut-off used for troponin interpretation, one of the IQC samples should have a concentration close to this level.

  • When selecting an IQC material, commutability, appropriateness of analyte concentration, stability and vial-to-vial variability should be assessed. Target values and ranges should be assigned locally.

  • IQC material should be obtained from a third-party source and should not be the same material used for calibration.

  • IQC material should be processed in an identical manner to patient samples.

Assigning an IQC range

The audit by Housley et al.4 found that the majority of laboratories used their own data to set the control limits for IQC material. However, there was wide variation in how the control limits were assigned (ie, over how many runs, days and levels). Furthermore, a number of laboratories use manufacturer-based IQC ranges, which is a concern as these ranges are often too broad.4 Ideally, IQC ranges should initially be calculated from a minimum of 20 data points from 20 separate days of stable operation.9 Using data collected over a relatively long period will help identify issues with regards to stability of IQC material and differences in vial-to-vial preparation.10 The provisional values should then be reviewed once data from longer-term stable operation have been obtained.9 It is important that failed IQC values should not be excluded from the cumulative data as this can give a false impression of the precision of the assay.10 The acceptable range of IQC values (eg, whether 2 SD, 3 SD or other) will then depend on the required quality of the assay, as described in the following section.
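The baseline calculation described above (target and provisional limits from at least 20 data points collected on separate days) can be sketched as follows. This is a minimal illustration; the function name and example data are hypothetical, and the ±2 SD default would be adjusted to the quality requirement of the assay as described in the next section.

```python
import statistics

def provisional_iqc_limits(values, n_sd=2):
    """Compute a provisional IQC target (mean) and control limits from
    baseline data: ideally >= 20 points from 20 separate days of stable
    operation, with failed IQC values retained in the data set."""
    if len(values) < 20:
        raise ValueError("at least 20 baseline data points are recommended")
    target = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    return target, target - n_sd * sd, target + n_sd * sd

# Hypothetical example: 20 daily IQC results for one level of material
baseline = [5.0, 5.1, 4.9, 5.2, 5.0, 4.8, 5.1, 5.0, 4.9, 5.2,
            5.1, 5.0, 4.9, 5.0, 5.1, 5.2, 4.8, 5.0, 5.1, 4.9]
target, lower, upper = provisional_iqc_limits(baseline)
```

The provisional limits would then be reviewed and recalculated once longer-term stable-operation data are available.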

  • IQC ranges should initially be calculated from a minimum of 20 data points from 20 separate days.

  • The provisional values should then be reviewed once data from longer-term stable operation have been obtained.

  • The acceptable range of IQC values should be determined based on the required quality of the assay.

Design of an IQC system

The majority of laboratories in the UK use the same IQC rule system for all analytes.4 A number of laboratories use a single 12s rule to define an out-of-control situation; however, in most cases, a multirule system is used (in which 12s acts as a warning rule).4 A short definition of the most commonly used IQC rules is provided in table 1. For a full explanation of the use of single-rule and multirule systems, we direct the reader to the Westgard QC website.

Table 1

Summary of the common internal quality control (IQC) rules used for a two-level IQC system12

The distribution of IQC results for a stable assay is assumed to be Gaussian. Therefore, if the control limits are defined as ±2 SD, 5% of IQC results will by definition fall outside the range. This means that using a 12s rule to indicate an ‘out-of-control’ situation will lead to false rejection of 5% of runs when a single IQC is used, 9% of runs using two levels of IQC and 14% of runs for which three IQC samples are used. As well as the time and cost implications of rerunning falsely rejected samples, this system has also fostered a culture of repeating IQC until the desired value is obtained rather than initiating troubleshooting procedures.
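The false rejection figures quoted above follow directly from the Gaussian assumption: if each control independently has a 5% chance of falling outside ±2 SD on a stable run, the chance that at least one of n controls does so is 1 − 0.95ⁿ. A short sketch (the function name is illustrative):

```python
def false_rejection_rate(n_controls, p_single=0.05):
    """Probability that at least one of n independent IQC results falls
    outside +/-2 SD on a stable (in-control) run, i.e. 1 - (1 - p)^n."""
    return 1 - (1 - p_single) ** n_controls

# One, two and three levels of IQC under a 1_2s rejection rule:
rates = {n: false_rejection_rate(n) for n in (1, 2, 3)}
# approximately 5%, 9.8% and 14.3% respectively
```

The rounded values (5%, 9% and 14%) are those quoted in the text.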

The blanket use of the multirule approach (typically 12s/22s/R4s/41s/10x) highlights different issues. First, the multirules should be modified according to the number of IQC samples used: the 12s/22s/R4s/41s/10x approach is based on IQC numbers that are multiples of 2, and a rule base for a three-level IQC system might instead be 13s/2of32s/R4s/31s/12x. Second, the multirule approach is unnecessarily complicated for very precise assays, for which adequate error detection can be achieved using a much simpler approach, for example, a single 13s rule, as discussed further below:

  • The 12s rule is a warning rule and should not be used to reject a run.

  • When multirules are used, the rules should be appropriate for the number of IQC samples measured within a run.
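To make the rule definitions concrete, a two-level multirule check can be sketched as below. This is a deliberately simplified illustration, not a production implementation: it pools results from both control levels into a single chronological sequence of z-scores (deviations from target in SD units), applies common simplifications of each rule, and ignores the across-run versus within-run distinctions discussed later in this article. All names are hypothetical.

```python
def westgard_check(z_history):
    """Evaluate common Westgard rules for a two-level IQC system on a
    chronological list of z-scores; the current run's two control
    results appear last.  Returns the list of triggered rules."""
    violations = []
    latest = z_history[-2:]  # the two control levels of the current run
    if any(abs(z) > 2 for z in latest):
        violations.append("1_2s (warning)")
    if any(abs(z) > 3 for z in latest):
        violations.append("1_3s")
    if all(z > 2 for z in latest) or all(z < -2 for z in latest):
        violations.append("2_2s")
    if len(latest) == 2 and max(latest) - min(latest) > 4:
        violations.append("R_4s")
    last4 = z_history[-4:]
    if len(last4) == 4 and (all(z > 1 for z in last4) or all(z < -1 for z in last4)):
        violations.append("4_1s")
    last10 = z_history[-10:]
    if len(last10) == 10 and (all(z > 0 for z in last10) or all(z < 0 for z in last10)):
        violations.append("10_x")
    return violations
```

For example, a run in which both levels exceed +2 SD triggers the 12s warning and the 22s rejection rule, whereas a stable run triggers nothing.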

IQC systems should verify that the assays achieve the intended quality of results. Therefore, before designing an appropriate IQC strategy, the laboratory needs to define the quality required of their individual assays and the quality that the assay can provide. The former is informed by the total allowable error (TAE) while the latter is informed by assay precision and assay bias. Analytes have different levels of biological variation, and assays will have different levels of precision and accuracy. It therefore follows that the appropriate IQC system will be different depending on the analyte and the assay.

Definition of quality required

TAE is the maximal degree of error that is acceptable for an analyte before clinical decision making based on the result will be affected. Klee describes six different approaches to the definition of TAE.13 The most widely used and accepted technique is that based on biological variation,14 largely due to the practicality of this approach. However, for analytes that guide diagnosis or treatment strategies at a specific level (eg, cholesterol), it is more appropriate to define the TAE by the clinical quality required. This type of allowable error specification is based on the change in analyte concentration that would need to be detected in order to direct appropriate clinical management. A table of biological variation and TAE is available for all the routinely requested analytes within clinical biochemistry and is hosted on the Westgard QC website (last updated 2012).15

Definition of assay bias and imprecision

Definition of the precision and bias of each assay should be available as part of the Clinical Pathology Accreditation (CPA) requirements for method verification; otherwise, precision should be calculated from long-term IQC analysis and bias may be obtained from external quality assessment (EQA) reports.

Definition of analytical quality

Having calculated the TAE, assay precision and assay bias, a simple measure of the standard of the assay can be obtained using the six sigma approach. The sigma metric represents the ability of the assay to meet the quality requirement. It is calculated as follows:

Sigma = (TEa − biasmeas) / CVmeas

where TEa is the TAE, biasmeas is the bias of the assay and CVmeas is the coefficient of variation (precision) of the assay, all expressed as percentages. For example, an assay with a TAE of 14%, a bias of 2% and a CVmeas of 3% will have a sigma score of 4.
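The calculation is trivial to automate. A minimal sketch (the function name is hypothetical), reproducing the worked example above:

```python
def sigma_metric(tae_pct, bias_pct, cv_pct):
    """Sigma metric = (TEa - |bias|) / CV, all values in percent."""
    return (tae_pct - abs(bias_pct)) / cv_pct

# Worked example from the text: TAE 14%, bias 2%, CV 3%
sigma = sigma_metric(14, 2, 3)  # (14 - 2) / 3 = 4.0
```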

The sigma metric defines how far an assay deviates from perfection and can be used to

  1. Identify assays that require improvement (and those that do not)

  2. Determine the optimal IQC rules for each test

  3. Provide guidance on the frequency of IQC required

A summary of the recommendations by the 2010 convocation of experts on laboratory quality for the use of six sigma to define assay performance and initiate IQC design is given in table 2 and described further in the text below.16

Table 2

Summary of recommendations provided by the 2010 convocation of experts on laboratory quality for the use of six sigma to initiate internal quality control (IQC) design.16

Sigma scale to target assays for improvement

A sigma score of 6σ means that the assay performance is optimal and thus efforts to further improve quality are unwarranted. Assays with a sigma score of 4σ–6σ are defined as suitable for purpose while those with a sigma score of 3σ–4σ should be classified as poor. The generally accepted lower limit of assay quality is a sigma score of 3σ. No amount of IQC will achieve sufficient error detection rates for an assay that has a sigma metric of less than 3σ and thus efforts would be better directed at improving the quality of the test. This may include using duplicate or triplicate analysis to improve the precision of the assay.16
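The sigma bands described above can be expressed as a simple lookup. This is a hypothetical sketch; the handling of scores falling exactly on the 3σ, 4σ and 6σ boundaries is an assumption, as the source does not specify it.

```python
def classify_assay(sigma):
    """Classify assay performance on the sigma scale described above."""
    if sigma >= 6:
        return "optimal - further improvement unwarranted"
    if sigma >= 4:
        return "suitable for purpose"
    if sigma >= 3:
        return "poor - maximal IQC required"
    return "unacceptable - improve the assay before relying on IQC"
```

For an assay below 3σ, no IQC design achieves sufficient error detection, so effort is better spent improving the test itself (eg, by duplicate or triplicate analysis).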

Definition of appropriate IQC

The right IQC procedure has at least 90% probability of detecting medically important errors and a maximum false rejection rate of 5% (but preferably 1% or less).6 The better the assay performance with relation to the required quality, the simpler the IQC rules required to ensure this error detection rate is obtained. The lowest number of control measurements that will provide the desired error detection should be used.

For example, the error rate of an assay functioning at the 5σ level may be as low as 0.002%, and, in general, a single IQC rule will be appropriate for such assays.2 However, many of the assays within the laboratory will have a sigma score of between 3σ and 4σ,2 and thus multirule systems should be used to enable sufficient error detection rates. Assays of low quality (sigma score of 3σ) require the maximum amount of quality control, with 6–8 IQC materials per run.

A number of approaches to defining analyte-specific IQC have been described.17–19 Analyte-specific IQC includes specific IQC rules and IQC frequency. These systems are based on the fact that every IQC rule (and known rule combination) has its own sigma metric (or power function) value and is suitable for use when it has a higher sigma metric (or power function) than the test.17 Operational specifications (OPSpecs) charts enable visualisation of this relationship and are simple-to-use tools to define the IQC rules appropriate to an assay of known quality.18 Normalised OPSpecs charts, for different levels of quality assurance, are available from the Westgard QC website along with detailed instructions on their use.20

Frequency of IQC

An IQC protocol for a continuous process should include both ‘event-driven’ IQC, such as at start-up or after a component has been changed, and ‘non–event-driven’ IQC, that is, IQC conducted after predetermined run lengths. Burnett et al 14 suggest that, to reduce the cost of IQC for poorly performing assays, a multidesign approach should be used whereby maximal IQC is conducted at ‘event-driven’ points with less conducted during ‘non–event-driven’ IQC. It has been recognised, however, that many IQC programmes are not compatible with a multidesign approach.14

The appropriate run length between IQC analyses is defined “as the interval over which the accuracy and precision are expected to be stable”.9 The run length therefore needs to be determined by each individual laboratory and will be shorter than that stated by the manufacturer since it needs to take into account local variables that affect results.6 Parvin et al.21 suggest a risk-based approach to define IQC frequency. Cooper et al 16 suggest using the six sigma approach described above to design an initial approach to IQC (table 2) and then incorporating a risk assessment to modulate the frequency to an appropriate level. The frequency will therefore take into account the number of samples processed, reagent stability, any minimum requirements for IQC and the impact of an incorrect result on patient care. The design should also incorporate local experience of the piece of equipment and the schedule of the laboratory.

Finally, the 2010 convocation of experts on quality suggests that the frequency of IQC should be defined in terms of time for low-volume tests and in number of samples for high-volume tests,16 while for batch-mode processes, it is recommended that IQC be distributed throughout the batch.6 Any IQC protocol should cover the entire working period of the laboratory, with no compromise of quality in out-of-hours periods.4 Definition of run lengths based on the six sigma or risk-based approach described above will ensure that the number of unacceptable patient results reported when an analyte is out of control is minimised.16,21

  • An IQC protocol should be established that ensures that the number of unacceptable patient results reported when an analyte is out of control is minimised. This protocol should cover the entire 24 h working period of the laboratory.

  • Appropriate IQC should be determined with knowledge of the required quality, assay precision and assay bias for each analyte.

  • The lowest number of control measurements that will provide the desired error detection rate should be used.

Patient data

IQC should not be considered a stand-alone procedure for quality assurance. The use of patient data through delta checks, absurd value recognition and anion gap analysis can aid detection of random errors, thus offering additional quality assurance not achievable by IQC alone. Delta checks may also alert the laboratory to a specimen mix-up, although the positive predictive value of this process is low.22 Furthermore, calculation of patient means (ie, the average of the normal patient data for the day) provides a useful adjunct to IQC for monitoring day-to-day (or week-to-week) consistency of results. The reader is referred to reviews of patient data algorithms for further information.23–25
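A delta check compares a patient's current result with their previous one and flags changes exceeding predefined limits. A minimal sketch is given below; the function name and the example limits (for a hypothetical potassium result in mmol/L) are illustrative, as appropriate delta limits are analyte-specific and set locally.

```python
def delta_check(current, previous, absolute_limit=None, percent_limit=None):
    """Flag a result whose change from the patient's previous value
    exceeds an absolute and/or percentage delta limit."""
    delta = current - previous
    flags = []
    if absolute_limit is not None and abs(delta) > absolute_limit:
        flags.append(f"absolute delta {delta:+.1f} exceeds {absolute_limit}")
    if percent_limit is not None and previous != 0:
        pct = 100 * delta / previous
        if abs(pct) > percent_limit:
            flags.append(f"delta {pct:+.0f}% exceeds {percent_limit}%")
    return flags

# Hypothetical potassium result: 6.1 mmol/L against 4.0 mmol/L previously
flags = delta_check(6.1, 4.0, absolute_limit=1.0, percent_limit=25)
```

A flagged result prompts review for analytical error or specimen mix-up before release.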

Practicality of implementation and cost

The cost of quality control includes the costs associated with preventative action and appraisal, that is, the cost of running IQC, training personnel, etc.3 However, the cost of poor quality control also includes time and materials wasted through repetition of IQC analysis and repetition of patient samples (whether internally or by recall of the patient). If we again consider the 9% chance of a false rejection when two levels of IQC are evaluated using a single 12s rule, the time and reagent costs of unnecessary action will clearly accumulate. An annual saving of €7550 in IQC material alone was achieved within a single laboratory following implementation of an IQC system based on six sigma.26 The saving was achieved by a 75% reduction in consumption of multicontrol materials compared with when a 12s rule was used to define an out-of-control situation.26 Considerable cost and time savings were also realised via a reduction in the number of calibrations and reruns conducted.

There may be concern that the introduction of assay-specific rules will lead to a complicated, unsustainable IQC system. In reality, however, implementation of 3–4 different rule systems is sufficient and practicable.26,27 A four-group system was implemented at three locations within the Netherlands, which categorised assays by the six sigma approach26 as described above. Jassam et al 27 used a three-level model, which characterised the assays as showing optimal, desirable or minimal performance based on the classification by Fraser et al 28 given below:

  1. Optimal performance: TEa < 0.125(CVI² + CVG²)^½ + 1.65(0.25 CVI)

  2. Desirable performance: TEa < 0.250(CVI² + CVG²)^½ + 1.65(0.50 CVI)

  3. Minimal performance: TEa < 0.375(CVI² + CVG²)^½ + 1.65(0.75 CVI)

where CVG is the between-subject biological variation (variation between individuals) and CVI is the within-subject biological variation (variation within an individual).
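The three Fraser performance specifications above are straightforward to compute from CVI and CVG. A minimal sketch, with hypothetical function names and illustrative biological variation values:

```python
import math

def performance_specs(cv_i, cv_g):
    """Allowable total error specifications (Fraser) from within-subject
    (CVI) and between-subject (CVG) biological variation, in percent."""
    base = math.sqrt(cv_i**2 + cv_g**2)
    return {
        "optimal":   0.125 * base + 1.65 * 0.25 * cv_i,
        "desirable": 0.250 * base + 1.65 * 0.50 * cv_i,
        "minimum":   0.375 * base + 1.65 * 0.75 * cv_i,
    }

def classify_performance(total_error, cv_i, cv_g):
    """Assign the assay's achieved total error to a performance band."""
    specs = performance_specs(cv_i, cv_g)
    for level in ("optimal", "desirable", "minimum"):
        if total_error < specs[level]:
            return level
    return "unacceptable"
```

For example, with CVI = 5% and CVG = 10%, an achieved total error of 3% would meet the optimal specification, while 12% would meet none.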

While the implementation of an assay-specific IQC system inevitably takes time, Hans van Schaik comments that the main challenge is in education of laboratory staff.26 However, the investment in time is quickly repaid by the reduction in false rejection alarms and the associated reagent, IQC and calibrator costs and staff labour.26

IQC within a laboratory network

Following the recommendations by Lord Carter,29 there has been a move towards the formation of laboratory networks so that economies of scale can be realised. Operating as a network requires comparability of results across the different sites, allowing common reference intervals and transferability of results. Indeed, standard F3.5 of the CPA standards for medical laboratories states that “the laboratory shall have a mechanism for ensuring that examinations performed using different procedures or equipment or at different sites give comparable results, in particular, throughout clinically appropriate intervals”.30 In order to achieve this, a robust network-wide quality system needs to be implemented.

Jassam et al.27 recommend assigning a project team to implement a quality system across a network. This team first needs to standardise operating protocols, including IQC material used, frequency of IQC and how target and range values are assigned. Liaison with the manufacturer is also required to coordinate reagent and IQC lot changes across sites. Ideally, identical assay methods should be used to minimise variation. However, variation in assay performance will still exist even when the same analysers and methods are used across different sites. Collation of long-term IQC data from all analysers, including imprecision and bias data, should be used to define the quality of the assays, and then, IQC rules implemented as appropriate to the required quality of each assay, as described previously.26 ,27

Critical to the implementation of a network-wide quality management system is the introduction of software that allows IQC data to be entered, analysed and viewed across the network. Applications such as QC-today (Instrumentation Laboratory) are available that allow input of IQC results from multiple analysers and definition of assay-specific IQC rules. Jassam et al.27 developed their own software to collate data from all sites within their network, define analytical imprecision and compare this to allowable error. They were able to demonstrate equivalence for the majority of general chemistry analytes across the different sites. The assays were classified as having optimal, desirable or minimum performance, and IQC implemented appropriate to the standard of assay.

Regular review of assay performance across the network is required to ensure assay quality classifications remain appropriate. Networks also use EQA schemes to monitor comparability of results. However, a network-wide IQC protocol allows real-time evaluation of assay performance and comparability across multiple sites.

  • Protocols for IQC should be coordinated across all sites of a network.

  • Within a network, assay quality should be defined using data collated from all analysers across all sites.

  • Assay quality should be reviewed and adjustments to the IQC system made to reflect improvement or deterioration in quality.

IQC review

Review of IQC data should also be built into the IQC protocol of a laboratory, and the grades of staff responsible for short-term and long-term IQC review should be clearly stated. Housley et al.4 recommend that the review of IQC data should be carried out by state-registered staff members, with any problems escalated to designated senior staff. It is important that all persons involved in verifying IQC are familiar with how each of the IQC rules works, including whether they apply across runs or within a run. The reader is directed to the lessons on the Westgard QC website for practical guidance on interpretation of the different QC rules.11

All laboratories should also have a written protocol that defines appropriate action with regards to patient results that may have been affected by an IQC fault.9 This is of particular importance when long-run lengths are used and highlights the need for incorporation of a risk-based approach when determining the frequency of IQC.9

  • An IQC protocol should include procedures for IQC review, with the grades of staff responsible for this clearly stated, and action when patient results may have been affected by failed IQC.


Troubleshooting

The appropriate reaction to an IQC rejection signal should be to stop analysis, troubleshoot the problem, correct the error and, if possible, implement a fail-safe procedure to prevent the error recurring.31 However, Hyltoft Petersen et al 31 described three typical reaction patterns to rejection signals: (a) repeat the IQC, (b) overlook the failed IQC or (c) action every ‘out-of-control’ signal. The repeat and overlook cultures likely arise from misuse of the 12s rule, incorrectly defined IQC ranges and the lack of appropriate IQC design (such that clinically insignificant changes are detected). Poor use of the IQC rules can lead to over-investigation (if the number of false rejections is high) or under-investigation (if error detection is low). As the NCCLS C24-A2 document states, “it is better to define the clinical quality that is necessary in the beginning to guide the planning of IQC strategies rather than be faced with having to make a judgment on the clinical importance of errors during the pressure of daily service”.9 Hence, if the IQC system has been implemented according to the practice described above, the probability of a false rejection is minimised and each IQC failure should initiate a problem-solving process.

  • The appropriate reaction to an IQC rejection signal should be to stop analysis, troubleshoot the problem, correct the error and, if possible, implement a fail-safe procedure to prevent the error from recurring.

Schoenmakers et al.17 suggest a series of questions as a means to initiate the appropriate investigation of an IQC failure:

What type of error (random or systematic) has been identified by the IQC failure?

Whether the error is systematic or random may be informed by the type of IQC failure observed. Table 3 provides examples of types of error that IQC rules detect. Random errors may occur, for example, due to a pipetting error or other mechanical variation, detector variation or electrical interference.32 Systematic errors should be classified as either a trend (a gradual decrease in test reliability) or a shift (resulting from an abrupt change). Trending of results can be subtle, thus requiring careful review of the IQC charts. Trends indicate a gradual deterioration in part of the system such as aging of reagent or accumulation of debris affecting a light source or electrode. Shifts in IQC tend to be more obvious on QC charts and result from sudden changes such as a poor calibration, change in temperature or significant maintenance.

Table 3

Identifying random and systematic error
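The distinction between shifts and trends described above lends itself to a simple screen of recent IQC points. The sketch below is a crude illustration only (window size and rules are assumptions, not the source's method): a shift appears as consecutive points on one side of the target, a trend as consecutively rising or falling points.

```python
def detect_shift_or_trend(z_scores, window=6):
    """Crude screen of recent IQC z-scores for systematic error:
    returns 'shift', 'trend' or None."""
    recent = z_scores[-window:]
    if len(recent) < window:
        return None  # not enough data to judge
    # Shift: all recent points on the same side of the target
    if all(z > 0 for z in recent) or all(z < 0 for z in recent):
        return "shift"
    # Trend: points consecutively rising (or falling)
    diffs = [b - a for a, b in zip(recent, recent[1:])]
    if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
        return "trend"
    return None
```

In practice such a screen only prompts review of the IQC chart; subtle trends still require careful visual inspection as the text notes.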

What is unique to the situation?

  • Have new reagent/control materials been used?

  • Has the system been recently calibrated?

  • Has there been recent maintenance?

What is common to the situation?

  • Did multiple chemistries fail IQC? If yes, are the tests similar (eg, all ion selective electrodes or enzymatic)?

  • Is the problem occurring on multiple instruments?

If the issue appears to be isolated, the troubleshooting should focus on what is unique to that assay, such as the reagent, the IQC material or the calibrators. If the problem is broader, then troubleshooting needs to focus on the common factors, for example, a shared reference buffer, shared IQC material, analyser or even environmental issue. Peer group reporting programmes such as the Randox Acusera 24.7 (Randox Laboratories) are emerging that allow IQC data from individual laboratories to be uploaded to a central database facilitating comparison of IQC performance to that of other users. This may aid troubleshooting by indicating whether an issue is common to all users of an assay.

If a reason for IQC failure cannot be established using the set of questions above, a systematic approach can be used to exclude possible causes of the IQC failure, as shown in figure 1.

Figure 1

Troubleshooting internal quality control (IQC) failure. Based on typical manufacturer guidelines.


Conclusions

IQC has a well-established role in the quality assurance of clinical laboratories. However, IQC practice varies considerably between laboratories. Standardisation of approaches to the selection of IQC material, assignment of targets and ranges, the statistical rules used, IQC review and troubleshooting will improve the quality of results and facilitate harmonisation of pathology services. The formation of laboratory networks provides additional challenges for IQC systems, with the need for robust cross-site IQC procedures to ensure comparability of results. We have outlined a number of recommendations to enable laboratory IQC practice to meet the standard required to ensure high-quality laboratory results and thus patient safety.

Interactive multiple choice questions

This JCP review article has an accompanying set of multiple choice questions (MCQs), hosted on BMJ Learning and accessible from the online version of the article. Subscribers must sign into JCP with their journal's username and password; all users must also complete a one-time registration on BMJ Learning and subsequently log in with a BMJ Learning username and password on each visit.



  • Competing interests None.

  • Contributors HK and SP were the primary authors, with contributions from DH and DBF. DH and DBF also oversaw the progress of the article.


  • Provenance and peer review Commissioned; externally peer reviewed.
