The practical examination in chemical pathology: current role and future prospects
Andrew Day
Dr Andrew Day, Chair, RCPath Panel of Examiners in Clinical Biochemistry, Department of Chemical Pathology, Weston General Hospital, Weston-super-Mare, Somerset BS23 4TQ, UK; andrew.day@waht.swest.nhs.uk


The role of the examinations for membership of the Royal College of Pathologists (MRCPath) has evolved and matured over the 42 years of their existence. No longer viewed as the sole hurdle or ordeal to be overcome at the boundary between training and consultant practice, they have now taken their place as one of several assessments that together enable a judgement that a pathologist possesses the competencies required for independent professional practice. For the specialty of chemical pathology, these competencies are defined in the Curriculum for Specialist Training in Chemical Pathology1 by the Royal College of Pathologists (RCPath), through its Joint Committee on Pathology Training. This curriculum has undergone extensive revision in the last 5 years to conform to a modern competency-based model of medical education with clearly defined learning outcomes.2 The UK Postgraduate Medical Education and Training Board (PMETB) has approved this Curriculum, and has also begun approving the College's assessment processes, including examinations, for all the specialties for which the College has responsibility for training.3 The Chemical Pathology Curriculum divides training into four stages (A–D), and the two parts of the MRCPath examination are integral to progression between stages: trainees must pass Part 1 before progressing to Stage C, and Part 2 before progressing to Stage D (fig 1). Another important recent development, espoused for all medical specialties by the PMETB, is workplace-based assessment (including directly observed practical skills).4 Workplace-based assessment is developing as a formalised method of competency assessment that will complement summative assessment by examination (fig 1) and provide additional evidence towards certification of completion of training. A final key concept is blueprinting, whereby the Curriculum is cross-referenced to General Medical Council principles (Good Medical Practice)5 and each element of the curriculum is mapped to the method by which it will be assessed (eg, a particular part of the MRCPath examination, or workplace-based assessment).

Figure 1 Structure of specialist training in chemical pathology, showing the integration of the examinations for membership of the Royal College of Pathologists and of workplace-based assessments with the four stages of training. The normal length of the training programme is 5 years, but progression between stages depends upon passing the relevant assessment or examination. OSPE, Objective Structured Pathology Examination.

The reader may deduce two key points from the above: first, that the MRCPath examination is just one of several methods of competency assessment; and second, that, via a complex web of procedures, the content of the examination is simply a reflection of the Curriculum, which in turn reflects the attributes required of a consultant. It is therefore no longer (if it ever was) appropriate simply to assert that a topic should be examined in the MRCPath; instead, one should consider whether the topic should be included in the Curriculum, then define the competencies required to practise in that topic area and decide how best those competencies should be assessed.

The Curriculum1 is constructed as a set of elements or topics, each of which has a group of competencies described in terms of knowledge, skills and attitudes. A simple illustration, taking paraproteinaemia as an example, would be as follows:

  • knowledge: pathophysiology and clinical and biochemical sequelae

  • skills: use of methods of diagnosis including serum and urine protein electrophoresis

  • attitudes: appropriate report comments and ability to communicate results appropriately to a range of clinicians.

Knowledge underpins skills and is an essential prerequisite. Hence the first part of the MRCPath examination, the Part 1 written papers, which are essentially a knowledge test, must be passed before the Part 1 practical examination, which is principally a test of applied knowledge or skill. Part 2 tests more complex skills and professional attitudes in the form of a written component (a dissertation, thesis or portfolio of published papers) and an extended structured oral examination (fig 1). The integration of the MRCPath examinations and other assessments into this hierarchy of levels of competency can usefully be described by reference to "Miller's Pyramid"6 (fig 2), which illustrates a progression towards the ultimate goal of any clinical assessment system: testing the ability to integrate all competencies into satisfactory performance in the workplace.

Figure 2 Miller’s Pyramid model of competency-based training,6 showing the hierarchy of levels of competency and their methods of assessment in the curriculum for chemical pathology specialist training.

STRUCTURE OF THE CURRENT PRACTICAL EXAMINATION

The practical examination in clinical biochemistry7 currently comprises three papers. Paper 1 is the Objective Structured Pathology Examination (OSPE), which tests the ability of candidates to handle a wide range of laboratory output (eg, chromatograms) and to work at the laboratory–clinical interface (eg, giving appropriate advice about common investigative strategies or sample requirements). Paper 2 consists of cases and calculations. The first part of this paper assesses the ability to interpret the types of clinical biochemistry abnormality that may pass across the duty biochemist's desk; the second part tests the ability to handle clinical and laboratory numerical data and to perform simple calculations. Paper 3 is a bench practical exercise in which candidates are required to design, perform and record a practical study to investigate a problem. This is commonly known as the "wet practical" because colorimetric chemistries with spectrophotometry often lend themselves well to exercises of this type, although non-spectrophotometric versions are also possible. Candidates also sit a short (20 min) oral examination. Its purpose is to permit a candidate whose overall mark in the practical papers is a borderline fail, owing to poor performance in just one paper, to be upgraded to a pass if there is strong evidence that their overall level of competency is clearly that of a pass candidate. This applies only rarely, and the large majority of candidates receive no extra marks as a result of their oral performance.
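As background to the wet practical format (this explanation is ours, not part of the examination regulations), the suitability of colorimetric chemistries rests on the Beer–Lambert law, which relates the absorbance measured in a spectrophotometer linearly to analyte concentration, so that a standard curve can be constructed and unknowns read off it:

\[ A = \varepsilon l c \]

where \(A\) is absorbance, \(\varepsilon\) the molar absorptivity of the chromophore, \(l\) the light path length (typically 1 cm) and \(c\) the analyte concentration. Linearity holds only over a limited working range, which is itself the sort of property a candidate may need to establish experimentally.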

EDUCATIONAL EVALUATION OF THE EXAMINATION

The overall value of any assessment may be evaluated by examining its "utility index" (box 1). This concept, defined by Van der Vleuten,8 consists of five elements: reliability, validity, educational impact, cost and acceptability. Van der Vleuten emphasised that there is no single perfect assessment method and that there is always a trade-off between these elements. The balance between the elements of the utility index will vary with the role of the assessment, and the design of the assessment should reflect this. As the MRCPath practical examination is a high-stakes examination, the focus is on high reliability, whereas educational impact, while a useful by-product, is of secondary importance. One way in which high reliability is achieved is by central marking, which distinguishes the examination from workplace-based assessments of practical skills. In other respects the individual practical papers differ from each other, and so they are considered separately.

Box 1: Van der Vleuten’s utility index8

Utility index = reliability × validity × educational impact × cost × acceptability

(Calculation of the utility index for an assessment requires assignment of a differential weight (w) to each parameter)

  • Reliability: consistency and reproducibility of assessment score

  • Validity: how well an assessment tests what it is designed to test

  • Educational impact: the effect of the assessment system on learning behaviour

  • Cost: financial, infrastructural and examiner resources required

  • Acceptability: overall burden to both assessors and trainees

The Postgraduate Medical Education and Training Board includes feasibility as an additional component of the utility index, although it is arguably implicit in cost and acceptability.
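For concreteness, Box 1 can be rendered as a weighted product. The multiplicative form below is a sketch consistent with the wording of the box, not a formula reproduced from Van der Vleuten's paper,8 and the weights \(w_i\) are the "differential weights" the box refers to:

\[ U = \prod_{i=1}^{5} w_i P_i, \qquad P_i \in \{R, V, E, C, A\} \]

where \(R\), \(V\), \(E\), \(C\) and \(A\) denote reliability, validity, educational impact, cost and acceptability respectively. The practical point of a multiplicative index is that a near-zero value of any one parameter (for example, an assessment wholly unacceptable to trainees) drives the overall utility towards zero, however strong the other parameters.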

Paper 1 (OSPE) has relatively high reliability: the questions have an unequivocal short-answer format, and assessor variability is low because all responses are marked by the same examiners. Reliability is also a function of testing time, however, and it is not clear whether the 2 h duration of this part of the examination is optimal. Validity is moderately high in the sense that the paper maps across the Curriculum (face validity) and that, with 14 stations, it is possible to include a representative sample of the subject matter (content validity). However, the 8 min per station format limits the scope of questioning and prevents some curricular topics from being tested. Educational impact is present, although limited by the impossibility of publishing large numbers of specimen questions, which would deplete the usable question bank. Cost is high in terms of the time required to write questions and create the high-quality images used, although final production and staging of the examination "circus" is straightforward. Feedback from candidates and trainers since the inception of the OSPE has consistently been good, indicating high acceptability.
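The dependence of reliability on testing time alluded to above is conventionally made explicit with the Spearman–Brown prophecy formula, a standard psychometric result (not one quoted in this article): if a paper of reliability \(\rho\) is lengthened by a factor \(k\) using comparable material, the predicted reliability is

\[ \rho_k = \frac{k\rho}{1 + (k-1)\rho}. \]

For illustration, a 2 h paper with \(\rho = 0.7\) doubled to 4 h (\(k = 2\)) would be predicted to reach \(\rho_2 = 1.4/1.7 \approx 0.82\). The diminishing returns implied by this formula are exactly why the optimal duration is a question of trade-off rather than one of simply adding stations.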

Paper 2 (Cases and Calculations) requires consideration of its two parts separately. With regard to the cases, reliability is again high given the specificity of the questions and marking by the same examiners, although correct discussion of aspects of a case depends on correct initial interpretation and, as for the OSPE, it is not certain that the 90 min duration of the examination is optimal for reliability. Face validity is also high, as case interpretation is a key skill, although content validity is a little lower given the setting of just six cases. Candidates should be aware that this paper carries 40% of the overall mark for the practical examination, emphasising its consequential validity. Educational impact is good, although case interpretation is in any event the day-to-day material of "duty biochemist" practice. Cost is relatively low, as these are common cases that frequently cross the desks of examiners as well as trainees, and it is entirely a written paper. Acceptability appears to be high: no adverse feedback has been received. The calculations have similarly high reliability, given that it is possible to set clear and unequivocal questions, and validity is maintained by setting only questions that are relevant to routine clinical biochemistry practice. This issue of relevance also feeds into educational impact, as the existence of this part of the examination emphasises the continuing importance of proficiency in calculation for practising clinical biochemists. Cost is low and feasibility is high. An illustrative example of such a calculation follows.
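As an illustration of the kind of calculation involved (an example of our own devising, not one drawn from a past paper), consider the calculated serum osmolality and the osmolar gap, using one commonly quoted approximation (all concentrations in mmol/L):

\[ \text{calculated osmolality} \approx 2[\mathrm{Na^+}] + [\text{urea}] + [\text{glucose}] \]

so that for \(\mathrm{Na^+} = 140\), urea = 5 and glucose = 5 mmol/L the calculated value is \(2(140) + 5 + 5 = 290\). The osmolar gap is the measured osmolality minus this calculated value; a substantially raised gap points to unmeasured osmotically active species such as ethanol or ethylene glycol.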

Paper 3 (the wet practical) is perhaps the paper that worries candidates the most. It is by design a single open-ended exercise, which might suggest that candidate performance will depend on the practical skills demanded by a particular question and therefore that reliability is low. Analysis of the performance of candidates who have to re-sit the practical (accepting that this is an atypical cross-section) indicates, however, that performance is remarkably constant between sessions and that reliability is therefore actually high. Validity is also often questioned by both trainers and trainees, on the basis that most consultant clinical biochemists rarely perform this sort of bench work today. The key here, however, is to understand that this paper is not designed to test the skill of using (for example) a spectrophotometer; indeed, candidates undergo training and familiarisation with the instruments a few hours before the paper if required. Specific methodological skills (including spectrophotometry) are tested more widely in the other practical papers, most particularly in the OSPE. This paper is designed to assess the ability to design, execute and record a series of experiments in order to address a specified analytical problem. Consultants still need these skills, in order to oversee assay evaluation and troubleshooting, even with today's automated equipment, and this is a core part of the Curriculum. The face and consequential validity of the wet practical is therefore high, and its educational impact is significant in that it underlines the importance of this aspect of training. The area of greatest concern is the feasibility and cost of this part of the examination, given the requirement for large laboratory facilities, the significant burden on examiners of ensuring that the assays required perform reproducibly, and the cost of materials. Acceptability to trainees and trainers is also in question. This paper undoubtedly places candidates under a high degree of stress, which is undesirable, but other areas of (un)acceptability can be overcome if the aims of the paper are more widely understood.

FUTURE PROSPECTS

There are three key drivers. First, the examination must reflect the Curriculum, and will continue to play a major role in trainee assessment, in a way that is complementary to workplace-based assessments, to ensure complete curricular coverage; the examination will therefore need to evolve to reflect any changes in the Curriculum. Second, the internal structure will evolve to clarify the distinction between the principal knowledge test (the written papers) and the skills and attitudes tests (practicals and orals). For consistency with other pathology specialties, the practical components will therefore move into Part 2 (more usefully viewed, perhaps, as awarding Part 1 after satisfactory completion of the written papers). Third, there will be ever greater emphasis on increasing (and demonstrating) the utility of the examination: aspects of lower utility will be modified, and formal demonstration of high utility will be central to the introduction of new assessment methods.

This process of review is under way. There is as yet no timescale, and adequate notice and transitional arrangements will need to be published widely. Questions under consideration include the following. How can the OSPE concept be developed to enable assessment of more complex problems? Can other skills, such as communication, be assessed reliably in the practical examination context? Can a broader cross-section of clinical cases be tested? Can the skills of experiment planning and data analysis that are key to the current wet practical be separated from the practical work itself? If a "virtual" practical exercise is feasible, will there still be a need for direct assessment of practical skills in the examination context, or can this be done in the workplace? What should the overall length of the practical examination be in order to ensure optimal reliability? These issues, among others, also need to be considered in the context of the complex web of interaction between training and assessment, curriculum and specialty practice, and the differing needs of medical and clinical scientist candidates, to ensure that the examination continues to be an efficient and effective part of the evolving training and assessment process.

REFERENCES

Footnotes

  • Competing interests: None.
