Natural language processing in pathology: a scoping review
  1. Gerard Burger1,2
  2. Ameen Abu-Hanna2
  3. Nicolette de Keizer2
  4. Ronald Cornet2,3

  1. Symbiant Pathology Expert Centre, Hoorn, The Netherlands
  2. Department of Medical Informatics, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
  3. Department of Biomedical Engineering, Linköping University, Linköping, Sweden

  Correspondence to Dr Gerard Burger, Department of Pathology, Symbiant Pathology Expert Centre, Maelsonstraat 3, Hoorn, Noord-Holland 1624 NP, The Netherlands; g.t.burger@amc.uva.nl

Abstract

Background Encoded pathology data are key for medical registries and analyses, but pathology information is often expressed as free text.

Objective We reviewed and assessed the use of natural language processing (NLP) for encoding pathology documents.

Materials and methods Papers addressing NLP in pathology were retrieved from PubMed, Association for Computing Machinery (ACM) Digital Library and Association for Computational Linguistics (ACL) Anthology. We reviewed and summarised the study objectives; NLP methods used and their validation; software implementations; the performance on the dataset used and any reported use in practice.

Results The main objectives of the 38 included papers were encoding and extraction of clinically relevant information from pathology reports. Common approaches were word/phrase matching, probabilistic machine learning and rule-based systems. Five papers (13%) compared different methods on the same dataset; four did not specify the method(s) used. Eighteen of the 26 studies that reported F-measure, recall or precision reported values over 0.9. Proprietary software was the most frequently mentioned category (14 studies); General Architecture for Text Engineering (GATE) was the most applied architecture overall. Practical use of a system was reported in four papers. Most papers validated their results against expert annotations.
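To illustrate the approaches and metrics the review discusses, the sketch below shows a minimal word/phrase-matching "encoder" for report text, scored with precision, recall and F-measure. This is not from any of the reviewed systems; the lexicon, terms and codes are invented for illustration only.

```python
# Illustrative sketch (not from the reviewed papers): word/phrase matching
# against a small term-to-code lexicon, evaluated with precision, recall
# and F-measure. All terms and codes below are hypothetical examples.
LEXICON = {
    "adenocarcinoma": "M-81403",
    "squamous cell carcinoma": "M-80703",
    "colon": "T-67000",
}

def match_codes(report: str) -> set[str]:
    """Return the set of codes whose term occurs in the report text."""
    text = report.lower()
    return {code for term, code in LEXICON.items() if term in text}

def prf(predicted: set[str], gold: set[str]) -> tuple[float, float, float]:
    """Precision, recall and F1 against a gold-standard annotation."""
    tp = len(predicted & gold)                      # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

report = "Biopsy of the colon shows adenocarcinoma."
gold = {"T-67000", "M-81403"}                       # expert annotation
print(prf(match_codes(report), gold))               # (1.0, 1.0, 1.0)
```

In practice such systems must also handle negation, spelling variation and term ambiguity, which is where the machine-learning and rule-based approaches surveyed in the review come in.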

Conclusions Various methods are used in NLP research in pathology, and good performance, that is, high precision and recall and high retrieval/removal rates, is reported for all of them. Lack of validation and of shared datasets precludes comparison of performance across studies. More comparative analysis and validation are needed to provide better insight into the performance and merits of these methods.

  • COMPUTER SYSTEMS
  • SURGICAL PATHOLOGY
  • REPORTS


Footnotes

  • Handling editor Runjan Chetty

  • Contributors GB designed the research, constructed the queries, retrieved the papers, analysed papers by abstract and full text for inclusion, and extracted information from the literature. He performed analyses on this information, drafted the manuscript and processed input from the other authors. RC supervised the design of the research, contributing to the research questions and the queries for paper selection. He analysed the papers for inclusion and discussed discrepancies with GB. He contributed to the manuscript and provided feedback throughout its preparation. NdK reviewed multiple versions of the manuscript and provided input on the research question, the methods, the results, and their presentation and interpretation.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.
