Dual resolution deep learning network with self-attention mechanism for classification and localisation of colorectal cancer in histopathological images
Authors: Yan Xu (1), Liwen Jiang (2), Shuting Huang (1), Zhenyu Liu (1), Jiangyu Zhang (2)

Affiliations:
  1. School of Information Engineering, Guangdong University of Technology, Guangzhou, China
  2. Department of Pathology, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou, China

Correspondence to: Dr Zhenyu Liu, Guangdong University of Technology, Guangzhou 510006, China; zhenyuliu{at}gdut.edu.cn; Dr Jiangyu Zhang, Department of Pathology, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou 510095, China; superchina2000{at}foxmail.com

Abstract

Aims Microscopic examination is a basic diagnostic technique for colorectal cancer (CRC), but it is very laborious. We developed a dual resolution deep learning network with self-attention mechanism (DRSANet), which combines context and details for CRC binary classification and localisation in whole slide images (WSIs) and can serve as a computer-aided diagnosis (CAD) tool to improve the sensitivity and specificity of pathologists’ diagnoses.

Methods Representative regions of interest (ROI) of each tissue type were manually delineated in WSIs by pathologists. Centred on the same coordinates, patches were extracted from the ROI at different magnification levels. Specifically, patches from the low magnification level contain contextual information, while patches from the high magnification level provide important details. A dual-input network was designed to learn context and details simultaneously, and a self-attention mechanism was used to selectively weight different positions in the images to enhance performance.
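The dual-resolution patch extraction described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the WSI is available as an in-memory array and simulates the low-magnification view by downsampling a larger field around the same centre coordinate, so both patches share a centre but differ in field of view. The function name and parameters are hypothetical.

```python
import numpy as np

def extract_dual_resolution_patches(wsi, centre, patch_size=224, context_downsample=4):
    """Extract two patches sharing the same centre coordinate:
    a high-magnification detail patch at native resolution, and a
    low-magnification context patch covering a wider field of view,
    downsampled to the same pixel dimensions."""
    cy, cx = centre
    half = patch_size // 2
    # Detail patch: fine morphology at native (high) magnification.
    detail = wsi[cy - half:cy + half, cx - half:cx + half]
    # Context patch: a region `context_downsample` times larger around the
    # same centre, then downsampled so both inputs have equal pixel size.
    chalf = half * context_downsample
    context_region = wsi[cy - chalf:cy + chalf, cx - chalf:cx + chalf]
    context = context_region[::context_downsample, ::context_downsample]
    return detail, context

# Toy single-channel stand-in for a WSI region.
wsi = np.random.rand(2048, 2048)
detail, context = extract_dual_resolution_patches(wsi, centre=(1024, 1024))
print(detail.shape, context.shape)  # both (224, 224)
```

In practice a WSI would be read level-by-level from a pyramidal slide file rather than held in memory, but the pairing logic, one centre, two fields of view, one common patch size, is the same idea feeding the two branches of the dual-input network.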

Results In the classification task, DRSANet outperformed benchmark networks that depended only on the high magnification patches on two test sets. Furthermore, in the localisation task, DRSANet demonstrated better localisation of tumour areas in WSIs, with fewer misidentified regions.

Conclusions We compared DRSANet with benchmark networks that use only patches from the high magnification level. Experimental results show that DRSANet outperforms these benchmarks, indicating that both context and details should be considered in deep learning methods for this task.

  • colorectal cancer
  • image processing, computer-assisted
  • computer-aided design

Data availability statement

No data are available.


Footnotes

  • YX and LJ are joint first authors.

  • Handling editor Runjan Chetty.

  • ZL and JZ contributed equally.

  • Contributors JZ is the guarantor of this study. ZL and JZ conceived and supervised this study. LJ and JZ collected the digital slides and performed the preprocessing. YX and SH designed and conducted the experiments. YX and LJ performed statistical analyses of the results. YX, ZL and JZ drafted the manuscript. All authors approved the manuscript.

  • Funding This work was supported by the Guangzhou Key Medical Discipline Construction Project Fund, the Guangzhou Science and Technology Plan Project under grant 201907010003, and the Guangdong Provincial Science and Technology Plan Project under grant 2021A0505080014.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
