Multi Capsule Data for Automatic Pathology Evaluation in Capsule Endoscopy

Olivier Rukundo

e-mail: orukundo@gmail.com


Abstract: In capsule endoscopy (CE), existing efforts to automate the evaluation of bowel pathologies using narrow artificial intelligence rely on training on single-capsule data. To optimize the automated evaluation of bowel pathologies, the novel idea is to train on multi-capsule data, combining data from CapsoCam Plus (the only capsule that stores diagnostic data on board) with data from a wireless capsule endoscope of interest.


Introduction: Today, CE is among the newest research areas in gastroenterology and has caught the interest of many computer scientists and gastroenterologists [1], [2]. Because expert-based evaluation of bowel pathologies in CE takes tremendous time while remaining prone to incorrect predictions or suboptimal accuracy, traditional efforts to automate bowel pathology evaluation have relied on narrow artificial intelligence [2], mainly machine learning and deep learning [3]. However, such efforts trained on single-capsule data and thus remained prone to incorrect predictions of bowel pathologies. Therefore, a novel solution based on training on multi-capsule data is proposed to optimize the automated evaluation of bowel pathologies by improving the predictive ability of the deep learning network of interest with diagnostic data from CapsoCam Plus and a wireless capsule endoscope of interest [4], [5]. Note that instead of training on single-capsule data (i.e., data from one type of capsule endoscope), the idea is to train on multi-capsule data (i.e., data from two types of capsule endoscope, one that transmits diagnostic data wirelessly and one that stores data on board the capsule). A minimal training sketch under these assumptions is given below.
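The sketch below illustrates the pooling idea only: frames from the two capsule types are concatenated into one training set and fed to a single classifier. The folder names (capsocam_frames/, pillcam_frames/), the binary polyp-vs-normal labels, and the ResNet-18 backbone are assumptions for illustration, not part of the proposed system.

```python
# Minimal sketch, assuming PyTorch/torchvision and two pre-collected image folders,
# one per capsule type. Both folders must contain the same class subfolders
# (e.g., normal/ and polyp/) so that ImageFolder assigns matching label indices.
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),  # common input size for both capsule sources
    transforms.ToTensor(),
])

capsocam = datasets.ImageFolder("capsocam_frames", transform=tfm)  # 90° line of sight
pillcam = datasets.ImageFolder("pillcam_frames", transform=tfm)    # 0° line of sight
loader = DataLoader(ConcatDataset([capsocam, pillcam]), batch_size=16, shuffle=True)

model = models.resnet18(weights=None)          # backbone choice is arbitrary here
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g., polyp vs. normal mucosa
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```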


Methods: To implement this idea, the following must be done:


a) Collecting data from the colon phantom using the CapsoCam Plus: Here, the CapsoCam Plus camera's line of sight is oriented at 90° to the horizontal plane bisecting the bowel lumen. Note that there exists a colon phantom (e.g., with realistic optical properties) in which sessile and pedunculated polyps can be inserted at any location, e.g., using neodymium magnets placed on the outside of the phantom [6].


b) Collecting data from the colon phantom using the PillCam (or similar): Here, the camera's line of sight of the PillCam, OMOM HD Capsule, or similar capsule is oriented at 0° to the horizontal plane bisecting the bowel lumen.


c) Buying (or manufacturing) the artificial colon phantom: Here, the phantom of interest is, e.g., a 23 cm x 23 cm colon phantom, and the CE systems of interest include, e.g., the CapsoCam Plus system [7] and the PillCam system [8].

Figure 1: (a) CM-15 Colon Phantom, (b) CapsoCam Plus, (c) PillCam. Photos courtesy of GT Simulators, CapsoVision, and Medtronic, respectively.


Results: In the preliminary experiments, involving the artificial colon, it is expected that training on multi-capsule data will yield a detection rate of bowel pathologies (e.g., sessile and pedunculated polyps) equivalent or superior to the detection rate achieved by medical experts. In the final experiments, involving the natural colon (and small intestine), an equivalent or superior detection rate is likewise expected.


Discussion: Imagine training on a dataset (i.e., images or videos) collected using both the CapsoCam Plus (the only capsule providing a 360° panoramic view, according to CapsoVision) and the PillCam (providing a 156° field of view, according to Medtronic). Features of bowel pathologies missed in the 90° camera line-of-sight case can be found in the 0° case, or vice versa, thus increasing the spectrum of features that best minimize the loss function, as in the fusion sketch below. In the future, a 3D simulation of the gastrointestinal/bowel environment equipped with a simulated CE camera may be implemented to ease CE data collection.
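Along the lines of the multimodal deep learning ideas in [4], [5], the sketch below shows one way the two views could complement each other: a separate encoder per capsule type with late fusion by feature concatenation. The pairing of frames per bowel segment, the two-class output, and the concatenation-based fusion are assumptions for illustration only.

```python
# Minimal sketch, assuming paired frames of the same bowel segment:
# one branch for CapsoCam Plus (panoramic) frames, one for PillCam (forward-view) frames.
import torch
from torch import nn
from torchvision import models

class TwoViewFusionNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.panoramic_branch = models.resnet18(weights=None)  # 360° panoramic frames
        self.forward_branch = models.resnet18(weights=None)    # 156° forward-view frames
        feat = self.panoramic_branch.fc.in_features            # 512-d ResNet-18 features
        self.panoramic_branch.fc = nn.Identity()               # drop the built-in classifier
        self.forward_branch.fc = nn.Identity()
        self.classifier = nn.Linear(2 * feat, num_classes)     # fuse by concatenation

    def forward(self, panoramic_img, forward_img):
        # Features missed by one camera geometry may still be present in the other,
        # so the classifier sees the concatenated feature spectrum of both views.
        f1 = self.panoramic_branch(panoramic_img)
        f2 = self.forward_branch(forward_img)
        return self.classifier(torch.cat([f1, f2], dim=1))

# Example forward pass with dummy tensors standing in for paired capsule frames.
net = TwoViewFusionNet()
logits = net(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
```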


References:

[1] Rukundo, O., Pedersen, M., Hovde, Ø., Advanced Image Enhancement Method for Distant Vessels and Structures in Capsule Endoscopy, Computational and Mathematical Methods in Medicine, vol. 2017, Article ID 9813165, 13 pages, 2017

[2] Ahmed Mohammed, Ivar Farup, Marius Pedersen, Sule Yildirim, Øistein Hovde, PS-DeVCEM: Pathology-sensitive deep learning model for video capsule endoscopy based on weakly labeled data, Computer Vision and Image Understanding, Volume 201, 2020

[3] Ranschaert, E.R., Morozov, S., Algra, P.R., Artificial Intelligence in Medical Imaging - Opportunities, Applications and Risks, XV, 373, Springer Cham, 2019

[4] Introduction to Multimodal Deep Learning: <https://heartbeat.fritz.ai/introduction-to-multimodal-deep-learning-630b259f9291>, Accessed: 2020-01-02

[5] Improved Multimodal Deep Learning with Variation of Information: <https://proceedings.neurips.cc/paper/2014/file/801c14f07f9724229175b8ef8b4585a8-Paper.pdf>, Accessed: 2020-01-02

[6] Colon phantoms with cancer lesions for endoscopic characterization with optical coherence tomography: <https://www.osapublishing.org/boe/fulltext.cfm?uri=boe-12-2-955&id=446760>, Accessed: 2020-01-02

[7] The CapsoCam® Plus System: <https://capsovision.com/physician-resources/capsocam-plusspecifications>, Accessed: 2020-01-02

[8] PILLCAM™ SB 3 SYSTEM: <https://www.medtronic.com/covidien/en-us/products/capsuleendoscopy/pillcam-sb-3-system.html>, Accessed: 2020-01-02