Publications & Software
Azagra, P., Sostres, C., Ferrández, Á., et al. (2023). Endomapper dataset of complete calibrated endoscopy procedures. Scientific Data, 10(1), 671. https://arxiv.org/abs/2204.14240
Dataset: https://www.synapse.org/#!Synapse:syn26707219/wiki/615178
Software: https://github.com/endomapper/EM_Dataset-ToolSegmentation
https://github.com/endomapper/EM_Dataset-PhotometricCalibration
Barbed, O. L., Montiel, J. M., Fua, P., & Murillo, A. C. (2023, October). Tracking adaptation to improve SuperPoint for 3D reconstruction in endoscopy. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 583-593). Cham: Springer Nature Switzerland.
https://doi.org/10.1007/978-3-031-43907-0_56
https://infoscience.epfl.ch/server/api/core/bitstreams/241b4b88-d223-4c96-8804-8ed1803c0d7f/content
Software: https://github.com/LeonBP/SuperPointTrackingAdaptation
Barbed, O. L., Chadebecq, F., Morlana, J., Montiel, J. M., & Murillo, A. C. (2022, September). SuperPoint features in endoscopy. In MICCAI Workshop on Imaging Systems for GI Endoscopy (pp. 45-55). Cham: Springer Nature Switzerland.
https://doi.org/10.1007/978-3-031-21083-9_5
https://arxiv.org/abs/2203.04302
Software: https://github.com/LeonBP/SuperPointEndoscopy
Barbed, O. L., Oriol, C., Millán, P. A., & Murillo, A. C. (2022). Semantic analysis of real endoscopies with unsupervised learned descriptors. In Medical Imaging with Deep Learning.
https://openreview.net/pdf?id=aQchDrGRkM-
Software: https://github.com/LeonBP/VideoSegmentation
Barbed, O. L. (2020). Extracción de características en imágenes de procedimientos médicos con técnicas de deep learning (Feature extraction on medical images with deep learning) (Master's dissertation, Universidad de Zaragoza).
Batlle, V. M., Montiel, J. M., Fua, P., & Tardós, J. D. (2023, October). LightNeuS: Neural surface reconstruction in endoscopy using illumination decline. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 502-512). Cham: Springer Nature Switzerland.
https://doi.org/10.1007/978-3-031-43999-5_48
https://arxiv.org/abs/2309.02777
Software: https://github.com/endomapper/LightNeuS
Batlle, V. M., Montiel, J. M., & Tardós, J. D. (2022, October). Photometric single-view dense 3D reconstruction in endoscopy. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 4904-4910). IEEE.
https://doi.org/10.1109/IROS47612.2022.9981742
https://arxiv.org/abs/2204.09083
Batlle, V. M., & Tardós, J. D. (2021). Reconstrucción 3D a escala real a partir de imágenes monoculares de endoscopio (Real-scale 3D reconstruction from monocular endoscope images) (Master's dissertation, Universidad de Zaragoza).
https://zaguan.unizar.es/record/112261/files/TAZ-TFM-2021-1495.pdf
Caramalau, R., Bhattarai, B., & Stoyanov, D. (2023). Federated Active Learning for Target Domain Generalisation.
https://arxiv.org/abs/2312.02247
Software: https://github.com/razvancaramalau/FEDALV
Chadebecq, F., Mountney, P., Ahmad, O. F., Kader, R., Lovat, L. B., & Stoyanov, D. (2020). Structure-from-motion analysis may generate an accurate automated bowel preparation score. UEG Journal, Abstract Issue, 8, 765.
https://www.nxtbook.com/ueg/UEG/Abstracts/index.php#/p/764
Daher, R., Vasconcelos, F., & Stoyanov, D. (2023). A temporal learning approach to inpainting endoscopic specularities and its effect on image correspondence. Medical Image Analysis, 90, 102994.
https://arxiv.org/abs/2203.17013
https://doi.org/10.1016/j.media.2023.102994
Elvira, R., Tardós, J. D., & Montiel, J. M. (2024). CudaSIFT-SLAM: multiple-map visual SLAM for full procedure mapping in real human endoscopy.
https://arxiv.org/abs/2405.16932
Gómez-Rodríguez, J. J., Montiel, J. M., & Tardós, J. D. (2024). NR-SLAM: Non-rigid monocular SLAM. IEEE Transactions on Robotics, 40, 4252-4264.
https://doi.org/10.1109/TRO.2024.3422004
https://arxiv.org/abs/2308.04036
Software: https://github.com/endomapper/NR-SLAM
Gómez-Rodríguez, J. J., Lamarca, J., Morlana, J., Tardós, J. D., & Montiel, J. M. (2021, May). SD-DefSLAM: Semi-direct monocular SLAM for deformable and intracorporeal scenes. In 2021 IEEE International Conference on Robotics and Automation (ICRA) (pp. 5170-5177). IEEE.
https://arxiv.org/abs/2010.09409
https://doi.org/10.1109/ICRA48506.2021.9561512
Software: https://github.com/UZ-SLAMLab/SD-DefSLAM
Huang, B., Wang, Y., Nguyen, A., Elson, D., Vasconcelos, F., & Stoyanov, D. (2024). High-fidelity endoscopic image synthesis by utilizing depth-guided neural surfaces. In Neural Rendering Intelligence Workshop, CVPR Workshops.
https://arxiv.org/pdf/2404.13437
Jin, Y., Yu, Y., Chen, C., Zhao, Z., Heng, P. A., & Stoyanov, D. (2022). Exploring intra- and inter-video relation for surgical semantic scene segmentation. IEEE Transactions on Medical Imaging, 41(11), 2991-3002.
https://arxiv.org/abs/2203.15251
https://doi.org/10.1109/TMI.2022.3177077
Software: https://github.com/YuemingJin/STswinCL
Lamarca, J., Rodríguez, J. J. G., Tardós, J. D., & Montiel, J. M. (2022). Direct and sparse deformable tracking. IEEE Robotics and Automation Letters, 7(4), 11450-11457.
https://doi.org/10.1109/LRA.2022.3201253
https://arxiv.org/abs/2109.07370
Lamarca, J. (2021). Monocular SLAM for deformable scenarios (Doctoral dissertation, Universidad de Zaragoza).
https://zaguan.unizar.es/record/110889/files/TESIS-2022-058.pdf
Lamarca, J., Parashar, S., Bartoli, A., & Montiel, J. M. M. (2020). DefSLAM: Tracking and mapping of deforming scenes from monocular sequences. IEEE Transactions on Robotics, 37(1), 291-303.
https://doi.org/10.1109/TRO.2020.3020739
https://arxiv.org/abs/1908.08918
Software: https://github.com/UZ-SLAMLab/DefSLAM
Makki, K., & Bartoli, A. (2024, May). Reconstructing the normal and shape at specularities in endoscopy. In 2024 IEEE International Symposium on Biomedical Imaging (ISBI) (pp. 1-5). IEEE.
https://doi.org/10.1109/ISBI56570.2024.10635480
https://arxiv.org/pdf/2311.18299
Makki, K., & Bartoli, A. (2023, April). Normal reconstruction from specularity in the endoscopic setting. In 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI) (pp. 1-5). IEEE.
https://doi.org/10.1109/ISBI53787.2023.10230672
Mariyanayagam, D., & Bartoli, A. (2024). The shading isophotes: Model and methods for Lambertian planes and a point light. Computer Vision and Image Understanding, 248, 104135.
https://doi.org/10.1016/j.cviu.2024.104135
https://encov.ip.uca.fr/publications/pubfiles/2024_Mariyanayagam_etal_CVIU_isophote.pdf
Morlana, J., Tardós, J. D., & Montiel, J. M. (2024, October). Topological SLAM in colonoscopies leveraging deep features and topological priors. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 733-743). Cham: Springer Nature Switzerland.
https://doi.org/10.1007/978-3-031-72120-5_68
https://arxiv.org/pdf/2409.16806
Software: https://github.com/endomapper/ColonSLAM
Morlana, J., Tardós, J. D., & Montiel, J. M. M. (2024, May). ColonMapper: topological mapping and localization for colonoscopy. In 2024 IEEE International Conference on Robotics and Automation (ICRA) (pp. 6329-6336). IEEE.
https://doi.org/10.1109/ICRA57147.2024.10610426
https://arxiv.org/pdf/2305.05546
Software: https://github.com/jmorlana/ColonMapper
Morlana, J., & Montiel, J. M. M. (2023, May). Reuse your features: unifying retrieval and feature-metric alignment. In 2023 IEEE International Conference on Robotics and Automation (ICRA) (pp. 6072-6079). IEEE.
https://doi.org/10.1109/ICRA48891.2023.10160501
https://arxiv.org/abs/2204.06292
Software: https://github.com/jmorlana/DRAN
Morlana, J., Millán, P. A., Civera, J., & Montiel, J. M. (2021, July). Self-supervised visual place recognition for colonoscopy sequences. In Medical Imaging with Deep Learning.
https://openreview.net/pdf?id=tgkEqYyA12p
Rodriguez-Puigvert, J., Batlle, V. M., Montiel, J. M. M., Martinez-Cantin, R., Fua, P., Tardós, J. D., & Civera, J. (2023). LightDepth: Single-view depth self-supervision from illumination decline. In IEEE/CVF International Conference on Computer Vision (ICCV). IEEE.
https://arxiv.org/pdf/2308.10525
Rau, A., Bano, S., Jin, Y., Azagra, P., Morlana, J., Kader, R., ... & Stoyanov, D. (2024). SimCol3D—3D reconstruction during colonoscopy challenge. Medical Image Analysis, 96, 103195.
https://doi.org/10.1016/j.media.2024.103195
https://arxiv.org/pdf/2307.11261
Rau, A., Bhattarai, B., Agapito, L., & Stoyanov, D. (2023). Bimodal camera pose prediction for endoscopy. IEEE Transactions on Medical Robotics and Bionics, 5(4), 978-989.
https://arxiv.org/abs/2204.04968
Recasens Lafuente, D., Oswald, M. R., Pollefeys, M., & Civera, J. (2024). The Drunkard’s Odometry: Estimating Camera Motion in Deforming Scenes. Advances in Neural Information Processing Systems, 36.
https://arxiv.org/abs/2306.16917
Software: https://github.com/UZ-SLAMLab/DrunkardsOdometry
Recasens, D., Lamarca, J., Facil, J. M., Montiel, J. M. M., & Civera, J. (2021). Endo-Depth-and-Motion: Localization and reconstruction in endoscopic videos using depth networks and photometric constraints. IEEE Robotics and Automation Letters.
https://doi.org/10.1109/LRA.2021.3095528
https://arxiv.org/abs/2103.16525
Software: https://github.com/UZ-SLAMLab/Endo-Depth-and-Motion
Recasens, D. (2020). Estimación de profundidad con redes neuronales profundas en vídeos de endoscopias (Depth estimation with deep neural networks in endoscopy videos) (Master's dissertation, Universidad de Zaragoza).
https://zaguan.unizar.es/record/101312/files/TAZ-TFM-2020-1450.pdf
Rodriguez-Puigvert, J., Recasens, D., Civera, J., & Martinez-Cantin, R. (2022, September). On the uncertain single-view depths in colonoscopies. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 130-140). Cham: Springer Nature Switzerland.
https://doi.org/10.1007/978-3-031-16437-8_13
https://arxiv.org/pdf/2112.08906
Sengupta, A., & Bartoli, A. (2021). Colonoscopic 3D reconstruction by tubular non-rigid structure-from-motion. International Journal of Computer Assisted Radiology and Surgery, 16(7), 1237-1241.
https://doi.org/10.1007/s11548-021-02409-x
https://encov.ip.uca.fr/publications/pubfiles/2021_Sengupta_etal_IPCAI_tubular.pdf
Subedi, R., Gaire, R. R., Ali, S., Nguyen, A., Stoyanov, D., & Bhattarai, B. (2023, October). A client-server deep federated learning for cross-domain surgical image segmentation. In MICCAI Workshop on Data Engineering in Medical Imaging (pp. 21-33). Cham: Springer Nature Switzerland.
https://doi.org/10.1007/978-3-031-44992-5_3
https://arxiv.org/abs/2306.08720
Tomasini, C., Riazuelo, L., & Murillo, A. C. (2024, October). Sim2Real in Endoscopy Segmentation with a Novel Structure Aware Image Translation. In International Workshop on Simulation and Synthesis in Medical Imaging (pp. 89-101). Cham: Springer Nature Switzerland.
https://doi.org/10.1007/978-3-031-73281-2_9
Software: https://github.com/ropertUZ/Sim2Real-EndoscopySegmentation
Tomasini, C., Riazuelo, L., Murillo, A. C., & Alonso, I. (2022). Efficient tool segmentation for endoscopic videos in the wild. In Medical Imaging with Deep Learning.
https://proceedings.mlr.press/v172/tomasini22a/tomasini22a.pdf
Wang, J., Jin, Y., Stoyanov, D., & Wang, L. (2023). FedDP: Dual personalization in federated medical image segmentation. IEEE Transactions on Medical Imaging, 43(1), 297-308.
https://ieeexplore.ieee.org/abstract/document/10194959
https://doi.org/10.1109/TMI.2023.3299206