PUBLICATIONS:

Mohammad Moghimi, Pablo Azagra, Luis Montesano, Ana C Murillo, and Serge Belongie. Experiments on an RGB-D wearable vision system for egocentric activity recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 597–603, 2014.

Pablo Azagra, Yoan Mollard, Florian Golemo, Ana Cristina Murillo, Manuel Lopes, and Javier Civera. A multimodal human-robot interaction dataset. In FILM Workshop at NIPS 2016, 2016.

Pablo Azagra, Florian Golemo, Yoan Mollard, Manuel Lopes, Javier Civera, and Ana C Murillo. A multimodal dataset for object model learning from natural human-robot interaction. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6134–6141. IEEE, 2017.

Pablo Azagra, Javier Civera, and Ana C Murillo. Finding regions of interest from multimodal human-robot interactions. In GLU Workshop, Interspeech 2017, 2017.

Pablo Azagra, Ana Cristina Murillo, Manuel Lopes, and Javier Civera. Incremental object model learning from multimodal human-robot interactions. In Workshop on Visually Grounded Interaction and Language (ViGIL) at NeurIPS 2018, 2018.

Pablo Azagra, Ana Cristina Murillo, and Javier Civera. Incremental learning of object models from natural human-robot interactions. IEEE Transactions on Automation Science and Engineering (T-ASE), 2020.

J. Morlana, P. Azagra, J. Civera, and J. M. Montiel. Self-supervised visual place recognition for colonoscopy sequences. intelligence, 41(7), pages 1655–1668.

O. L. Barbed, C. Oriol, P. A. Millán, and A. C. Murillo. Semantic analysis of real endoscopies with unsupervised learned descriptors. In Medical Imaging with Deep Learning (MIDL), 2022.

P. Azagra, C. Sostres, Á. Ferrandez, L. Riazuelo, C. Tomasini, O. L. Barbed, ..., and J. M. Montiel. EndoMapper dataset of complete calibrated endoscopy procedures. arXiv preprint arXiv:2204.14240, 2022.

CODE:

  • All my code is available on my GitHub account: link.