ICVS2019 Tutorial:

Adaptive Vision for

Human Robot Collaboration

List of talks:

Introduction to the Projective Consciousness Model

Speaker: David Rudrauf, University of Geneva

References:

  • Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2), 127-138.
  • Rudrauf, D., Bennequin, D., Landini, G., Granic, I., Friston, K., & Williford, K. (2017). A mathematical model of embodied consciousness. Journal of Theoretical Biology, 428, 106-131.
  • Rudrauf, D., Bennequin, D., & Williford, K. (2018). The Moon Illusion explained by the Projective Consciousness Model. arXiv preprint arXiv:1809.04414. [not yet peer reviewed]
  • Rudrauf, D., & Debbané, M. (2018). Building a cybernetic model of psychopathology: beyond the metaphor. Psychological Inquiry.
  • Williford, K., Bennequin, D., Friston, K., & Rudrauf, D. (2018). The Projective Consciousness Model and Phenomenal Selfhood. Frontiers in Psychology.

Ecological Interaction in Virtual and Augmented Reality

Speaker: Manuela Chessa, Università di Genova

Slides: [here]

References:

  • Ballestin, G., Solari, F., & Chessa, M. (2018, October). Perception and Action in Peripersonal Space: A Comparison Between Video and Optical See-Through Augmented Reality Devices. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 184-189). IEEE.

Modelling and imitating attentional behaviours in complex tasks

Speaker: Valsamis (Makis) Ntouskos, Sapienza Università di Roma

Slides: [here]

Videos: [here]

References:

  • V. Ntouskos, F. Pirri, M. Pizzoli, A. Sinha, B. Cafaro. Saliency Prediction in the Coherence Theory of Attention. BICA 2013
  • M. Qodseya, M. Sanzari, V. Ntouskos, F. Pirri. A3D: A device for studying gaze in 3D. ECCVW 2016 (EPIC)
  • F. Pirri, M. Pizzoli, A. Rudi. A General Method for the Point of Regard Estimation in 3D Space. CVPR 2011
  • L. Mauro, E. Alati, M. Sanzari, V. Ntouskos, G. Massimiani, F. Pirri. Deep execution monitor for robot assistive tasks. ECCVW 2018 (ACVR)
  • E. Alati, L. Mauro, V. Ntouskos, F. Pirri. Anticipating next goal for robot plan prediction. IntelliSys 2019
  • L. Mauro, F. Puja, S. Grazioso, V. Ntouskos, M. Sanzari, E. Alati, F. Pirri. Visual Search and Recognition for Robot Task Execution and Monitoring. APPIS 2018
  • M. Sanzari, V. Ntouskos, F. Pirri. Bayesian image based 3D pose estimation. ECCV 2016
  • M. Sanzari, V. Ntouskos, F. Pirri. Discovery and recognition of motion primitives in human activities. PLOS One 2019
  • E. Alati, L. Mauro, V. Ntouskos, F. Pirri. Help by Predicting What to Do. ICIP 2019

Attention Measurement Technologies for Situation Awareness and Motivation in Human-Robot Collaboration

Speaker: Lucas Paletta, JOANNEUM RESEARCH

Slides: [here]

References:

  • Paletta, L., et al. (2019). AMIGO - A Socially Assistive Robot for Coaching Multimodal Training of Persons with Dementia. In Korn, O. (Ed.), Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction, Springer, Human-Computer Interaction Series. DOI 10.1007/978-3-030-17107-0.
  • Paletta L., Pszeida M., Nauschnegg B., Haspl T., & Marton R. (2020). Stress Measurement in Multi-tasking Decision Processes Using Executive Functions Analysis. In: Ayaz H. (eds) Advances in Neuroergonomics and Cognitive Engineering. AHFE 2019. Advances in Intelligent Systems and Computing, vol 953, pp. 344-356, Springer.
  • Paletta, L., et al. (2019). Gaze based Human Factors Measurements for the Evaluation of Intuitive Human-Robot Collaboration in Real-time. Proc. 24th IEEE Conference on Emerging Technologies and Factory Automation, ETFA 2019, Zaragoza, Spain, September 10-13, 2019.

Introduction to Egovision in Human Robot Interaction

Speaker: Giovanni Maria Farinella, Università di Catania

Slides: [here]

References:

  • A. Furnari, G. M. Farinella and S. Battiato. Recognizing Personal Locations from Egocentric Videos. IEEE Transactions on Human-Machine Systems, 2017 - https://iplab.dmi.unict.it/PersonalLocations/
  • A. Furnari, S. Battiato, G. M. Farinella, Personal-Location-Based Temporal Segmentation of Egocentric Video for Life logging Applications, Journal of Visual Communication and Image Representation, 2018 - https://iplab.dmi.unict.it/PersonalLocationSegmentation/
  • G. M. Farinella, G. Signorello, S. Battiato, A. Furnari, F. Ragusa, R. Leonardi, E. Ragusa, E. Scuderi, A. Lopes, L. Santo, M. Samarotto. VEDI: Vision Exploitation for Data Interpretation. In 20th International Conference on Image Analysis and Processing (ICIAP), 2019
  • F. Ragusa, L. Guarnera, A. Furnari, S. Battiato, G. Signorello, G. M. Farinella, Localization of Visitors for Cultural Sites Management, In SIGMAP 2018
  • F. Ragusa, A. Furnari, S. Battiato, G. Signorello, G. M. Farinella (2019). Egocentric Point of Interest Recognition in Cultural Sites. In International Conference on Computer Vision Theory and Applications (VISAPP)
  • F. Ragusa, A. Furnari, S. Battiato, G. Signorello, G. M. Farinella, Egocentric Visitors Localization in Cultural Sites, ACM Journal on Computing and Cultural Heritage, 2019.
  • F. Ragusa, A. Furnari, S. Battiato, G. Signorello, G. M. Farinella. VEDI-CH: Dataset and Fundamental Tasks for Visitors Behavioral Understanding using Egocentric Vision. Submitted to Pattern Recognition Letters, 2019 https://iplab.dmi.unict.it/VEDI-CH/
  • F. L. M. Milotta, A. Furnari, S. Battiato, M. De Salvo, G. Signorello, G. M. Farinella (2019). Visitors Localization in Natural Sites Exploiting Ego Vision and GPS. In International Conference on Computer Vision Theory and Applications (VISAPP).
  • F. L. M. Milotta, A. Furnari, S. Battiato, G. Signorello, G. M. Farinella (2019). Egocentric Visitors Localization in Natural Sites. Journal of Visual Communication and Image Representation. https://iplab.dmi.unict.it/EgoNature/
  • E. Spera, A. Furnari, S. Battiato, G. M. Farinella, Egocentric Shopping Cart Localization, International Conference on Pattern Recognition, 2018
  • S. A. Orlando, A. Furnari, S. Battiato, G. M. Farinella. Image-Based Localization with Simulated Egocentric Navigations. VISAPP 2019
  • A. Furnari, S. Battiato, K. Grauman, G. M. Farinella, Next-Active-Object Prediction from Egocentric Videos, Journal of Visual Communication and Image Representation, 2017
  • D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro and T. Perrett, W. Price, M. Wray (2018). Scaling Egocentric Vision: The EPIC-KITCHENS Dataset. In European Conference on Computer Vision
  • A. Furnari, G. M. Farinella. What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention. International Conference on Computer Vision (oral), 2019. http://iplab.dmi.unict.it/rulstm/