ICVS2019 Tutorial:
Adaptive Vision for
Human Robot Collaboration
List of talks:
Conference page: https://icvs2019.org/content/tutorials
Previous edition: https://sites.google.com/site/avhrc2017/
Introduction to Adaptive Vision for Human Robot Collaboration
Speaker: Dimitri Ognibene, University of Essex
Slides: [here]
References:
- D Ognibene, L Mirante, L Marchegiani. 'Proactive Intention Recognition for Joint Human-Robot Search and Rescue Missions through Monte-Carlo Planning in POMDP Environments', ICSR, 2019.
- D Ognibene, G Baldassarre, 'Ecological Active Vision: Four Bio-Inspired Principles to Integrate Bottom-Up and Adaptive Top-Down Attention Tested With a Simple Camera-Arm Robot', IEEE Transactions on Autonomous Mental Development, 7(1), 3-25, 2015.
- K Lee, D Ognibene, H Chang, TK Kim, Y Demiris, 'STARE: Spatio-Temporal Attention RElocation for Multiple Structured Activities Detection', IEEE Transactions on Image Processing, 2015.
- D Ognibene, Y Demiris, 'Towards active event recognition', The 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013), 2013.
- Ognibene, D., Chinellato, E., Sarabia, M., & Demiris, Y. (2013). Contextual action recognition and target localization with an active allocation of attention on a humanoid robot. Bioinspiration & Biomimetics, 8(3), 035002.
- Ognibene, D., Pezzulo, G., & Baldassarre, G. (2010). How can bottom-up information shape learning of top-down attention-control skills?. In 2010 IEEE 9th International Conference on Development and Learning (pp. 231-237). IEEE.
Attention during Social Interaction
Speaker: Tom Foulsham, University of Essex
Slides: [here]
References:
- Kwart, D. G., Foulsham, T., & Kingstone, A. (2012). Age and beauty are in the eye of the beholder. Perception, 41(8), 925-938.
- Foulsham, T., Cheng, J. T., Tracy, J. L., Henrich, J., & Kingstone, A. (2010). Gaze allocation in a dynamic situation: Effects of social status and speaking. Cognition, 117(3), 319-331.
- Foulsham, T., Walker, E., & Kingstone, A. (2011). The where, what and when of gaze allocation in the lab and the natural environment. Vision research, 51(17), 1920-1931.
- Laidlaw, K. E., Foulsham, T., Kuhn, G., & Kingstone, A. (2011). Potential social interactions are important to social attention. Proceedings of the National Academy of Sciences, 108(14), 5548-5553.
- Ho, S., Foulsham, T., & Kingstone, A. (2015). Speaking and listening with the eyes: gaze signaling during dyadic interactions. PLoS ONE, 10(8), e0136905.
Introduction to the Projective Consciousness Model
Speaker: David Rudrauf, University of Geneva
References:
- Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2), 127-138.
- Rudrauf, D., Bennequin, D., Landini, G., Granic, I., Friston, K., Williford, K. (2017). A mathematical model of embodied consciousness. Journal of Theoretical Biology, 428, 106-131.
- Rudrauf, D., Bennequin, D., & Williford, K. (2018). The Moon Illusion explained by the Projective Consciousness Model. arXiv preprint arXiv:1809.04414. [not peer reviewed yet!]
- Rudrauf, D., & Debbané, M. (2018). Building a cybernetic model of psychopathology: beyond the metaphor. Psychological Inquiry.
- Williford, K., Bennequin, D., Friston, K., Rudrauf, D. (2018) The Projective Consciousness Model and Phenomenal Selfhood. Frontiers in Psychology.
Ecological Interaction in Virtual and Augmented Reality
Speaker: Manuela Chessa, Università di Genova
Slides: [here]
References:
- Ballestin, G., Solari, F., & Chessa, M. (2018, October). Perception and Action in Peripersonal Space: A Comparison Between Video and Optical See-Through Augmented Reality Devices. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 184-189). IEEE.
Modelling and imitating attentional behaviours in complex tasks
Speaker: Valsamis (Makis) Ntouskos, Sapienza Università di Roma
Slides: [here]
Videos: [here]
References:
- V. Ntouskos, F. Pirri, M. Pizzoli, A. Sinha, B. Cafaro. Saliency Prediction in the Coherence Theory of Attention. BICA 2013
- M. Qodseya, M. Sanzari, V. Ntouskos, F. Pirri. A3D: A device for studying gaze in 3D. ECCVW 2016 (EPIC)
- F. Pirri, M. Pizzoli, A. Rudi. A General Method for the Point of Regard Estimation in 3D Space. CVPR 2011
- L. Mauro, E. Alati, M. Sanzari, V. Ntouskos, G. Massimiani, F. Pirri. Deep execution monitor for robot assistive tasks. ECCVW 2018 (ACVR)
- E. Alati, L. Mauro, V. Ntouskos, F. Pirri. Anticipating next goal for robot plan prediction. IntelliSys 2019
- L. Mauro, F. Puja, S. Grazioso, V. Ntouskos, M. Sanzari, E. Alati, F. Pirri. Visual Search and Recognition for Robot Task Execution and Monitoring. APPIS 2018
- M. Sanzari, V. Ntouskos, F. Pirri. Bayesian image based 3D pose estimation. ECCV 2016
- M. Sanzari, V. Ntouskos, F. Pirri. Discovery and recognition of motion primitives in human activities. PLOS ONE 2019
- E. Alati, L. Mauro, V. Ntouskos, F. Pirri. Help by Predicting What to Do. ICIP 2019
Attention Measurement Technologies for Situation Awareness and Motivation in Human-Robot Collaboration
Speaker: Lucas Paletta, JOANNEUM RESEARCH
Slides: [here]
References:
- Paletta, L., et al. (2019), AMIGO - A Socially Assistive Robot for Coaching Multimodal Training of Persons with Dementia, in Korn, O., Ed., Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction, Springer, Human–Computer Interaction Series, DOI 10.1007/978-3-030-17107-0.
- Paletta L., Pszeida M., Nauschnegg B., Haspl T., & Marton R. (2020). Stress Measurement in Multi-tasking Decision Processes Using Executive Functions Analysis. In: Ayaz H. (eds) Advances in Neuroergonomics and Cognitive Engineering. AHFE 2019. Advances in Intelligent Systems and Computing, vol 953, pp. 344-356, Springer.
- Paletta, L., et al. (2019). Gaze based Human Factors Measurements for the Evaluation of Intuitive Human-Robot Collaboration in Real-time. Proc. 24th IEEE Conference on Emerging Technologies and Factory Automation, ETFA 2019, Zaragoza, Spain, September 10-13, 2019.
Introduction to Egovision in Human Robot Interaction
Speaker: Giovanni Maria Farinella, Università di Catania
Slides: [here]
References:
- A. Furnari, G. M. Farinella and S. Battiato. Recognizing Personal Locations from Egocentric Videos. IEEE Transactions on Human-Machine Systems, 2017 - https://iplab.dmi.unict.it/PersonalLocations/
- A. Furnari, S. Battiato, G. M. Farinella, Personal-Location-Based Temporal Segmentation of Egocentric Video for Lifelogging Applications, Journal of Visual Communication and Image Representation, 2018 - https://iplab.dmi.unict.it/PersonalLocationSegmentation/
- G. M. Farinella, G. Signorello, S. Battiato, A. Furnari, F. Ragusa, R. Leonardi, E. Ragusa, E. Scuderi, A. Lopes, L. Santo, M. Samarotto. VEDI: Vision Exploitation for Data Interpretation. In 20th International Conference on Image Analysis and Processing (ICIAP), 2019
- F. Ragusa, L. Guarnera, A. Furnari, S. Battiato, G. Signorello, G. M. Farinella, Localization of Visitors for Cultural Sites Management, In SIGMAP 2018
- F. Ragusa, A. Furnari, S. Battiato, G. Signorello, G. M. Farinella (2019). Egocentric Point of Interest Recognition in Cultural Sites. In International Conference on Computer Vision Theory and Applications (VISAPP)
- F. Ragusa, A. Furnari, S. Battiato, G. Signorello, G. M. Farinella, Egocentric Visitors Localization in Cultural Sites, ACM Journal on Computing and Cultural Heritage, 2019.
- F. Ragusa, A. Furnari, S. Battiato, G. Signorello, G. M. Farinella. VEDI-CH: Dataset and Fundamental Tasks for Visitors Behavioral Understanding using Egocentric Vision. Submitted to Pattern Recognition Letters, 2019 https://iplab.dmi.unict.it/VEDI-CH/
- F. L. M. Milotta, A. Furnari, S. Battiato, M. De Salvo, G. Signorello, G. M. Farinella (2019). Visitors Localization in Natural Sites Exploiting Ego Vision and GPS. In International Conference on Computer Vision Theory and Applications (VISAPP).
- F. L. M. Milotta, A. Furnari, S. Battiato, G. Signorello, G. M. Farinella (2019). Egocentric Visitors Localization in Natural Sites. Journal of Visual Communication and Image Representation. https://iplab.dmi.unict.it/EgoNature/
- E. Spera, A. Furnari, S. Battiato, G. M. Farinella, Egocentric Shopping Cart Localization, International Conference on Pattern Recognition, 2018
- S. A. Orlando, A. Furnari, S. Battiato, G. M. Farinella. Image-Based Localization with Simulated Egocentric Navigations. VISAPP 2019
- A. Furnari, S. Battiato, K. Grauman, G. M. Farinella, Next-Active-Object Prediction from Egocentric Videos, Journal of Visual Communication and Image Representation, 2017
- D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro, T. Perrett, W. Price, M. Wray (2018). Scaling Egocentric Vision: The EPIC-KITCHENS Dataset. In European Conference on Computer Vision (ECCV)
- A. Furnari, G. M. Farinella. What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention. International Conference on Computer Vision (ICCV), oral, 2019. http://iplab.dmi.unict.it/rulstm/