1. Ecological interaction in Augmented Reality (Presenter Manuela Chessa, 1 hour + 15 min)
1.1 Perception in AR/VR systems: how interaction in AR affects our senses
1.2 How misperception issues in AR/VR affect interaction
1.3 Visual fatigue and undesired effects
1.4 The role of technological solutions and devices in perception and interaction
Download the presentation here
2. Using vision to grasp objects (Presenter Guido Maiello, 1 hour + 15 min)
2.1 Visual representations of 3D shape and material for grasping
2.2 Egocentric and allocentric reference frames
2.3 A computational model of grasp selection
2.4 The road towards image-computable models of visual action planning
2.5 Neural correlates of visual grasp selection
2.6 Translating vision and motor neuroscience into AR/VR for technological and clinical advancements
Download the presentation here
3. Ecological perception in Virtual and Augmented Reality: a computational model (Presenter Fabio Solari, 1 hour + 15 min)
3.1 A computational model of visual perception for action tasks
3.2 Space-variant image representation and neural processing
3.3 How the modeled perception can inform the design of AR/VR systems
Download the presentation here
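The space-variant representation mentioned in 3.2 is commonly realized as a log-polar (retina-like) sampling of the image, dense near the fovea and sparse in the periphery. The sketch below is a minimal illustration of that general idea, not the presenter's specific model; the function name and parameters are hypothetical.

```python
import numpy as np

def logpolar_sample_grid(n_rings, n_sectors, r_min, r_max):
    """Cartesian coordinates of a log-polar sampling grid (hypothetical helper).

    Ring radii grow geometrically from r_min (fovea edge) to r_max (periphery),
    so sampling is dense near the center and sparse toward the periphery.
    Returns two (n_rings, n_sectors) arrays with the x and y sample positions.
    """
    rho = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    theta = 2.0 * np.pi * np.arange(n_sectors) / n_sectors
    rr, tt = np.meshgrid(rho, theta, indexing="ij")  # ring x sector grid
    return rr * np.cos(tt), rr * np.sin(tt)
```

Sampling an image at these positions yields a space-variant representation whose resolution falls off with eccentricity, mimicking the foveal/peripheral layout of the retina.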
4. Active Vision for Human Robot Collaboration (Presenter Dimitri Ognibene, 1 hour + 15 min)
4.1 Active Perception (AP): foveal anatomy and control of the eye
4.2 Design principles of systems that adaptively find and select relevant information, and their impact on both Robotics and Cognitive Neuroscience
4.3 Active Vision (AV) robotic models
4.4 An information-theoretic AV model for dynamic environments, where effective behaviour requires promptly recognizing hidden states (e.g. intentions), interactions (e.g. attraction), and spatial relationships among the elements of the environment
4.5 A neural model of the development of AV strategies in ecological tasks, such as exploring and reaching rewarding objects in a class of similar environments (the agent's world)
Download the presentation here
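Topic 4.4 refers to an information-theoretic model of active vision. As a rough illustration of the underlying principle only (not the presenter's model), an agent can choose its next fixation to minimize the expected posterior entropy over hidden states; all names and the toy observation model below are hypothetical.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_posterior_entropy(belief, lik):
    """For each fixation a, the expected entropy of the Bayesian posterior.

    belief[s]    : current belief p(s) over hidden states
    lik[a, s, o] : observation model p(o | s, a) for fixation location a
    """
    n_a, n_s, n_o = lik.shape
    out = np.zeros(n_a)
    for a in range(n_a):
        for o in range(n_o):
            joint = belief * lik[a, :, o]   # p(s, o | a)
            p_o = joint.sum()               # p(o | a)
            if p_o > 0:
                out[a] += p_o * entropy(joint / p_o)
    return out

def next_fixation(belief, lik):
    """Fixate the location expected to leave the least uncertainty."""
    return int(np.argmin(expected_posterior_entropy(belief, lik)))
```

With two candidate fixations, one informative and one whose observations are uniform regardless of the hidden state, the rule selects the informative one; the same greedy principle extends to recognizing intentions or spatial relations in dynamic scenes.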
5. First Person (Egocentric) Vision for Localization and Anticipation (Presenter Giovanni Maria Farinella, 1 hour + 15 min)
5.1 Egocentric Localization
5.2 Next Active Object Prediction
5.3 Action Anticipation
5.4 Open Challenges: The EPIC-KITCHENS Dataset
Download the presentation here
6. The Projective Consciousness Model and experimental Virtual Reality: integrating strong AI and VR for psychological science (Presenter David Rudrauf, 1 hour + 15 min)
6.1 Mathematical modeling of psychology
6.2 Projective Consciousness Model, cognitive and affective sciences
6.3 Computational implementations in artificial agents
6.4 VR as a common playground for artificial agents and humans
6.5 Virtual agents
6.6 Prediction and multimodal quantification of behavior in VR
6.7 Relevance to applications in normal and pathological contexts