Research

Overview

I am broadly interested in the interaction of humans with innovative systems. My main topics are:

  • 2021, ongoing: Artificial intelligence (multi-agent systems) for accessible interaction

  • 2018, ongoing: Virtual reality for people with and without visual impairments

    • The X-Road project, a street simulator for low vision and blind students

    • Raising awareness of visual impairments with VR

    • Accessible collaborative VR

  • 2017-2019: Audio-tactile augmented reality to make tactile graphics, maps, images and board games inclusive for people with and without visual impairments

    • Using the PapARt spatial augmented reality toolkit to enhance tactile media with virtual audio feedback

    • Providing a graphical user interface (GUI) authoring tool

    • Using PapARt as an authoring tool for audio-tactile content, in addition to using it to read audio-tactile content

    • Using PapARt to make existing board games inclusive for people with and without visual impairments (GameARt). GameARt both authors and reads audio-tactile content.

  • 2013-2016: Artificial intelligence for training in collaborative protocols

    • A normative multi-agent system to follow unexpected human activity in a crisis simulation (e.g. a flood)

    • A normative multi-agent system to interpret tabletop tangible interaction, including unexpected actions

    • A normative multi-agent system to provide remote and local feedback about unexpected actions

I describe the first two points in the next section, and the last point in the final section.

From 2018: Mixed Reality Inclusive of People with Visual Impairments

My research " Mixed reality Inclusive to People with Visual Impairments" has two themes: augmented reality with audio-tactile content, and virtual reality with audio and visual feedback.

Wait, what?!? Augmented and Virtual Reality for the blind?

By definition, these concepts are not about visual feedback, but about virtual feedback. The virtual feedback can be visual (as in mainstream technologies), but also audio and tactile.

Moreover, around nine out of ten people with visual impairments have low vision, and among the remaining tenth many blind people retain light perception. Most people with visual impairments (around 99%) can therefore still use their vision, and even rely heavily on it. Are you sighted? Imagine walking through a city at night: you are in a "low vision" situation, yet your vision still makes a big difference (try walking through the same city at night, but blindfolded!).

Last but not least: "inclusive" is an important part. Without visual interaction, sighted people are disappointed in the best case, and totally lost in the worst. It is important to keep visual content so that the systems remain inclusive of sighted people.

In my research, I study non-visual feedback, interfaces that are both accessible and visual, and how sighted people can use the same systems as well.

Virtual Reality

In this theme, I study the design of virtual reality (VR) accessible to people with and without visual impairments. In particular, I try to propose systems usable by end-users in real conditions and real contexts.

(January 2018-July 2018) Accessible VR smartphone application for a street simulator in Orientation & Mobility. See: Thevin, L., Briant, C., & Brock, A. M. X-Road: Virtual Reality Glasses for Orientation and Mobility Training of People with Visual Impairments. ACM Transactions on Accessible Computing, ACM, New York, NY, USA. In press.

In this work, we developed and studied an accessible street simulator. Thirteen blind and low vision students followed an O&M class with their instructor. The VR app runs on a smartphone and is entirely usable by both the teachers and the students.

Our lessons learned:

  • More accuracy in visual modelling is required, as people with visual impairments search for specific visual cues

  • If an object's audio feedback comes from multiple audio clips, the feedback should interpolate between the clips (see the sketch after this list). Otherwise, a suddenly more intense clip is understood as the object teleporting closer

  • Movement in VR cannot rely only on point-and-teleport, walk-in-place, flying or path redirection, because all these mechanisms rely on vision to select or update the new position. Walking the same way in VR as in the real world is a suitable approach

  • More accuracy in tracking the world and the user, and in positioning the virtual objects, is required, because vision may not correct or update the inconsistencies and discrepancies.
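
To illustrate the interpolation point above, here is a minimal sketch of a distance-based, equal-power cross-fade between a "near" and a "far" clip of the same object, written in Python. The audio source objects and the set_gain() call are hypothetical stand-ins for whatever the VR engine actually exposes; the sketch only shows the idea, not our implementation.

    # Minimal sketch of distance-based cross-fading between two audio clips
    # attached to the same virtual object (e.g. a "far" and a "near" recording
    # of a car). The set_gain() call is a hypothetical engine API.

    import math

    def crossfade_gains(distance, near_dist=2.0, far_dist=20.0):
        """Return (near_gain, far_gain) interpolated from the listener distance."""
        d = max(near_dist, min(far_dist, distance))   # clamp into [near_dist, far_dist]
        t = (far_dist - d) / (far_dist - near_dist)   # 0.0 far away, 1.0 right next to the object
        # Equal-power cross-fade: the combined loudness changes smoothly, so the
        # object is not perceived as suddenly "teleporting" closer.
        return math.sin(t * math.pi / 2), math.cos(t * math.pi / 2)

    def update_object_audio(near_source, far_source, distance):
        near_gain, far_gain = crossfade_gains(distance)
        near_source.set_gain(near_gain)               # hypothetical engine call
        far_source.set_gain(far_gain)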


(May 2019) What are the challenges for inclusive VR for people with and without visual impairments? See: Thévin, L., & Brock, A. (2019, May). How to move from Inclusive Systems to Collaborative Systems: the Case of Virtual Reality for teaching O&M. CHI 2019 Workshop "Hacking blind Navigation"

In this work, we link theoretical frameworks about collaborative systems and CSCW with the challenges of accessible VR for people with visual impairments. The introduction presents the authors' previous work.


(November 2019-March 2020) Guidelines for Inclusive Virtual Avatars. See: Lauren Thevin, Tonja Machulla: Guidelines for Inclusive Virtual Agents: How do Persons with Visual Impairments Detect and Recognize others and their Activities, ICCHP. 2020.



(March 2020) VR demo around three misconceptions about blindness and visual impairments. See: Thévin, L., & Machulla, T. (2020, March). Three Common Misconceptions about Visual Impairments. IEEE VR 2020 3DUI Contest.

In this work, funded by UNADEV, we explore VR as a tool for raising awareness about blindness and visual impairments. We designed the experience around misconceptions about visual impairments, identified with professionals from the IRSA specialized school in Bordeaux. If you want to try the web demo: https://sites.google.com/ensc.fr/laurenthevin/demos/demo-3dui-2020.


(September 2019-March 2020) This work is in the publication process. More information soon!

See: (not published)

Audio-tactile content (2017-2019)

An overview of this theme is available in the technical report of the VISTE project (from page 31), along with lessons learned (from page 43), readable here: https://www.researchgate.net/publication/337948430_VISTE_Guide_of_Good_Practice_for_Policy_Recommendation_Empowering_Spatial_Thinking_of_Students_with_Visual_Impairment .

In this theme, I study how to make graphics accessible through interactivity. Visual graphics (images, maps, etc.) can be made accessible to people with visual impairments. Various relief representations exist: raised-line prints (the ink on a special paper swells after a short time in a special oven), 3D printing and laser cutting, small-scale models, and tactile graphics made with various materials. Complementary to braille captions, it is possible to associate audio with tactile areas: when touching part of the graphic, the audio caption is played.
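
As an illustration of this reading interaction, here is a minimal sketch in Python of how a detected touch position could be matched against authored zones and turned into a spoken caption. The Zone format and the speak() callback are assumptions made for the example, not the actual PapARt API.

    # Minimal sketch of the reading side of an audio-tactile graphic: the touch
    # position detected on the tactile medium is matched against authored zones,
    # and the zone's caption is sent to speech synthesis.

    from dataclasses import dataclass

    @dataclass
    class Zone:
        name: str
        x: float          # top-left corner, in the tactile document's coordinates
        y: float
        width: float
        height: float
        caption: str      # text read aloud by speech synthesis

        def contains(self, tx: float, ty: float) -> bool:
            return (self.x <= tx <= self.x + self.width
                    and self.y <= ty <= self.y + self.height)

    def on_touch(zones, tx, ty, speak):
        """Play the audio caption of the first zone containing the touch point."""
        for zone in zones:
            if zone.contains(tx, ty):
                speak(zone.caption)
                return zone
        return None

    # Example usage with a print-based stand-in for text-to-speech:
    zones = [Zone("train station", 12.0, 40.0, 30.0, 15.0,
                  "The train station, with bus stops nearby.")]
    on_touch(zones, 20.0, 45.0, speak=print)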

(Summer 2017) See: Albouys-Perrois, J., Laviole, J., Briant, C., & Brock, A. M. (2018, April). Towards a multisensory augmented reality map for blind and low vision people: A participatory design approach. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (p. 629). ACM.

Before starting my position on this topic, I first participated in an experiment on a system using the PapARt framework to detect touch with a depth camera (like a Kinect). The system gives information when the user touches part of the map (exploration mode), and gives instructions to build a map with automatic correction (construction mode).


(January 2018- March 2018) GUI to augment existing objects. See: Thevin, L. & Brock, A. M. Augmented Reality for People with Visual Impairments: Designing and Creating Audio-Tactile Content from Existing Objects. In: International Conference on Computers Helping People with Special Needs. Springer, Cham, 2018, pp. 193-200.

I continued this research on the augmentation of real objects using a graphical user interface. In the experiment, we asked the end-users (teachers in a specialized school) to describe the tactile media they wanted to create and the associated audio feedback (to ensure our features correspond to the users' needs). We then asked them to create their tactile media, giving them only a manual (to verify they could create the augmented content on their own). The process consists of drawing interactive zones on a picture of the tactile object and associating the text to be read by speech synthesis. The content created by the teachers was then given to students with visual impairments, to verify that it was accessible.
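
As an illustration of the authoring side, here is a minimal sketch of what the created content could look like: rectangular zones with their spoken text, saved to a simple description file. The JSON layout is an assumption made for the example, not the actual file format of our tool.

    # Minimal sketch of the authoring side: the teacher draws interactive zones
    # on a picture of the tactile object and types the text to be spoken; the
    # result is saved as a simple description file (illustrative format only).

    import json

    def add_zone(document, name, rect, caption):
        """Append an interactive zone (rectangle + spoken text) to the document."""
        x, y, w, h = rect
        document["zones"].append(
            {"name": name, "x": x, "y": y, "width": w, "height": h, "caption": caption}
        )

    document = {"image": "tactile_map_photo.png", "zones": []}
    add_zone(document, "museum", (55, 10, 25, 20), "The museum, open on weekdays.")

    with open("tactile_map_zones.json", "w", encoding="utf-8") as f:
        json.dump(document, f, ensure_ascii=False, indent=2)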


(November 2018- July 2019) Augmented reality to augment existing objects. See: Thevin, L., Jouffrais, C., Rodier, N., Palard, N., Hachet, M., & Brock, A. M. Creating Accessible Interactive Audio Tactile Drawings using Spatial Augmented Reality. In: Proceedings of the 2019 ACM International Conference on Interactive Surfaces and Spaces (ISS'19). 2019.

In a second work, we studied how the augmented reality device PapARt can be used to create the interactive content directly, without moving from PapARt to a computer and back. We studied the use of such a system in schools and compared its content creation features with those of the GUI system from the previous experiment.


(March 2019-September 2019) Audio-tactile augmented reality to improve inclusivity of existing board games. See: (accepted) L. Thevin, N. Rodier, B. Oriola, M. Hachet, A. Brock, C. Jouffrais: Inclusive Adaptation of Existing Board Games for Gamers with and without Visual Impairments using a Spatial Augmented Reality Framework for Touch Detection and Audio Feedback, PACMHCI ISS. 2021



Before 2016: Multi-agent systems for training in crisis management

I defended my thesis in 2016. My PhD research is about multi-agent architectures to support humans in learning collaborative protocols.

I propose to use a multi-agent system to detect and manage unexpected actions in real time, as these are the interesting actions in training:

  • follow whether the users' actions (i.e. the "activity" in ergonomics) comply with the emergency plan (i.e. the "task" in ergonomics)

  • analyze whether the user was wrong (i.e. they should have followed the plan, and the user's actions should be revised) or right (i.e. the plan is not consistent with the goals, and the plan should be revised). We called this analysis "evaluative monitoring" (a minimal sketch follows this list).
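
A minimal sketch of this evaluative monitoring, with toy data structures standing in for the actual normative agents and emergency plan:

    # Minimal sketch of "evaluative monitoring": the observed activity is compared
    # with the emergency plan (the task); a deviation is attributed either to the
    # user (actions to revise) or to the plan (plan to revise), depending on
    # whether the action still satisfies the training goals. Toy data only.

    def evaluative_monitoring(action, plan, goals):
        """Classify one observed action with respect to the plan and the goals."""
        expected = action in plan                      # does the activity follow the task?
        achieves_goals = all(goal(action) for goal in goals)

        if expected:
            return "expected action"
        if achieves_goals:
            # The user deviated but still satisfies the goals: the plan is at fault.
            return "unexpected action, goals satisfied: revise the plan"
        return "unexpected action, goals not satisfied: revise the user's actions"

    # Example usage with a toy flood-management plan:
    plan = {"alert population", "open shelter"}
    goals = [lambda a: a != "ignore alarm"]            # toy goal: never ignore the alarm
    print(evaluative_monitoring("evacuate school", plan, goals))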

Moreover, the multi-agent system lets the users make unexpected actions:

  • using tangible activity as words and syntax, i.e. it enables meaningful "sentences" (i.e. actions) to be created even if they were not expected

  • giving feedback on unexpected actions using syntax and privacy rules

  • the syntaxes are rules that can be followed or not. The feedback can be adapted, from "tangible pattern not understandable" to "action represented by the tangible pattern not correct regarding the plan" (see the sketch below).

Our system contributes to group awareness by giving feedback on each user's actions with respect to their own organization (e.g. the city) and the other organizations (e.g. the firefighters and the police).
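
A minimal sketch of this graded feedback, with a toy vocabulary, syntax and plan standing in for the actual crisis-management model:

    # Minimal sketch of graded feedback on tangible interaction: tangible tokens
    # are first parsed like words of a sentence; only an understandable "sentence"
    # is then checked against the plan. Vocabulary, syntax and plan are toy data.

    VOCABULARY = {"send", "ambulance", "firetruck", "district-3"}
    SYNTAX = {("send", "ambulance", "district-3"),     # understandable "sentences"
              ("send", "firetruck", "district-3")}
    PLAN = {("send", "ambulance", "district-3")}       # actions expected by the plan

    def feedback(tokens):
        sentence = tuple(tokens)
        if not all(token in VOCABULARY for token in tokens) or sentence not in SYNTAX:
            return "tangible pattern not understandable"
        if sentence not in PLAN:
            return "action represented by the tangible pattern not correct regarding the plan"
        return "action understood and consistent with the plan"

    # Understandable sentence, but not the action the plan expects:
    print(feedback(["send", "firetruck", "district-3"]))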


See (PhD thesis, in French): Thevin, L. Un système multi-agent normatif pour le soutien évaluatif à la collaboration humain-machine : application à la gestion de crise [A normative multi-agent system for evaluative support of human-machine collaboration: application to crisis management]. Doctoral thesis, 2016.