MDT @HAI2018

A workshop of the 6th International Conference on Human-Agent Interaction


Trust is fundamental to the functioning of human societies (Baier, 1986); in a world where agents and machines are becoming part of our daily lives, trust will be fundamental to a functioning human-machine society as well. It is already an important issue in interactions with existing personal assistants like Siri (Cowan et al., 2017) and is likely to be a significant issue in future interactions with agents across a wide variety of domains. We should therefore make every effort to ensure that machines are designed to be trustworthy, based on accurate analyses of people's trust perceptions and trusting behaviour. While there is a large body of research studying trust as a perceptually measured phenomenon (e.g. through validated questionnaires), insight into behavioural measures of trust is lacking. This is concerning, given that intentions and attitudes do not clearly correlate with behaviour (Greenwald et al., 1988).

While there has been some research involving behavioural measures of trust in an agent - such as interpersonal distance (Haring et al., 2013), economic games (Torre et al., 2018), cooperation games (Chidambaram et al., 2012), simulated driving scenarios (Large & Burnett, 2014), and body language coding (Celiktutan & Gunes, 2016) - we lack a unifying definition of which types of behaviour we should be observing and measuring. Related concepts such as competence, intelligence, and safety also gravitate around trust, and should equally be taken into account.

This workshop aims to map out future work on trust in Human-Agent Interaction, bringing together leading researchers from industry and academia across a number of domains related to the study of trust, including Psychology, Computer Science, Design, Engineering, User Experience, Cognitive Science, and more. We aim to advance our common definition of trust, map best practices for measuring it and the concepts related to it, and share ideas on how we can design trustworthy interactions with agents.

The rough outline for this full day workshop is as follows:

  • Morning: Short oral presentations. These will cover previous experimental work, theoretical considerations, or innovative ideas for new methodologies. Depending on funding, we will also invite a keynote speaker to open the morning session.
  • Afternoon: Concept and method mapping. A round-table discussion will draw together the points, concepts and methods raised in the oral presentations to create a map of the concepts, methods and approaches used in researching trust, as well as the priorities for future work in these areas.
  • Afternoon (following): Designing trustworthy machines. In this design exercise, attendees will sketch out a trust-related agent interaction using scenarios, storyboarding and paper prototyping methodologies. Participants will be encouraged to use the concepts and methods mapped in the earlier afternoon session to guide their design and its evaluation.
  • Closing: We will conclude with a panel session in which the chairs summarise the events of the day. We will also agree on key points to include in the workshop summary, which we hope can shape the future direction of this research topic.
