research

research lines

We examine the meanings, values and beliefs, subjectivities and identities, and logics of power embedded in the multiple narratives and practices articulating AIDA technologies. We address these articulations, which circulate across diverse spaces, timescales, institutions, and scientific and technical systems, through three research lines:

(i) Biometrics, security and policing

(ii) Public involvement, ethics, and critique

(iii) Visual representations, arts, and futuring

ongoing projects

AIDA in healthcare: objects, visual representations, and futuring

The materialities and imageries of emerging technologies related to AIDA are reconfiguring healthcare and medicine, expanding the foreseeable futures of our health. From conception to ageing and death, this project focuses on concrete uses of AIDA in healthcare and biomedical research, exploring health-related experiences, multiple temporalities, and the visual and artistic representations through which the ‘natural’ becomes artificially mediated. As these technological and bioscientific innovations circulate before widespread real-life application, activities of future-casting stand out. How do science and fiction intertwine in the foreseeable future of AIDA-mediated health? What trajectories shape scientific revolution, from fiction to acceptable and feasible science? What concept of future healthcare is being created and disseminated? To what extent is this future contested? What is the role of AIDA in shaping such imaginaries, in the ambiguous and emergent space between science fiction and science fact? How do social values related to ethics frame what is considered desirable for the future of AIDA-mediated health? What impacts can these emerging technologies and imageries have on people and society, regarding the way we experience, see, and conceptualise bodies, health, illness, difference, care, resistance, or life itself? (Researchers: Susana Silva, Susana de Noronha, and Emília Rodrigues)


Big Data in policing

Big Data technologies – a set of techniques allowing the gathering of massive amounts of data originating from diverse sources – are commonly represented by Big Tech, popular media, and science fiction as a panacea for crime control. This PhD project explores the expectations of police officers in Portugal regarding the potential role of Big Data in the governance of crime. Preliminary results show the reproduction, resistance, and (re)invention of powerful narratives by police officers, and the emergence of complex arrangements between “hype” and “disappointment” visions of Big Data. These hybrid expectations calibrate broad cultural narratives against the lived experiences and experiential knowledge of police officers. (PI: Laura Neiva)


Big Data in tourism

In recent years, there have been substantial investments in the digital transformation of the tourism industry. This accelerated shift has led to a strong emphasis on employing Big Data techniques to predict tourism trends and behaviours. This PhD project engages with diverse stakeholders closely involved in digitisation processes within the tourism sector. Results reveal a flexible spectrum of three distinct discursive narratives: techno-optimism, performativity of ethics, and uncertain futures. While stakeholders tend to position Big Data as an "inevitable" and "natural" outcome of technological progress in the tourism sector, they also emphasise the importance of privacy, data protection, and ethical practices. These views serve as a defence against criticism, while ensuring credibility and garnering support for maintaining business as usual. (PI: Maria João Vaz)


Data colonialism, algorithmic coloniality, and decolonial AI

By recognising the historical continuity of structural coloniality, we aim to address AI, data, and algorithms as sites and manifestations of digital colonialism in its multiple forms of extraction, exploitation, and othering. Aiming to contribute to the emergent literature on decolonial theory, we focus on digital structures, socio-cultural narratives, knowledge systems, and ways of developing and using technology that rest on systems, institutions, and values reproducing the coloniality of power, which persists from the past and remains unquestioned in the present. One aim of this project is to give a voice to vulnerable and underrepresented communities, while developing methods and theories that draw on critical race studies, decolonial theories, reparatory approaches, and new and alternative data epistemologies. (Researchers: Sheila Khan and Helena Machado)


Governance of biometric data and controversies in security contexts 

The type and amount of data that can be retrieved from the human body continue to grow. This includes, for example, genetic material, fingerprints, facial images, and movement patterns. Despite their different nature, such biometric data are increasingly mobilised to regulate behaviours and to target criminalised and vulnerable populations. At the same time, critical voices claim that uses of biometric data raise serious privacy and human rights concerns. This project examines socio-technical controversies related to data-intensive uses of biometrics in security contexts. Our research questions are: Who are the actors involved in the controversies related to biometric data, what roles do they play, and how does continuous technological innovation change who is involved? What roles do policy-making, regulation, and industry play in shaping and reframing such controversies? (Researchers: Rafaela Granja, Filipa Queirós, Helena Machado, and Laura Neiva)


Responsible AI, ethics, and publics' engagement

In the global pursuit of advancing AI technologies, governments, Big Tech companies, AI scientists, and policy-makers are increasingly recognising the imperative to address the social and ethical dimensions of AI development. This paradigm shift includes a fundamental consideration: the concept of Responsible AI, in which, alongside overarching programme objectives for AI integration, there is a growing emphasis on publics’ engagement. This recognition stems from the understanding that AI technology is subject to influence, transformation, and redirection by a multitude of social and ethical considerations that extend beyond technical expertise alone. This project delves into the various ways in which publics interact with AI in diverse social, political, and cultural contexts. We critically examine the underlying assumptions and methodologies driving the effort to better align AI with its societal contexts, all under the umbrella of Responsible AI, and we contemplate the broader implications of this endeavour. Key questions we explore include: Why and how might publics play a pivotal role in AI development and implementation? What mechanisms initiate diverse forms of publics’ engagement? What intricate ethical decisions and debates are enacted, and what alternative futures emerge as a result? (Researchers: Helena Machado, Susana Silva, Laura Neiva, Rafaela Granja, Emília Araújo and Maria João Vaz).