Whereas ‘strong AI’ (i.e. future human-level or superhuman artificial intelligence, or ‘superintelligence’) hardly bears any relevance to society beyond sci-fi fantasies and the theoretical speculations it evokes, it is ‘weak AI’ (that is, artificial intelligence with specific functions, applied to particular tasks) that has a huge impact on the human condition today and therefore primarily deserves attention.
Across the vast array of applications of weak AI in various sectors, one method stands out in its versatility: ‘predictive analytics’, a kind of ‘statistical AI’ capable of extrapolating future probabilities from data patterns of the past. It is AI-powered predictive analytics that underpins not only recommendation algorithms on internet platforms, but also decision-making in finance (on the stock market and in domains like credit policy and insurance), in job applications, and in criminal justice and policing – to name just a few areas.
The societal impact of AI’s predictive analytics is not difficult to guess: it increases inequality and injustice; it makes it even harder for the disadvantaged to leave their socio-cultural pigeonhole. Instead of their past opening up the possibility of an open future, their past and their future end up in a self-perpetuating loop. With statistical and algorithmic probability increasingly defining the courses of action we take in society, we are inching toward determinism – sidelining and eventually eliminating human freedom.
This is a predicament in which the loss of freedom and social injustice coincide.
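The self-perpetuating loop described above can be made concrete in a toy simulation. The sketch below is purely illustrative (all group names, numbers and thresholds are hypothetical, not drawn from any real system): a lender ‘predicts’ a group’s future repayment probability from its past record alone, and groups denied on that basis never generate new data, so their past becomes their future.

```python
import random

random.seed(0)

# Hypothetical past repayment records (1 = repaid, 0 = defaulted).
# group_b starts with a sparser, worse-looking history.
history = {"group_a": [1, 1, 1, 0, 1],
           "group_b": [0, 1, 0]}

def predicted_score(group):
    """'Predictive analytics' reduced to its simplest form:
    the predicted future probability is just the past frequency."""
    records = history[group]
    return sum(records) / len(records)

for year in range(5):
    for group in history:
        if predicted_score(group) >= 0.6:  # approve a loan
            # Approved applicants generate new repayment data.
            history[group].append(1 if random.random() < 0.8 else 0)
        # Denied applicants produce no new data at all:
        # their record, and hence their score, can never improve.

print({g: round(predicted_score(g), 2) for g in history})
```

However crude, the sketch shows the structural point: the ‘prediction’ is not a neutral observation but an intervention that reproduces the very pattern it reads off the past.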
The ‘blind faith in data and algorithms’ as a prevalent attitude and anthropological reality in contemporary society needs to be subjected to a critical assessment. This blind faith attributes to technology the ability to reveal ‘objective truth’; by the same token, AI technology comes to be invested with divine powers. Such an attitude toward technology (1) is unaware of its own human bias, and (2) loses sight of the necessary human involvement, namely the need to interpret the results of predictive analytics, in the process of uncovering so-called ‘objective truths’.
The encroachment on human freedom by decision-making that submits to predictive analytics throws interesting light on the ‘denial of free will’, so common today in science and philosophy, by showing it in action in social reality. What we witness is not an ‘absence or non-existence of free will’ as a fact, but rather a ‘performed unfreedom’ or ‘performed determinism’, which implies something we can term ‘the paradox of unfreedom’: unfreedom as freely chosen (less by powerful individuals to the detriment of disadvantaged ones than by the whole of society against itself, in embracing a ‘policy of determinism’).
Like social or corporate policy that submits to the AI of predictive analytics and thereby causes a loss of human freedom, the ‘denial of free will’ is a performative act rather than a constative one. As such, it runs the risk of being morally irresponsible and socially destructive.
Human freedom is hardly a ‘metaphysical given’ to be asserted or denied; it does little justice to human freedom to approach it in terms of ‘truth’ or ‘untruth’, ‘existence’ or ‘non-existence’. If there is in this regard anything to learn from the social conditions as shaped by AI, it is the insight that human freedom ‘becomes’ rather than ‘exists’ – it is emergence or evanescence, a gain or a loss.
The project aims at raising awareness in church and society of the challenge posed by the AI of predictive analytics.
Rather than simply carrying out a social and philosophical analysis, the project also serves to improve conditions for human freedom in society. It is not intended as a last-ditch effort to salvage a ‘final sanctuary’ of freedom amid the current shift from future probability to determinism – protecting some sort of ‘human factor’ from disappearance.
Instead, the project will be run on the assumption that increased transparency in and understanding of the subject it investigates can have a liberating effect on society.
3 workshops / expert seminars, organised in co-operation with Vatican institutions or possibly other pontifical universities (December 2023, October 2024, November 2024);
talks / lectures (at the Angelicum, the Pontifical Academy of Social Sciences etc.);
3 publications in academic journals;
1 monograph (book proposal to be submitted to the publisher by February 2024, final manuscript by September 2025).
Titles/topics for the three seminars:
Changing the Global Game: Artificial Intelligence, Predictive Analytics and the Human Condition (December 12, 2023): a seminar with technologists and philosophers of technology
Artificial Intelligence, Human Agency, Political Action (September 30, 2024): a seminar with political scientists and political philosophers
Artificial Intelligence and Social Justice (November 2024): a seminar with social scientists and economists.