Computational Arts Research and Theory Blog

Week 1: Computational Aesthetics in The Practices of Art as Politics; Patricia Ticineto Clough, The City University of New York, Feb 2015.

This was an impenetrable wall of argot, generally speaking, and it was badly structured - I'm not sure there was a through line or a cohesive argument. However, there were interesting things you could intuit from the text. Things I liked include: the idea of a 'trauma' being sustained (somewhere - by whom? how?) through the increasingly visible / immanent computability of matter with or without human beings; the idea of art, artworks, art appreciation and art's function all leaping outside their prior domains like ball lightning, catalysed by the way this 'datalogical turn' is working within a neoliberal political framework; and the references to Parisi, who I think I'd get on with and will now look up. Otherwise... I wasn't sure how well the terms were couched, why the repetitious lip service to ontological thinking, or what was meant here by art as politics. You want to talk about the commodification of human processes? How about the ongoing LARP situation that is the Academy?

Week 2: Software Studies: A Lexicon, ed. M. Fuller, 2008. 'Algorithm'; Andrew Goffey

Software is not de novo, it is not neutral, and it is not immaterial. Yet much public presentation of software implies, or explicitly directs the user or consumer to believe, that it is solving a problem well, uniquely, for the first time, in the best way - and in a socially and politically neutral way; that it exists apart from hardware as 'ones and zeros'; that the hardware it is built on offers no constraints except perhaps technical, non-political ones; and that it exists only as an idea or method somehow distributed amongst everyone for the greater good. How sinister, and how obviously true. Matthew Fuller's lucid introduction struck me quite forcefully, where "much software comprises simply and grimly of a social relation made systematic and unalterable".

Andrew Goffey's chapter on algorithms considers these formal systems of 'logic plus control' in this broader context, and makes a similar play on the dangers of considering them as entirely abstracted. He points out that algorithms are always embodied in some way - through the constraints of the hardware and software that executes them and on to the real-world effects on researchers or consumers. He sketches a view of the ideological atmosphere in which algorithms began to get closer to an implementation layer in the early 20th century, pointing out that even in the context of very early computing, practitioners were always tempted to look from the logic of the algorithm to the physics of real electronic computation, to get into the details of 'how to' in tandem with the formal workings of the algorithm.

He makes a case that the algorithm itself, not the program or 'implementation system' (i.e. hardware + programming language), could be considered as a 'statement' (énoncé) in Foucault's definition. In other words, the meaning of an algorithm doesn't come from its syntax or semantics but from other rules which may not be contained within it - that although algorithms 'do things', it is possible to view them as having a purpose or effect beyond the 'thing' they 'do' (much as phatic speech can have a social meaning beyond its semantic content). Therefore we can analyse algorithms in the sense of a cultural discourse, and examine the way they operate transversally on humans and machines.

In our discussion groups we expanded on this and on the points Goffey makes around algorithms existing in a complex and ill-defined network of processes and power-knowledge relations, and the law of unintended consequences - we talked about the way that organisational structures can affect software development and brought up some examples of social issues created by algorithms (eg facial recognition software that is racially insensitive / prescriptive, hyper-targeted advertising).

The supposedly abstracted and formal nature of algorithms has a seductive power - but it is worth considering why this is the dominant or default cultural framework for them, and who gains from the promotion of this idea, when even a cursory consideration of the way algorithms actually effect their changes - and of who creates them and why - gives the lie to the idea of them as purely formal, theoretical entities.

He ends on a note of happy accident (or infection), which is kind of dissonant with what he's just been saying, but it's nice to think that algorithms might inadvertently be agents that generate new forms (of art, or society), rather than inert conduits for reifying and then exacerbating existing social inequality and oppression.

Week 3: The Semiotics of the Moon as Fantasy and Destination; M. Betancourt, Leonardo 48:5, 2015.

My reading for week 3's discussion was Michael Betancourt's review of his own body of work, which is concerned with montage and collage, particularly in video, and focuses on the semiotics of easily-recognised images when combined with glitches and repurposed in pseudoscientific or 'quotational' contexts.

I'd found the reading because I also enjoy the moon as a thing and as a symbol. Betancourt argues that the moon has an ambivalence that allows you to see more clearly the critical position of the material you create around it. His earliest work, Two Women and a Nightingale (1996-2004), is a series of collages which quote a Max Ernst painting, are bound to, and respond to, 20th century surrealism, and provide a master key (so he says) to his later work.

He reviews all of his major pieces and helpfully goes over the methodology - one thing to take away is how closely methodology and process form part of the finished work in computational art. I really enjoyed the idea of using older imagery and archival footage that already has a semiotic or metaphorical kick and reprocessing it in ways that provide a firm critical position - as Betancourt says, 'the critical foci of these works was a central part of their planning process, even though the particular, unique aesthetic problems posed in each piece were immediately determinate of their morphology and structure... ...In spite of these differences, my works cohere around the same icons, aesthetic protocols of juxtaposition and combination and theoretical critiques.' He even created a new work, The Dark Rift, as a result of carrying out this review.


There's a lot more to his work and he's an interesting figure because of the times in which he was working and the different emerging technologies he has used - which in itself seems like a critical practice and position with regard to repurposing. He's used sonified electromagnetic radiation and cosmological geometry (Telemetry, 2003-2005) and equated nuclear testing and destruction with glitch aesthetics (Star Fish, 2012). There is a coherence to his work despite the quickly changing media - and it was useful to read an artist coming to terms with his own body of work from the point of view of a learning practitioner.

Week 5: Research Project - Artefact Research (post-hoc)

Here are three artefacts which align with my research interest in semantic and affective computer vision, to inform thinking about a computer-vision-oriented research artefact. They were compiled after we had been put into groups and worked out a more complete vision - so they are less directly implicated in the direction our research project took, and more of a 'mood board' for areas that could be relevant to affective CV.


Artefact #1: Ed Atkins, DEPRESSION (2012)

This live performance uses a downbeat, affectless monologue as the trigger for sonic responses and some more theatrical bursts of energy. It's interesting in that there is a deliberately absent 'face' at the centre. In aesthetic terms, honestly though, this does seem like Chris Morris's Blue Jam without the humour - it's included here to demonstrate the extreme difficulty of creating anything really new with text and performance based art (as positioned inside gallery walls) other than as a referential or meta-performative piece. Which restricts the final product, often, to being hackneyed.

Artefact #2: Cook, J., Tyson, R., White, J., Rushe, R., Gottman, J., & Murray, J., 1995. Mathematics of Marital Conflict: Qualitative Dynamic Mathematical Modeling of Marital Interaction. Journal of Family Psychology, 9(2)

This paper is included as a marker towards the ways in which human-computer interaction through sentiment could be modelled: as a moving index of overall 'positivity' or 'negativity'. Gottman, Murray and their colleagues used the Rapid Couples Interaction Scoring System (RCISS; Krokoff, Gottman, & Hass, 1989) to create a mathematical model of marital interactions. With this model they could predict the thresholds at which overall changes in the positivity or negativity of a conversation (the 'affectual environment', if you like) occur - which has suggestive implications for human-computer interaction. This is quite general and not directly involved with computer vision - but it's a key reference for any consideration of relationship-building between a person and an affective machine.

Link here: http://joe.ramfeezled.com/wp-content/uploads/Cook-et-al-JFamilyPsyc-19951.pdf
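The flavour of the model can be sketched in a few lines of Python (a toy sketch only - the coefficients and thresholds below are illustrative, not the fitted values from the paper): each partner's next score is their own 'uninfluenced' inertia plus a threshold-based influence function of the other's last score, and those thresholds determine which steady state the conversation falls into.

```python
# Toy Gottman/Murray-style influence model. All parameter values are
# illustrative stand-ins, not taken from the 1995 paper.

def influence(other_score, threshold=0.0, boost=0.5, drag=-1.0):
    """Piecewise 'influence function': a partner scoring above the
    threshold nudges you up; below it, drags you down harder."""
    return boost if other_score > threshold else drag

def step(w, h, a=0.2, b=0.3, inertia=0.5):
    """One conversational turn: own inertia plus partner influence."""
    w_next = a + inertia * w + influence(h)
    h_next = b + inertia * h + influence(w)
    return w_next, h_next

def run(w0, h0, turns=50):
    """Iterate the interaction and return the settled scores."""
    w, h = w0, h0
    for _ in range(turns):
        w, h = step(w, h)
    return w, h

# Starting positive, the pair settles at a positive steady state;
# starting very negative, they settle at a negative one.
print(run(1.0, 1.0))
print(run(-5.0, -5.0))
```

The interesting (and predictive) part of the real model is exactly this threshold behaviour: small changes around the influence thresholds flip the conversation between attractors.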

Artefact #3: A hacked Furby

We're interested in getting some kind of emotional effect or some kind of disruptive experience through the interaction between a computer + vision + text and a human face - but in practical terms our artefact is unlikely to move beyond a screen at this stage. The hacked Furby is included as a demonstration of the kind of insidious jeopardy that could follow from having computers or algorithms that appear to be embodied intelligent companions but are actually listening nodes in an unsecured network. And also to demonstrate the happy trust that people tend to put in anything with stereoscopically arranged eye-holes and fur.

Link here: https://www.theguardian.com/money/2017/dec/03/furby-argos-intelligent-toys-security-hacking

Week 6: Research Project - Project Plan

I'm working with Jérémie Wenger on this project, and we've decided to try and merge interests in text generation and generative writing, emotional AI, and computer vision.

We had several chats and coffees this week to talk it through, and the project plan we agreed looks like this:

Artefact: interface between visual data and textual transformation

Can visual inputs effectively create changes in affect by interpolating between text?

Can poems transformed by faces be used to guess emotion in those faces?

Is poetic transformation through affective input a satisfying experience for the reader?

What are the ways in which visual data captured by a videocamera can be used to modify or create literary texts? Our project has two ends: a visual end, dealing with data input through video (Guy), and a textual end, where this input is used for literary ends (Jeremie). On each side, various levels of complexity have been uncovered. As we advance through the realisation of a prototype, we expect to encounter technical difficulties as well as new ideas that will enlarge our understanding of the subject.

On the visual side, the ultimate goal is the intersection between computer vision and emotions through facial recognition, namely, the detection, classification and output of human emotional states by an AI receiving data from a videocamera.

Decreasing the level of complexity, however, leads to:

1) data output produced through computer vision from various facial expressions (emotion detection and classification left out);

2) data output produced through computer vision from various shapes (faces left out);

3) data output produced from various shapes (direct computer vision left out).

On the textual side, the equivalent horizon consists in a multiplicity of textual techniques and tools including for instance:

1) working with APIs so that a given input can be used to extract linguistic data from various sources (OED, Wordnik) and integrated into texts;

2) tools for interpolation between two texts (e.g. smooth/imperceptible transition between one word and another changing one letter at a time; similarly interpolating from one text to another using words as the fundamental unit; another parallel could be the lerp function between two colours, with one text gradually morphing into another by a certain percentage);

3) using data input to switch between nontrivial variations of one same text or to generate a unique textual object (if the input is an emotion drawn from a face, the final text could be ‘coloured’ by that result) ;

4) in the simpler case of geometric figures, use this data to produce visual/geometric poetry. (The project will focus on poetic techniques, but other avenues could be considered for fiction and narrative in the future.)
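The colour-lerp parallel in (2) can be sketched in Python (an illustration of the idea only, not our project code, and with made-up example texts): at t = 0 you get text A, at t = 1 text B, and in between a proportional number of A's word positions have 'flipped' to B's.

```python
import random

def lerp_text(a, b, t, seed=0):
    """Interpolate between two texts word-by-word: a fraction t of the
    word positions show text B, the rest still show text A."""
    words_a, words_b = a.split(), b.split()
    length = max(len(words_a), len(words_b))
    # Pad the shorter text with empty strings so positions line up.
    words_a += [''] * (length - len(words_a))
    words_b += [''] * (length - len(words_b))
    # Deterministically choose which positions have flipped at this t.
    positions = list(range(length))
    random.Random(seed).shuffle(positions)
    flipped = set(positions[:round(t * length)])
    out = [words_b[i] if i in flipped else words_a[i] for i in range(length)]
    return ' '.join(w for w in out if w)

a = 'the moon hangs over the silent water'
b = 'a glitch flickers across the broken screen'
for t in (0.0, 0.5, 1.0):
    print(round(t, 1), '->', lerp_text(a, b, t))
```

Exactly the same shape works at the letter level within a word, which is the 'one letter at a time' transition mentioned above; the seed makes the morph repeatable, which matters if an affective input is driving t up and down.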

Project plan with dates:

• Collate initial resources and research (Friday 3rd November);

• Document affective AI / e-poetry landscape (Friday 10th November);

• Create small set of possible technical approaches and research their feasibility and difficulty of implementation given the timeframe – Jérémie does text, Guy does Computer Vision. Looking at: text generation; text transformation; affective CV; CV for primitives with appropriate output for text interventions. (Friday 17th November);

• First-attempt approach selected – working code proofs of concept on both sides (Friday 24th November);

• Second working code example – joined approach / prototype artefact (Friday 1st December);

• Presentation worked up (Wednesday 6th December).

Throughout the project we will both (Jérémie: jcw.persona.co/computational-thoughts, and Guy: https://sites.google.com/view/gcmfa/research-and-theory-blog) keep project diaries with thoughts and comments on our progress.

Week 7: Research Project - Initial Thoughts and Work

Jérémie and I split the workload - he was looking at writing / generative tools and I was looking at computer vision (referred to as CV from now on).

Trawling through the Leonardo journal and JAR revealed a lot of work concerned more generally with sentiment recognition using machine learning techniques, but not that much art or critical theory in this area. What I could find was a lot of procedural information - what was done, how, what was the thinking behind it - but it was all quite dry.

Turning to the 'how' of what we were going to attempt - a reminder that we've decided to create an interactive program that both responds to emotion and offers up generated textual responses in a creative way as a form of poetics - I looked for the state of the art for emotion recognition. I found this, which was useful: https://nordicapis.com/20-emotion-recognition-apis-that-will-leave-you-impressed-and-concerned/.

I was starting to understand the scale of the issue and realising that we would probably have to limit our ambition a little (I believe Jérémie was finding the same on the text side of things). I looked at Microsoft's Project Oxford and at some other proprietary technologies, and quickly realised our solution had to be:

  • Open source
  • Free
  • Real-time / video capable
  • Supported and / or with plug ins

This realistically limited me to environments and languages we were using on the course. I looked into using openFrameworks, but connecting that up with a text generator seemed tricky. After discussion, Jérémie and I realised that an API-first solution was not going to be feasible. We finally settled on Processing - with its many libraries (including a well-documented OpenCV library and very simple video capture) and its text-drawing and string-manipulation capabilities, we could do everything in one 'box'. And as a bonus, we could port it very easily to a website from there.

We had initially wanted to make people smile for their poems - the title of the piece became 'SMILE POETRY GOD' at one point; whether we'll keep that I don't know - and I found a hacky way to do this using someone else's work: Bryan Chung has written a library for Processing that does 'smile detection': http://www.magicandlove.com/blog/2011/05/04/smile-detection-in-processing-mac-osx/

Meanwhile Jérémie has created a timed generative poetry tool in Processing. We're getting somewhere - just need to work out how to join it all together.

Week 8: Research Project - from prototype to artefact

I take a side route this week and concentrate on working with text in Processing to see if I can get anything going. We have a rich tradition of nonsense/found/generative/randomised 'poetry' to draw on, so I'm happy to make something that 'feels' like something I might have enjoyed stumbling across in the blogoverse years ago and call it art.

I find a selection of wordlists online that we can work with - Jérémie has been looking at dictionaries but I just need to use something as a proof of concept - this accidental find proves very useful:

https://en.oxforddictionaries.com/explore/word-lists

This is exactly what I need - a list of words with meanings that 'sound' poetical. I go with the 'Literary' list. After some work, I get a poem generator. The poems make me laugh a lot. Here are some examples...

I also editorialised these a bit - selecting specific words myself for the start and end of the titles, and also allowing the wordlist to be broken a little bit.

Some definitions snuck through, and were broken up over lines in the list, which is useful because they contain different parts of speech. I also added in punctuation - and randomised all of these elements.

Basic random() functions are best here, rather than Perlin noise, because we want each line choice and title choice to have an equal chance of beginning with any letter. Successive Perlin noise values are correlated, so they would keep picking words from the same alphabetic neighbourhood of the (alphabetically ordered) list - and we'd end up with unintentionally alliterative poetry.
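A minimal version of the generator looks something like this in Python (our actual sketch was in Processing, and the wordlist here is a tiny stand-in for the OED 'Literary' list):

```python
import random

# Stand-in for the downloaded 'Literary' wordlist (ours was much longer,
# and alphabetically ordered - hence the alliteration worry).
WORDS = ['aurora', 'bower', 'dirge', 'eventide', 'gossamer',
         'halcyon', 'lament', 'nocturne', 'sylvan', 'zephyr']
PUNCTUATION = ['', ',', ';', '...', '.']

def line(n_words=3):
    # random.choice is uniform, so every entry in the list is equally
    # likely to start a line - unlike correlated Perlin-style noise,
    # which would cluster picks in one alphabetic neighbourhood.
    words = [random.choice(WORDS) for _ in range(n_words)]
    return ' '.join(words) + random.choice(PUNCTUATION)

def poem(n_lines=4):
    title = random.choice(WORDS).upper()
    return '\n'.join([title] + [line() for _ in range(n_lines)])

print(poem())
```

The editorialising mentioned above sits on top of this: fixing words at the starts and ends of titles, letting broken dictionary definitions leak into the list, and hand-picking the funny ones.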

Integrating the CV with this was tricky - and Jérémie, working separately, discovered an issue with our 'borrowed' PSmile library when you port the Processing sketch to P5.js - it just didn't work, and with no way of opening up the hood we had to abandon this if we wanted our final artefact to be a web page.

So we dropped smile detection and instead created a program that reacted to the presence of a face, which we could do with the supported OpenCV library. It still took a while to get right, and to organise the logic of what the sketch did when a face was identified.
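That logic turned out to be mostly state management. Stripped of the actual detection calls (which in our sketch came from Processing's OpenCV library), the skeleton is roughly the following - a hypothetical Python sketch, with the per-frame face count stubbed in as an input:

```python
# Skeleton of face-reactive logic, with the CV detection stubbed out.
# In the real sketch, face_count would come from a face detector each frame.

class PoemMachine:
    def __init__(self):
        self.state = 'WAITING'   # WAITING -> COMPOSING -> SHOWING
        self.frames_with_face = 0

    def update(self, face_count):
        if self.state == 'WAITING':
            # Require a few consecutive frames with a face, so detector
            # flicker doesn't trigger a poem.
            self.frames_with_face = self.frames_with_face + 1 if face_count else 0
            if self.frames_with_face >= 5:
                self.state = 'COMPOSING'
        elif self.state == 'COMPOSING':
            self.state = 'SHOWING'   # generate and display the poem here
        elif self.state == 'SHOWING' and face_count == 0:
            # Viewer walked away: reset for the next face.
            self.state = 'WAITING'
            self.frames_with_face = 0
        return self.state

machine = PoemMachine()
for face_count in [1, 1, 1, 1, 1, 1, 0]:
    print(machine.update(face_count))
```

Debouncing the detector (the consecutive-frames count) was the kind of detail that made "react to a face" take longer than expected.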

Eventually, however - we got something. Not what we initially planned - a much reduced thing in terms of functionality - but a starting point. Options now included playing with the interactivity, playing with the response, and changing the aesthetics of the 'poems' through typography and through the type of words or phrases included in the underlying data.