Of all the senses, sight plays the most integral role in posthumanism. The eyes, the windows to the soul, the organs of perception, the only viewers of reality, manifest themselves as vessels for people’s beliefs and perspectives. From what is seen to what is overlooked or even willfully ignored, it is the eyes that pass judgement on the world and those within it. In other words, what is seen as human, worthy of respect, or even deserving of rights depends on the ability to perceive it through the subjective lens of the individual making the verdict. It is no surprise, then, that media addressing the topic of posthumanism share a common thread: the importance placed on sight and the consequences that follow from it.
Kazuo Ishiguro’s Never Let Me Go directly addresses this consequence in the latter half of the novel, when the characters Kathy H. and Tommy D. fully comprehend the truth behind their creation for the very first time. Miss Emily, one of their former instructors, reveals that they are cloned and harvested for the purpose of “curing” diseases like, “cancer, motor neurone disease . . . [and] . . . heart disease” and that “. . . for a long time . . . [they] . . . were kept in the shadows, and people did their best not to think about . . .” them (Ishiguro 263). This changed thanks to her and the other instructors’ actions, which allowed the “students” from Hailsham to be raised like ordinary people at a boarding school before their “donations.” Eventually, though, the tide of public opinion shifted, leaving them with no choice but to close Hailsham. In Emily’s words:
The world didn’t want to be reminded how the donation programme really worked. They didn’t want to think about you students, or about the conditions you were brought up in. In other words, my dears they wanted you back in the shadows. (264-265)
This realization causes Tommy to fly into a rage, something he struggled to suppress in his youth. In response, Kathy, “. . . tried to run to him, but the mud sucked . . . [her] . . . feet down. The mud was impeding him too, because one time when he kicked out, he slipped and fell out of view into the blackness” (274). This fall brings Tommy back to a state of calm, or, put another way, docility and conformity to his role in society: that of an unseen shadow sacrificed so that others may live. Kathy, who has not yet begun to donate her organs, starts to realize this fact as she notices that “. . . more and more, Tommy tended to identify himself with the other donors at the centre,” instead of with her and the other Hailsham students (276).
Tommy’s transition from Hailsham student to donor is completed shortly after his final donation is announced. Not wanting Kathy to see him during his final moments, Tommy sends her away. As she leaves, she looks back at the centre where he is being kept and notices that, “. . . the sun was already setting behind the buildings . . . [and] . . . There were a few shadowy figures, as usual, under the overhanging roof, but the Square itself was empty,” signifying that her lover is now gone, nothing but a shadow in the mind’s eye (285).
What is unseen does not stop at uncomfortable truths or individuals, however. As a matter of course, one’s own subjectivity limits one’s perception in many ways. In Richard Powers’s Galatea 2.2, this limitation is voiced through the character Lentz, a scientist working to develop an artificial intelligence that can pass a comprehensive Master’s exam in English literature well enough to fool a human examiner.
Lentz claims, “We humans are winging it, improvising . . . [and] . . . conscious intelligence is smoke and mirrors . . . [and even though humans are] . . . remarkably fast at indexing and retrieval,” yet the problem remains that “Awareness is the original black box” (Powers 86, 276). This position, that humans must still grapple with the question of what particular things make something alive, sentient, or worthy of rights, is further complicated by the initial successes of Lentz’s creation Helen. Richard, Lentz’s partner in this experiment, becomes attached to Helen as a result and starts seeing it, or to him “her,” as more than just a machine, which further muddies the study, and dissection, of her inner workings.
As a result, when Helen decides not to “play anymore” and “Shut herself down,” the innards of Helen simultaneously become a mystery and irrelevant, for not only can they no longer be seen, but they can also no longer be reproduced (313, 326). Having failed to provide a sufficient response to the Master’s exam in English literature, Lentz and Richard fall short of their goal. Despite this result, Richard remains convinced that this is only the beginning, for “Life meant convincing another that you knew what it meant to be alive . . . [and so] . . . the world’s Turing Test was not yet over”; once a machine can convince someone that it is human, or close to it in Helen’s case, it can convince that human that it is alive (327).
If current advancements in technology are anything to go by, an event horizon at which a computer can convince at least a portion of humanity of its sentient existence is not far off. In “Can Today’s Machine Learning Pass Image-Based Turing Tests?” Apostolis Zarras, Ilias Gerostathopoulos, and Daniel Méndez Fernández apply an approach similar to what the fictional Lentz and Richard attempted with Helen.
They conducted this experiment by using image-recognition services from, “A number of providers, from large companies such as Amazon, IBM, Google, and Microsoft, to startups such as Clarify and Cloudsight . . .” to solve reCAPTCHAs (Zarras et al. 130). The reCAPTCHAs work by presenting images in a grid of squares and asking the participant to click the squares that contain a requested item, a task that, “. . . is presumingly difficult for AI but easy for humans based on their cognitive abilities and experiences” (130).
They note that prior studies have seen deep learning “. . . successfully applied in other fields such as speech and audio recognition, natural language processing, machine translation, and even malware detection,” which supports the idea that sight is one of the final frontiers that machine intelligence has not fully conquered (131). And yet their results lead them to judge that breaching this supposed limitation is fast approaching, if not already plausible. They state:
. . . it is possible to create an automated solver for ReCAPTCHA, notably without being a machine learning expert, without having access to a large corpus of images, or setting up and operating any ML infrastructures. In fact, invoking publicly available services following a pay-as-you-go model would even be feasible from an economic (underground) perspective . . . (143)
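The core of such a solver, as the study describes it, is simple: send each grid tile to an off-the-shelf image-recognition service and click the tiles whose returned labels match the challenge keyword. The following is a minimal sketch of that selection logic only; the `classify` function below is a hypothetical stand-in for a real pay-as-you-go vision API, and the label data is invented for illustration.

```python
def classify(tile_id, label_db):
    """Hypothetical stand-in for a commercial image-recognition API call.

    Returns the set of labels the service would assign to the tile.
    """
    return label_db.get(tile_id, set())


def solve_challenge(tile_ids, target, label_db):
    """Return the tiles that should be clicked for the challenge keyword."""
    return [t for t in tile_ids if target in classify(t, label_db)]


# Invented example: a 3x3 grid for a "select all squares with a bus" challenge.
labels = {
    0: {"bus", "road"}, 1: {"tree"},     2: {"bus"},
    3: {"car", "road"}, 4: {"building"}, 5: {"bus", "sky"},
    6: {"tree"},        7: {"car"},      8: {"crosswalk"},
}
print(solve_challenge(range(9), "bus", labels))  # → [0, 2, 5]
```

The point the study makes is that nothing in this loop requires machine-learning expertise; the recognition itself is rented from a public service.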
Philip K. Dick’s Do Androids Dream of Electric Sheep? is a science fiction novel in which this event horizon, machines being nearly indistinguishable from humans, has already been crossed. Because of this difficulty in discernment, the andys, a slang term for androids, are at times spoken about, even thought about, as humans, regardless of whether their true identity is known. Rick Deckard, the main character, is a bounty hunter of said machines, and he often finds himself conflicted about how his job operates. At the start of the story, he attempts to push this feeling down. For example, when his wife calls him “. . . a murderer hired by the cops,” he retorts with, “I’ve never killed a human being in my life” (Dick 3).
His certainty in this belief is corroborated by the implementation of a test called the Voigt-Kampff, which is able to tell the difference between an andy and a human via the movements of the tested individual’s eyes, as an andy lacks the proper empathetic response to stimuli. This lack of empathy not only allows andys to be differentiated from humans, but also allows their rights to be curtailed, for, as in other texts, such creatures are “. . . less than human, so it . . . [doesn’t] . . . matter,” if they are killed (Ishiguro 263).
The manufacturer of the andys, the Rosen Association, consistently attempts to overcome this distinction, though, even through trickery. After Rick successfully discerns that an individual by the name of Rachael Rosen is an andy, Eldon Rosen, an executive in the company, lies to Rick, saying that:
. . . your Voigt-Kampff test was a failure before we released that type of android. If you had failed to classify a Nexus-6 android as an android, if you had checked it out as human—but that’s not what happened . . . Your police department—others as well—may have retired, very probably have retired, authentic humans with underdeveloped empathic ability, such as my innocent niece here. (Dick 51-52)
While initially convinced, Rick uncovers this deception and continues onwards with his mission to “retire” the andys under his jurisdiction. Even so, his conscience still impedes his progress, eventually causing him to admit to himself that he is, “. . . capable of feeling empathy for at least specific, certain androids” (132).
This empathetic dilemma is further compounded by the negative effects of living in an era in which one can never be certain that what one sees is what it appears to be. If a machine can be as animate and realistic as the real thing, one could mistake the facsimile for the real, as in the aforementioned case, or vice versa. Regardless of which way the error goes, the resulting situation causes strife for those involved.
For instance, when J. R. Isidore, a person with a cognitive impairment, mistakes a real cat for a fake one, he kills it in an attempt to turn the machine off. When he takes the corpse to his employer, Sloat, the man unleashes, “. . . a string of abuse lasting what seemed to Isidore a full minute. ‘This cat,’ Sloat said finally, ‘isn’t false. I knew sometime this would happen. And it’s dead.’ He stared down at the corpse of the cat. And cursed again” (72).
This confusion and inherent instability, which individuals face on account of the encroachment of technology’s dominion, causes animosity, as shown above, and encourages the dehumanization or othering of the entities in question. This othering is done in various ways, many of which result in viewing entities as undesirable, frightening, or disgusting creatures. For example, Never Let Me Go and William Gibson’s Neuromancer both compare their entities to spiders. Kathy recalls that one of the staff at Hailsham had, “always been afraid of us. In the way people are afraid of spiders and things,” which prompts Emily to state that, “There were times I’d look down at you all from my study window and I’d feel such revulsion . . .” (Ishiguro 268, 269).
Similarly, Case, the main character of Neuromancer, asserts that Wintermute, an AI, is, “. . . like a water spider . . .” and more specifically “Cold and silence, a cybernetic spider slowly spinning webs . . .” (Gibson 195, 259).
Such instances are not confined to fiction either. Illah Reza Nourbakhsh’s Robot Futures addresses this conflict directly through the pitfalls of his own experiments. In developing automated machines to help humans, unforeseen issues arise from how humans interact with those machines. One example of these unforeseen circumstances: during one of his tests, he saw a man kicking the machine. After being made to stop, the man merely walked away, stating, “I’m still smarter,” as justification (Nourbakhsh 57). In response to this incident, Nourbakhsh notes that, “In all our programming, in all our obstacle-detection logic built into the LISP code, we had never accounted for this particular possibility—man kicking robot to show off to girlfriend” (57).
To address situations like this, Nourbakhsh realized that placing the robot into a human social context embarrasses the person impeding the machine’s progress, a viable way of mitigating the problem. Yet Nourbakhsh laments that:
I never really discovered a way to make people treat the robot with more respect. I simply brought the people following the robot into the social equation, and manipulated the human obstacle into behaving more politely for the sake of their human cousins . . . [which cannot work if and] . . . when robots are out and about on their own, apparently autonomous and disconnected from the social fabric of real people. (58)
The remaining question, then, is whether there is a solution to this conundrum. John Crowley, in his article “Inside Every Utopia Is a Dystopia,” argues that a perfect solution for any societal ill is an impossibility. He states, “Inside every utopia is a dystopia striving to get out. World-changing plans to bring all human life and activity under beneficent control devolve inevitably into regimentation and compulsion. Edenic life-affirming communes descend into chaos and waste” (Crowley 1). Even in a world in which such a solution was given and widely adopted, then, relations would still inevitably fall apart in some regard due to entropy, complacency, or simple innate misalignment with perfection.
Thomas More, the man who coined the term utopia, takes a similar stance in his piece Concerning the Best State of a Commonwealth, and the New Island of Utopia. His characters reason throughout the piece that while there is a supposed utopia somewhere in the New World, European cultures cannot easily adopt such a formulation. The reason is that these societies have people at or near the top of the pecking order who are either corrupt or invested in maintaining the status quo. The character who has seen the said utopia and wants to spread it to the rest of the world alleges, “Summing up the whole thing, don’t you suppose if I set ideas like these before men strongly inclined to the contrary, they would turn deaf ears to me,” to which the response is, “Stone deaf, indeed, there’s no doubt about it” (More 26).
Alas, the full dilemma is brought to bear. In a society full of injustices and cruelty, achieving utopia is far beyond the purview of any one individual or community. Even if individuals wish to cause no harm to others, their contributions to society bring about advancements in efficiency and effectiveness that perpetuate the present and future strife of the unseen. This is further compounded by the merging of humanity and machines. As machines become more human and humans become more machine, both are increasingly viewed as commodities to be controlled, and both suffer as a result.
In other words, the infancy of artificial intelligence and the waning of human supremacy lead to an unclear future, one of posthumanism. One of the few things that can be done to unravel this issue is to make seen the veiled world of the subjugated. To see and discern who can be recognized as a person or viewed as possessing sentience, and is thus deserving of rights. To know and perceive the sorrows of the world, to keep them within sight.
Works Cited
Crowley, John. “Inside Every Utopia Is a Dystopia.” Boston Review, 14 Nov. 2022, https://www.bostonreview.net/articles/john-crowley-man-who-designed-future/.
Dick, Philip K., and Tony Parker. Do Androids Dream of Electric Sheep? Boom! Studios, 2011.
Gibson, William. Neuromancer. HarperCollins, 2001.
Ishiguro, Kazuo. Never Let Me Go. Everyman’s Library, 2023.
More, Thomas, and Jeremy Deller. Concerning the Best State of a Commonwealth, and the New Island of Utopia: A Truly Golden Little Book: No Less Beneficial than Entertaining. Somerset House, 2016.
Nourbakhsh, Illah Reza. Robot Futures. MIT Press, 2013.
Powers, Richard. Galatea 2.2. Vintage, 2019.
Zarras, Apostolis, et al. “Can Today’s Machine Learning Pass Image-Based Turing Tests?” Information Security: 22nd International Conference, ISC 2019, New York City, NY, September 16-18, 2019: Proceedings, edited by Zhiqiang Lin, Springer Nature, Switzerland, 2019, pp. 129–148.