Abusing the Robot
The modern word "robot" originates from the Czech "robota," meaning corvée or serf labor, illustrating early on that automated machines were conceived primarily to serve human needs. From the protective bronze giant Talos of Greek mythology to the obedient clay Golem of Jewish folklore, these entities, though not robots by name, existed to serve their human creators, devoid of personal thought and unavailable for general use by the masses.
Throughout history, humans have often sought to distinguish themselves from their servants, sometimes treating them without decency and subjecting them to abuse to meet unreasonable demands. Fortunately, the recognition of rights across various societal segments—women's rights, workers' rights, children's rights—has begun to erode the most egregious forms of such abuse.
Right?
In the 21st century, nearly everyone has at some point mistreated a piece of technology in one way or another: slapping a screen in hopes of a better picture, or shaking the remote control because the TV channel wouldn't change.
This casual mistreatment of technology raises the question of when, if ever, societal norms will evolve to treat robots with respect. As we work to give robots more human-like emotions, wants, and needs, a question demands an answer: do we limit these advancements to preserve their efficiency, or do we recognize a moral imperative to extend some form of rights to robots? To round this part off, I want to point to a famous abolitionist and ask you where you think the cutoff point should be.
—Frederick Douglass
While pop culture is filled with narratives of slaves, serfs, or oppressed beings rising against their tormentors, the serious discussion of robot rights remains largely within academia. Yet, as robots are utilized more and more in our day-to-day lives, either enhancing our efficiency, doing the work for us, or being used for our entertainment, the urgency to address their rights grows.
As we envision a future where robots are not just tools but partners in our society, we must consider the ethical implications of our creations. The prospect of sentient or semi-sentient machines introduces a host of ethical dilemmas. Will we ever extend to robots the same rights and empathy we grant to animals and humans? Or will we continue to treat them as mere property, devoid of rights and subject to our whims?
I believe we're approaching a significant crossroads in how we think about technology and rights. As movements in the past fought for recognition and rights, we might soon find ourselves in a debate that pushes us to reconsider our definitions of sentience.
The journey towards recognizing robot rights might follow a path similar to that of human rights—evolving from a niche topic into a central issue in political discussions. While I am tempted to speculate about whether robots could one day vote, especially as technologies like large language models (LLMs) continue to advance and controversies arise, such as a Google engineer claiming that one of the company's LLMs had become sentient, we should start by rethinking our interactions with the technology we use every day.
Instead of reacting with frustration, like hitting the TV when it doesn't work, maybe it's time we consider a more understanding approach.
After all, it might only be a matter of time until we develop our artificial creatures to a level where they could start considering us their oppressors if we continue with our current attitude.
Written by Robert C. Weber for the first assignment of Artificial Creatures 2024 at Leiden University, taught by Peter van der Putten