Tutelage of Robots?

I found several of the ideas shared here interesting and inspiring; among them, I'm most intrigued by Trent's post about how robots could serve as managers or leaders for humans. While the discussion of power and hierarchy has become an almost clichéd yet still highly relevant topic in human society, how the growing role of artificial creatures and algorithms might shape people's opinions, choices, and behaviors awaits further examination.

Recalling the classic Milgram experiment and combining it with my previous idea of paradoxes, I'm interested in exploring an artificial creature (or a group of smaller creatures) that gives instructions designed to make one feel conflicted, such as instructions for the player to inflict destruction or harm, to see whether the viewer conforms or follows their own judgment. Or the instruction could simply be an obviously wrong step in a game: while playing Minesweeper, for instance, a soft-spoken, cute-looking plushie begs the player to click on a square that clearly holds a mine. The scenarios, choices, or behaviors asked of the player could then escalate in intensity, contradiction, and performativity. This creature is meant to make players realize how human feelings and opinions remain paradoxical and uncertain in many cases, yet we are the very group who trains robots and artificial intelligence, which accept clear, definite instructions from humans and in turn give them back to humans.

Other than that, I also liked a work shared by Merel a lot, so I'll include it here for reference & inspiration.