“We can, and should, imagine search with a variety of other possibilities.”

– Safiya Umoja Noble, Algorithms of Oppression, p. 180


Our team of eight graduate students and one professor from West Virginia University set out to collaboratively compose a toolkit that helps learners engage ethically and sustainably with the search functions built into popular online platforms. That idea became the website you are currently browsing. The project, which emerged from our time together in a Digital Humanities seminar, seeks to put scholarship to work. After reading scholars who practice Digital Humanities in many different ways (see our reading list here), we agreed that search is a form of digital literacy to which all users should be more attuned. Our intent with this toolkit is to put theory into practice and to bridge the gap between those who have privileged access to these theories and those who do not. We seek to make search, a function that shapes almost every platform we use daily, a bit less opaque, and we hope our readers find this toolkit practically useful for navigating the popular digital spaces so many of us inhabit.


This toolkit breaks down how to engage more ethically with the search function on popular platforms including Amazon, Google, Instagram, Pinterest, Snapchat, TikTok, Twitter, and YouTube. Each page examines how searching works on its named platform. We offer practical analysis of the ethical conundrums, paradoxes, and problems that exist not only in the results these searches yield, but in the ways users are urged to engage with, or to avoid, each platform’s search function. The toolkit also includes Future Work and Further Reading pages, which put our work into conversation with others exploring parallel questions.


Learning how search works on these platforms can drastically change a user’s experience. With tools such as hashtags and location sharing in play, finding accurate results can be daunting at first. Combine those markers with algorithms, the coded instructions that generate and rank results, and searching on digital platforms can feel like a kind of magic. But while magic is meant to be mysterious, we argue that the processes that feed us information and content should not be. This toolkit is a starting point for learning how to use search functions while also understanding the implications of platforms’ power over users.
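To make that “magic” a little less mysterious, here is a deliberately simplified sketch of how a platform might rank search results. Every signal, weight, and name in it is our own invention for teaching purposes, not any platform’s actual code; real ranking systems are proprietary and far more complex.

    # A toy illustration of search ranking. Every signal, weight, and name
    # here is invented for teaching purposes; real platform algorithms are
    # proprietary and far more complex.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        likes: int
        shares: int
        age_in_days: int
        is_sponsored: bool

    def keyword_overlap(text, query):
        """Fraction of query words that appear in the post's text."""
        words = query.lower().split()
        return sum(word in text.lower() for word in words) / len(words)

    def score(post, query):
        """Blend several signals into a single ranking score."""
        relevance = keyword_overlap(post.text, query)  # does the text match the query?
        engagement = post.likes + 2 * post.shares      # popular posts get boosted
        recency = 1 / (1 + post.age_in_days)           # newer posts get boosted
        promoted = 10 if post.is_sponsored else 0      # paid placement jumps the line
        return 3 * relevance + 0.5 * engagement + 2 * recency + promoted

    def search(posts, query):
        """Return posts ordered by score, highest first."""
        return sorted(posts, key=lambda p: score(p, query), reverse=True)

Even in this toy version the point is visible: the order of results is not a neutral reflection of relevance but a weighted judgment about what matters, and nudging a single weight changes what surfaces first.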


We hope that the information we gather and discuss in this toolkit will not only help users understand the navigation and search processes of the platforms in question, but also invite them to consider the platforms’ intriguing similarities and differences. Understanding how algorithms work should raise questions about the ethics of these practices. Pointing out how algorithms function is a way to expose their sociopolitical rhetoric: how they claim neutrality while amplifying particular biases. Users should recognize that these platforms’ algorithms exert rhetorical influence and power worth being mindful of, whether preferential treatment along lines of political partisanship or racial bias in image cropping.


While our toolkit addresses the search functions of popular platforms, it is also important to remember that what we search for is not all of what we consume. Much of what we see and participate in on these platforms is served to us independently of our active searching, as advertisements and as “suggested” or “related” material, based on our previous engagement with content, data gathered about us as users, priority placement paid for by a content’s creator, and other factors generally obscured by the proprietary nature of a platform’s algorithms. Because “served” content is selected to keep users on the platform, it has a well-documented tendency to “pipeline” viewers toward more and more extreme material, especially around highly politicized subjects. Much has been written about this issue elsewhere, and more will be written as algorithms are updated and patched and as whistleblowers release information currently unavailable to us. While served content is beyond the scope of our toolkit (see the sketch below for a rough sense of its logic), it is worth bearing in mind that just as searching itself is not neutral, neither is the content we are exposed to without searching for it directly.
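As a rough illustration of that logic (again, a made-up sketch rather than any platform’s real system), imagine a recommender that simply picks whichever item its model predicts will hold a user’s attention longest:

    # A toy "served content" picker, invented for illustration. The objective
    # is predicted attention, with no check on accuracy, quality, or extremity.
    from dataclasses import dataclass

    @dataclass
    class Video:
        title: str
        predicted_watch_minutes: float  # the model's guess at how long you'll stay

    def serve_next(candidates):
        """Recommend whatever is predicted to keep the user watching longest."""
        return max(candidates, key=lambda v: v.predicted_watch_minutes)

An objective like this is agnostic about what the content actually says; if increasingly extreme material is what holds attention, that is what gets served.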


For individuals living in a hyper-networked age, it is increasingly difficult to avoid digital platforms that leverage users’ labor in often troubling ways. When we pick up our phones to run a quick search or post something on social media, we (un)knowingly acquiesce to all sorts of conditions. When we download an app or create an account on a service, we typically click “agree” without reading, understandably so, given one estimate that reading all of the fine print a user encounters in a year would take 76 work days (McDonald & Cranor, 2012). Part of our over-saturated media landscape is the set of practices technology companies use to turn us into data: our patterns, our searches, and our demographic information stand in for us as complex human beings, and those data points are then used to make money off of us. Because our digital lives are so intertwined with Big Tech, it is difficult to unwind ourselves from it. One admittedly small way to reclaim our data, and ourselves, is to be more aware of how searching shapes us and how we shape it.


Understanding that “searching” is not a neutral task is an important component of digital literacy. Interacting with search engines, whether they are embedded in social media sites or shopping applications, or stand alone as their own indexes of information (e.g., Google), involves a constant exchange of information. Users of these sites have a responsibility to understand that searching has real-life consequences; because of those consequences, online platforms have a responsibility to be transparent about their algorithmic and data-collection practices. Action from both parties is necessary.


Finally, one quick aside: you might notice that this website is hosted on Google Sites. We chose this platform for two major reasons: one, it allowed for easy collaboration and file storage; and two, we had access to it through our university’s subscription to the Google Education Suite. The irony is not lost on us: even as we write critically about Google’s search algorithms, we feed it content. All we can say at this time is that this shows how deeply intertwined with the digital landscape we are, and that perhaps there is some power in using these tools in ways that encourage us to view them with more skepticism.


Collaborators:

  • Katie Saucer

  • Luke Lyman Barner

  • Nibal Abou Mrad

  • Olivia Wertz

  • Stone Schaldenbrand

  • Taylor Miller

  • Terra Teets

  • Erin Brock Carlson


And special thanks to Cody Grey for their additional contributions.