Final project materials
Smart Home Devices and Privacy Concerns
As home surveillance systems become cheaper and more widespread, more households are adopting them to increase their sense of security. We define smart home surveillance systems as those that allow video footage to be accessed from a remote device and location. The surveillance systems we examine are cloud-based and relatively easy to access from a phone, tablet, or computer. Some systems offer the user only real-time streaming of what the cameras record; others allow these videos to be saved to the cloud and viewed later.
In particular, we examine three video surveillance systems, manufactured by Ring, Nest, and Arlo, to answer questions about current perceptions of these systems and to describe incidents involving each company's handling of customer video data and their responses to them. We consider the impact these systems can have on victims of abuse, as well as how they can contribute to racial divides. We also note the relationship that Ring has with law enforcement agencies and what that means for privacy.
Finally, we offer a few brief suggestions for how these systems could better serve consumers' needs for security and privacy, and we consider their implications for society at large, including the way they contribute to and normalize the idea of a surveillance network across cities and neighborhoods.
Ethics and Morality of Technology in Warfare
Comparing Smart Devices: A Layman's Guide Before Purchasing Such Items
In this paper we discuss how the iPhone, Samsung Galaxy Watch, and Amazon Echo collect our personal data and how they use and process that data to influence the users of these products. Our research found that the iPhone's privacy concerns center on the usage of its apps, the Amazon Echo's concerns revolve mainly around the fact that it is constantly listening to what you say, and the Samsung Galaxy Watch's concerns revolve around its recording of biometric data through its apps and sensors. The data collected from each device is then processed in ways that range from helping to harming the user. The collected data that has had a positive impact on users is tied to apps or skills that use private data to influence users to be healthier or safer. The data that has negatively impacted users, however, is tied to influencing them to use these devices more, to the point that they become addicted to some part of the technology. We hope that after reading this paper, people will be aware of the potential risks and benefits of these devices and able to make informed decisions about them.
Sexism in Gaming Communities
In an age where people can connect with others across the world at the click of a button, online video gaming communities have flourished. Not only can video game players, or gamers, play alongside and against friends and strangers alike in multiplayer games, but social media has given these communities ways to interact with one another. Some may live stream their gameplay sessions to a live audience or upload clips of their best moments, others may create creative content for their favorite characters, and some may simply use social media to befriend those who share a common interest. But there is also a darker side. Harassment and bullying can run rampant, especially with the anonymity that online communication offers. This is especially pressing in gaming communities, where in-game voice chat can lead to players with female-sounding voices being harassed and bullied out of playing based on their perceived gender.
This paper focuses on two successful online multiplayer games: Blizzard's Overwatch and Riot Games' League of Legends. Both are competitive in nature, with two teams pitted against each other and players choosing unique characters to play. This is a perfect landscape for all kinds of human behavior, from impressive team communication to harassment and harsh words directed to and from many groups within the community. We explore the kinds of sexism that have arisen within the games and their communities, and look into what methods are in place to attempt to counter this kind of discrimination.
YouTube and Radicalization
YouTube is one of the largest video platforms on the planet and the second-largest search engine, outranked only by Google [1]. The platform hosts a wide variety of content, from original animations to video essays to cooking tutorials, offering billions of hours of entertainment and education, among other things. One type of content on YouTube is political commentary, in which a person examines political issues and hot-button topics, analyzing them while expressing their own views on the subject, sometimes supplementing their stances with data or evidence.
While political debate is usually a very healthy way of discussing issues, one corner of YouTube has grown to prominence with troubling speed, gaining traction while pulling tens of millions of collective viewers down a vortex of dangerous ideas and troubling actions. This vortex is known online as the Alt-Right rabbit hole, or, as its influencers and viewers call it, "red-pilling." A reference to the 1999 film The Matrix, the term's exact meaning varies among right-wing radicals, but however it is described, its unifying definition revolves around the idea that by becoming more ingrained in right-wing extremism, radicals have found "the ultimate truth." Nearly fifty percent of those who consider themselves "red-pilled" credit their transformations to influencers on YouTube, whose videos and discussions led them down the path to where they are now.
These content creators know how widely used YouTube is, and they leverage its large viewer base to indoctrinate people into their own thought processes and ideologies. What makes this so dangerous is that while it all seems like harmless conversation and content over internet media, it can actually lead to real-world consequences, including extreme and radical violence.
In terms of how the algorithm itself operates, YouTube was originally created as a public platform to make uploading and sharing videos easy. In 2005, Nike became the first major company to use YouTube to promote its brand, with a video featuring Brazilian soccer star Ronaldinho. Thus began YouTube's evolution into a multi-faceted platform where the public shares and brands promote [2].
Each YouTube video is assigned a ranking, which determines its likelihood of appearing as a suggested video to a user. Originally, the YouTube algorithm ranked videos by the number of views they had. This changed in 2012, after view-count ranking broke down with the release of Psy's Gangnam Style, which garnered over a billion views.
Now the algorithm ranks videos by the amount of time viewers spend watching them, so videos that hold viewers' attention longer garner higher rankings.
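The effect of this change can be illustrated with a toy sketch. The data and scoring below are entirely hypothetical (YouTube's actual ranking system is proprietary and far more complex); the point is only that the two ranking criteria can favor different videos:

```python
# Toy illustration of view-count vs. watch-time ranking.
# All figures are invented for illustration; YouTube's real
# algorithm is proprietary and far more complex.

videos = [
    # (title, views, average minutes watched per view)
    ("viral_clip", 1_000_000, 0.5),
    ("long_commentary", 200_000, 25.0),
]

def rank_by_views(vs):
    # Pre-2012 criterion: more views -> higher rank.
    return sorted(vs, key=lambda v: v[1], reverse=True)

def rank_by_watch_time(vs):
    # Post-2012 criterion: total minutes watched -> higher rank.
    return sorted(vs, key=lambda v: v[1] * v[2], reverse=True)

print([v[0] for v in rank_by_views(videos)])
# -> ['viral_clip', 'long_commentary']
print([v[0] for v in rank_by_watch_time(videos)])
# -> ['long_commentary', 'viral_clip']
```

Under the watch-time criterion, a long commentary video that holds a smaller audience for twenty-five minutes outranks a viral clip watched briefly by a million people, which is one mechanism by which lengthy commentary content can gain algorithmic prominence.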
YouTube has since grown into one of the leading marketing and advertising platforms, with thousands of companies investing in the videos they post on their channels. It is a platform that continues to influence the globe at an astronomical scale, being the second-largest search engine on the internet.
We explore how the evolution of YouTube's monetization tools and this algorithmic change pose a problem for the public by facilitating exposure to videos that promote problematic ideologies, as well as the ways these ideologies are spread through media personalities.
Interface Design Encouraging Civic Integrity on Twitter
Twitter is one of the most widely used social media platforms in the world. In 2016, Twitter was the target of an orchestrated effort by Russian intelligence operations to interfere with the results of the United States presidential election. Since then, Twitter’s landscape has only become more political. The platform has implemented several updates to their interface and moderation policy intended to combat disinformation and prevent itself from becoming the site of another major election interference effort or coup. In this paper, we look at how these interface updates fit within existing taxonomies of opinionated design and perform a case study to determine how the moderation updates affect the spread of false or manipulative Tweets.