Have you ever spent over an hour watching YouTube videos, continuously clicking whatever video is next on the list, and finding that you don’t remember anything at the end? Well, a huge reason for this is YouTube’s well-developed AI-based recommendation system, designed to maximize the time viewers spend on the platform. So I raise a question: are these algorithms unethical, given that they try their best to use up, or waste, as much of your time as possible?
Before I answer that, I want to note some other ethical issues with AI recommendation systems. First, in order to work, they collect data on users’ viewing habits without their explicit consent or even a notification that this is happening. While the company may use this data for recommendations and ad personalization, this sensitive information can also be leaked to malicious actors. And what we often don’t realize is that our data is also used to personalize other people’s recommendations, through a technique called collaborative filtering: you are assumed to like the things that people similar to you like. This technique is not clearly explained in YouTube’s privacy agreements; if it were, we might make a different choice about sharing our data.
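To make the idea concrete, here is a minimal sketch of user-based collaborative filtering in Python. The users, videos, and ratings are invented for illustration, and real platforms use far larger data and far more sophisticated models, but the principle is the same: your viewing history shapes what gets recommended to people judged similar to you.

```python
# A minimal, illustrative sketch of user-based collaborative filtering.
# The ratings matrix below is hypothetical; rows are users, columns are videos,
# 1 = watched/liked, 0 = not watched.
import numpy as np

ratings = np.array([
    [1, 1, 0, 0, 1],   # user A
    [1, 1, 1, 0, 0],   # user B
    [0, 0, 1, 1, 0],   # user C
])

def cosine_similarity(u, v):
    """Cosine similarity between two users' rating vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def recommend(target, ratings, top_n=2):
    """Score unwatched videos by how much similar users liked them."""
    sims = np.array([cosine_similarity(ratings[target], other) for other in ratings])
    sims[target] = 0.0                      # ignore the user's own row
    scores = sims @ ratings                 # weight other users' ratings by similarity
    scores[ratings[target] > 0] = -np.inf   # don't re-recommend already-watched videos
    return np.argsort(scores)[::-1][:top_n]

print(recommend(target=0, ratings=ratings))  # videos user A is predicted to like
```

Notice that user A’s recommendations here depend entirely on what users B and C watched, which is exactly why one person’s data ends up shaping another person’s feed.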
Second, YouTube’s recommendation system drives 70 percent of the videos watched on YouTube. After all, it is easier to click once on your screen than to type out what you actually want to watch. This means that these systems can significantly shape our views and erode our autonomy. If YouTube (even unintentionally) recommends biased content, it can bias the way we think. In particular, YouTube tends to recommend more politically extreme and/or conspiratorial channels, and often surfaces only one viewpoint, depriving users of the opportunity to see all sides. This can boost support for extremist groups and produce other large, often undesirable social effects.
Now, YouTube and other platforms have taken some small steps to combat these issues, such as pushing extreme channels down in the recommendations or even blocking them outright. (These channels often violate YouTube’s policies on harmful or dangerous content.) However, this is often not enough; many extreme or conspiratorial channels find ways around these filters.
There are two main potential solutions: either the platforms must improve their algorithms, or we must better educate everyone about their possible harms. Personally, however, I don’t think the second option is effective on its own. These platforms are addictive, and self-control can be very difficult. There has to be more intervention from the platform’s side: adding filters (easily toggleable by the user) and features that discourage addictive use.
While companies usually dodge blame with the argument that users can simply leave their platforms, using these platforms often becomes a habit that is extremely hard to break; in some ways, users don’t really have that choice. It isn’t right for companies to earn money when, in doing so, they cause great harm to many.
Then there is the argument that ethical filters would amount to censorship. However, as long as YouTube makes it clear that these filters exist and keeps them easily toggleable, users retain control over what they see, while the harm these videos cause is significantly reduced.
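As a rough illustration, here is a minimal sketch of what such a user-toggleable filter might look like. The `flagged_extreme` label and the `filter_extreme_content` preference are hypothetical (and deciding what counts as extreme is itself a hard problem); the point is only that the final say stays with the user.

```python
# A hypothetical sketch of a user-toggleable recommendation filter,
# not any platform's actual API.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    flagged_extreme: bool  # e.g., set by a separate content-classification step

def filter_recommendations(videos, user_prefs):
    """Drop flagged videos only if the user has left the filter switched on."""
    if not user_prefs.get("filter_extreme_content", True):
        return videos  # user toggled the filter off, so nothing is hidden
    return [v for v in videos if not v.flagged_extreme]

recs = [Video("Cooking basics", False), Video("Conspiracy deep-dive", True)]
print(filter_recommendations(recs, {"filter_extreme_content": True}))
```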
In conclusion, personalized recommendation systems raise many ethical challenges: they take up our time, erode our privacy, and reduce both our autonomy and the diversity of our viewpoints. Many of these challenges are difficult to solve, but much of the responsibility lies with the companies in charge of these platforms, which need to change their recommendation systems, or their platforms as a whole, to become more ethical.