Now that we know what to be mindful of and guard against, we can begin thinking about how to counteract these effects. No single action is guaranteed to eliminate these consequences, since we are still learning how algorithms behave, but each can help us avoid the worst outcomes. Most, if not all, of this advice comes from experts in fields related to algorithms, consumers, or both; in short, these tips are credibly sourced. I seek to present the facts above all else.
Recommended actions to take include, but are not limited to:
1. Encouraging companies to take preventative measures.
This adjustment could be a powerful one, as the public knows little about how algorithms function on a detailed, systematic level. Companies and their designers hold the most power over these systems. To lessen poor consumer outcomes, they could implement child restrictions on certain types of content and introduce measures that make their systems less addictive. These actions would most likely require government intervention, as they would affect the profits of many companies. Keep in mind that companies and governments can only do so much: they can implement child restrictions, but many young people list themselves as older than they are. More responsibility falls on parents and individuals to truly make a difference.
2. Using algorithms as a tool rather than as the sole resource.
One of the biggest problems with algorithms is that users forsake their own critical thinking in favor of what is easiest: taking algorithmic information at face value. This mindset is dangerous because it erodes our independence; we become wholly reliant on algorithms to give us what we want and need. Algorithms must be used alongside our other skills so that we can enjoy their benefits while avoiding their harms. Doing so also supports mental health and behavioral development over time.
3. Regularly reviewing the information we consume.
When looking back through personal algorithmic feeds, it is important that we evaluate what media we have consumed and whether it is having a positive or negative influence. From there, users can adjust accordingly. This is useful for avoiding communities that increase biases or promote hateful or violent speech. This change is only effective if consumers do this regularly and actively look for red flags in the feeds generated for them. It must also be objective, because algorithms are designed around our preferences; even if we like specific content, if the negative influences outweigh the positive ones, we need to set aside those preferences.
4. Teaching mitigation tactics to people at a young age.
Part of the problem is that older individuals are struggling because they are learning about algorithms at roughly the same rate as young children. This leads to a lack of understanding regarding what safeguards should be in place. Unfortunately, this change may not significantly help current, fully developed adults, but it will better prepare future generations. They will be less reliant on what their parents say and instead hold themselves accountable based on what they have learned. These concepts could even be taught in schools.
5. Separating algorithms from reality.
An ongoing issue algorithms perpetuate is the perception that the content they promote is factual, helpful, and realistic. Social media often encourages users to see only the best sides of life and people. Because of this, individuals may exaggerate or even fabricate experiences for the sake of views and likes. Teaching people, especially young people, that algorithmic content is often idealized makes them less likely to compare themselves to their feeds. This helps preserve mental health, along with physical and social well-being.
6. Desensationalizing the extreme content that algorithms push.
Some of the most extreme content that algorithms promote is also some of the most popular, because it sensationalizes heightened circumstances even when those circumstances are harmful in practice. For example, videos in which people consume massive amounts of food are widely viewed and liked, yet they model an extremely unhealthy lifestyle. Engaging with this type of content feeds it back into algorithmic systems and grants it even more influence over users. By spreading awareness and declining to consume it, we can improve our own lives and recognize harmful habits and behaviors for what they truly are.
7. Setting time limits, blocks, and other hard boundaries to avoid addiction and dangerous material.
One of the most effective ways to improve our ability to overcome these consequences is by consuming algorithm-driven content in moderation. Setting timers and blocking certain types of content helps protect us from the unpredictability of media. This also helps break the cycle of relapse. We can better hold ourselves accountable when we clearly define what is not acceptable. These restrictions not only protect us but are especially important for children who are still learning the difference between right and wrong.
With these issues now surfacing, making some of these changes needs to be a priority. I recognize that some are more far-fetched than others, but several can be pursued immediately. Regardless, we must remember that “no matter how well one prepares and how effective the safeguards are, risk can never be fully eliminated.” We cannot eliminate these systems; all we can do is strive to do better. Experts are also recognizing how urgent the situation has become. I encourage you to conduct your own research to determine which efforts suit you best. Here are a few videos to get you started. Enjoy!