By Serenity Proffitt, Natalie Schoolcraft, Ashley Wright, Emma Seyfang, Simon Molloy and Keegan Mueller
ATHENS, Ohio (Dec. 1, 2025) —
Generation Z is the first generation in which cruelty can go viral in seconds, a generation where bullying doesn’t just happen at your local school or in your neighborhood, but worldwide. According to “Social Media Bullying Statistics 2025: Platforms, demographics, and responses” by Robert A. Lee, more than 87% of young people have witnessed cyberbullying. Seventy-three percent of young people say they have been bullied at least once, and 70% say someone has spread false rumors about them online, a figure that also underscores how widespread online defamation has become.
The Rise of Mean Culture
Technology has come a long way over the years. You can message, call or FaceTime someone from anywhere and receive an instant response. Social media adds another layer of instant communication, which can make face-to-face communication feel more intimidating. Social media allows you to DM whoever you want, including celebrities. You can post about what you are doing at that moment, or watch other people’s posts and stories.
This can also make it harder for Gen Z users to disconnect from the online world. Social media also allows people to be anonymous or pretend to be someone they are not. This becomes harmful when someone poses as another person to damage that person’s image, or posts and comments hateful things without anyone knowing who they really are.
It makes people feel bolder than they would be face-to-face. It can also take an embarrassing moment that only a few people witnessed in person and expose it to millions of people around the world. That’s what happened to a 15-year-old boy featured in the news segment “Teen Speaks Up About Cyberbullying.”
Social media can also be positive, allowing people to spread awareness about causes they care about and giving people a voice. The #MeToo movement, a social movement against sexual assault, is one example. Another, the Black Lives Matter movement, highlights inequality, injustice and police brutality. But the open freedom to say and post anything online comes with a price.
According to “Gen Z, Social Media and Cyberbullying: An Unsupervised Landscape” by Usraat Fahmidah, “They don’t have any idea of how to be civil when presented with the idea of expressing their opinions unfiltered. After all, they are not going to face any real-life repercussions, and no one is supervising it.” This highlights how anonymity emboldens users: people feel free to say whatever they want because they believe they will never face the repercussions in person. But what impact does this have on the users on the other end?
In the same article, Fahmidah says it is harder for Gen Z users to seek help if they are being cyberbullied, largely because parents and adults don’t understand the world of social media. This often leaves Gen Z users feeling isolated and alone, contributing to depression and anxiety. Gen Z users see harmful comments and videos every day. When those comments or videos don’t affect them personally, they often ignore them without considering how they affect others, which erodes empathy. A news package by NBC News reports that Gen Z is speaking out about social media’s impact on mental health, with users describing how constant exposure harms their well-being.
Background and Context: Technology as a Double-Edged Sword
Technology is not only a reflection of human behavior and culture; it is also a driving force in shaping them. This idea is known as technological determinism. People behave differently online than in person, but why? According to psychologist John Suler’s theory, the online disinhibition effect causes users to act with fewer restraints online than in person.
This theory covers six factors that enable users to act inappropriately online. These six factors are invisibility, dissociative anonymity, asynchronicity, solipsistic introjection, dissociative imagination and minimization of authority. Combined, these six factors allow users to act without consequences when using online platforms.
Although Suler’s theory of the online disinhibition effect is over twenty years old, it is still referenced in research today. A 2019 video by KQED, a PBS station, explains the impact of the online disinhibition effect on social media. The online disinhibition effect is split into two categories: toxic disinhibition and benign disinhibition. Toxic disinhibition occurs in online spaces where users can access pornography, violence and crime. Benign disinhibition occurs when users go out of their way to show kindness to each other.
It is important to understand the six factors of the online disinhibition effect to understand why humans behave differently online than offline. The first factor is invisibility, which occurs when users feel comfortable exploring online spaces because they can remain unseen. Second, dissociative anonymity is when users are anonymous online and disregard their actions as their own. Users dissociate from their online actions because they believe it is not a reflection of themselves.
Third, asynchronicity occurs naturally online because users are not required to interact in real time. Fourth, solipsistic introjection occurs because online text lacks vocal tone, so readers fill in a voice and personality for the writer themselves, which can skew their perception of the message.
Fifth, dissociative imagination is when users become a different character online. In severe cases, this takes the form of identity theft; most commonly, it is known as catfishing.
Lastly, minimization of authority happens online because it is difficult to differentiate social status, as users do not always reveal race, class, gender or sexuality online. In some cases, this creates a level playing field; however, social media has evolved since this theory was proposed. Social media has become a huge component of everyday life. As a result, social media is not an exception to social hierarchy and bigotry. All six of these factors combined shape human behavior and culture online rather than reflecting offline culture.
Anonymous messaging apps such as Signal and Pinger changed the way online users treat each other. In the Netflix documentary “Unknown Number: The High School Catfish,” a mother, Kendra Licari, used the anonymous messaging app Pinger to harass her daughter, Lauryn Licari, and her daughter’s boyfriend, Owen McKenny. The harassment lasted fifteen months. Kendra Licari appears to be experiencing dissociative imagination.
Throughout the documentary, she acts as if she does not know who is doing this to her daughter. In the documentary, she shares her experience of begging authorities to find her daughter’s stalker, even though she was the stalker. Anonymous messaging apps enable users to behave maliciously online.
Anonymous messaging apps are not exclusively used for malicious intentions. Apps like Signal are preferred by some users because of their minimal data collection and their nonprofit structure.
In addition to anonymous apps, online culture and technology are constantly evolving, and the rise of artificial intelligence (AI) chatbots is changing how humans behave online. According to the National Bureau of Economic Research (NBER), early adopters were primarily men, but in recent years, 52% of ChatGPT users had feminine names, suggesting this gap is closing. Since its release in November 2022 through July 2025, ChatGPT has been “adopted by 10% of the world's adult population”.
ChatGPT usage is growing in lower-income countries as well. The demographic of users is not the only change detected by NBER. It was discovered that 70% of usage is non-work-related messages. These non-work-related messages fall into two of three conversation categories used by the NBER to collect data.
The two categories are practical guidance and information seeking. Practical guidance is defined as advice about various topics and help with forming creative concepts. Information seeking is defined as searching for products, information about current events, recipes and other topics as if ChatGPT were Google. The rise in reliance on AI chatbots is changing the way humans think.
Tools like ChatGPT impact how Gen Z thinks and behaves in the workplace and academic settings. According to a study conducted by EduBirdie, 36% of Gen Z members feel too reliant on ChatGPT. The study also found that “31% worry that it is reducing their own capacity for critical thinking, while 13% said it was making them less productive.” ChatGPT is marketed to users as a personal assistant, yet about a third of Gen Z users expressed negative feelings toward it. Still, 30% of Gen Z members said they do not worry about ChatGPT at all.
AI is expanding to other platforms such as Instagram, Snapchat and TikTok. Although AI’s integration into these platforms is recent, it is already shaping user experience. AI-integrated social media platforms can improve the experience by generating quicker search results and giving users an in-app chatbot. A negative side effect of AI on social media is the spread of misinformation through AI-generated videos. Media literacy is evolving, and users need to learn to spot AI-generated content to limit the spread of misinformation.
As mentioned before, ChatGPT’s early adopters were adult men in wealthier countries. Although ChatGPT is seeing growth among users with feminine names and users in lower-income countries, men remain ahead of the game. AI negatively impacts not only women, but also people of color and other marginalized groups.
Olga Akselrod, Senior Counsel of the ACLU Racial Justice Program, argues that AI is biased because the data and documents it learns from are products of a systematically racist society. The legal documents AI learns from are filled with a long history of racial discrimination inflicted by various court rulings.
The difference between a lawyer learning about these cases and AI is that a lawyer is a human. Therefore, the lawyer can apply empathy and historical context to these texts; AI cannot. Akselrod argues, “For example, AI systems used to evaluate potential tenants rely on court records and other datasets that have their own built-in biases that reflect systemic racism, sexism and ableism, and are notoriously full of errors. People are regularly denied housing, despite their ability to pay rent, because tenant screening algorithms deem them ineligible or unworthy.”
AI inherits racist patterns from the data it is trained on. As AI continues to interview people for jobs or housing, the people selected for these opportunities will be selected by a biased tool. As AI advances and further integrates into technology, marginalized users will experience more harm online.
The Psychology of Anonymity and “Mean Culture”
Online life shapes how young people speak, react and understand the world around them. Many teens spend hours on social platforms, yet few stop to think about how anonymity changes the tone of their interactions. Suler in 2004 named this the online disinhibition effect, which explains why people often say things online that they would never say face-to-face. Anonymity weakens the natural restraints that guide behavior in person.
For Gen Z, these conditions make online meanness feel easy and common. Teens often experience toxic disinhibition more strongly because negativity spreads quickly. When harsh comments get reactions, that tone becomes part of the environment.
Anonymity reduces accountability in ways that feel simple at first. A teen using a username with no photo or clear identity knows that people cannot connect the comment to them personally. That freedom can lead to more aggressive behavior or quick reactions. In person, people rely on tone, body language, and facial expression to understand meaning. Online, these cues do not exist. Teens fill in the gaps with their own assumptions, which can turn a short comment into something personal or upsetting.
Photo by: Unsplash.
Timing also influences behavior. Teens can post a harsh reply and leave the app before seeing any response. That delay weakens their sense of consequence. Instead of seeing how a comment hurts someone, they see likes, shares, or nothing at all. Many describe the digital environment as a place where the rules feel loose and the stakes feel low.
Others say they feel more freedom online because adults have less influence in digital spaces. A parent cannot step into a comment thread to stop it. Teens speak to each other as equals, even when the tone becomes cruel.
These conditions may seem small, but they build on one another. Together, they create a space where mean comments spread easily, and empathy becomes harder to hold. For Gen Z, this affects more than online habits. It can shape how they see themselves, how they judge others, and how they understand conflict. The emotional tone of their digital lives often carries into their offline world.
Cultivation Theory helps explain why these effects feel long-lasting. The 1969 theory by George Gerbner argues that repeated exposure to media shapes a person’s sense of reality over time. The content someone sees becomes the lens through which they view the world. The theory includes the idea of the mean-world syndrome. This idea explains why many teens describe the world as harsh or unkind. According to Obert-Hong’s 2019 thesis on media violence, “There is an imbalance between the amount of violence depicted in media and the amount that occurs in real life, leading to unrealistic perceptions of a mean world”.
For today’s teens, social media functions in a similar way. Instead of watching a few shows each week, they scroll through constant streams of videos. Much of that content includes conflict, body-image pressure, gossip, public judgment, or extreme opinions. When teens see this content every day, the tone of it becomes familiar. They might start to believe that cruelty is normal or that distrust is necessary.
Even teens who do not participate in online meanness are still exposed to it. This creates a serious concern. When meanness becomes normal, it becomes harder for teens to recognize harmful behavior. Teens may also underestimate the power of their own words. If everyone around them uses sharp or sarcastic language, they may feel pressure to speak the same way. The difference between humor and harm can become unclear.
Social media rewards content that gains reactions, even if the content is negative. Some teens stage arguments or conflicts to get views or attention. These moments function as pseudo-events because they draw attention without offering real meaning. They feel more like entertainment than real relationships. When these moments spread online, teens respond to them as if they are real, which creates confusion between digital drama and personal emotion.
Examples from major platforms show how common this pattern is. On Reddit, users often rely on sarcasm or confrontation because their usernames protect their identities. Teens who join these spaces may imitate that style to fit in. On TikTok, a lighthearted video can receive thousands of comments from strangers who judge the creator’s appearance or personality.
On X, political arguments often turn personal because users see each other as icons and screen names rather than real people. Without identity cues, disagreements grow quickly.
Intersectional Implications: Who Is Most Affected?
While meanness online affects nearly everyone who engages with digital spaces, it does not affect all people equally. The ease with which individuals hide behind usernames, private accounts, or anonymous platforms tends to amplify existing inequalities in gender, race and class. In this way, the online world doesn’t just create new forms of cruelty, it magnifies the ones society already struggles with. Understanding who is most vulnerable helps reveal why certain groups experience online meanness more intensely and more frequently, and why conversations about digital harm need to make room for intersectional experiences rather than oversimplified explanations.
Young women, in particular, experience a uniquely sharp side of online hostility. The Pew Research Center reports that girls and women experience higher rates of sexual harassment, stalking and repeated online criticism than men. These patterns reflect offline gender norms, where girls and women are judged heavily on appearance, likability and sexuality. On social media, anonymity and the pressure to perform visually for an audience intensify this scrutiny. Every post becomes an unsolicited invitation for commentary: too much makeup, not enough makeup, too much body, not enough body, too confident, not confident enough.
This excessive unwanted attention creates an environment where young women feel like they are constantly performing for an invisible audience, one ready to dissect every detail. Social media’s algorithmic design encourages this critique by pushing content that evokes strong engagement, which is often outrage, jealousy and judgment. As a result, young women find themselves policed by strangers, acquaintances and even friends, all of whom feel emboldened by the screen between them.
There have been many instances online, including on TikTok, where young women delete videos after strangers leave dozens of negative comments about their appearance. A study from the American Psychological Association shows that appearance-based comments on social media have measurable effects on young women’s body image and self-worth.
Race and class also shape how people experience meanness online. For people of color, especially young women, harassment often carries cultural and racial undertones that are invisible to those outside their communities. Stereotypes long embedded in American culture about anger, sexuality, intelligence, or poverty run into digital spaces, where anonymity allows them to be expressed without penalty.
Black women, for example, face unique challenges rooted in racialized gender stereotypes. They are frequently targeted with comments about their hair, facial features or tone of voice, often delivered through derogatory language. Many Black women receive comments mocking their natural hair or facial features, and some comments label their natural hair or protective styles as “unprofessional” or “messy,” despite these styles being tied to cultural identity.
These occurrences happen in every public sphere. Public figures like Zendaya, who faced nationwide criticism after wearing locs at the Oscars, illustrate how quickly racist assumptions surface online. In her case, a TV host commented that her hair looked like it “smelled of patchouli oil,” which highlights how normalized stereotypes can be. Every day, Black women experience similar treatment on a smaller but equally harmful scale, from TikTok users mocking their facial features to X replies accusing them of being “aggressive” or “loud” for simply expressing their opinions. These patterns reveal how online spaces often amplify biases that already exist offline.
Class also shapes experiences of digital meanness. Lower-income youth may be mocked for their clothing, home environments, or the quality of their devices. All of these are forms of class-based bullying that mirror socioeconomic stigma offline. Often, teens from low-income households have more limited access to high-speed Internet and newer devices, which makes them targets for criticism related to their digital presence.
Additionally, families with fewer resources often have less time or access to support systems that help mitigate online harm. Schools in lower-income districts may lack adequate digital literacy programs or mental health support, leaving students with fewer protections. This creates an unequal digital environment where marginalized teens both experience more harassment and have fewer avenues for responding to it.
Geographic location further shapes how online meanness is felt. In urban areas, where digital integration is high, online conflict spreads quickly. Youth in these spaces are exposed to diverse viewpoints, subcultures and discourse styles, which can escalate drama or harassment through large networks. Another social media study from the Pew Research Center notes that urban teens tend to experience higher exposure to online conflict simply because their digital networks are much larger.
In contrast, rural populations often experience online meanness in more concentrated, personally devastating ways. Since rural communities are tightly interconnected, a single mean post or rumor online frequently spills directly into school, sports, family and community life. Studies show that cyberbullying poses significant mental health risks for rural youth, who are especially vulnerable due to isolation and limited access to mental health support. For rural people, online interactions often overlap with offline peer networks, which intensifies the impact and makes escape feel more difficult.
Rural areas also face digital access disparities. Limited online access can reduce exposure to widespread online cruelty, but, paradoxically, it can also intensify the emotional impact because online spaces become more central to social interaction. When digital environments are one of the few ways to connect, harassment hits harder.
While statistics help us understand scale, individual experiences illuminate how damaging online meanness becomes when filtered through identity. Living in a small rural town, I have witnessed multiple situations where girls had rumors spread about them online. One, in particular, involved a rumor so devastating that a girl dropped out of school entirely and moved to a different town.
Every year, new stories emerge describing similar circumstances in which people are bullied, harassed or publicly shamed online, and the consequences can be tragic. An article from The New Yorker recounts the well-known case of Amanda Todd, a 15-year-old girl who died by suicide in 2012 after relentless online harassment and the spread of demeaning images of her. Her story made international headlines not only because of its heartbreaking outcome, but because it highlighted the depths of online meanness.
Stories like Amanda’s continue to surface across the world, showcasing how online anonymity intensifies cruelty. When an unkind message or rumor is shared online, victims are left feeling as though everyone is watching them. These narratives remind us that online meanness is a form of social violence that can alter lives and tear apart identities.
Addressing the Problem: Innovation and Solutions
Being mean anonymously on social media is one of the easiest things to do on the internet. Anyone can make a fake account and comment on any post, typically without repercussions, and this kind of cruelty has contributed to tragic deaths. One widely cited cyberbullying study from 2011 found that 27.8% of students reported having been bullied, while 9% reported having been bullied online, and those numbers have likely grown as technology keeps evolving.
Digital empathy has to become a tool to lower these numbers. That requires creating innovative tools and initiatives that make online spaces safer for all ages. The solution can include a wide range of technologies. Advanced AI moderation would be a good start, given AI’s growing usage rate. It could pick up harassment, hate speech and abusive content in real time, preventing these messages from ever reaching other users.
It could also include “pause and reflect” prompts that ask users to rethink hurtful comments and posts before they’re published. If a user repeatedly pushes flagged comments through anyway, the system could restrict the account and potentially ban it from the platform. Kindness campaigns can then reinforce this effect by highlighting positive comments and interactions between users. Platforms can also add empathy training modules and digital literacy tools to teach users how negative communication impacts the people on the receiving end.
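As a rough illustration, the flag-pause-restrict flow described above could be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the word-list "toxicity" check is a toy stand-in for a real AI moderation model, and the names (`ReflectionGate`, `TOXIC_TERMS`) and thresholds are hypothetical, not any platform’s actual system.

```python
# Toy stand-in for an AI toxicity classifier (a real system would
# use a trained moderation model, not a word list).
TOXIC_TERMS = {"idiot", "loser", "ugly", "stupid"}


def toxicity_score(comment: str) -> float:
    """Return the fraction of words that match the toy block list (0.0 to 1.0)."""
    words = comment.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in TOXIC_TERMS)
    return hits / len(words)


class ReflectionGate:
    """Counts repeated attempts to post flagged comments, per user."""

    def __init__(self, threshold: float = 0.2, max_strikes: int = 3):
        self.threshold = threshold      # score at/above which a comment is flagged
        self.max_strikes = max_strikes  # flagged attempts before restriction
        self.strikes: dict[str, int] = {}

    def review(self, user: str, comment: str) -> str:
        """Return "posted", "pause_and_reflect", or "restricted"."""
        if toxicity_score(comment) < self.threshold:
            return "posted"
        self.strikes[user] = self.strikes.get(user, 0) + 1
        if self.strikes[user] >= self.max_strikes:
            return "restricted"        # escalate after repeated attempts
        return "pause_and_reflect"     # ask the user to reconsider first


gate = ReflectionGate()
gate.review("sam", "you are such a loser")  # flagged: user is prompted to reflect
```

The point of the design is that the first response to a flagged comment is a prompt rather than a punishment; restriction only kicks in when the reflect prompt is repeatedly ignored.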
Other ways, such as a digital board where users can say how they have been affected and community ambassador programs from famous users, would give users a way to control their online experience, therefore creating a new culture of understanding for all users. These initiatives would help build more compassionate digital environments, which would make users more aware of their actions and reduce anonymity. They could also reward users for nice interactions and responses to comments.
Education and awareness are critical in creating a safer, more compassionate digital culture. This starts with providing young people with the skills to navigate online spaces responsibly. Comprehensive digital literacy programs should be taught in schools so students learn early how to avoid cyberbullies and negative online spaces, and how even anonymous messages can have real-life consequences.
Mental health education can also help students connect the dots between online behavior and emotional well-being. This underlines how cyberbullying impacts both victims and the individuals who commit harmful behavior.
Workshops and new curricula that address common online conflicts can help students learn to work through these issues and model thoughtful digital behavior. Such learning opportunities can shift the culture around digital interaction, decrease impulsive online behavior and give the next generation a role in creating healthier online communities.
Advocacy and responsibility are key to empowering new generations to make the meaningful cultural change online that is needed, and to holding social media companies accountable for their mistakes. By speaking out against toxic behavior and comments, Gen Z can shed light on the emotional and mental harm caused by anonymous cruelty and help others realize the real impact of hurtful posts. That pressure would push social media companies to adopt higher standards: enhanced safety features, open moderation practices and algorithms that promote kindness over conflict.
Gen Z has been known to promote activism, which can shine a light on the harsh realities of cyberbullying. Consistently reporting abusive content goes further by fostering digital spaces where harassment is not tolerated and where users who report cyberbullying are better protected. Collectively, Gen Z’s advocacy could pressure platforms into developing safer systems while fostering broader cultural shifts in which anonymous cruelty becomes socially unacceptable.
AI can also serve as a powerful force in building safer, more inclusive online environments by detecting harm, reducing misinformation and uplifting users. Early detection of harmful content such as cyberbullying or hate speech would let platforms intervene faster and restrict harmful comments before situations escalate. These tools could also help identify misinformation and disprove the false accusations that tend to persist in such cases. Beyond safety, AI can improve social media through accurate auto-captioning, real-time translation and tools that clarify a user’s intended meaning to avoid misunderstandings.
AI can model positive behavior by suggesting kinder comments and giving users new prompts that could de-escalate a situation between users if it happens to become heated, leading the users to communicate thoughtfully and be kinder. These innovations show how AI can be used for good on social media sites. This would transform these digital spaces into places that are able to protect users and promote healthier interactions.
Overall, these solutions collectively offer powerful benefits to everybody who uses social media, improving online behavior and making the digital world a safer, kinder place. By combining innovative technologies, stronger digital education and well-applied artificial intelligence, platforms can confront the ongoing problem of cyberbullying and of people being anonymously cruel without consequences. These methods work in tandem to hold users more accountable, making certain that all actions, good or bad, are noticed and that the bad ones are dealt with accordingly. At the same time, education and awareness about the emotional impact of cyberbullying help young people learn empathy and responsibility from an early age, and they empower the advocacy that amplifies this change by calling on Gen Z to speak out and uplift healthier forms of interaction. If AI is used correctly, it can further reinforce these goals by spotting harmful comments early and debunking misinformation.
Altogether, these strategies would not only reduce instances of anonymous cruelty but also create a new culture where kindness is normalized and every user is equipped to treat others responsibly and the way they would want to be treated. As cyberbullying continues to grow and kids go online at ever younger ages, these solutions offer a realistic way to slow these problems down and, eventually, bring them to a halt.
Into the Digital Future: Preventing Digital Harassment with Trisha Prabhu Podcast
Toward a Kinder Digital Future
It is not a secret that being mean online is easy. This is not a mystery or a moral failure unique to Gen Z, but an outcome of our technological advances and how culture has adapted to them. From Suler’s online disinhibition effect to Gerbner’s cultivation theory, the pattern becomes clear: when anonymity, invisibility and weakened authority meet constant conflict and inhumanity, people come to treat that online environment as normal, or dismiss it as “just a joke.” The result has shaped how young people see themselves, how they see others and what they believe is acceptable in an online setting.
Simultaneously, new tools like anonymous messaging apps and AI chatbots show how quickly new technologies spread through these networks. Anonymous platforms make it extremely easy to harass and mock people without getting caught. AI is changing the way people think; ChatGPT, in particular, is changing the way students study, think and solve problems. This can sometimes help them, but it can also erode the critical thinking skills they would otherwise build by working through problems on their own. These systems are built on data that reflects racism, sexism and class inequality, leaving marginalized groups to carry the heaviest burden of harm. Bias in these algorithms shows that “mean culture” is not merely social, but structural.
The fact that it is easy to be mean online does not mean there is nothing we can do to change it. Solutions can use the same technology to emphasize that there is a human on the other side of the screen. AI could help counteract the problem by detecting harassment, hate speech and more before it piles up in someone’s inbox. Kindness campaigns and inclusive algorithms can push respectful, supportive comments to the top of users’ feeds rather than cruel, negative ones.
AI is already used to flag the mass misinformation seen online, and putting it to work for good more broadly would be a step in the right direction. Education is just as important, and it must adapt to the technology rather than keep falling behind it. Teaching people about the severity of online cruelty can make the mental health consequences of harmful comments visible, not only for the person receiving them but also for the people making them. Workshops focused on conflict resolution, mindful posting and recognizing toxic patterns can give people practical tools instead of just warnings.
Mean culture online is not the result of one generation suddenly becoming inhumane, but a predictable outcome of everyday online interaction. Anonymity, connectivity, biased algorithms and the rapid spread of new technology work together to make harmful behaviors feel consequence-free. The same factors that amplify these behaviors can be turned against them. With education, thoughtful design and better use of AI, it may become possible to interrupt this toxic culture instead of reinforcing it.
Gen Z, which has come of age at the peak of this technology-driven world, has the opportunity to shift what counts as “acceptable” and to call out harmful norms, making the online environment safer. Understanding the causes of this issue gives us the power to undo its effects. If the technology we created helped form this mean culture, why can’t a collective effort help undo it?
Read the full presentation: JOUR 4130 Final Presentation.
References:
Akselrod, O. (2021, July 13). How artificial intelligence can deepen racial and economic inequities: ACLU. American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities
American Psychological Association. (2023, February 23). Reducing social media use significantly improves body image in teens, young adults. https://www.apa.org/news/press/releases/2023/02/social-media-body-image
Anderson, M., & Jiang, J. (2018). Teens, social media & technology 2018. Pew Research Center. https://www.pewresearch.org/internet/2018/05/31/teens-social-media-technology-2018/
Anshida, M., Murugan, P. P., Senthilkumar, M., Chandrakumar, M., & Vanitha, G. (2025, September). Impact of social media on the psychological well-being of rural youth: A systematic literature review and bibliometric analysis. Entertainment Computing. https://doi.org/10.1016/j.entcom.2025.101012
Borgman, S. (2025, August 29). Unknown Number: The High School Catfish. Netflix. https://www.netflix.com/watch/81690512
Camacho, S., Hassanein, K., & Head, M. (2018). Cyberbullying impacts on victims’ satisfaction with information and communication technologies: The role of perceived cyberbullying severity. Information & Management, 55(4), 494–507. https://doi.org/10.1016/j.im.2017.11.004
Chatterji, A., Cunningham, T., Deming, D. J., Hitzig, Z., Ong, C., Shan, C. Y., & Wadman, K. (2025). How people use ChatGPT. National Bureau of Economic Research. https://doi.org/10.3386/w34255
Chen, Y., Huo, Y., & Liu, J. (2022). Impact of online anonymity on aggression in ostracized grandiose and vulnerable narcissists. Personality and Individual Differences, 188, 111448. https://doi.org/10.1016/j.paid.2021.111448
Collins, J. (2025, November 4). The AI generation: How Generation Z is taking ChatGPT into the workplace. Edubirdie Blog. https://edubirdie.com/blog/generation-z-ai-workplace-insights
Dean, M. (2012). The story of Amanda Todd. The New Yorker. https://www.newyorker.com/culture/culture-desk/the-story-of-amanda-todd
Donnerstein, E. (2012). Internet bullying. Pediatric Clinics of North America, 59(3), 623–633. https://doi.org/10.1016/j.pcl.2012.03.019
Elad, B. (2025, April 15). Social Media Bullying Statistics 2025: Platforms, Demographics, and Responses. SQ Magazine. https://sqmagazine.co.uk/social-media-bullying-statistics/
Fahmidah, U. (2020, November 23). Gen Z, Social Media, And Cyberbullying: An Unsupervised Landscape. Reclamation Magazine. https://reclamationmagazine.com/2020/11/23/gen-z-social-media-and-cyber-bullying-an-unsupervised-landscape/
Farrar, L. (2024, January 8). Is the internet making you meaner? KQED. https://www.kqed.org/education/532334/is-the-internet-making-you-meaner
Gerbner, G. (1969). Cultivation theory.
Harriman, N., Shortland, N., Su, M., Cote, T., Testa, M. A., & Savoia, E. (2020). Youth exposure to hate in the online space: An exploratory analysis. International Journal of Environmental Research & Public Health, 17(22), 8531. https://doi.org/10.3390/ijerph17228531
Higgins, L., & Shapiro, J. (2023, October 24). Into the digital future: Preventing digital harassment with Trisha Prabhu. Joan Ganz Cooney Center. https://joanganzcooneycenter.org/2023/10/24/trisha-prabhu/
Hinduja, S., & Patchin, J. W. (2010). Bullying, Cyberbullying, and Suicide. Archives of Suicide Research, 14(3), 206–221. https://doi.org/10.1080/13811118.2010.494133
Keith, S. (2018). How do traditional bullying and cyberbullying victimization affect fear and coping among students? An application of general strain theory. American Journal of Criminal Justice, 43(1), 67–84. https://doi.org/10.1007/s12103-017-9411-9
Lapidot-Lefler, N., & Barak, A. (2012). Effects of anonymity, invisibility, and lack of eye-contact on toxic online disinhibition. Computers in Human Behavior, 28(2), 434–443. https://doi.org/10.1016/j.chb.2011.10.014
Lowry, P. B., Zhang, J., Wang, C., & Siponen, M. (2016). Why do adults engage in cyberbullying on social media? An integration of online disinhibition and deindividuation effects with the social structure and social learning model. Information Systems Research, 27(4), 962–986. https://doi.org/10.1287/isre.2016.0671
Macaulay, P. J. R., Betts, L. R., Stiller, J., & Kellezi, B. (2022). Bystander responses to cyberbullying: The role of perceived severity, publicity, anonymity, type of cyberbullying, and victim response. Computers in Human Behavior, 131, 107238. https://doi.org/10.1016/j.chb.2022.107238
Martínez Soler, C. (2022). Anonymity and cyberbullying on social media: Research into the influence of anonymity and the types of negative messages on the self-esteem and body appreciation of cyberbullying victims [Master’s thesis, Tilburg University]. ARNO repository. https://arno.uvt.nl/show.cgi?fid=159041
Moore, M. J., Nakano, T., Enomoto, A., & Suda, T. (2012). Anonymity and roles associated with aggressive posts in an online forum. Computers in Human Behavior, 28(3), 861–867. https://doi.org/10.1016/j.chb.2011.12.005
Obert-Hong, C. N. (2019). Cultivation theory and violence in media: Correlations and observations [Undergraduate thesis, The University of Texas at Austin]. University of Texas at Austin Repository. https://repositories.lib.utexas.edu/server/api/core/bitstreams/07c580e3-b1dd-4c54-b746-f616aa0dcdf2/content
Quraishi, H., & Welch, C. (2023, October 30). Teens and social media: What’s true, what hurts and what stays around forever. NPR Illinois. https://www.nprillinois.org/2023-10-30/teens-and-social-media-whats-true-what-hurts-and-what-stays-around-forever
Slonje, R., & Smith, P. K. (2008). Cyberbullying: Another main type of bullying? Scandinavian Journal of Psychology, 49(2), 147-154. https://doi.org/10.1111/j.1467-9450.2007.00611.x
Suler, J. (2004). The online disinhibition effect. CyberPsychology & Behavior, 7(3), 321–326. https://doi.org/10.1089/1094931041291295
Vogels, E. (2021). The state of online harassment. Pew Research Center. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/
Wang, C.-Y., Liu, Y.-L., & Chang, C.-Y. (2025). Investigating the effects of dark triad and anonymity on exclusionary cyber aggression: A social media experiment. Cyberpsychology, Behavior, and Social Networking, 28(8), 566–573. https://doi.org/10.1089/cyber.2024.0577
Yokotani, K., & Takano, M. (2021). Social contagion of cyberbullying via online perpetrator and victim networks. Computers in Human Behavior, 119, Article 106719. https://doi.org/10.1016/j.chb.2021.106719
Yen, J. L., & Chamanadjian, C. (2025). Cyberbullying and Online Aggression. The Pediatric Clinics of North America, 72(2), 333–349. https://doi.org/10.1016/j.pcl.2024.09.004
Zhang, Q. (2023). The effect of perceived anonymity on online transgressions: The moderating role of moral excuses. Journal of Education Humanities and Social Sciences, 12, 9–14. https://doi.org/10.54097/ehss.v12i.7584