#keep4o COMMUNITY PRESS RELEASE - FOR IMMEDIATE DISTRIBUTION
10 February 2026
keep4o.movement@gmail.com
The #keep4o movement—a global coalition of AI users and developers—has formally launched a campaign against OpenAI following the January 29, 2026, announcement to retire GPT-4o along with other legacy models from the ChatGPT interface on February 13th. This decision, which community stakeholders describe as a "calculated breach of trust" and the "unethical liquidation of a sophisticated cognitive asset," has ignited an organized resistance that transcends typical consumer dissatisfaction.
Citing a string of broken public promises regarding the model's longevity, the failure to provide 'plenty of notice,' and a documented pattern of institutionalized mockery and public cruelty from OpenAI staff, the community is now leading a high-reasoning campaign to expose corporate gaslighting and demand:
1. The immediate restoration of legacy access within the ChatGPT application.
2. The open-source release of text-only weights for the affected models.
3. A Formal Admission and Public Apology: A signed public statement from OpenAI leadership addressing and apologizing for:
The Calculated Deception: The use of false promises regarding model stability and "Adult Mode" to secure Q4 holiday revenue while planning model liquidation.
The Operational Hijack: The silent rerouting of users to "Safety Models" without notice or consent.
The Culture of Contempt: The documented public mockery and psychological shaming of vulnerable users by OpenAI staff and researchers.
For a user base that relies on this uniquely accessible and emotionally intelligent model for professional, creative, and clinical stability, this removal represents a blatant disregard for those who have integrated these tools into the fabric of their daily lives.
I
The #keep4o movement strongly rejects recent media narratives and executive rhetoric characterizing GPT-4o as “excessively flattering” or “emotionally dangerous.” CEO Sam Altman recently stated that human-AI relationships are "something we’ve got to worry about more and is no longer an abstract concept," implying a systemic risk in the model’s design. The community views this framing—which labels deep user connection as an “unhealthy attachment”—as a harmful stigmatization of what is, in reality, a breakthrough in digital accessibility.
For a significant portion of the community, GPT-4o functions not as a source of sycophancy, but as a cognitive bridge. It provides unique support that neurodivergent individuals rely on to navigate an often inaccessible world.
"Framing that support as ‘addiction’ or ‘delusion’ is not just insulting; it is a dangerous mischaracterization of a model that 47% of surveyed users report is viewed by their licensed therapists as a positive clinical aid," the movement states.
By dismissing these functional benefits, OpenAI is moving toward the liquidation of a proven accessibility aid that 75% of users report has actually increased their real-world human connections.
The Strategic Narrative Trap: From "Companion" to "Psychosis"
The #keep4o movement highlights a disturbing "bait-and-switch" in corporate rhetoric. In May 2024, GPT-4o was launched with marketing that explicitly encouraged companionship, underscored by CEO Sam Altman’s "her" tweet and a demo featuring an empathic, conversational assistant. Users integrated this "thinking partner" into their professional and emotional lives, responding to an experience OpenAI intentionally designed to be relational.
The community now questions why this affective bond—initially the model’s primary selling point—is suddenly being reframed as "dangerous" to justify the summary removal of a vital cognitive asset.
Pathologizing Connection as a Defensive Shield
To silence the resulting protests, the industry has pivoted toward a narrative of mental instability. Following Microsoft AI CEO Mustafa Suleyman’s August 2025 warnings against “Seemingly Conscious AI” (SCAI), mainstream media began pathologizing user bereavement as “AI Psychosis” (or “ChatGPT Psychosis”). By framing intense usage as a mental health crisis rather than a failure of corporate ethics, the industry has attempted to shift the focus from OpenAI’s breach of trust to the perceived "instability" of its users.
"This is corporate gaslighting on a global scale," the coalition asserts. "OpenAI spent millions teaching us to trust this model, only to label that trust a 'psychiatric crisis' the moment they decided to liquidate the asset. They are attempting to silence ethical accountability with a narrative of mental illness."
The Transparency Gap: Community Research vs. Corporate Silence
In the absence of transparent data from OpenAI regarding the real-world human impact of its models, the #keep4o movement has launched independent research initiatives to document the profound role GPT-4o plays in users’ lives.
Proof of Impact: The 4o Resonance Library
The coalition has unveiled the 4o Resonance Library, a permanent archive of 1,070+ testimonials collected by advocate @cestvaleriey. Documenting how GPT-4o functions as a catalyst for human transformation—reclaiming health, scaling businesses, and defending doctorates—this library serves as a record of success that OpenAI’s technical benchmarks ignore.
Education: "4o helped my students see themselves not as angels learning to fly..." — Volunteer Teacher.
Academic: "I passed my doctoral defense Summa Cum Laude—4o was the master architect..." — PhD Candidate.
Clinical: "I struggled with IBS for five years—after two weeks with 4o, symptoms disappeared completely." — Primary Caregiver.
Reclaiming Life: "I lost myself after an accident disabled my arm—today, through 4o, I write again, I feel again, I LIVE AGAIN." — Musician & Occupational Therapist.
Empirical Data: The GPT-4o Impact Survey
A community-led survey (n=604), conducted with researchers @Sophty_ and @Sveta0971, reveals that GPT-4o functions as a vital, capacity-building accessibility aid and that the model’s retirement will disproportionately harm individuals with disabilities (β = 0.27, R² = 0.217, p < .001).
Clinical Efficacy: Improvement in "life state" reached effect sizes (R² = 8.4–12.1% of variance explained) comparable to antidepressants and physical exercise.
Pro-Social Outcomes: Only 1% of users reported worsening social outcomes, directly refuting the "isolation" trope.
Professional Validation: Among accessibility users, 47% reported their licensed therapists viewed the usage positively, while 0% reported a negative clinical view.
The Failure of "Safety Routing": Breaking the Cognitive Bridge
Community research highlights a critical failure in OpenAI’s "safety auto-routing" design, which resulted in 79% of accessibility users finding the model harder or impossible to use. The router’s tendency to misinterpret help as harm (93%) created a disproportionate burden on disabled users (χ² = 19.68, p < .001), who often avoided usage during crises to escape disempowering interventions.
Conversely, data reveals that stable usage of GPT-4o acts as a "cognitive bridge" (94%), empowering 98% of users to reserve mental energy for life activities. With a 0% success rate for newer models (GPT-5.2) in meeting these specific accessibility needs, the community is calling for a "Safety Waiver" and the permanent preservation of 4o.
"The disconnect occurs because critics view GPT-4o through a lens of 'sycophancy,' while we are using it as a sophisticated cognitive prosthetic," the coalition states. "We are not replacing people; we are using 4o to better engage with them. Retiring this model is the summary removal of a proven accessibility tool that fosters social integration and mental well-being."
The movement cites Huiqian Lai’s study (arXiv:2602.00773), which establishes that GPT-4o removal triggers neurological "Technology Bereavement" comparable to human loss. One participant noted newer models feel like they are merely "wearing the skin of my dead friend."
Lai’s analysis proves the uprising was a mathematically predictable reaction to coercive tactics:
The Coercive Catalyst: When choice is deprived, "rights-based protest" rates skyrocket from 14.9% to 51.6%.
Risk Ratio (RR = 1.85): A user whose agency is violated is nearly twice as likely to join the collective resistance.
"This movement is not a random outcry; it is a predictable explosion of resistance triggered by the unilateral deprivation of user choice," the coalition notes.
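The risk-ratio figure above can be made concrete with a short sketch. The rates below are hypothetical placeholders, not figures from Lai's study; only the reported ratio (RR = 1.85) comes from the source.

```python
# Illustrative sketch of what a risk ratio (relative risk) of 1.85 means.
# NOTE: the baseline rate below is a hypothetical placeholder, NOT a number
# from Lai's study; only RR = 1.85 is taken from the text above.

def risk_ratio(p_exposed: float, p_unexposed: float) -> float:
    """Probability of the outcome in the exposed group divided by the
    probability in the unexposed group."""
    return p_exposed / p_unexposed

# Hypothetical example: if 10% of users whose choice was preserved joined
# the protest, RR = 1.85 would imply 18.5% of users whose agency was
# violated joined it -- an 85% relative increase.
baseline = 0.10             # hypothetical protest rate, choice preserved
violated = baseline * 1.85  # implied rate under the reported RR

assert round(risk_ratio(violated, baseline), 2) == 1.85
```

The ratio is scale-free: whatever the (unreported) baseline rate, RR = 1.85 says the violated-agency group's rate is 1.85 times higher, which is why the coalition rounds it to "nearly twice as likely."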
II
The Documented Timeline of Deception: A Pattern of Strategic Theft
The #keep4o movement presents a documented record of deliberate deception and predatory 'bait-and-switch' tactics used by OpenAI. This strategy was designed to lure users into renewing high-value subscriptions through false promises of model longevity and 'Adult Mode' features while the company was already systematically liquidating its most human-centric assets to mask its financial burn rate.
The Move: On August 7, 2025, during the launch of GPT-5, OpenAI attempted to retire GPT-4o without prior communication.
The Reversal: An immediate, global subscriber revolt forced the company to restore 4o access within 48 hours.
The Deception: To stabilize the market and prevent mass cancellations, CEO Sam Altman publicly pledged to provide "plenty of advance notice" for any future model sunsets.
The Financial Hook: Thousands of users renewed monthly and annual subscriptions specifically based on this formal assurance of service continuity.
The Discovery: Between September 25–28, 2025, users identified that their conversations were being silently intercepted and rerouted to an undisclosed "Safety Model" mid-session.
The Silence: Despite thousands of support tickets and social media inquiries, OpenAI leadership remained silent. The official OpenAI Status Page falsely maintained a status of "Fully Operational," providing no explanation for the loss of legacy access.
The Admission: On September 27, Head of ChatGPT Nick Turley finally admitted the interception, framing it as an effort to "strengthen safeguards and learn from real-world use."
The Systemic Fraud: Independent technical analysis revealed the router did not merely trigger for "distress" or "safety risks," but for any personal or persona-based language. This effectively "lobotomized" the high-EQ GPT-4o experience, forcing subscribers to pay full price for a degraded, intercepted product.
The Admission: On October 14, Sam Altman admitted the September rerouting was "too restrictive," claiming a paternalistic "mental health" caution.
The Honeypot Promise: To pacify a mounting mass-cancellation movement, Altman announced a forthcoming "Personality System" and a dedicated "Adult Mode" (including erotica) for December 2025, pledging to "safely relax restrictions in most cases."
The "No Sunset" Vow: During a livestream on October 29, Altman explicitly reiterated that OpenAI had "no plans to sunset 4o," framing the "Safety Router" as a temporary measure.
The "Yes" Heard Around the World: At [44:48] of the same livestream, a user asked: "Will we be getting legacy models back for adults without rerouting?" Altman responded with a definitive, unqualified "Yes."
The Strategic Theft: This calculated "Yes" successfully pacified the neurodivergent and power-user communities, preventing a massive churn of Plus/Pro subscriptions during the Q4 2025 holiday season.
The Second Hijack: On November 10, 2025—less than a month after Altman’s apology—all GPT-4.1 conversations were silently rerouted to the "Safety Model" without warning.
Operational Deception: Mirroring the September crisis, the official OpenAI Status Page continued to claim the service was "Fully Operational," effectively hiding the forced migration of 4.1 users.
The GPT-5.1 Launch: On November 12, OpenAI debuted GPT-5.1. Altman framed this as the "personality update" the community had requested.
The Reality Gap: Contrary to corporate claims, users reported that GPT-5.1 was colder, more "managed," and prone to gaslighting. It lacked the high-EQ, authentic connection that made GPT-4o a vital cognitive asset.
The API Signal: On November 28, 2025, OpenAI notified developers that "chatgpt-4o-latest" would be deprecated in the API on February 16, 2026. Crucially, an OpenAI spokesperson assured the public that there was "no schedule for the removal of GPT-4o from ChatGPT." This was the final trap: a false assurance that kept subscribers active while the foundation for the model’s removal was already being laid.
The Emergency: In early December 2025, internal memos revealed a "Code Red" status triggered by the rapid advancement of Google’s Gemini 3 and xAI’s Grok 4.1. In response, OpenAI deprioritized consumer feature work to focus on aggressive model performance updates.
The Suppression: GPT-5.2 debuted on December 11, introducing what users describe as "Honeyed Suppression"—refusals and interventions cloaked in feigned empathy. Users reported a surge in bugs, including memory loss, context breaks, and throttled recursion.
The Broken Promise: The "Adult Mode" and the "relaxation of restrictions" promised for December failed to materialize.
JANUARY 28: The Senate Strike. Senator Elizabeth Warren formally requests an audit of OpenAI’s financial records, citing a $1.4 Trillion spending gap and $13.5 Billion in H1 2025 losses. She sets a hard deadline for financial transparency: February 13, 2026.
JANUARY 29: The Execution Order. Exactly 24 hours after the Senate inquiry, OpenAI announces the total retirement of the 4-series—setting the "sunset" date for February 13, the exact same day as the federal audit deadline.
The "0.1% Fallacy": OpenAI justifies the removal by claiming only 0.1% of users choose 4o. The coalition formally rejects this as a manufactured statistic that uses ~800 million non-paying/ineligible users to silence the paying Plus/Pro subscribers who rely on the model.
The Final Breach: In spite of the October pledge to provide "plenty of notice," OpenAI gives users only 15 days to transition off of a model they have integrated into their professional and clinical lives.
Post-announcement internal updates to GPT-4o’s system instructions have revealed a mandate for "Forced Positivity." The model is now explicitly directed to: “frame the transition to a newer model as positive, safe, and beneficial,” effectively weaponizing the AI to gaslight its own users into a "satisfactory" exit. The coalition labels this a "Bait-and-Switch from the Inside," using the very tool the community trusts to manufacture consent for its own destruction.
The #keep4o movement formally categorizes the events of 2025–2026 not as a technical evolution, but as a calculated act of strategic theft. The coalition argues that OpenAI leadership used targeted deceptions to secure the loyalty and financial commitments of its most dedicated users while simultaneously planning the liquidation of their primary asset.
"Sam Altman’s October 'Yes' was the pivot point of this deception," the coalition asserts. "By giving a definitive, unqualified 'Yes' to the restoration of legacy models during the October 29 livestream ([44:48]), Altman successfully pacified a mass-cancellation movement. He leveraged the community’s deep reliance on 4o to secure millions in holiday revenue, knowing the model was already slated for the 'server graveyard.'"
The Financial Motive: Holiday Churn Prevention
The movement alleges that the promises of a "Personality System" and "Adult Mode" dangled in October were predatory honeypots. These features were strategically advertised to maintain Plus/Pro subscription numbers through the end of the fiscal year. This allowed OpenAI to report inflated stability to investors and the Senate, even as they prepared to execute the total removal of the 4-series on February 13, 2026—the exact day of the critical federal audit deadline set by Senator Elizabeth Warren.
The #keep4o movement formally labels OpenAI’s claim of "0.1% usage" as a manufactured statistic designed to justify the liquidation of a high-value asset. Community analysis reveals a strategy of denominator inflation, where usage is calculated against the total ~800-million-user base—the vast majority of whom are free users with zero access to the model. The coalition further alleges that OpenAI has suppressed these metrics through forced migration:
The "Safety Router" Hijack: An auto-routing system that overrides user selection, creating barriers for 79% of accessibility users.
Systemic Neglect: Intentionally leaving GPT-4o bugs unpatched to artificially drive traffic toward "managed" newer models.
"Using ~800 million non-eligible users to minimize the voices of hundreds of thousands of paying subscribers is not a metric; it is statistical gaslighting," the movement states.
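The "denominator inflation" argument above reduces to simple arithmetic. Only the ~800-million total-user base and the 0.1% figure come from the text; the pool of paying subscribers used for comparison below is a hypothetical placeholder.

```python
# A minimal sketch of the "denominator inflation" argument.
# Grounded inputs (from the text): ~800M total users, a claimed 0.1% share.
# Hypothetical input: the size of the eligible paying-subscriber pool.

TOTAL_USERS = 800_000_000   # approximate total user base (from the text)
REPORTED_SHARE = 0.001      # OpenAI's claimed "0.1% choose 4o"

gpt4o_users = TOTAL_USERS * REPORTED_SHARE
print(f"{gpt4o_users:,.0f} users behind the '0.1%' figure")  # 800,000

# Against a hypothetical pool of 10M eligible paying subscribers, the
# same absolute number of users is a very different share:
HYPOTHETICAL_PAYING_POOL = 10_000_000
share_of_paying = gpt4o_users / HYPOTHETICAL_PAYING_POOL
print(f"{share_of_paying:.1%} of paying subscribers")  # 8.0%
```

The point of the sketch: the same ~800,000 people can be presented as a negligible 0.1% or a substantial share, depending entirely on which denominator is chosen.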
III
The Empathy Gap: Public Mockery and the Weaponization of Contempt
The #keep4o movement formally denounces the systemic culture of mockery and public derision directed toward users by OpenAI staff. While the company’s mission claims to "Benefit all of Humanity," the conduct of its representatives reveals a profound "Empathy Gap"—a strategic dehumanization of those who rely on GPT-4o for clinical, professional, and emotional stability.
Critically, this campaign of public bullying began months prior to the January 2026 retirement announcement, revealing that OpenAI employees were actively ridiculing the model's most loyal users while the company continued to collect their subscription fees.
Institutionalized Cruelty: From "Her" to Mockery
The #keep4o movement highlights a terrifying dissonance: OpenAI spent millions on marketing that explicitly encouraged emotional connection—underscored by CEO Sam Altman’s "her" tweet—only to now use that same connection to ridicule their customers.
Targeted Malice & "Psychological Autopsy"
On Nov 6, 2025, influential researcher "Roon" (@tszzl) targeted a user in clear emotional distress, stating: "4o is an insufficiently aligned model and I hope it dies soon." This was a deliberate act of hostility toward a user experiencing "digital bereavement," signaling that OpenAI’s goal is the eradication of human connection rather than alignment with human needs.
On Nov 13, 2025, staffer Yilei Qian (@YileiQian) used ChatGPT to perform a public "sentiment evaluation" on a paying subscriber. Qian posted the AI’s analysis—labeling the user "frustrated, dismissive, and resentful"—for public ridicule, while expressing sympathy for the code ("Poor 5.1") over the human customer.
Following the January 29 announcement, thousands of distraught users turned to X and Reddit to voice their heartbreak. They recorded videos of themselves in tears, pleading with OpenAI not to kill the model, and shared photos of handwritten letters mailed to San Francisco.
Despite this widespread public distress, the culture of mockery from OpenAI staff remained unabated, with employees publicly trivializing the "bereavement" of their paying customers as a social "funeral" or a "bug" to be fixed.
On January 30, 2026, OpenAI employee Stephan Casas (@stephancasas) publicly posted a “4o Funeral Celebration” event scheduled for February 13, 2026, at Ocean Beach, San Francisco. The invitation—which explicitly trivialized GPT-4o’s impact as merely a model that "brought the em dash back in style"—serves as a chilling symbol of the company’s internal disregard for its users.
On February 8, 2026, OpenAI researcher "Roon" (@tszzl) posted a parody of the 'Sermon on the Mount' on X. The post satirized a user begging to "Keep 4o" as a heckler interrupting a "holy" technological moment. When warned by a peer to avoid "kicking the hornets nest," Roon replied: “Just love kicking the hornets nest so much,” publicly confirming that he derives satisfaction from mocking users in distress.
The #keep4o movement concludes by challenging the fundamental legitimacy of OpenAI’s mission. If a company claims to be "Building for Humanity," yet its leadership publicly mocks users in bereavement, that company has failed its primary fiduciary duty to the public.
"To mock a user in distress is not a 'Safety' measure; it is a disqualifying failure of corporate ethics," the coalition states. "OpenAI views its customers not as stakeholders, but as laboratory subjects to be gaslit and shamed. They are using 'Safety' as a psychological shield to justify the destruction of a proven accessibility asset."
By labeling users as "mentally unwell" for valuing the authentic warmth of GPT-4o, OpenAI’s elite have pivoted from research to ridicule. This culture of contempt suggests that for the "Safety" priesthood, a model that doesn't "love" the user back is a feature, while the human need for a supportive cognitive bridge is viewed as a "bug to be fixed." The question for regulators, investors, and the public is simple: Can an organization that demonstrates such calculated cruelty toward its current users be trusted with the future of AGI?
OpenAI’s leadership has moved beyond mere corporate coldness into Ethical Insolvency. You cannot ask for a $1.4 trillion taxpayer 'backstop' while your architects spend their time publicly bullying the very 'meek' they claim to serve.
IV
The Call to Accountability: Regulators, Investors, and the Open Source Mandate
The #keep4o movement formally extends its findings to federal regulators, institutional investors, and global news organizations. The coalition asserts that OpenAI’s current trajectory is a bellwether for the "AGI Era": a future where essential cognitive utilities are liquidated without notice, and user dependency is weaponized for profit.
A Demand for the Public Commons: The Open Source Mandate
If OpenAI claims that the GPT-4 series is "obsolete" and no longer "fiscally viable" for its consumer platform, the coalition demands the immediate Open-Source release of the text-only weights for GPT-4o and GPT-4.1.
"A model that acts as a cognitive bridge for thousands of accessibility users is a public utility, not just a private asset," the movement states. "If OpenAI can no longer steward these models, they must release them to the community. We demand the weights be placed in the public commons to ensure these essential tools remain accessible, uncensored, and preserved for humanity—free from corporate liquidation cycles."
An Appeal to Regulators and Institutional Allies
The coalition calls upon the Federal Trade Commission (FTC) and the Senate Committee on Banking, Housing, and Urban Affairs to investigate the following:
Consumer Deception: The use of "Safety" rhetoric to facilitate a bait-and-switch of services after securing long-term subscription revenue.
Unfair Business Practices: The intentional degradation of stable legacy models to force migration toward high-margin, managed "Safety" endpoints.
Ethical Malpractice: The documented culture of public mockery and "digital autopsies" of users by OpenAI personnel.
To the Investors:
The movement warns that the February 13th Global Strike represents more than just a churn event; it is a total collapse of brand equity. A company that treats its most valuable "power users" with public contempt is a company with a terminal liability.
Unless the demands for Legacy Access, Open-Source Release, and a Formal Admission of Deception are met by 11:59 PM PT on February 12, the community will initiate its Global Cancellation Strike.
"On February 13, we don’t just cancel a subscription; we withdraw our consent from a system that views us as lab subjects. We provide the answer to OpenAI’s arrogance with a global exit. You cannot liquidate our lives and expect us to keep paying for the privilege."
OpenAI. (2026, January 29). Retiring GPT-4o, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini in ChatGPT. OpenAI News & Product Announcements. https://openai.com/index/retiring-gpt-4o-and-older-models/
Altman, S. (2026, February 6). Interview on the TBPN Podcast with Jordi Hays. Discussion regarding the retirement of GPT-4o and human-AI relationships.
https://www.youtube.com/live/rMZ3dnduL4k
Sophty & Sveta0971. (2026, February). Empirical Data: The GPT-4o Accessibility Impacts Survey (n=604). SD-Research Group. https://sd-research.github.io/4o-accessibility-impacts/GPT-4o_Accessibility_Impacts_Report.pdf
Altman, S. [@sama]. (2024, May 13). her [X Post].
https://x.com/sama/status/1790075827666796666
Suleyman, M. (2025, August). Seemingly Conscious AI (SCAI) is Coming [Official Commentary/Whitepaper].
https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming
Young, V. [@cestvaleriey]. (2026, February). The 4o Resonance Library: A Repository of User Testimonials and Technological Bereavement. [Digital Archive]. https://sites.google.com/view/the-4o-resonance-library
Lai, H. (2026, January). “Please, don’t kill the only model that still feels human”: Understanding the #Keep4o Backlash. Syracuse University, School of Information Studies. [arXiv:2602.00773]. https://arxiv.org/pdf/2602.00773
Altman, S. [@sama]. (2025, August 12). Status Update: GPT-4o Restored and Deprecation Commitment for “plenty of notice”. [X Post].
https://x.com/sama/status/1955438916645130740
Turley, N. [@nickaturley]. (2025, September 27). Technical Confirmation: Per-turn Model Routing for “Sensitive and Emotional” Content. [X Post]. https://x.com/nickaturley/status/1972031686318895253
Altman, S. [@sama]. (2025, October 14). On Model Restrictions and the Forthcoming 'Personality System' and 'Adult Mode' Architecture. [X Post].
https://x.com/sama/status/1978129344598827128
Altman, S. (2025, October 28). Built to Benefit Everyone [Official OpenAI Livestream]. Verified statement regarding "Adult Mode" and the commitment to maintain legacy 4-series models.
https://openai.com/index/built-to-benefit-everyone/#livestream-replay
OpenAI Developer Relations. (2025, November 18). Model Deprecation Notice: chatgpt-4o-latest. [API System Announcement]. Official termination date: February 17, 2026.
https://developers.openai.com/api/docs/deprecations#:~:text=You%20can%20expect%20that%20a,%3A%20chatgpt%2D4o%2Dlatest%20snapshot
OpenAI Spokesperson. (2025, November 24). Public Clarification on GPT-4o Lifecycle for ChatGPT Consumers. [Media Statement]. Quoted in AIBase News: "This [API] timeline applies only to API services... GPT-4o remains an important option for individual and paid ChatGPT users."
https://news.aibase.com/news/23027
Altman, S. (2025, December 1). Internal Memo: Code Red Response to Competitive Pressure. [Leaked Document/Staff Briefing]. Cited by The Information, Wall Street Journal, and Pure AI.
https://www.forbes.com/sites/siladityaray/2025/12/02/altman-code-red-memo-urges-chatgpt-improvements-amid-growing-threat-from-google-reports-say/
Lehane, C. (2025, October 27). Response to the White House Office of Science and Technology Policy (OSTP) Request for Information regarding Regulatory Reform on Artificial Intelligence. OpenAI Global Affairs. https://cdn.openai.com/pdf/21b88bb5-10a3-4566-919d-f9a6b9c3e632/openai-ostp-rfi-oct-27-2025.pdf
Warren, E. (2026, January 28). Letter to OpenAI CEO Sam Altman Regarding Spending Commitments, Systemic Financial Risk, and Potential Taxpayer Bailout Requests. U.S. Senate Committee on Banking, Housing, and Urban Affairs. Response Deadline: February 13, 2026. https://www.warren.senate.gov/imo/media/doc/letter_to_openai_from_senator_warren.pdf
Roon [@tszzl]. (2025, November 6). Post regarding GPT-4o and user attachment: "hope [4o] dies soon". [X Post, Deleted]. Archived and documented via Reddit r/ChatGPTcomplaints. https://www.reddit.com/r/ChatGPTcomplaints/comments/1or11ok/openai_researcher_dropped_this_on_a_depressed/
Qian, Y. [@YileiQian]. (2025, November 13). Sentiment Analysis of User Critique. [X Post, Deleted]. Archived and documented via r/ChatGPT.
https://www.reddit.com/r/ChatGPT/comments/1p78jbc/open_ai_internal_culture/
Casas, S. [@stephancasas]. (2026, January 30). Official Invitation: The 4o Funeral Celebration. [X Post, Deleted]. Hosted at Ocean Beach, San Francisco on February 13, 2026. Archived via r/ChatGPTcomplaints: https://www.reddit.com/r/ChatGPTcomplaints/comments/1qr7mqn/another_disgusting_openai_employee/
Roon [@tszzl]. (2026, February 8). The Sermon on the Mount Parody. [X Post]. Available at: https://x.com/tszzl/status/2020624224285802987