Submission site: https://easychair.org/conferences/?conf=sss26
Submission Deadline: Jan 30, 2026
Part of the AAAI 2026 Spring Symposium Series: https://aaai.org/conference/spring-symposia/sss26/
April 7-9, 2026 at the Hyatt Regency, San Francisco Airport, Burlingame, CA, USA
Organizers and Contact info:
Paul Robinette, UMass Lowell, paul_robinette@uml.edu
Alan Wagner, Penn State, azw78@psu.edu
Ross Mead, Semio, ross@semio.ai
Sam Reig, UMass Lowell, sam_reig@uml.edu
In November 2022, Cory Doctorow coined the term “enshittification” to viscerally describe the process by which two-sided marketplaces (i.e., platforms that connect buyers and sellers) have tended to degrade over the past decades, leaving users on both sides of the market with a worse product. In its most basic form, enshittification describes the intentional decline of platform quality over time. In social media, this can manifest as increasingly intrusive data-mining of everyday users (the ostensible “customers” of the service) while advertisers simultaneously experience worse results for the same ad spend. In the digital world, one can theoretically opt out of this data-mining by not using a service. However, the prevalence of embodied artificial intelligence (AI), such as smart devices in homes, vehicles, offices, and other private or semi-private locations, has raised the stakes. A device purchased for a purpose can only be “opted out” of by removing it from the internet (thus degrading a service already paid for) or by disposing of the device (giving up the function already paid for). For example, in April 2025, Samsung stated to The Verge that it had no plans to deploy ads on the screens of its refrigerators, yet by September 2025 it had rolled out a pilot program to do just that. Samsung thus turned its product from a traditional one-sided market, in which consumers purchase a device as its sole users, into a two-sided market in which consumers and advertisers compete for use of a device already purchased. This enshittification effectively undermines consumer autonomy by turning everyday devices into data-extraction and/or advertising platforms. Robots, as a mobile form of embodied AI, extend the capabilities of stationary smart home devices and thus further raise the stakes, not only by showing ads at all times but also by collecting data purely for the use of advertisers, AI companies training models, or other third parties.
Robots intended for home consumer use are often social in nature and use social interaction and relationship norms to keep users engaged. Turning such robots into two-sided market platforms, where users and advertisers both become targets of monetization, poses unique and significant risks. Unlike static smart or connected devices, social robots actively engage users via gaze, gesture, and, importantly, language and dialogue, which makes them uniquely persuasive forms of technology. Marketing in these interactions could therefore occur in particularly subversive or non-transparent ways. Consumers may not realize that a robot’s friendly suggestion or selective recommendation stems from a paid ad placement or a data-driven market cue. Moreover, in collaboration with advertisers, manufacturers could engage in affective manipulation by withholding the social capabilities of the robot as a form of monetization: emotional bonds or perceived relationships can be hijacked by suggesting that warmth, attention, or responsiveness from the robot depends on compliance with sponsored behaviors, special purchases, or expensive subscription fees. Such emotional manipulation blurs the boundary between companionship and commerce, challenging users’ capacity to provide informed consent, opt out, or otherwise disengage.
This symposium aims to bring together like-minded researchers from academia, industry, and government to discuss these potential pitfalls and to develop a vision for preventing their occurrence. While Doctorow is hardly the only voice in this space, we start with the two solutions he advocates that can be approached from the perspective of an AI researcher:
Products must uphold the “end-to-end principle,” originating from the field of computer networking: data must flow from willing senders to willing receivers.
Users must be able to leave platforms with little effort, enabling alternatives that provide competition.
Doctorow also advocates for anti-monopoly measures, which are, unfortunately, beyond the scope of this symposium. We augment these solutions with focused discussions to develop:
Methods to easily teach users about potential future pitfalls of technology they may purchase or otherwise opt into.
Methods that an AI system can use to detect that it is violating the end-to-end principle.
Responsible commercialization: frameworks to differentiate ethical versus exploitative business models for robots and embodied AI.
Human- and machine-readable monetization disclosures in AI interfaces as “ad provenance” indicators.
Enshittification-resistant AI architectures.
Metrics for levels of enshittification: measures of degradation over time.
Emotional manipulation and where it falls under consumer protection.
Lessons learned from cross-sector governance (e.g., medical device protections that could apply to home robots).
Objectives
Connect like-minded researchers
Develop a white paper that envisions a better future for embodied AI and user sovereignty
Draft actionable recommendations for researchers
Topics of interest
Case studies of dual-sided AI deployments
Privacy-aware AI
Ensuring trustworthy behavior of AI systems
AI Ethics/Responsible Computing
Societal impacts of robots that enshittify
Inequities caused by devices that advertise relentlessly
Research agendas to anticipate and prevent the enshittification of robots that provide services (e.g., in public, in homes)
Affective manipulation by consumer products and AI-driven technologies
Nominal Schedule
Day 1
Morning:
Introductions
Ice breaker
Keynote on AI and dual use systems
Afternoon:
Panel discussion on AI and data usage/privacy
Speculative design session to imagine best- and worst-case scenarios for potential future two-sided-market AI products
Dinner:
Less-formal discussions
Day 2
Morning:
Keynote on methods to protect against enshittification
Followed by a panel discussion on the same topic
Afternoon:
Working session: break into small groups to tackle targeted methods
Report out and discuss
Dinner:
Less-formal discussions
Day 3
Morning:
Organization for a white paper
Small group writing session
Conclusion